About this Author

[Author photo captions: "College chemistry, 1983" / "The 2002 Model" / "After 10 years of blogging. . ."]
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek, email him directly: derekb.lowe@gmail.com
Twitter: Dereklowe

July 24, 2015
Posted by Derek
If you're a Biogen shareholder, you don't need me coming along to tell you that this has been a bad week. The Alzheimer's antibody news was just the warmup for the company's earnings numbers, which made no one happy.
So what's going on over there? Reality, I'd say. The drug industry is ferociously competitive, and no one's earnings are safe. Clinical trials are still coming in with about a 10% rate of success overall, which is the sort of risk level that would send a lot of other industries fleeing in terror. Time and chance happeneth to them all. (And no, I'm not religious at all, but a lot of Ecclesiastes is just good common sense).
So yeah, Biogen as I write this is down about $70 a share, a solid 18% whacking. By the month, by the year-to-date, by the previous year, you're probably not happy if you've been holding the shares. But over the last five years, even with today's debacle, Biogen has beaten all the indices savagely. The five-year NASDAQ is up about 126%, and the five-year S&P 500 is up 90%. Note: earlier figure was incorrect (typed something wrong into the database!), and these figures, as noted in the comments, do not reflect dividends. But then, they don't reflect taxes on those dividends, either. . . Biogen is up 485% over that span, and you know what? Hardly anything ever goes that well in this business for that long, on that large a scale. That's a terrific run.
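Out of curiosity, here's what those cumulative numbers work out to on a per-year basis - a back-of-the-envelope sketch in Python, with the same caveats as above (no dividends, no taxes):

```python
# Convert cumulative five-year gains (figures from the post) into
# compound annual growth rates. Dividends and taxes are ignored.
def annualized_pct(total_gain_pct, years=5):
    """Turn a cumulative percentage gain into an annualized (CAGR) figure."""
    return ((1 + total_gain_pct / 100) ** (1 / years) - 1) * 100

for name, gain in [("NASDAQ", 126), ("S&P 500", 90), ("Biogen", 485)]:
    print(f"{name}: {gain}% over five years is about {annualized_pct(gain):.0f}% per year")
```

That's roughly 18% a year for the NASDAQ, 14% for the S&P, and 42% a year for Biogen - which rather underlines the point.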
So if you're a Biogen shareholder right now, sure, you're wondering about the company's earnings prospects, its pipeline, whether or not it's going to do some sort of acquisition (rumors are out there about Isis, and probably others). Worthy questions, and I don't know the answer to any of 'em. If you bought the company's stock back early this year on the basis of (say) those Phase I Alzheimer's results, well. . .you know what happens, most all the time, when you live by the sword, right? If you didn't, well, you do now. But if you've been a longer-term shareholder, you really don't have much to complain about.
Comments (4)
+ TrackBacks (0) | Category: Business and Markets
Posted by Derek
I've been meaning to link to this piece by Wavefunction, "The fundamental philosophical dilemma of chemistry". You may be wondering what that is, but he's got a good candidate: the extreme difficulty of doing controlled experiments at the molecular level.
Much of chemistry is about understanding the fundamental forces that operate within and between molecules. These forces come in different flavors: strong covalent bonds, weak and strong hydrogen bonds, electrostatic interactions, weak multipolar interactions, hydrophobic effects. The net attraction or repulsion between two molecules results from the sum total of these forces, some of which may be attractive and others might be repulsive. Harness these forces and you can control the structure, function and properties of molecules ranging from those used for solar capture to those used as breakthrough anticancer drugs.
Here’s how the fundamental dilemma manifests itself in the control of all these interactions: it is next to impossible to perform controlled experiments that would allow one to methodically vary one of the interactions and see its effect on the overall behavior of the molecule. In a nutshell, the interactions are all correlated, sometimes intimately so, and it can be impossible to change one without changing the other.
That's for sure. The same problem works its way all through organic chemistry - we can't have aromatic rings without them being flat, for example. Fluorine is simultaneously small and extremely electron-withdrawing, and those properties can't be separated cleanly. The way that carbon bonds to other carbons, the polar character of a nitro group, the hydrogen-bonding propensity of OH substituents, the shape of a nitrile - all these things (and uncountable more examples) come as a package, with size, electron density, bond strength and many other variables intimately tangled. I'm not going to get a new nitro group, and my chemical wish list will remain forever unfulfilled. Large parts of the menu come as combination plates, and no substitutions are allowed.
This is why we get in endless discussions over how to make a molecule bind to some protein more tightly - we can't do the ideal clean experiments to see what's really going on:
It is therefore very hard, if not impossible, to pin down a change in binding affinity resulting from a single kind of interaction with any certainty, because changing a single interaction potentially changes all interactions; it is impossible to perform the truly controlled experiment. Sometimes these changes in other interactions can be tiny and we may get lucky, but the tragedy is that we can’t even calculate with the kind of accuracy we would like, what these tiny increments or reductions might be. The total perturbation of a molecule’s various interactions remains a known unknown.
Of course, looked at from a practical perspective, this is what keeps a lot of us (precariously) employed. In the end, there are so many dependent variables that the only thing to do is try a whole range of things, and hope that something becomes clear enough to work with. And that's just for binding - when you get downstream to pharmacokinetics, toxicology, and so on (whole-animal issues), the fundamental variables get so tangled that no one even dreams of unraveling them. There's nothing for it but brains, luck, and plenty of hard work, dang it all.
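Here's a loose statistical analogy to that tangle - a sketch, emphatically not a chemistry simulation: when two input variables are nearly perfectly correlated, a fit can't tell you which one is doing the work, only what their combination does, much like trying to vary one molecular interaction while the others move along with it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.02, size=n)     # almost a copy of x1
y = x1 + x2 + rng.normal(scale=0.5, size=n)  # true contribution: 1.0 from each

X = np.column_stack([x1, x2])
coef = np.linalg.lstsq(X, y, rcond=None)[0]
print("individual coefficients:", np.round(coef, 2))  # unstable, can swing wildly
print("sum of coefficients:", round(coef.sum(), 2))   # reliably close to 2
```

The individual coefficients are nearly meaningless, but their sum is well determined - the "controlled experiment" on x1 alone simply isn't available from the data.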
Comments (11)
+ TrackBacks (0) | Category: Chemical News
July 23, 2015
Posted by Derek
This is not going to make anyone happy at the National Institute of Standards and Technology: a surreptitious meth lab recently blew up inside one of the buildings. So far, there have been no arrests, but I can safely predict that if they find someone, it'll be a person with an abnormal amount of nerve. . .
Comments (11)
+ TrackBacks (0) | Category: Chemical News
Posted by Derek
Good news out in San Diego: Lilly has announced that they're expanding their research site there, adding up to 130 positions with a focus on immunology.
The area really needs some of this. San Diego's biopharma scene has, by all accounts I've heard (and from what I've seen personally) been in decline over the last few years - relative decline at the very least, and probably on the absolute scale as well. There are a lot of excellent people out there, and a lot of good work being done. I hope that this is a sign of revival.
Comments (10)
+ TrackBacks (0) | Category: Business and Markets
Posted by Derek
Atomic force microscopy, a technique that has given us images of individual atoms and molecules, has racked up another success. A collaboration between the IBM-Zürich group (who have done so much in this area) and a group at the University of Santiago de Compostela (in Spain) has determined the structure of a reactive intermediate, and it's not what one would have thought.
They're working from a di-iodo derivative of the flat structures shown, and that turns into what can be variously drawn as a diradical (at top), an aryne (in the middle) or a cumulene (at bottom). You can change one into another by just moving electrons around, but the real species is probably a lot more like just one of them: but which one? This is not an easy question to answer by traditional physical organic chemistry, at least at this level of detail, but what if you could just reach down, pluck the iodines off a single molecule, and look at the result?

That's the bizarre question that AFM lets you ask. The microscope tip (at high voltage) was used to break the carbon-iodine bonds, and then the isolated molecule was imaged (shown). Comparing bond lengths and angles, it looks a lot more like a cumulene than the other alternatives. It's possible that being adsorbed onto a surface alters things as compared to a solution reaction, but under these conditions, a cumulene is apparently what you get.
And as that Chemistry World article says, this same technique can now be used for many other mechanistic questions. For those of us who grew up, scientifically, with mental pictures of fleeting reactive intermediates, things that could only be speculated on by watching the indirect evidence they leave behind. . .well, this is a bit spooky. But AFM images have always had that effect on me. Eventually, this will come to seem normal. And I wonder how far that will go? Can atomic force microscopy ever become a standard analytical technique - want to know a structure, just run the AFM tip over it? We're a long way from that now (you'll notice that the great majority of these sorts of papers come from just that IBM lab), but ruling out advances in instrumentation is not the way to bet. I'm not expecting a walk-up instrument any time soon, but this looks like far too useful and powerful a technique to keep down for too long.
Comments (13)
+ TrackBacks (0) | Category: Analytical Chemistry
July 22, 2015
Posted by Derek
OK, we have some Alzheimer's data to talk about this morning. Biogen's antibody aducanumab, about which people have been wildly enthusiastic, showed very little effect on mental decline at a 6mg dose, the company reported today. Note that the Phase I data that got all the attention was at 3mg and 10mg (with better results at the higher dose), but that the 3mg dose was still positive.
That, though, was a smaller and less well-powered trial. And the first thing that has to be learned from watching clinical research (especially for a disease like Alzheimer's) is that you cannot draw conclusions until you see a large, well-run data set. Ignore this advice at your peril. The list of promising-looking Alzheimer's ideas that have evaporated on contact with a larger trial is long and terrible.
What's interesting is that aducanumab did seem to show the expected reduction in amyloid, which makes a person wonder (yet again) what it takes to draw that connection, assuming that it can be drawn. Biogen's getting ready to go into a big Phase III (2700 patients), and that, of course, is where we'll see what's actually going on. If anything.
Meanwhile, Eli Lilly has released more data from the extended trial of their own antibody, solanezumab. That one's gotten a lot of attention over the last few years as well (especially recently), as the company continues to develop it in the face of not-all-that-compelling clinical results. And by gosh, today's data are. . .not all that compelling. The company claims that they're seeing more effect in the patients who started the therapy earlier, but (as that link from Adam Feuerstein shows), not everyone is buying that interpretation. The effect they're seeing may well be clinically meaningless.
Lilly is already going on with another Phase III in mild, early Alzheimer's patients, chasing what they see as a real result and trying to make the most of it. With one hand, I cheer them on - Alzheimer's is an awful disease, we can't do a damn thing for it, and a new therapy is desperately needed. It's actually sort of inspiring to see a company put so much money on the line in an attempt to do something about it. But with the other hand, I'm wiping my brow as I shake my head. I've never been able to convince myself that solanezumab is much good. I think that marginal Alzheimer's drugs are far, far more likely to flop than they are to hang on and become the first-in-class that companies dream of. And I wish that weren't so.
Comments (76)
+ TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials
Posted by Derek
There's a new paper on chemical probes out in Nature Chemical Biology, and right off, I have to disclose a conflict of interest. I'm a co-author, and I'm glad to be one. (Here's a comment at Nature News, and here's one at Science). The point of the article is that (1) many of the probe compounds that are used in the literature are inappropriate at best, and junk at worst, and (2) it's time to clean up this situation.
How bad is it? Try these examples out:
. . .For instance, LY294002 was originally described in 1994 as a selective inhibitor of PI3 kinase and remains advertised as such by nearly all vendors. Yet by 2005, it was already clear that the compound inhibited many other proteins at the concentrations used to inhibit PI3 kinase. In the meantime, a large number of more selective and more well-characterized PI3 kinase inhibitors have become available.
The availability of these new inhibitors certainly obviated the need for LY294002 as a chemical probe, and it should be discarded as a selective research tool. Yet a search of Google Scholar in 2014–2015 alone for ‘LY294002 and PI3 kinase’ returned ~1,100 documents.
And why not? You can still find people using staurosporine as a PKC inhibitor, even though it's a kinase blunderbuss. Similarly, dorsomorphin is not a good choice to inhibit AMPK signaling, and chaetocin is a terrible excuse for a selective histone methyltransferase probe. I've written about others on this blog, as bad or worse.
But these things are all over the literature. People can't, or don't, keep up with the literature showing that these compounds (and many others) are problematic, and the suppliers keep selling them. Far too many researchers look something up in the catalog, see it listed as a "selective XYZ inhibitor", and believe every word. Both the suppliers and the investigators are at fault, and the result is that the scientific literature ends up with garbage piles and oil slicks floating all over it.
Good probe compounds are not easy to find. Seeing one in somebody else's paper places you at the mercy of their literature-searching skills if you don't do some checking of your own. Ordering one up from a catalog proves nothing more than that company's ability to sell it to you. To try to remedy this situation, this new paper also includes the launch of a web site, a wiki-based compendium of validated probes. The hope is that this will become a resource that everyone can turn to, a one-stop-shop that will save a lot of time, money, effort, and frustration.
It has only a few compounds in it as of this morning, but I plan to send in some suggestions of my own this week. (One of those is for a separate list of probes that are Not Recommended, so that people can find those as well). The plan is to put up editing functions soon so that people can do this themselves. I encourage people to send in feedback - this is an opportunity to try to fix a number of longstanding problems in the literature, and without something like this, these problems will only get worse.
Ideally, I'd like to see references to the site in the supplier catalogs, and attention paid to its listings by reviewers and authors alike. The excuses for using worthless chemical probes have never been good ones, and with any luck, there eventually won't be any such excuses left at all.
Comments (22)
+ TrackBacks (0) | Category: Biological News | Chemical Biology | Chemical News | Drug Assays
July 21, 2015
Posted by Derek
Replication of scientific results is a big deal these days, as anyone following the scientific literature is aware. Actually, you don't even have to be doing that - plenty of reports have made their way into the lay press about trouble with reproducibility. There are a number of efforts underway both to reproduce published research and to estimate how large the problem really is.
But there's a new paper that suggests that plain replication won't fix the underlying defect: publication bias. As long as you have to show positive results to have a good chance of getting your paper published, the literature will be skewed. And unless the replication studies have statistical power better than the original work, they're likely to just cloud up the issue even more. Replication studies will have publication bias, too - at the moment, they're hot, but that won't always be the case.
I agree with the authors that systematic publication bias is a big threat to scientific research. Let's start off down at the retail level, experiment by experiment. I think that if most of us (both in industry and academia) look back on our work, we'd find that the majority of the experiments we've done in our careers have never been published anywhere at all. I'm sure that's true in my case. To be sure, many of them aren't of much interest, but their value is non-zero, too.
Imagine a world - not our own, for sure - where every chemistry notebook is tied to some central, searchable repository of data. Those of us who work in the drug industry already experience a tiny bit of this, with electronic notebook systems. We can indeed search every experiment that someone in the company has committed to a notebook (and you'd better be committing them all to the notebook, if you know what's good for you). A hypothetical worldwide notebook infrastructure would be something to see; the number of compounds in it would be many, many times those found in Chemical Abstracts. A lot more chemistry has been done than anyone knows about.
But as you go up the scale from "Hey, I made this compound once" or "Hey, I tried this reaction, and that time it didn't seem to work", publication bias becomes even more of a killer. People decry (and rightly) the way that drug companies may decline to publish negative results on their own experimental compounds (although keep in mind, clinical trials do fail, publicly, and the requirement to register them is a big step in getting rid of this problem). But professors decline to publish things, too, even though the effect on public health isn't so potentially large. It could be worth knowing that Professor Y's group tried to find a way to make Cycloaddition X go with better stereoselectivity, and failed. But no one ever will - the time and effort spent by Professor Y to write up those results will almost certainly be wasted, because no one will publish the paper, and it would be perceived as doing no credit to the group even if it were. There are only so many hours in a day, particularly when it's grant-renewal or tenure-decision time.
Such bias really starts to hurt for the bigger results and the claimed breakthroughs, and that's what most people are thinking about when they think about a reproducibility problem. The advice given in that Retraction Watch post is sound: only conduct studies that are well-powered, statistically. It's harder, longer, and more expensive to do it that way. But your chances of producing something that can be believed in are far higher. The problem is, too many people are more concerned with producing something that can make a big splash on everyone's list of publications. . .
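For anyone who wants to see what "well-powered" means in numbers, here's a minimal sketch, assuming a simple two-group comparison and the usual normal approximation (the effect sizes are hypothetical standardized differences, Cohen's d):

```python
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.90):
    """Approximate subjects (or samples) per group for a two-sample test."""
    z_a = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_b = norm.ppf(power)          # desired power
    return 2 * ((z_a + z_b) / d) ** 2

for d in (0.2, 0.3, 0.5):
    print(f"effect size d = {d}: about {n_per_group(d):.0f} per group")
```

Small effects demand a lot of samples (roughly 525 per group at d = 0.2, versus about 84 at d = 0.5), which is exactly why well-powered studies are harder, longer, and more expensive.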
Comments (23)
+ TrackBacks (0) | Category: The Scientific Literature
Posted by Derek
Unless you've had to take care of an NMR facility, you might not have realized how many large chunks of ferromagnetic material might be moving around close to your building, and how much stray radio-frequency noise is banging around. Here's a story on the University of Minnesota, where a research building sits right next to a light rail line, and I can easily believe that they're having problems. A lot of folks in Cambridge and Boston can tell you stories about the trains (above and below ground) and their effects on NMR experiments.
It's not just electromagnetic effects, of course. Good old vibration will hose things up, too, since a high-field NMR magnet needs very precise positioning of the sample and the probe for a sharp spectrum. That's why the big magnets are always sitting on top of very expensive vibration-damping legs, and the bigger the magnet, the more impressive the technology that goes into canceling out the shakes. But radio noise is a real killer. The more machines you have, and the more nuclei you observe, the higher your chances of picking up police radios, having your observed frequencies wander into the commercial FM band (good luck there), and who knows what else.
Sometimes you can get around a specific problem by running your NMR at a bit less than its rated strength, which shifts the corresponding RF-observation windows. That seems like a shame at first (after all, you certainly paid for a 400 MHz magnet or what have you), but it's a lot better to have a clean spectrum at 380 than it is to have unpredictable crap at 400. Good luck to the folks at Minnesota with their light-rail problems, though. At least they know where the noise is coming from!
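The arithmetic behind that trick is simple, since every nucleus's observation frequency scales linearly with the field. Here's a rough sketch - the gyromagnetic ratios are approximate, and the 100.3 MHz "local FM station" is a made-up interferer for illustration:

```python
# Observation frequencies for common nuclei, relative to the 1H frequency.
GAMMA_REL = {"1H": 1.000, "19F": 0.941, "31P": 0.405, "13C": 0.251, "15N": 0.101}
STATION = 100.3  # hypothetical local FM station, MHz

for proton_mhz in (400.0, 380.0):
    print(f"At a nominal {proton_mhz:.0f} MHz (1H) field:")
    for nuc, g in GAMMA_REL.items():
        freq = proton_mhz * g
        note = "  <-- right on top of the station" if abs(freq - STATION) < 1.0 else ""
        print(f"  {nuc:>3}: {freq:6.1f} MHz{note}")
```

At 400 MHz, carbon sits at about 100.4 MHz, squarely in the FM broadcast band; derate the magnet a bit and the window slides away from any one offending station.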
Comments (22)
+ TrackBacks (0) | Category: Analytical Chemistry
July 20, 2015
Posted by Derek
I've bemoaned the Axovant IPO here a couple of times already. Via Jean Fonteneau on Twitter (his full Axovant coverage is here), we have the chart at left. In case you were wondering how "research" and "recommendations" are done on a stock like this, well, this should provide as clear an example as anyone could want. What a lovely business.
Comments (22)
+ TrackBacks (0) | Category: Alzheimer's Disease
Posted by Derek
Here's evidence that total synthesis is still a big part of the chemistry world: an analysis of papers with that phrase in their title. This follows up on a post here from a couple of years back (based on this one from the Baran group at Scripps), which looked at the same phrase as found in papers in the Journal of the American Chemical Society.
The two graphs make an interesting counterpoint. The JACS one shows a definite decline starting around 2000, while the whole-literature one shows a steady increase through that period, topping out at around 500 articles/year in 2011 and remaining steady after that. So what's going on? There are several possibilities, most of them not mutually exclusive.
(1) The editors at JACS may well have made a decision in 2001 to publish fewer total synthesis papers than they used to. That's an artificial distinction, in one way, but it's also worth remembering that "What JACS publishes" is supposed to be, ideally, "What's important in the entire field of chemistry". That ideal is filtered through a human editorial staff and their biases, but that's what they're aiming at.
(2) Similarly, it's possible that the first graph in Naturalproductman's post was affected by some adjustment at Pubmed in 1999 or so. I don't know how to find out what journals were indexed by them for any given year, but they might have added a bolus of more chemistry-centric titles around that time. Usefully, he also has a graph from SciFinder data, which shows a rise in the 1970s and 1980s that the Pubmed chart doesn't pick up, suggesting to me that there is an effect based on the journals being indexed. (The SciFinder graph does, however, seem to show that second takeoff around 2000). Which brings me to my next point. . .
(3) These new charts probably need to be corrected for the total number of papers being published. My understanding is that the sum of scientific publications has been growing (rather wildly), and the increase that Naturalproductman's plots show may be partly an effect of the number of new journals, especially the online-only ones. A plot of total synthesis papers as a percentage of total chemistry papers would be harder to do (a toy sketch of that normalization appears after these points), but I think that might cancel out quite a bit of the rise. I find it hard to credit that there was a sudden surge in total synthesis papers starting in 1999.
(4) Building on that point, it would also be interesting to see where all these total synthesis papers are showing up. Are they holding steady in the higher-tier journals, or (as the JACS graph might suggest) are they moving downmarket? Plots of this sort would be interesting for scientific topics in general, but I don't recall seeing many, partly because they would presumably be a fair amount of work to produce.
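Here's the toy normalization promised in point (3). All the counts below are made up, since the real numbers would have to come from SciFinder or a similar database, but it shows how a raw-count "surge" can just track the growth of the literature itself:

```python
years = [1995, 2000, 2005, 2010]
total_synthesis_papers = [250, 300, 420, 500]                # hypothetical counts
all_chemistry_papers = [250_000, 320_000, 480_000, 620_000]  # hypothetical counts

for y, ts, total in zip(years, total_synthesis_papers, all_chemistry_papers):
    print(f"{y}: {ts} papers, {1000 * ts / total:.2f} per 1,000 chemistry papers")
```

In this made-up example the raw count doubles while the share of the literature actually drifts down - which is just the sort of thing a percentage plot would reveal.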
I'm not convinced (or not yet) that total synthesis is in that healthy a shape, but I'd welcome more data. If anyone knows a good way to produce the plots I mention above, please pass them along (or feel free to try them yourself and send along the results!)
Comments (12)
+ TrackBacks (0) | Category: Chemical News
Posted by Derek
I'm back! That's probably the longest stretch where I haven't blogged anything since about 2002, and it did feel strange at times. But I have more details on what's coming up for "In the Pipeline". The entire site - comments, archives, and all - is indeed moving, to Science Translational Medicine. They're revamping things over there, and part of that includes adding this blog to their mix. I've been in discussion with them for some time about the whole process, and I think it's going to work out really well.
Here are some of the main points, which should answer some of the likely questions:
(1) All the archived posts (and comments) are moving over. After thirteen years, there's an awful lot of content around here, and (for better or worse!) none of it is going to be lost.
(2) The blog will continue to be editorially independent - my posts are not going through any review by Science; I'll be flailing away at the keyboard just as I always have.
(3) The focus of the blog won't be changing, either. I realize (and so does STM) that I'm in sort of an odd position, working in the drug discovery business and writing about it publicly at the same time, and that's one of the big reasons they invited me over.
(4) Comments will, of course, be enabled at the new location. The commenting community around here is really a big part of the site, and nothing will be done to mess around with that.
(5) The plan is for the corante.com domain to automatically redirect any Pipeline traffic to the new URL(s).
Update: I forgot to mention (6) - the blog will be open-access, and not behind any sort of paywall.
As for when the big switchover takes place, I don't quite have a firm date, but sometime in August would be a good bet. We want to make sure that everything over at the new site has been banged on and shaken from several directions before launching it, and the STM folks have a lot of other things they're fixing up in addition to this blog. I also need to be sure that the exit from the corante.com domain is as clean as possible on this end. As soon as we've got a date, I'll announce it, naturally.
So that's the story. I'm looking forward to another phase of "In the Pipeline", and I hope that everyone makes the migration with me!
Comments (34)
+ TrackBacks (0) | Category: Blog Housekeeping
July 7, 2015
Posted by Derek
Things have been busy around here, and I have several topics stacked up to discuss that I haven't been able to get to yet. But they'll have to wait a bit. I wanted to let everyone know that the rest of this week, and all of next week, the site will be going quiet.
That's partly to take a bit of summer break, but there will also be work going on behind the scenes. On my return, the move of "In the Pipeline" to a completely new site will be imminent. Part of the reason for this hiatus is to make sure that all the archives, etc. are in good shape, and to get caught up on that sort of thing. I'll have many more details on Monday the 20th, when regular blogging resumes. It'll resume at this same address, but I (and you) won't be here for very much longer after that. The new platform will be very visible, very well-trafficked, and technically up to date and supported in every way, and I look forward to the changeover.
If something monumental happens in the next two weeks, I'll emerge with commentary on it, but otherwise, I don't expect to post again until the 20th. See you then!
Comments (35)
+ TrackBacks (0) | Category: Blog Housekeeping
Posted by Derek
Add another potential target to the longevity list: this paper in Cell (open access, actually) provides evidence that the well-known Ras-ERK-ETS pathway is also involved in lifespan. This is work in Drosophila, which is one of the usual places to look for this sort of thing.
Figure 6 in the paper proposes a way to tie several longevity targets together - insulin signaling, PI3K/AKT, these current Ras/ERK results, and Aop-Foxo. Do any of these apply to mammals? The authors think they may well:
. . .This role of cAMP/PKA in aging may be conserved in mammals, as disruption of adenylyl cyclase 5 and PKA function extend murine lifespan (Enns et al., 2009; Yan et al., 2007). However, cAMP/PKA are not generally considered mediators of Ras function in metazoa. Instead, our data suggest that signaling through Erk and the ETS TFs mediates the longevity response to Ras. Interestingly, fibroblasts isolated from long-lived mutant strains of mice and long-lived species of mammals and birds show altered dynamics of Erk phosphorylation in response to stress (Elbourkadi et al., 2014; Sun et al., 2009), further suggesting a link between Erk activity and longevity. Importantly, the ETS TFs are conserved mediators of Ras-Erk signaling in mammals (Sharrocks, 2001). Investigation of the effects of Ras inhibition on mammalian lifespan and the role of the mammalian Aop ortholog Etv6 are now warranted.
This work in fruit flies relied on trametinib, an MEK inhibitor used in oncology, and you would have to wonder what its effects would be in humans who don't have metastatic melanoma. It would seem certain that no one in that position has ever taken it since its Phase I trials (and those must not have been for very long). The authors strongly suggest taking a look at this, and it's going to be interesting to see if someone takes them up on it.
Comments (9)
+ TrackBacks (0) | Category: Aging and Lifespan | Cancer
July 6, 2015
Posted by Derek
Cryo-electron microscopy has been scoring some real successes lately as a structural biology technique. Anything that provides protein structures without having to crystallize proteins is of immediate interest, of course, and I think we can expect a lot more work in this area. Here's a review on the current state of the art, for those who are into this sort of thing. I'd say that right now, getting solid high-resolution structures of random unknown proteins via EM is still an edge-of-what's-possible technique, but it's nowhere near as far out on the fringe as it used to be. Worth keeping an eye on.
Comments (6)
+ TrackBacks (0) | Category: Analytical Chemistry
Posted by Derek
Readers who have worked in the NJ pharma world will be familiar with the big research campus in Summit. I go back far enough to remember it from my first round of job interviews, when it was still Ciba-Geigy. (I was on my post-doc in Germany at the time, and I'd already been asked if I would consider a job in Basel. I think my reasoning was that New Jersey was just about as much a foreign country to me, so they flew me back there for another round). Summit then became a Novartis site, but that one eventually dwindled, as I recall, in favor of other locations. Schering-Plough picked it up in 2000 and put a good amount of money into it, but when Merck bought them, the site was closed completely in 2013.
Now Celgene has bought it, as part of their expansion over the last few years, and I'm sure that the town of Summit (and many other folks in New Jersey) are glad to hear it. The state's pharma industry has been in undeniable decline for some years now - something that would have been nearly unthinkable back when I was interviewing at Ciba-Geigy. I'm always glad to see a research campus being used for its intended purpose, rather than being bulldozed or just left empty, unsold, and unused. We have too many of those already!
Comments (22)
+ TrackBacks (0) | Category: Business and Markets | Drug Industry History
July 2, 2015
Posted by Derek
Oh, man. Here's another example of an old, sad story - just a little fakery at the beginning, and here's what it leads to:
Government prosecutors said (Dong-Pyou) Han's misconduct dates to 2008 when he worked at Case Western Reserve University in Cleveland under professor Michael Cho, who was leading a team testing an experimental HIV vaccine on rabbits. Cho's team began receiving NIH funding, and he soon reported the vaccine was causing rabbits to develop antibodies to HIV, which was considered a major breakthrough. Han said he initially accidentally mixed human blood with rabbit blood, making the potential vaccine appear to increase an immune defense against HIV, the virus that can cause AIDS. Han continued to spike the results to avoid disappointing Cho, his mentor, after the scientific community became excited that the team could be on the verge of a vaccine.
He's now been sentenced to 4 1/2 years in prison for faking research reports, and to repay the NIH $7.2 million in misused grant money. This was an extensive program of faked results (see this post at Retraction Watch from 2013, when the Office of Research Integrity made its report on the case). This went on for years, with the results - presented at multiple conferences in the field - being the basis for an entire large research program.
How someone ends up in this position, that's what you wonder. But it's a classic mistake. Fred Schwed, in Where Are the Customers' Yachts?, laid out the equivalent situation in investing. I don't have the exact quote to hand, but it was something like "They got on the train at Grand Central Station - they were just going uptown to visit Grandma. But the next thing they knew, they were making 80 miles an hour, at midnight, through Terre Haute, Indiana". In a more somber key, Macbeth experiences the same feeling in Act 3, scene 4: "I am in blood. Stepped in so far that, should I wade no more, returning were as tedious as go o'er." It's such an old trap that you'd think that people would be looking out for it more alertly, but I suppose that the people who fall into it never think that it'll happen to them. . .
Comments (29)
+ TrackBacks (0) | Category: Infectious Diseases | The Dark Side
Posted by Derek
In case you were wondering, you can add "MAO-B inhibition" to the long, long list of Things That Don't Do Any Good For Alzheimer's. I'm not sure how much hope anyone had for that program (at either Roche or Evotec), but the potential payoff is so huge that a lot of marginal ideas get tried. At least this was in Phase II, and not Phase III; there's always that. . .
Comments (19)
+ TrackBacks (0) | Category: Alzheimer's Disease
Posted by Derek
Chris Viehbacher, ex-Sanofi, has reappeared at a $2 billion biotech fund.
Viehbacher is clear, though, that Gurnet will be founding companies as well as looking outside the red-hot fields like oncology. To find value these days, you have to look outside of the trendiest fields, he says. And you're also not going to find much in the way of innovation at huge companies like Sanofi.
"My conclusion is that you can't have truly disruptive thinking inside big organizations," says Viehbacher. "Everything about the way a big organization is designed is about eliminating disruption."
In Viehbacher's view, Big Pharma is still trying to act in the way the old movie studios once operated in Hollywood, with everyone from the stars to writers and stunt men all roped into one big group. Today, he says, movie studios move from project to project, and virtually everyone is a freelancer. In biopharma, he adds, value is found in specializing, and "fixed costs are your enemy."
He's right about that disruption problem at big companies, although he raised eyebrows when he said something similar while still employed at a big company. (Sanofi tried to put those comments in the ever-present "broader context" here). A large organization has its own momentum, but even if its magnitude is decent, its vector is pointed in the direction of keeping things the way that they are now. To be sure, that requires finding new drugs - it's a bit of a Red Queen's race in this business - but a lot of people would be fine if things just sort of rolled along without too many surprises or changes.
If that was ever a good fit for this industry, it isn't now. That makes it nerve-wracking to work in it, for sure, because if you feel that your job is really, truly safe then you're wrong. There are too many unpredictable events for that. I was involved in an interesting conversation the other day about investors in biopharma (and how passionately irrational some of the smaller ones can be), and we agreed that one reason for this is the large number of binary events: the clinical trial worked, or it didn't. The FDA approved your drug, or it didn't. You made your expected sales figures, or you didn't. And those are the expected ones, with dates on the calendar. There are plenty of what's-that-breaking-out-of-the-cloud-cover events, too. Trial stopped for efficacy! Trial stopped for tox! Early approval! Drug pulled from the market! It's like playing a board game with piles of real money (and with your career).
So Viehbacher's right on that point. But I part company with him on his earlier comments (basically, that if he was going to get anything innovative done at Sanofi, he was going to have to go outside, because no one who wanted to innovate was working at a company like that in the first place). Even large companies have good people working at them - believe it or not! And some of them even have good ideas, too. But it can be harder for them to make headway in a large organization - he is right about that.
Comments (38)
+ TrackBacks (0) | Category: Business and Markets | Who Discovers and Why
July 1, 2015
Posted by Derek
Longtime readers might recall that every so often I hit on the topic of the "dark matter" of drug target space. We have a lot of agents that hit G-protein coupled receptor proteins, and plenty that inhibit enzymes. Those, though, are all small-molecule binding sites, optimized by evolution to hold on to molecules roughly the size that we like to make. When you start targeting other protein surfaces (protein-protein interactions) you're heading into the realm where small molecules are not the natural mediators, and things get more difficult.
But all of those are still proteins, and there are many other types of biomolecules. What about protein/nucleic acid interactions? Protein/carbohydrate interactions? Protein-lipid targets? Those are areas where we've barely even turned on the lights in drug discovery, and past them, you'd have to wonder about carbohydrate/carbohydrate systems and the like, where no proteins are involved at all. None of these are going to be straightforward, but there's a lot to be discovered.
I'm very happy to report on this new paper from the Cravatt group at Scripps, which makes a foray into just this area. A few years ago, the group reported a series of inhibitors of monoacylglycerol lipase, as part of their chemical biology efforts on characterizing hydrolases. That seems to have led to an interest in lipid interactions in general, and this latest work is the culmination (so far) of that research path. It uses classic chemical-biology probes that mimic arachidonyl lipids and several other classes (oleoyl, palmitoyl, etc.). Exposing these to cell proteomes in labeling experiments shows hundreds and hundreds of interactions taking place, the great majority of which we have had no clue about at all. The protein targets were identified by stable-isotope labeling mass spec (comparing experiments in "light" cells versus "heavy" ones carrying the labels), and over a thousand proteins were pulled in with just the two kinds of arachidonyl probes they used (with some overlap between them, but some unique proteins to each sort of probe - you have to try these kinds of things from multiple directions to make sure you're seeing as much as possible).
As well as including many proteins whose functions are unknown, these lists were substantially enriched in proteins that are already drug targets. That should be enough to make everyone in the drug discovery business take a look, but if you're looking for more, try out the next part. The team went on to do the same sort of lipid interaction profiling after treatment of the cells with a range of inhibitors for enzymes involved in such pathways, and found a whole list of cross-reacting targets for these drugs that were unknown until now.
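For anyone wondering how you'd put a number on "substantially enriched", a standard approach is a hypergeometric test. Here's a sketch with entirely hypothetical figures (these are not the paper's numbers):

```python
from scipy.stats import hypergeom

# Hypothetical: a 10,000-protein proteome containing 600 known drug targets,
# with 120 of those targets showing up among 1,000 lipid-probe hits.
M, n, N, k = 10_000, 600, 1_000, 120
p = hypergeom.sf(k - 1, M, n, N)  # P(at least k targets among the hits by chance)
print(f"expected by chance: {N * n / M:.0f} targets; observed: {k}; p = {p:.1e}")
```

Twice the chance expectation, with a vanishingly small p-value - the sort of result that makes a hit list worth a drug hunter's attention.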
They then turned their attention to one of the proteins that was very prominent in the arachidonyl profiling experiments, NUCB1 (function unknown, but apparently playing a major role in lipid processing and signaling). Taking the arachidonyl probe structure and modifying it to make a fluorescent ligand led to a screening method for NUCB1 inhibitors. 16,000 commercial compounds were tested, and the best hit from this led to a series of indole derivatives. These were taken back around in further labeling experiments to determine the actual site of binding on NUCB1, and they seem to have narrowed it down (as well as gotten a start on the specific binding sites of many of the other protein targets they've discovered). There are also profiles of cellular changes induced by treatment with these new NUCB1 inhibitors, along with hypotheses about just what its real function is.
Holy cow, is this ever a good paper. I've just been skimming over the details; there's a lot more to see. I strongly recommend that everyone interested in new drug targets read it closely - you can feel a whole landscape opening up in front of you (thus the title of this post). This is wonderful work, exactly the kind of thing that chemical biology is supposed to illuminate.
Comments (9)
+ TrackBacks (0) | Category: Chemical Biology | Drug Assays
June 30, 2015
Posted by Derek
When you look at the stock charts of the major pharma companies, there's not a lot of excitement to be had. Until you get to Eli Lilly, that is. Over the last year, the S&P 500 is up about 5%, and most of the big drug stocks are actually negative (Merck -0.4%, Sanofi down 6%, J&J down 7%, AstraZeneca down 13%). Pfizer pulled away from the index in February, and has held on to that gain (up 13% from a year ago), but Lilly - those guys were doing about as well as Pfizer until the last month or two, but have just ratcheted up since then, for a 1-year gain of over 32%. Why them?
It's all Alzheimer's speculation, as this Bloomberg piece goes into. And as has been apparent recently, Alzheimer's is getting a lot of speculation these days. Biogen really revved things up with their own early-stage data a few months back, and since then, if you've got an Alzheimer's program - apparently, any Alzheimer's program whatsoever - you're worth throwing money at. Lilly, of course, has been (to their credit) pounding away at the disease for many years now, expensively and to little avail. One of their compounds (a gamma-secretase inhibitor) actually made the condition slightly worse in the treatment group (more here), while their beta-secretase inhibitor failed in the usual way. But they've also been major players in the antibody field. Their solanezumab was not impressive in the clinic, except possibly in the subgroup of early-stage patients, and Lilly (showing a great deal of resolve, and arguably some foolhardiness) has been running another Phase III trial in that population.
They also extended the existing trial in that patient group, and are due to report data on that effort very soon - thus the run-up in the company's stock. This is going to be very interesting, for sure - it would be great for Alzheimer's patients (and for Lilly) if the results are clearly positive, but that (sad to say) is the least likely outcome. (I'm not just being gloomy for the sake of being gloomy - Alzheimer's antibodies have had a very hard time showing efficacy under any circumstances, and the all-mechanisms clinical success rate against the disease is basically zero). The same goes, of course, for the new Phase III trial itself. Things could well come out clearly negative, with the possible good results from the earlier trial evaporating the way subgroup analyses tend to when you lean on them. Or - and this is the result I fear the most - there could be wispy sorta-kinda hints of efficacy, in some people, to some degree. Pretty much like the last trial, after which Lilly began beating the PR drums to make things look not so bad.
The reason I think that this would be the worst result is that there is so much demand for something, for anything that might help in Alzheimer's that there would be a lot of pressure on the FDA to approve Lilly's drug, even if it still hasn't proven to do much. And this latest trial really is its best chance. It's in exactly the population (the only population) that showed any possible efficacy last time, so if the numbers still come out all vague and shimmery under these conditions, that's a failure, as far as I can see. No one wants to be in the position of explaining statistics and clinical trial design to a bunch of desperate families who may be convinced that a real Alzheimer's drug is being held up by a bunch of penny-pinching data-chopping bureaucrats.
And this brings us to TauRx. I still get mail about them, seven years after they made big news with a methylene-blue-based Alzheimer's therapy program. When last heard from, they were in Phase III, with some unusual funding, but there were no scientific results from them for a while. The company, though, has published several papers recently (many available on their web site), talking about their program.
Here's a paper on their Phase II results. It's a bit confusing. Their 138 mg/day dose was the most effective; the higher dose was complicated by PK problems (see below). When you look at the clinical markers, it appears that the "mild" Alzheimer's patients were hardly affected at all (although the SPECT imaging results did show a significant difference on treatment). But the "moderate" Alzheimer's treatment group showed several differences in various cognitive decline scores at the 138 mg/day dose, but no difference in SPECT at all. Another paper, from JBC, talks about compound activity in various cell models of tau aggregation. And this one, from JPET, is their explanation for the PK trouble. It appears that the redox state of the methylene blue core has a big effect on dosing in vivo. There are problems with dissolution, absorption (particularly in the presence of food), and uptake of the compound in the oxidized (methylene blue) state (which they abbreviate as MTC, methylthioninium chloride), but these can be circumvented with a stable dosage form of the reduced leuco compound (abbreviated as LMTX). There's apparently a pH-dependent redox step going on in gastric fluid, so things have to be formulated carefully.
One of the other things that showed up in all this work was a dose-dependent hematological effect, apparently based on methylene blue's ability to oxidize hemoglobin. It's not known (at least in these publications) whether dosing the reduced form helps out with this, but it's potentially a dose-limiting toxicity. So here's the current state of the art:
Although we have demonstrated that MTC has potential therapeutic utility at the minimum effective dose, it is clear that MTC has significant limitations relative to LMTX, which make it an inferior candidate for further clinical development. MTC is poorly tolerated in the absence of food and is subject to dose-dependent absorption interference when administered with food. Eliminating the inadvertent delayed-release property of the MTC capsules did not protect against food interference. Therefore, as found in the phase 2 study, MTC cannot be used to explore the potential benefit of higher doses of MT. Nevertheless, the delayed-release property of the MTC capsules permitted the surprising discovery that it is possible to partially dissociate the cognitive and hematologic effects of the MT moiety. Whether the use of LMTX avoids or reduces the undesirable hematologic effects remains to be determined. . .
The Phase III trials are ongoing with the reduced form, and will clearly be a real finger-crossing exercise, both for efficacy and tox. I wish TauRx luck, though, as I wish everyone in the AD field good luck. None of us, you know, are getting any younger.
Comments (16)
+ TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials | Drug Assays | Pharmacokinetics | Toxicology
June 29, 2015
Posted by Derek
A reader sent along this link to an article at the New York Review of Books on the relentless emphasis on STEM jobs. The viewpoint of its author, Andrew Hacker, was preordained: he's a political scientist who started a controversy about ten years ago with an editorial wondering if mathematical education (we're talking up to the level of algebra) is even necessary or desirable. So he's not going to be a big booster of any push into science or engineering.
But keeping those biases in mind, he does take a useful tour through what I see as the error at the other end of the spectrum. I'm not ready to say (along with Hacker) that gosh, hardly anyone needs algebra, let alone anything more advanced. But I'm also not ready to say that we've got a terrible shortage of anyone who does know such things. That link quotes Michael Teitelbaum, and the NYRB article is partly a review of his Falling Behind, which is a book-length attempt to demolish the whole "STEM shortage" idea. He also notes another book:
James Bach and Robert Werner’s How to Secure Your H-1B Visa is written for both employers and the workers they hire. They are told that firms must “promise to pay any H-1B employee a competitive salary,” which in theory means what’s being offered “to others with similar experience and qualifications.” At least, this is what the law says. But then there are figures compiled by Zoe Lofgren, who represents much of Silicon Valley in Congress, showing that H-1B workers average 57 percent of the salaries paid to Americans with comparable credentials.
Norman Matloff, a computer scientist at the University of California’s Davis campus, provides some answers. The foreigners granted visas, he found, are typically single or unattached men, usually in their late twenties, who contract for six-year stints, knowing they will work long hours and live in cramped spaces. Being tied to their sponsoring firm, Matloff adds, they “dare not switch to another employer” and are thus “essentially immobile.” For their part, Bach and Werner warn, “it may be risky for you to give notice to your current employer.” Indeed, the perils include deportation if you can’t quickly find another guarantor.
Here's Matloff's page on the subject, and his conclusions seem (to me) to ring unfortunately true. I can't come up with any other way to square the statements and actions of (to pick one example) John Lechleiter, CEO of Eli Lilly. So I'm in an uncomfortable position on this issue: I am pro free-trade, and philosophically I'm pro-immigration (especially the immigration of the sorts of talented, hard-working people that all these US companies want to bring in). That philosophical leaning of mine, though, is predicated on these people being able to pitch in to a growing economy, but not if they're just being used as a means to dump existing workers in favor of cheaper (and more disposable) replacements. And I hate sounding like a nativist anti-immigration yahoo, and I similarly hate sounding (at another end of the political spectrum) like some kind of black-bandanna-wearing anti-corporate agitator. (As mentioned above, I'm also not happy about finding myself in some agreement with some guy whose other positions include the idea that algebra should be dumped from schools as a useless burden). I look around, and wonder how I ended up here. Strange times.
Comments (64)
+ TrackBacks (0) | Category: Business and Markets
Posted by Derek
Bruce Booth has a long post on external R&D in biopharma. He's mostly talking about some of the newer ways to do that, rather than traditional deals and outsourcing. These include larger companies partnering with VC firms to launch smaller ones, large investments in the smaller players with specific rights to buy some of the successes, etc. But the larger players have to be able to keep their hands off:
That said, a number of large companies have also been attempting to do this on their own, without venture involvement at least initially; as far as I can tell, these have had limited “success” to date. GSK’s experiment with Tempero Pharmaceuticals is a good example: founded around great science, the idea was to create a standalone biotech with its own governance that GSK could leverage for Th17 projects in the future. Unfortunately, although the research programs advanced, the company appears to have been unable to escape the gravitational pull of the GSK organization – accessing internal research infrastructure led to conformity, financial costs were all consolidated leading to compliance and internalization, and its employees were eventually just integrated back into GSK.
Then you have the corporate-backed venture capital operations that many companies have set up. People are arguing about the direct benefits that these investment groups provide, but there's little doubt that they help keep the whole ecosystem of small company formation going, and that's definitely worthwhile.
The various precompetitive consortia are another aspect. I've wondered how some of these are going, myself - there's not a lot of hard information yet for some of them. And finally, there are the attempts by several companies to set up their own "skunk works" type groups, apart from the main organization. To my eye, these have even more risk of being swallowed back up by the main company's organizational style and attitude than those officially launched companies (like the GSK example above). It's not just the drug industry - plenty of other sectors have seen attempts at "With us but not of us" branches (e.g., Saturn and GM), and it's very hard to do.
Bruce is looking on the bright side, though:
By bringing high doses of innovative creativity from the “periphery” – via the above-mentioned biotech experiments enabled by external innovation – a leadership team can inoculate their R&D organization’s culture with different strains of thinking, different intellectual antigens to prime new ways of doing things. Simple strategic proximity and openness can afford real opportunities for this interaction if done at significant scale, where the “periphery” achieves a meaningful mindshare (and budgetary support) of the organization.
The "significant scale" part is a key, I'd say. I think that many of the failures in these approaches have been when a company wants to do something different, but not, you know, really all that different. Just different enough for Wall Street to like them again, or different enough so that you can go to the CEO (or the board) and tell them how innovative you've been in shaking things up. But if you're not unnerved and excited, wondering what's going to happen next, maybe hopeful and maybe somewhat scared, then thing haven't been shaken up. Those emotions, and the mental attitudes that go along with them, are part of the small-company secret sauce that you're trying to get ahold of. Without them, you haven't accomplished what you're trying to accomplish, but not everyone really wants them as much once they've started to experience them.
In order to capture the tangible and intangible value from external R&D models, organizations have to overcome a set of established, pervasive, and frequently corrosive mental models that prevent successful engagement in the ecosystem. These are challenging to unwind and impair many organizations today. . .
. . .“Protecting our interests”. This is one of the most pernicious of mental models that renders many Pharma groups incapable of creative external R&D, and is based in the paranoia that everyone is out to screw you. Lawyers are paid to be conservative, think about every scenario, extract every protection possible, and create piles of paperwork. I’m convinced that Pharma’s corporate deal lawyers suffocate more creative deals than they are able to close – they are the ultimate “Deal Prevention Officer” inside of many companies.
The post goes on to list several more of these - go over and have a look, and if you work at (or have worked at) a large company, you'll recognize them. Getting around or over these, as Bruce says, is essential. But no one quite has a defined set of steps for doing that (despite many consultants who will sell you just such a list). In the worst organization, that paranoia mentioned above, that everyone is out to screw you, has infected the employees in their dealings with their own upper management. And if things have progressed that far, you're going to have trouble reinvigorating the R&D by any means whatsoever.
But for organizations that can make the leap, the sorts of models described in the post are definitely worth a look. It's too early, in most cases, to say what the returns on them will be, but it's a good sign that several companies have made serious attempts at doing things differently.
Comments (20)
+ TrackBacks (0) | Category: Business and Markets
June 26, 2015
Posted by Derek
Here's a good overview of phenotypic screening from a group at Pfizer in Science Translational Medicine. It emphasizes, as it should, that this is very much a "measure twice, cut once" field - a bad phenotypic screen is the worst of both worlds:
The karyotype of a cell represents one of its most fundamental and defining characteristics. A large number of tumor-derived cell lines display substantial genetic abnormalities, with some extreme examples bearing in excess of 100 chromosomes as opposed to the expected 46. By that measure, the widely used human monocytic THP-1 cell line would fare well considering its overall diploid character. Nonetheless, triploidy is observed for four chromosomes and monoploidy for another, along with the entire deletion of chromosome X and substantial chromosomal rearrangements. A simple question pertains: Is this a monocyte? In other words, can we expect a faithful representation of all of the functions of a primary human monocyte from such a cell?
Using primary tissue from human patients has its own problems - availability, variation from batch to batch, limited useful lifetime in culture - but those are (in most cases) worth living with compared to the limitations of too-artificial cell lines. The authors also emphasize care in picking what ways you'll stress or stimulate the cells to mimic a disease state, and making sure that the assay readouts are as closely matched as possible to clinical end points.
The track record of gene expression readouts such as reporter gene assays is lackluster with respect to phenotypic drug discovery; no recent (>1998), first in class, small-molecule drug has originated from such an assay. A potential explanation is that mechanisms influencing gene expression represent only a fraction of all mechanisms affecting a given phenotype. . .An in-house study aimed at discovering previously unknown mechanisms leading to the up-regulation of apolipoprotein E (ApoE) secretion compared confirmed hits obtained in the same cellular system using reporter gene and enzyme-linked immunosorbent assay readouts. Although the reporter gene assay successfully identified compounds that provide large increases in ApoE secretion, it missed half of the overall hit set. . .
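That kind of readout-versus-readout comparison comes down, at bottom, to set arithmetic on the confirmed hit lists. Here's a minimal sketch in Python of how you'd quantify the overlap - the compound IDs are made up for illustration, not anything from the ApoE study:

# Comparing confirmed hit sets from two assay readouts on the same cells.
# All compound IDs below are invented for illustration.
reporter_hits = {"CPD-001", "CPD-002", "CPD-005", "CPD-009"}
elisa_hits = {"CPD-001", "CPD-002", "CPD-003", "CPD-004",
              "CPD-005", "CPD-007", "CPD-008", "CPD-009"}

overlap = reporter_hits & elisa_hits
missed = elisa_hits - reporter_hits  # hits the reporter assay never saw

print(f"confirmed by both readouts: {len(overlap)}")
print(f"missed by the reporter gene assay: {len(missed)}/{len(elisa_hits)} "
      f"({len(missed)/len(elisa_hits):.0%})")

With these toy numbers the reporter readout misses half the hit set, which is the shape of the result the Pfizer group describes.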
None of these recommendations are easy, and (from an impatient perspective) all they're doing is slowing down the implementation of your screen. Detail after detail, doubt after doubt! But your screening idea needs to be able to stand up to these, and if you just plunge ahead, you run a serious risk of generating a large amount of complicated, detailed, irrelevant data. The worst kind, in other words.
Every drug program, and every screen, rests on a scaffolding of assumptions. You'd better be clear on what they are, and be ready to justify them. In a target-directed screen, a big one is "We know that this is a key part of the disease mechanism", and (as the number of failures in Phase II shows us), that's not true anywhere near as often as we'd like. Phenotypic screening dodges that one, a big point in its favor, but replaces it with another big leap of faith: "We know that this assay recapitulates the human disease". You pays your money, and you takes your choice.
Comments (16)
+ TrackBacks (0) | Category: Drug Assays
Posted by Derek
I truly enjoyed this look at Dr. David Perlmutter of "Grain Brain" fame, another branch of the same intellectual family tree as Drs. Mercola and Oz. Wonderful cures! Suppressed by evil forces! Under our noses all along! Exactly the opposite of the wonderful cures claimed by the same guy in the 1990s. . .uh, what? Fun stuff. But it won't convince the true believers; nothing will.
Comments (14)
+ TrackBacks (0) | Category: Snake Oil
June 25, 2015
Posted by Derek
I've heard from sources this morning that the folks at Bristol-Myers Squibb in Wallingford have received, out of the blue, one of those sudden sitewide meeting announcements that often portend big news. I'll leave the comments section of this post for updates from anyone with more info - I'll be out of communication for a while this morning at the ChemDraw event.
Update: OK, the press release has just come out. The company is going to open up a big new site in Cambridge, and here's the key part:
In Cambridge, Bristol-Myers Squibb scientists will focus on the company’s ongoing discovery efforts in genetically defined diseases, molecular discovery technologies and discovery platform chemistry in state-of-the-art lab space. In addition to relocating up to 200 employees from its Wallingford, Conn. and Waltham, Mass. sites, and a limited number from its central New Jersey locations, the company expects to recruit scientists from the Cambridge area. As part of this transition, the Waltham site is expected to close in early 2018. The existing site in Wallingford will also close in early 2018 with up to 500 employees relocating to a new location in Connecticut.
Comments (93)
+ TrackBacks (0) | Category: Business and Markets
Posted by Derek
You may recall the report of the synthetic analgesic tramadol as a natural product from Cameroon, and the subsequent report that it was nothing of the kind. (That's the paper that brought the surprising news that local farmers were feeding the drug to their cows). Now the first group (a team from Nantes, Lodz, and Grenoble) is back with a rebuttal.
They note that previous report, but also say that tramadol has been isolated from samples in a bioreserve, where cattle grazing is prohibited. The rest of the paper goes on to analyze isolated tramadol samples by NMR, looking for variations in the 13C levels to try to come up with a biosynthetic pathway. Isotopic distribution is the way to do that, for sure - the various synthetic steps used to make a compound (and its precursors) can be subject to kinetic isotope effects, and over time, these can build up to recognizable signatures. An example of this is the identification of endogenous human testosterone versus the plant-derived material found in supplements.
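For anyone who hasn't worked with isotope ratios, the standard currency is delta notation: the per-mil deviation of a sample's 13C/12C ratio from a reference standard (VPDB, for carbon). A minimal sketch of the arithmetic - the sample ratios here are invented for illustration, not values from the paper:

R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB carbon standard

def delta_13C(r_sample):
    """Per-mil deviation of a sample's 13C/12C ratio from VPDB."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# Hypothetical ratios for a synthetic and a putatively natural sample:
print(f"synthetic sample: {delta_13C(0.01108):+.1f} per mil")
print(f"natural sample:   {delta_13C(0.01095):+.1f} per mil")

Differences of a few per mil, built up by kinetic isotope effects along a biosynthetic (or industrial) route, are exactly the signatures the authors are reading.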
The authors go over how the various structural features found in tramadol have also been noted in other natural products, and propose some biosynthetic pathways based on these and on the observed 13C ratios (which they report do vary from synthetic samples). Probably the strongest evidence is from the methyl groups, which show evidence of having been delivered by something like S-adenosylmethionine. Overall oxygen isotope ratios are also apparently quite different from those of commercial samples.
So the battle is joined! The confounding factors I can think of, off the top of my head, are possible differences in the synthetic routes (and thus isotope ratios) of the commercial material used here (from Sigma-Aldrich) and the material available in Cameroon. But then, the authors state here that their samples were obtained from a part of the nature reserve where people are not farming cattle. None of us are exactly in a position to judge that - I'm not going to the boonies of Cameroon to find out - but if they're right about that, it's also a good argument in their favor.
But the only way to really resolve this is to grow some African peach trees, feed them labeled precursors, and see if strongly labeled tramadol comes out the other end. This paper says that such an experiment is "not currently feasible", but I have to wonder if there's an arboretum somewhere that has such trees in it (and if such trees produce tramadol already). There will surely be another chapter to this story - or two, or three.
Comments (12)
+ TrackBacks (0) | Category: Analytical Chemistry | Natural Products
June 24, 2015
Posted by Derek
If you have a chance to stop by, Thursday the 25th is the "30th Anniversary of ChemDraw" event in Cambridge (MA). Here's the link - I'm going to reminisce a bit in the morning's program about the pre- and early post-ChemDraw days (as I have here on occasion). If you'd told me about this event back in 1985, I don't think I would have believed you.
Update: Prof. Dave Evans will be on hand to talk about the early days - here's his memoir of that period, in Angewandte Chemie.
Comments (11)
+ TrackBacks (0) | Category: General Scientific News
Posted by Derek
Here's another Big Retrospective Review of drug pipeline attrition. This sort of effort goes back to the now-famous Rule-of-Five work, and readers will recall the Pfizer roundup of a few years back, followed by an AstraZeneca one (which didn't always recapitulate the Pfizer pfindings, either). This latest is a joint effort to look at the 2000-2010 pipeline performance of Pfizer, AstraZeneca, Lilly, and GSK all at the same time (using common physical descriptors provided to a third party, Thomson Reuters, to deal with the proprietary nature of the compounds involved). The authors explicitly state they've taken on board the criticisms of these papers that have been advanced in the past, so this one is meant to be the current state of the art in the area.
What does the state of the art have to teach us? 812 compounds are in the data set, with their properties, current status, and reasons for failure (if they have indeed failed, and believe me, those four companies did not put eight hundred compounds on the market in that ten-year period). The authors note that there still aren't enough Phase III compounds to draw as many conclusions as they'd like: 808 had a highest phase described, 422 of those were still preclinical, 231 were in Phase I, 145 in Phase II, 8 were in Phase III and 2 in Phase IV/postmarketing studies. These are, as the authors note, not quite representative figures, compared to industry-wide statistics, and reflect some compounds (including several that went to market) that the participants clearly have left out of their data sets. Considering the importance of the (relatively few) compounds in the late stages, this is enough to make a person wonder about how well conclusions from the remaining data set hold up, but at least something can be said about earlier attrition rates (where that effect is diluted).
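Those phase counts do at least sum up properly, and tabulating them shows just how lopsided the set is. A quick Python tally, using the figures quoted above:

# Highest development phase reached, for the 808 compounds with one
# recorded (counts as quoted in the paper).
phases = {
    "Preclinical": 422,
    "Phase I": 231,
    "Phase II": 145,
    "Phase III": 8,
    "Phase IV/postmarketing": 2,
}

total = sum(phases.values())
assert total == 808  # matches the paper's stated count

for phase, n in phases.items():
    print(f"{phase:<24}{n:>4}  ({n / total:.1%})")

Only ten compounds of the whole set (about 1.2%) ever reached Phase III or beyond.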
605 of the compounds in the set were listed as terminated projects, and 40% of those were chalked up to preclinical tox problems. Second highest, at 20% was (and I quote) "rationalization of company portfolios". I divide that category, myself, into two subcategories: "We had to save money, and threw this overboard" and "We realized that we never should have been doing this at all". The two are not mutually exclusive. As the paper puts it:
. . .these results imply that substantial resources are invested in research and development across the industry into compounds that are ultimately simply not desired or cannot be progressed for other reasons (for example, agreed divestiture as part of a merger or acquisition). In addition, these results suggest that frequent strategy changes are a significant contributor to lack of research and development success.
You think? Maybe putting some numbers on this will hammer the point home to some of the remaining people who need to understand it. One can always hope. At any rate, when you analyze the compounds by their physicochemical properties, you find that pretty much all of them are within the accepted ranges. In other words, the lessons of all those earlier papers have been taken on board (and in many cases, were part of med-chem practice even before all the publications). It's very hard to draw any conclusions about progression versus physical properties from this data set, because the physical properties just don't vary all that much. The authors make a try at it, but admit that the error bars overlap, which means that I'm not even going to bother.
What if you take the set of compounds that were explicitly marked down as failing due to tox, and compare those to the others? No differences in molecular weight, no differences in cLogP, no differences in cLogD, and no differences in polar surface area. I mean no differences, really - it's just solid overlap across the board. The authors are clearly uncomfortable with that conclusion, saying that ". . .these results appear inconsistent with previous publications linking these parameters with promiscuity and with in vivo toxicological outcomes. . .", but I wonder if that's because those previous publications were wrong. (And I note that one such previous publication has already come to conclusions like these). Looking at compounds that failed in Phase I due to explicit PK reasons showed no differences at all in these parameters. Comparing compounds that made it only to Phase I (and failed for any reason) versus the ones that made it to Phase II or beyond showed, just barely, a significant effect for cLogP, but no significant effect for cLogD, molecular weight, or PSA. And even that needs to be interpreted with caution:
. . .it is not sufficiently discriminatory to suggest that further control of lipophilicity would have a significant impact on success. Examination of how the probabilities of observing clinical safety failures change with calculated logP and calculated logD7.4 by logistic regression showed that there is no useful difference over the relevant ranges. . .
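For what it's worth, that kind of logistic-regression check is easy to run on data of your own. A minimal sketch with simulated compounds (not the consortium's proprietary set), using statsmodels - when there's no real relationship, the fitted cLogP coefficient comes out indistinguishable from zero:

# Logistic regression of clinical-safety failure against cLogP.
# All data here are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
clogp = rng.normal(3.0, 1.2, size=500)    # drug-like cLogP range
failed = rng.binomial(1, 0.3, size=500)   # failures independent of cLogP

X = sm.add_constant(clogp)                # intercept + cLogP
fit = sm.Logit(failed, X).fit(disp=False)
print(fit.summary())  # the cLogP coefficient should come out near zero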
So, folks, if your compounds mostly fit within the envelope to start with (as these 812 did), you're not doing yourself any good by tweaking physicochemical parameters any more. To me, it looks like the gains from that approach were realized early on, by trimming the fringe compounds in each category, and there's not much left to be done. Those PowerPoint slides you have for the ongoing project, showing that you've moved a bit closer to the accepted middle ground of parameter space, and are therefore making progress? Waste of time. I mean that literally - a waste of time and effort, because the evidence is now in that things just don't work that way. I'll let the authors sum that up in their own words:
It was hoped that this substantially larger and more diverse data set (compared with previous studies of this type) could be used to identify meaningful correlations between physicochemical properties and compound attrition, particularly toxicity-based attrition. . .However, beyond reinforcing the already established general trends concerning factors such as lipophilicity (and that none too strongly - DBL), this did not prove generally to be the case.
Nope, as the data set gets larger and better curated, these conclusions start to disappear. That, to be sure, is (as mentioned above) partly because the more recent data sets tend to be made up of compounds that are already mostly within accepted ranges for these things, but we didn't need umpteen years of upheaval to tell us that compounds that weigh 910 with logP values of 8 are less likely to be successful. Did we? Too many organizations made the understandable human mistake of thinking that changing drug candidate properties was some sort of sliding scale, that the more you moved toward the good parts, the better things got. Not so.
What comes out of this paper, then, is a realization that watching cLogP and PSA values can only take you so far, and that we've already squeezed everything out of such simple approaches that can be squeezed. Toxicology and pharmacokinetics are complex fields, and aren't going to roll over so easily. It's time for something new.
Comments (43)
+ TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History | Pharmacokinetics | Toxicology
June 23, 2015
Posted by Derek
Here's a disturbing read for you: the author of this paper (Morten Oksvold, of Oslo University) sat down and did what none of us ever do. He chose three different journals in the oncology field, picked one hundred and twenty papers, at random, from their recent issues, and carefully looked every one of them over for duplications in the figures and data. On PubPeer, you can see what he found. Nearly a quarter of the papers had problems.
In case you're wondering, this proportion didn't vary significantly between the three journals, which were chosen at three different levels of prominence (as measured by impact factor). Time, chance, and figure duplication happeneth to them all. I should note that the duplication comes in several different flavors. The least concerning is the appearance of the same control experiments in more than one figure in a paper. One might wish for the controls to be run more than once - in fact, I'd most definitely wish for that - but the authors are not necessarily implying that these are separate experiments. (They're not dispelling any impression that they are separate, either). When the same control experiments (same gels) appear in more than one paper, that seems to be a further step down. The objects of the two papers are (presumably!) different, and there's even more reason to assume that the authors have, in fact, run this again and aren't just reusing the same control that looked so good that time. That's the problem - when you do this sort of thing, it makes a person wonder if there was only that one time.
There are plenty of less ambiguous cases, unfortunately. About half the cases are supposed to be from different experiments entirely. In both gel and microscope images, you can find examples of the same image representing what should be different things, and excuses run out at this point. It goes on. Oksvold then contacted the authors of all twenty-nine problematic papers to ask them about what he'd found. And simultaneously, he wrote the editorial staffs of all three journals, with the same information. What came of all this work? Well, "only 1 out of 29 cases were apparently clarified by the authors, although no supporting data was supplied", and he got no reply at all from any of the journal editors. Nice going, International Journal of Oncology, Oncogene, and Cancer Cell.
My take on all this is that this is a valuable study, with some limitations that haven't been appreciated by everyone commenting on it. Earlier this year, when the material started appearing on PubPeer, there were statements flying around that "25% of all recent cancer papers are non-reproducible!" This work doesn't show that. What it shows is that 25% of recent cancer papers appear to have duplicated figures in them (not that that's a good thing). But, as mentioned, at least half the examples are duplicated controls - correctly labeled, but reused. Even the nastier cases don't necessarily make the paper unreproducible. You'd have to dig into them and see how many of them affected the main conclusions. I'd guess that the majority of them do, but they don't have to - people can also cut corners in the scaffolding, just to get everything together and get the paper out the door. I am not defending that practice, but I don't want this study to be misinterpreted. It's worrisome enough as it is, without any enhancement.
I think what can be said, then, is that "25% of recent cancer papers have duplicated figures in them, which matter in some cases much more than others, since they appear for reasons ranging from expedience to apparent fakery". Not as catchy, admittedly, but still worth paying attention to. (More from Neuroskeptic here).
Comments (44)
+ TrackBacks (0) | Category: The Scientific Literature
June 22, 2015
Posted by Derek
Here's an odd thing, noted by a reader of this site. Organic Letters has a retraction of a paper in the Baldwin group at Oxford, "Biomimetic Synthesis of Himbacine".
This Letter has been retracted, as it was found that (a) spectra of the linear precursor, compound 14, differed when its synthesis was repeated and (b) spectra published for several compounds resulting from compound 14 (compounds 3, 4, and 20) were scanned from other papers.
Those other papers are the ones from the Chackalamannil et al. synthesis of himbacine, and lifting spectra from them took someone a fair amount of nerve. I will assume that Jack Baldwin did not scan in the spectra and claim them for his own. The other authors on the paper are Kirill Tcabanenko, Robert Adlington, and Andrew R. Cowley, for whom I can find no recent information. There's a story here, for sure, but I don't know its details. . .
Comments (34)
+ TrackBacks (0) | Category: The Dark Side | The Scientific Literature
Posted by Derek
I'd like to open up the floor for nominations for the Blackest Art in All of Chemistry. And my candidate is a strong, strong contender: crystallization. When you go into a protein crystallography lab and see stack after stack after stack of plastic trays, each containing scores of different little wells, each with a slight variation on the conditions, you realize that you're looking at something that we just don't understand very well.
Admittedly, protein crystallography is the most relentlessly voodoo-infested territory in the field, but even small-molecule crystals can send chills down your spine (as with the advent of an unwanted polymorph). For more on those, see here, here, and here, and this article. Once you start having to explore different crystallization conditions, it's off into the jungle - solvent (and a near-infinite choice of mixtures of solvents), temperature, heating and cooling rates along the way, concentration, stirring rate, size and material of the vessel - all of these can absolutely have an effect on your crystal formation, and plenty of more subtle things can kick in as well (traces of water or other impurities, for example).
To give you an idea of how strange things can get, even with a relatively simple molecule: fructose was apparently known for decades as the "uncrystallizable sugar". Eventually, someone sat down and brute-forced their way through the problem, making concentrated solutions and seeding them with all sorts of crystals of related compounds (another black art, and how). As I recall, the one nucleated with a crystal of pentaerythritol crystallized, giving the world the first crystalline fructose ever seen. Other conditions have been worked out since then (in crystallization there are always other possible conditions). But that's an example of the craziness. Does anyone have a weirder field or technique to beat it?
Comments (54)
+ TrackBacks (0) | Category: Life in the Drug Labs
June 19, 2015
Posted by Derek
How much should drugs cost? That question can be answered in a lot of different ways, and at many levels of economic literacy. But the Wall Street Journal is reporting on a new comparison tool from a group at Sloan-Kettering, the "Drug Abacus".
As you will have already guessed, under many plausible assumptions (such as a year of life being worth $120,000, with 15% taken off for side effects), the model reports that many cancer drugs are overpriced. The two worst are Blincyto (blinatumomab) and Provenge. On the other hand, the nitrogen mustard derivative Treanda (bendamustine) comes out as worth nearly three times as much as Teva is charging for it. Your own mileage will vary, of course: some people will regard a year of life as worth substantially more than $120K. Note that as you get closer to $200,000 for a year of life, the majority of the drugs in the calculator become relative bargains. Now, most people will find the whole process of arriving at any such figure to be distasteful and disturbing (an understandable emotional response that underlies a lot of wrangling about the drug industry, I think). As I mentioned in that post, the English language has an entirely different word for "customer of a physician" than it does for a customer of anyone else providing any other service.
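The core arithmetic behind a number like that is simple enough to sketch. This is my own stripped-down reading of the idea, with hypothetical numbers - the actual abacus has several more dials:

# Back-of-the-envelope "abacus" value, under the assumptions quoted
# above. A simplified illustration, not the actual Drug Abacus tool.
def drug_value(years_gained, value_per_year=120_000, side_effect_discount=0.15):
    """Implied 'fair price' of a drug that adds years_gained of life."""
    return years_gained * value_per_year * (1 - side_effect_discount)

# A hypothetical drug that adds six months of life:
print(f"implied fair price: ${drug_value(0.5):,.0f}")  # $51,000
# Compare that figure to the list price to call the drug a bargain or not.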
The Drug Abacus is an attempt to bring the various factors of oncology drug prices into view: how much extra lifetime a given drug can provide, what the side effects (and thus quality of life) might be, and larger factors like how novel the medication is and what the overall population burden of that indication is. That mixture, actually, is one of the problems I have with this idea (which in principle I think is worth doing). An individual patient is not going to be deciding the price they're willing to pay for a new drug based on its overall effect on the population, or how it fits into the therapeutic landscape. They want to know if it will help them, personally, and by how much.
Another one of the sliding factors in the model, development cost, I don't think should even be in there at all. Drugs should be evaluated by what they do, not how easy or hard they were to find and develop. That factor can end up being misused in many ways: people can complain that Drug X had a faster path through the clinic, so it should cost less, and companies might turn around and argue that Drug Y had to go back for another Phase III, so therefore it should cost more. Neither of these make sense. Each company has to look at its overall R&D spending and its success rate and adjust its pricing based on the whole picture.
As I did in that post I linked to a couple of paragraphs ago, I'm going to use the always-annoying car analogy. What would a "car abacus" site look like? The various sliders would represent things that people take into consideration when they buy a car: its utility, its stylishness, its repair and upkeep costs, its resale value and expected lifetime. I don't think that there would be a "development costs" slider, would there? Or one for the car's overall use to society? But no matter what, some cars would certainly come out looking overpriced, unless you attach a large value to something like "prestige" or "stylishness", and even then, I'm not sure that the numbers would come out right. Manufacturers, though, charge what people are willing to pay, and if some people are willing to pay what seems like an irrational cost for a car, then so much the better.
The same principle operates at a coffee shop. You'll be charged extra for some foamed milk in your coffee, or a shot of hazelnut syrup or some "pumpkin spice", if it's November. That extra price, it has to be noted, is way out of proportion to the cost of the ingredients or the extra labor involved. The coffee shop is differentiating its customers, looking for the ones who have no objection to paying more for something different, and making sure to offer them something profitable when they come along. (And a chi-chi coffee shop, to extend the analogy, starts differentiating its customers even before they walk in the door, by the way it's decorated and the signals it sends).
But purchasing better health is, in the end, not quite like purchasing a car or even a FrappaMochaLatteChino. Thinking of it in those terms, I believe, can illuminate some aspects that otherwise get obscured, but in the end it's a much more basic and personal decision. And as for personal decisions, Milton Friedman divided expenditures into four categories: when we spend our money on ourselves, when we spend our own money on other people, when we spend other people's money on ourselves, and when we spend other people's money on other people. Since he was famously libertarian, it will come as no surprise that he regarded the first category as the one where people were most likely to make better decisions (and it should also come as no surprise that government spending lands, as it would have to, in the bottom category). There are objections to this classification - you could say that the first category is also the one in which we might allow our emotions to override rational decision making, whereas the last one has the greatest scope for calm cost/benefit analyses.
Those objections are at the heart of the debate about drug costs, because there is nothing that we are more likely to become irrational over than our own health or that of someone close to us. An extremely uncomfortable thought experiment is to imagine a close family member becoming gravely ill, and then figuring out what you would be willing to pay to make them better. (You can extend your discomfort by imagining whether or not you'd be willing to pay that same cost, out of pocket, to extend a similar benefit to someone on the other side of the world whom you've never met and never will. That's the line of reasoning taken by Adam Smith in The Theory of Moral Sentiments, subject of a recent popular exposition.) OK, back to your close family member. Got a figure in mind for a cure? How about an extra year of life, then? After all, that's the unit of the Drug Abacus calculations. What about an extra year, but they can't get out of bed? An extra month? How about everything you have, house and all, flat broke for another ten minutes? Ten seconds? At some point, Homo economicus gets up off the floor, having been clubbed over the head at the beginning of this exercise, and says "Hmm. Maybe not."
But it takes a while for that to happen, understandably. And the whole thing, as mentioned, is wildly unpleasant even as a thought experiment, so going through it in real life is an experience you wouldn't wish on anyone. The debate on drug pricing, though, grabs us by the backs of our heads and forces our noses down into the subject.
Comments (48)
+ TrackBacks (0) | Category: Drug Prices
June 18, 2015
Posted by Derek
I've mentioned numerous times around here that therapies directed against aging in general have a rough regulatory outlook. The FDA, in general, has not considered aging a disease by itself, but rather the baseline against which disease (increasingly) appears. This has meant that companies with ideas for anti-aging therapies have had to work them into other frameworks - diabetes, osteoporosis, what have you - in order to get clinical data that the agency will be able to work with.
Now, according to Nature News, the group that's testing metformin for a variety of effects in elderly patients is going to meet with the FDA to address just this issue:
Barzilai and other researchers plan to test that notion in a clinical trial called Targeting Aging with Metformin, or TAME. They will give the drug metformin to thousands of people who already have one or two of three conditions — cancer, heart disease or cognitive impairment — or are at risk of them. People with type 2 diabetes cannot be enrolled because metformin is already used to treat that disease. The participants will then be monitored to see whether the medication forestalls the illnesses they do not already have, as well as diabetes and death.
On 24 June, researchers will try to convince FDA officials that if the trial succeeds, they will have proved that a drug can delay ageing. That would set a precedent that ageing is a disorder that can be treated with medicines, and perhaps spur progress and funding for ageing research.
During a meeting on 27 May at the US National Institute on Aging (NIA) in Bethesda, Maryland, Robert Temple, deputy director for clinical science at the FDA’s Center for Drug Evaluation and Research, indicated that the agency is open to the idea.
Metformin and rapamycin are two of the compounds that would fit this way of thinking, and there will surely be more. Let's face it - any other syndrome that caused the sorts of effects that age does on our bodies would be considered a plague. To quote Martin Amis's lead character in Money, who's thinking about an actress he's casting in a movie who "time had been kind to", he goes on to note that "Over the passing years, time had been cruel to nearly everybody else. Time had been wanton, virulent and spiteful. Time had put the boot in." It sure does.
But we're used to it, and it happens to everyone, and it happens slowly. Does it have to be that way? The history of medicine is a refusal to play the cards that we've been dealt, and there's no reason to stop now.
Comments (43)
+ TrackBacks (0) | Category: Aging and Lifespan | Regulatory Affairs
Posted by Derek
A huge amount of medicinal chemistry - and a huge amount of medicine - depends on small molecules binding to protein targets. Despite decades of study, though, with all the technology we can bring to bear on the topic, we still don't have as clear a picture of the process as we'd like. Protein structure is well-known as an insanely tricky subject, and the interactions a protein can make with a small molecule are many, various, and subtle.
This gets re-emphasized in this new paper from the Shoichet group at UCSF. They're using a well-studied model protein pocket (the L99A mutant of T4 lysozyme, itself an extremely well-studied protein). That cavity is lined with hydrophobic residues, and (being a mutant at a site without function) it's not evolutionarily adapted for any small-molecule ligands. It's just a plain, generic roundish space inside a protein, and a number of nonpolar molecules have had X-ray structures determined inside it.
What this paper does is determine crystal structures (to about 1.6 Å or better) for a series of closely related compounds: benzene, toluene, ethylbenzene, n-propylbenzene, sec-butylbenzene, n-butylbenzene, n-pentylbenzene, and n-hexylbenzene. That's about as nondescript a collection of aryl hydrocarbons as you could ask for, differing from each other only by the number and placement of methylene groups. (Three of these had already been determined in earlier studies). How does the protein cavity handle such similar compounds?
By doing different things. They found that one nearby part of the protein, the F helix, adopts two different conformations in the same crystal, in different proportions varying with the ligand. (The earlier structures from the 1990s show this, too, although it wasn't realized at the time). The empty cavity and the benzene-bound one have one "closed" conformation, but even just moving up to toluene gives you about 20% of the intermediate one, with a shifted F-helix. By the time you get to n-butylbenzene, that conformation is now about 60% occupied in the crystal structure, with 10% of the "closed", and now 30% of a third, "open" state. The pentyl- and hexyl-benzene structures are mostly in the open state. Digging through the PDB for other lysozyme cavity structures turned up examples of all three forms.
These adjustments come via rearrangement of hydrogen bonds between the protein residues, and it apparently has a number of tiny ratchet-like slips it can make to accommodate the ligands. And there's the tricky part: these changes are all balances of free energies - the energy it takes for the protein to shift, and the energy differences between the various forms once the shift(s) have taken place, which include the interactions with the various ligands. The tradeoffs and payoffs of these sorts of movements are the nuts-and-bolts, assembly-language level of ligand binding.
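If you want to put numbers on those balances, relative populations translate directly into free energy differences via dG = -RT ln(p_state / p_reference). A quick sketch using the approximate n-butylbenzene occupancies quoted above (treating crystallographic occupancy as a population, which is itself an assumption):

# Relative free energies of the F-helix conformations from their
# approximate crystal occupancies for n-butylbenzene.
import math

RT = 0.593  # kcal/mol at roughly 298 K

occupancy = {"closed": 0.10, "intermediate": 0.60, "open": 0.30}
ref = occupancy["intermediate"]

for state, p in occupancy.items():
    dG = -RT * math.log(p / ref)
    print(f"{state:<13}{dG:+.2f} kcal/mol vs. intermediate")

The whole ensemble is spread over only about a kcal/mol - exactly the sort of margin that makes these things so hard to predict.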
And it has to be emphasized that this is a very simple case indeed. No polar interactions at all, no hydrogen bonds, no water molecules bridging or being displaced from the protein surface, no halogen bonds, no pi-stacking or edge-to-pi stuff. There are also, it has to be noted, other ways for proteins to deal with such small changes. The authors here, in fact, looked through the literature and the PDB for just such series to compare to, and found (for example) that the enzyme enoyl-ACP reductase (FabI) doesn't take on such discrete states - instead, a key residue just sort of smoothly slides into a range of positions. That said, they also found examples where the behavior is more like the mode-switching seen here.
If that's common, then calculating ligand binding gets more complicated, which is not what the field needed. These are about the smallest and least substantial ligand changes you can come up with, and here's a protein shifting around quite noticeably between an ensemble of low-energy states to deal with them. The problem is, there are a huge number of such states available to most binding sites, and distinguishing them from each other, or from the original binding mode, from first principles is (in many cases) going to be beyond our capabilities for now.
Here's Wavefunction's take on these results - he says that "The conclusions of the paper are a bit discomforting. . .", and if I were a molecular modeler, I'd say the same thing!
Comments (17)
+ TrackBacks (0) | Category: Analytical Chemistry
June 17, 2015
Posted by Derek
Why do we test new drug candidates on animals? The simple answer is that there's nothing else like an animal. There are clearly chemical and biological features of living systems that we don't yet understand, or even realize exist - the discovery of things like siRNAs is enough proof of that. So you're not going to be able to build anything from first principles; there isn't enough information. Your only hope is to put together something that matches the real thing as closely as possible, using original cells and tissues as much as possible.
The easiest way to do that, by far, is to just give your compounds to a real animal and see what happens. But you have to think carefully. Mice aren't humans, and neither are dogs (and nor are dogs mice, for that matter). Every species is different, sometimes in ways that make little difference, and sometimes in ways that can mean life or death. Animal testing is the only way to access the complexity of a living system, and the advantages of that outweigh the difficulties of figuring out all the differences when moving on to humans. But those difficulties are very real nonetheless. (One way around this would be to make animals with as many humanized tissues and systems as possible, although that's not going to make anyone any happier about testing drugs on them.) The other way is to try to recapitulate a living system in vitro.
But the cells in a living organ are different than the cells in a culture dish, both in ways that we understand and in ways that we don't. The architecture and systematic nature of a living organ (a pancreas, a liver) is very complex, and subject to constant regulation and change by still other systems, so taking one type of cell and growing it up in a roller bottle (or whatever) is just not going to recapitulate that. Liver cells, for example, will still do some liver-y things in culture. But not all of the things, and not all of them in the same way. And the longer they're grown in culture, the further they can diverge from their roots.
There has been a huge amount of work over the years trying to improve this situation. Growing cells in a more three-dimensional culture style is one technique, although (since we don't make blood vessels in culture tubes) there's only so far you can take that. Co-cultures, where you try to recreate the various populations of cell types in the original organ, are another. But those are tricky, too, because all the types of cell can change their behaviors in different ways under lab conditions, and their interactions can diverge as well. Every organ in a living creature is a mixture of different sorts of cells, not all of whose functions are understood by a long shot.
Ideally, you'd want to have many different such systems, and give them a chance to communicate with each other. After all, the liver (for example) is getting hit with the contents of the hepatic portal vein, full of what's been absorbed from the small intestine, and is also constantly being bathed with the blood supply from the rest of the body, whose contents are being altered by the needs of the muscles and other organs. And it's getting nerve signals from the brain along with hormonal signals from the gut and elsewhere, with all these things being balanced off against each other all the time. If you're trying to recreate a liver in a dish, you're going to have to recreate these things, or (more likely) realize that you have to fall short in some areas, and figure out what differences those shortfalls make.
The latest issue of The Economist has a look at the progress being made in these areas. The idea is to use the smallest cohorts of cells possible (these being obtained from primary human tissue), with microfluidic channels to mimic blood flow. (Here's a review from last year in Nature Biotechnology). It's definitely going to take years before these techniques are ready for the world, so when you see headlines about how University of X has made a real, working "(Organ Y) On a Chip!", you should adjust your expectations accordingly. (For one thing, no one's trying to build, say, an actual working liver just yet. These studies are all aimed at useful models, not working organs). There's a lot that has to be figured out. The materials from which you make these things, the sizes and shapes of the channels and cavities, the substitute for blood (and its flow), what nutrients, hormones, growth factors, etc. you have in the mix (and how much, and when) - there are a thousand variables to be tinkered with, and (unfortunately) hardly any of them will be independent ones.
But real progress has been made, and I have no doubt that it'll continue to be made. There's no reason, a priori, why the task should be impossible; it's just really hard. Worth the effort, though - what many people outside the field don't realize is how expensive and tricky running a meaningful animal study really is. Running a meaningful human study is, naturally, far more costly, but since the animal studies are the gatekeepers to those, you want them to be as information-rich, as reproducible, and as predictive as possible. Advanced in vitro techniques could help in all those areas, and (eventually) be less expensive besides.
Comments (17)
+ TrackBacks (0) | Category: Animal Testing | Drug Assays | Toxicology