Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
Here's an interesting list (PDF) of patent activity for life-science firms in the New England area. What I can't quite work out is how the numbers were generated. For example, it's very strange that Pfizer shows up with 3 patents from last year. A quick look through the databases shows more issued patents than that, although many of the ones I'm seeing are probably continuations-in-part of older patents.
But at any rate, if there's any consistent method of evaluating patents that shows Pfizer with fewer patents over the last few years than the likes of Neurogen and Nitromed - and that's what this one shows - then something's quite odd. I've emailed the people at MassHighTech.com to ask what's up.
Update: an email from the magazine says that the patent count is supposed to represent just the ones from the New England area. That does clear out a lot of Pfizer ones which originated from Sandwich, St. Louis, and other exotic ports. But I still can't get the numbers to come out right. I looked through about a quarter of the 2009 US patents assigned to Pfizer, and found five or six out of Groton/New London just in that group. I've emailed the magazine again about this. . .
Well, now, this is a disappointment. In a new Angewandte Chemie paper, a French team reports synthesizing trinitropyrazole. And it's. . .well, it's well-behaved. Surprisingly insensitive. Not that touchy. Might actually be useful as a storable high-energy material that could actually be handled.
The fools! Don't they realize that Angewandte is the place to unload the barely-in-our-plane-of-existence compounds, the sweat-starting, nostril-flaring "How could it blow up? It's in liquid nitrogen!" stuff? Surely there's a better home for things with actual utility, the Journal of Not So Horrible Once You've Made Them, Really, or "That Wasn't So Bad Now, Was It" Communications. Sheesh.
While we're on the subject of patents, PatentBaristas has a good summing-up of the Ariad decision I mentioned here last week. There is indeed a written description requirement for a patent, and it's separate from enablement, and it had better be good.
I haven't commented so far on the decision yesterday in the Myriad Genetics case involving their breast cancer assay gene patents. This is surely going to be appealed, and we're not going to really know what's up here until the CAFC has a say. And who knows? This is the sort of case that might go even further than that.
That's what the folks at Patently Obvious think, at any rate. They note that this decision is rather far out of the usual range of case law on patentability, and will likely be reversed on appeal. And then?
A new paper in PLoS Biology looks at animal model studies reported for the treatment of stroke. The authors use statistical techniques to try to estimate how many have gone unreported. From a database with 525 sources, covering 16 different attempted therapies (which together come to 1,359 experiments and 19,956 animals), they find that only a very small fraction of the publications (about 2%) report no significant effects, which strongly suggests that there is a publication bias at work here. The authors estimate that there may well be around 200 experiments that showed no significant effect and were never reported, whose absence would account for around one-third of the efficacy reported across the field. In case you're wondering, the therapy least affected by publication bias was melatonin, and the one most affected seems to be administering estrogens.
I hadn't seen this sort of study before, and the methods they used to arrive at these results are interesting. If you plot the precision of the studies (Y axis) versus the effect size (X axis), you should (in theory) get a triangular cloud of data. As the precision goes down, the spread of measurements across the X-axis increases, and as the precision goes up, the studies should start to converge on the real effect of the treatment, whatever that might be. (In this study, the authors looked only at reported changes in infarct size as a measure of stroke efficacy). But in many of the reported cases, the inverted-funnel shape isn't symmetrical - and every single time that happens, it turns out that the gaps are in the left-hand side of the triangle, the not-as-precise and negative-effect regions of the plots. This doesn't appear to be just due to less-precise studies tending to show positive effects for some reason - it strongly suggests that there are negative studies that just haven't been reported.
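The funnel-plot logic can be sketched with made-up numbers. This is not the paper's data or code, just a toy simulation: studies scatter around a true effect with a spread set by their standard error, and then the imprecise, negative ones get "left in the file drawer". The published mean comes out inflated, which is exactly the asymmetry the authors are detecting.

```python
import random
import statistics

random.seed(0)
true_effect = 0.3  # the assumed real treatment effect

# Simulate 200 studies with varying precision (precision = 1/SE).
studies = []
for _ in range(200):
    se = random.uniform(0.05, 0.5)           # standard error of the study
    effect = random.gauss(true_effect, se)   # observed effect size
    studies.append((effect, 1.0 / se))

# Publication bias: imprecise studies with near-null results go unreported.
published = [(e, p) for (e, p) in studies if not (e < 0.1 and p < 5)]

mean_all = statistics.mean(e for e, _ in studies)
mean_pub = statistics.mean(e for e, _ in published)
print(len(studies) - len(published), "studies censored")
print("true-sample mean:", round(mean_all, 3), " published mean:", round(mean_pub, 3))
```

Plotting `published` as precision versus effect would give the lopsided triangle the authors describe: the missing points are all in the low-precision, low-effect corner.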
The authors point out that applying their statistical techniques to reported human clinical studies is more problematic, since smaller (and thus less precise) trials may well involve unrepresentative groups of patients. But animal studies are much less prone to this problem.
The loss of experiments that showed no effect shouldn't surprise anyone - after all, it's long been known that publishing such papers is just plain harder than publishing ones that show something happening. There's an obvious industry bias toward only showing positive data, but there's an academic one, too, which affects basic research results. As the authors put it:
These quantitative data raise substantial concerns that publication bias may have a wider impact in attempts to synthesise and summarise data from animal studies and more broadly. It seems highly unlikely that the animal stroke literature is uniquely susceptible to the factors that drive publication bias. First, there is likely to be more enthusiasm amongst scientists, journal editors, and the funders of research for positive than for neutral studies. Second, the vast majority of animal studies do not report sample size calculations and are substantially underpowered. Neutral studies therefore seldom have the statistical power confidently to exclude an effect that would be considered of biological significance, so they are less likely to be published than are similarly underpowered “positive” studies. However, in this context, the positive predictive value of apparently significant results is likely to be substantially lower than the 95% suggested by conventional statistical testing. A further consideration relating to the internal validity of studies is that of study quality. It is now clear that certain aspects of experimental design (particularly randomisation, allocation concealment, and the blinded assessment of outcome) can have a substantial impact on the reported outcome of experiments. While the importance of these issues has been recognised for some years, they are rarely reported in contemporary reports of animal experiments.
And there's an animal-testing component to these results, too, of course. But lest activists seize on the part of this paper that suggests that some animal testing results are being wasted, they should consider the consequences (emphasis below mine):
The ethical principles that guide animal studies hold that the number of animals used should be the minimum required to demonstrate the outcome of interest with sufficient precision. For some experiments, this number may be larger than those currently employed. For all experiments involving animals, nonpublication of data means those animals cannot contribute to accumulating knowledge and that research syntheses are likely to overstate biological effects, which may in turn lead to further unnecessary animal experiments testing poorly founded hypotheses.
This paper is absolutely right about the obligation to have animal studies mean something to the rest of the scientific community, and it's clear that this can't happen if the results are just sitting on someone's hard drive. But it's also quite possible that for even some of the reported studies to have meant anything, they would have had to use more animals in the first place. Nothing's for free.
Another promising Phase II oncology idea goes into the trench in Phase III: GenVec has been working on a gene-therapy approach ("TNFerade") to induce TNF-alpha expression in tumors. That's not a crazy idea, by any means, although (as with all attempts at gene therapy) getting it to work is extremely tricky.
And so it has proved in this case. It's been a long, hard process finding that out, too. Over the years, the company has looked at TNFerade for metastatic melanoma, soft tissue sarcoma, and other cancers. They announced positive data back in 2001, and had some more encouraging news on pancreatic cancer in 2006 (here's the ASCO abstract on that one). But last night, the company announced that an interim review of the Phase III trial data showed that the therapy was not going to make any endpoint, and the trial was discontinued. Reports are that TNFerade is being abandoned entirely.
This is bad news, of course. I'd very much like gene therapy to turn into a workable mode of treatment, and I'd very much like for people with advanced pancreatic cancer to have something to turn to. (It's truly one of the worst diagnoses in oncology, with a five-year survival rate of around 5%). A lot of new therapeutic ideas have come up short against this disease, and as of yesterday, we can add another one to the list. And we can add another Promising in Phase II / Nothing in Phase III drug to the list, too, the second one this week. . .
We get reminded again and again that interesting Phase II results are only that: interesting, and no guarantee of anything. Antisoma (and their partner Novartis) are the latest companies to illustrate that painful reality - their drug ASA404 (vadimezan) looked in Phase II as if it might be a useful addition to oncology treatments, but has completely missed its endpoints in the bigger, more realistic world of Phase III. The trial was halted after an interim analysis showed basically no hope of it showing benefit if things continued.
There are many reasons why these things happen. Phase II trials are typically smaller, and their patient populations are more carefully selected. And they're quite susceptible to wishful thinking. They're designed to keep things going, to show some reason to proceed, and they often do. If your drug candidate makes it through Phase II, that may say more about how you designed the trial than it says about the compound.
That's not to say that getting past Phase II is meaningless. Compared to having no efficacy data at all, it's a big step. But Phase III, when a compound goes out to a larger and more diverse patient population, is a much bigger one. And plenty of candidates aren't up to it.
For the medicinal chemists in the audience, I wanted to strongly recommend a new paper from a group at Roche. It's a tour through the various sorts of interactions between proteins and ligands, with copious examples, and it's a very sensible look at the subject. It covers a number of topics that have been discussed here (and throughout the literature in recent years), and looks to be an excellent one-stop reference.
In fact, read the right way, it's a testament to how tricky medicinal chemistry is. Some of the topics are hydrogen bonds (and why they can be excellent keys to binding or, alternatively, of no use whatsoever), water molecules bound to proteins (and why disturbing them can account for large amounts of binding energy, or, alternatively, kill your compound's chances of ever binding at all), halogen bonds (which really do exist, although not everyone realizes that), interactions with aryl rings (some of which can be just as beneficial coming in at 90 degrees to where you might imagine), and so on.
And this is just to get compounds to bind to their targets, which is the absolute first step on the road to a drug. Then you can start worrying about how to have your compounds not bind to things you don't want (many of which you probably don't even realize are out there). And about how to get them to decent blood levels, for a decent amount of time, and into the right compartments of the body. And at that point, it's nearly time to see if they do any good for the disease you're trying to target!
As we slowly attack the major causes of disease, and necessarily pick the low-hanging fruit in doing so, it can get harder and harder to see the effects of the latest advances. Nowhere, I'd say, is that more true than for cardiovascular disease, which is now arguably the most well-served therapeutic area of them all. It's not that there aren't things to do (or do better) - it's that showing the benefit of them is no easy task.
Robert Fortner has a good overview of the problem here. The size of the trials needed in this area is daunting, but they have to be that size to show the incremental improvements that we're down to now. He also talks about oncology, but that one's a bit of a different situation, to my mind. There's plenty of room to show a dramatic effect in a lot of oncology trials, it's just that we don't know how to cause one. In cardiovascular, on the other hand, the space in which to show something amazing has flat-out decreased. This is a feature, by the way, not a bug. . .
Technical book author (and occasional commenter here) Robert Bruce Thompson has a channel on YouTube called "The Home Scientist" that's quite interesting. Many of these seem to be companion videos for his book, The Illustrated Guide to Home Chemistry Experiments. This is real, well-done chemistry with reagents that can be easily purchased and manipulated by a competent non-chemist. Well worth sending on to people who would like to get a feel for what the science is like!
The discussion of "privileged scaffolds" in drugs here the other day got me to thinking. A colleague of mine mentioned that there may well be structures that don't hit nearly as often as you'd think. The example that came to his mind was homopiperazine, and he might have a point; I've never had much luck with those myself. That's not much of a data set, though, so I wanted to throw the question out for discussion.
We'll have to be careful to account for Commercial Availability Bias (which at least for homopiperazines has decreased over the years) and Synthetic Tractability Bias. Some structures don't show up much because they just don't get made much. And we'll also have to be sure that we're talking about the same things: benzo-fused homopiperazines (and other fused seven-membered rings) hit like crazy, as opposed to the monocyclic ones, which seem to be lower down the scale, somehow.
It's not implausible that there should be underprivileged scaffolds. The variety of binding sites is large, but not infinite, and I'm sure that it follows a power-law distribution like so many other things. The usual tricks (donor-acceptor pairs spaced just so far apart, pi-stacking sandwiches, salt bridges) surely account for much more than their random share of the total amount of binding stabilization out there in the biosphere. And some structures are going to match up with those motifs better than others.
So, any nominations? Have any of you had structural types that seem as if they should be good, but always underperform?
. . .now's a heck of a time to buy. I just noticed this in my e-mail - thanks to Pfizer, and their appetite for closing down research buildings, you now have the opportunity to buy massive piles of once-useful instruments at auction.
Don't let the fact that a massive drug company has no need for all this equipment put you off. You might be able to find a use for it! It's good stuff: high-field NMRs, LC/mass spec machines of all sorts, liquid handlers, robotics platforms, cell culture apparatus, spectrophotometers, microscopes, centrifuges. . .the list is a long one. Removal of evil spirits is, as far as I can tell, not included. But otherwise, there's everything you'd need to start a productive research company, except the employees. And there are plenty of those on the market, too, you know.
Nature has a review of a new book on the anti-aging field, Eternity Soup by Greg Critser, and I found this part very instructive. The same things apply to several other therapeutic areas where people see fast money to be made:
Critser's methodical portrayal of a host of anti-ageing practitioners reveals some fascinating people who seek to convince others that they can purchase longer and healthier lives like any other commodity. He makes clear that many anti-ageing treatments are based more on faith healing than on science, and that the industry defends them and presents them to the public with evangelical zeal. Scientific gerontologists who point out the lack of empirical evidence behind the claims are shouted down, sued for libel or made fun of as lab technicians or statisticians with no experience in treating patients.
Critser became aware during his research of why the ridiculed scientific gerontologists find the anti-ageing industry so aggravating. The industry closely monitors the field for any advances, and when it spots something that might be turned into a commercial enterprise, the product is repackaged, branded and sold to the public as the next great breakthrough of its own invention. . .
It's interesting, though, that the cancer-cure quacks tend not to ride so much on the current research. A lot of that stuff seems just to be completely made up, without even a connection to something in the scientific literature. Perhaps that's because there are occasional spontaneous remissions from cancer, but none from old age. . .
In recent years, readers of the top-tier journals have been bombarded with papers on nanotechnology as a possible means of drug delivery. At the same time, there's been a tremendous amount of time and money put into RNA-derived therapies, trying to realize the promise of RNA interference for human therapies. Now we have what I believe is the first human data combining both approaches.
Nature has a paper from Caltech, UCLA, and several other groups with the first data on a human trial of siRNA delivered through targeted nanoparticles. This is only the second time siRNA has been tried systemically on humans at all. Most of the previous clinical work has involved direct injection of various RNA therapies into the eye (which is a much less hostile environment than the bloodstream), but in 2007, a single Gleevec-resistant leukaemia patient was dosed in a nontargeted fashion.
In this study, metastatic melanoma patients, a population that is understandably often willing to put themselves out at the edge of clinical research, were injected with engineered nanoparticles from Calando Pharmaceuticals, containing siRNA against the ribonucleotide reductase M2 (RRM2) target, which is known to be involved in malignancy. The outside of the particles contained a protein ligand to target the transferrin receptor, an active transport system known to be upregulated in tumor cells. And this was to be the passport to deliver the RNA.
A highly engineered system like this addresses several problems at once: how do you keep the RNA you're dosing from being degraded in vivo? (Wrap it up in a polymer - actually, two different ones in spherical layers). How do you deliver it selectively to the tissue of interest? (Coat the outside with something that tumor cells are more likely to recognize). How do you get the RNA into the cells once it's arrived? (Make that recognition protein something that gets actively imported across the cell membrane, dragging everything else along with it). This system had been tried out in models all the way up to monkeys, and in each case the nanoparticles could be seen inside the targeted cells.
And that was the case here. The authors report biopsies from three patients, pre- and post-dosing, that show uptake into the tumor cells (and not into the surrounding tissue) in two of the three cases. What's more, they show that a tissue sample has decreased amounts of both the targeted messenger RNA and the subsequent RRM2 protein. Messenger RNA fragments showed that this reduction really does seem to be taking place through the desired siRNA pathway (there's been a lot of argument over this point in the eye therapy clinical trials).
It should be noted, though, that this was only shown for one of the patients, in which the pre- and post-dosing samples were collected ten days apart. In the other responding patient, the two samples were separated by many months (making comparison difficult), and the patient that showed no evidence of nanoparticle uptake also showed, as you'd figure, no differences in their RRM2. Why Patient A didn't take up the nanoparticles is as yet unknown, and since we only have these three patients' biopsies, we don't know how widespread this problem is. In the end, the really solid evidence is again down to a single human.
But that brings up another big question: is this therapy doing the patients any good? Unfortunately, the trial results themselves are not out yet, so we don't know. That two-out-of-three uptake rate, although a pretty small sample, could well be a concern. The only between-the-lines inference I can get is this: the best data in this paper is from patient C, who was the only one to do two cycles of nanoparticle therapy. Patient A (who did not show uptake) and patient B (who did) had only one cycle of treatment, and there's probably a very good reason why. These people are, of course, very sick indeed, so any improvement will be an advance. But I very much look forward to seeing the numbers.
Here's a new article on the concept of "privileged scaffolds", the longstanding idea that there seem to be more biologically active compounds built around some structures than others. This doesn't look like it tells me anything I didn't know, but it's a useful compendium of such structures if you're looking for one. Overall, though, I'm unsure of how far to push this idea.
On the one hand, it's certainly true that some structural motifs seem to match up with binding sites more than others (often, I'd say, because of some sort of donor-acceptor pair motif that tends to find a home inside protein binding sites). But in other cases, I think that the appearance of what looks like a hot scaffold is just an artifact of everyone ripping off something that worked - others might have served just as well, but people ran with what had been shown to work. And then there are other cases, where I think that the so-called privileged structure should be avoided for everyone's good: our old friend rhodanine makes an appearance in this latest paper, for example. Recall that this one has been referred to as "polluting the literature", with which judgment I agree.
I've spoken about fragment-based drug design and ligand efficiency here a few times. There's a new paper in J. Med. Chem. that puts some numbers on that latter concept. (Full disclosure - I've worked with its author, although I had nothing to do with this particular paper).
For the non-chemists in the crowd who want to know what I'm talking about, fragment-based methods are an attempt to start with smaller, weaker-binding chemical structures than we usually work with. But if you look at how much affinity you're getting for the size of the molecules, you find that some of these seemingly weaker compounds are actually doing a great job for their size. Starting from these and building out, with an eye along the way toward keeping that efficiency up, could be a way of making better final compounds than you'd get by starting from something larger.
Looking over a number of examples where the starting compounds can be compared to the final drugs (not a trivial data set to assemble, by the way), this work finds that drugs, compared to their corresponding leads, tend to have similar to slightly higher binding efficiencies, although there's a lot of variability. They also tend to have similar logP values, which is a finding that doesn't square with some previous analyses (which showed things getting worse during development). But drugs are almost invariably larger than their starting points, so no matter what, one of the keys is not to make the compounds greasier as you add molecular weight. (My "no naphthyls" rule comes from this, actually).
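To make "affinity for the size of the molecule" concrete: the usual ligand efficiency metric is the binding free energy divided by the heavy-atom count. Here's a minimal sketch (my own illustration, with invented example numbers, not figures from the paper) showing how a weak fragment can still beat a potent, much larger lead on a per-atom basis:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)
T = 298.0     # room temperature, K

def ligand_efficiency(kd_molar, heavy_atoms):
    """Binding free energy per heavy atom, in kcal/mol.
    LE = -dG / N_heavy, with dG = RT * ln(Kd)."""
    dg = R * T * math.log(kd_molar)   # dG of binding (negative for binders)
    return -dg / heavy_atoms

# Hypothetical pair: a 100 uM fragment of 13 heavy atoms versus
# a 10 nM lead of 45 heavy atoms.
fragment = ligand_efficiency(100e-6, 13)
big_lead = ligand_efficiency(10e-9, 45)
print(round(fragment, 2), round(big_lead, 2))  # the fragment wins per atom
```

The fragment, four orders of magnitude weaker in raw affinity, comes out around 0.42 kcal/mol per heavy atom versus roughly 0.24 for the bigger compound - which is the whole argument for growing out from efficient starting points.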
There are a few examples of notably poor ligand-efficient starting structures that have nonetheless been developed into drugs. Interestingly, several of these are the HIV protease inhibitors, with Reyataz (atazanavir) coming in as the least ligand-efficient drug in the whole data set. A look at its structure will suffice. The wildest one on the list appears to be no-longer-marketed amprenavir, whose original lead was 53 micromolar and weighed over 600, nasty numbers indeed. I would not recommend emulating that one. In case you're wondering, the most ligand efficient drug in the set is Chantix (varenicline).
In the cases where ligand efficiency actually went down along the optimization route, inspection of the final structures shows that in many cases, the discovery team was trading efficiency for some other property (PK, solubility, etc.) To me, that's another good argument to make things as efficient as you can, because that gives you something to trade. A big, chunky, lashed-together structure doesn't give you much room to maneuver.
You know, you'd think that we'd understand the way things bind to proteins well enough to be able to explain why biotin sticks so very, very tightly to avidins. That's one of the most impressive binding events in all of biology, short of pushing electrons and forming a solid chemical bond - biotin's stuck in there at femtomolar levels. It's so strong and so reliable that this interaction is the basis for untold numbers of laboratory and commercial assays - just hang a biotin off one thing, expose it to something else that has an avidin (most often streptavidin) coated on it, and it'll stick, or else something is Very Wrong. So we have that all figured out.
Wrong. Turns out that there's a substantial literature given to arguing about just why this binding is so tight. One group holds out for hydrophobic interactions (which seems rather weird to me, considering that biotin's rather polar by most standards). Another group has a hydrogen-bonding explanation, which (on the surface) seems more feasible to me. Now a new paper says that the computational methods applied so far can't handle electrostatic factors well, and that those are the real story.
I'm not going to take a strong position on any of these; I'll keep my head down while the computational catapults launch at each other. But it's definitely worth noting that we apparently can't explain the strongest binding site interaction that we know of. It's the sort of thing that we'd all like to be able to generate at will in our med-chem programs, but how can we do that when we don't even know what's causing it?
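Just to put "femtomolar" into thermodynamic terms, here's a quick back-of-the-envelope calculation (my own sketch; the 1 fM figure is a round-number stand-in for the reported streptavidin-biotin affinity) comparing that binding free energy to a respectable nanomolar drug:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)
T = 298.0     # room temperature, K

def binding_dg(kd_molar):
    """Standard binding free energy from a dissociation constant."""
    return R * T * math.log(kd_molar)

biotin = binding_dg(1e-15)  # femtomolar: streptavidin-biotin ballpark
drug   = binding_dg(1e-9)   # a good nanomolar small-molecule drug
print(round(biotin, 1), round(drug, 1))  # kcal/mol
```

That works out to roughly -20 kcal/mol for the biotin interaction versus about -12 for the nanomolar compound - an extra eight kilocalories or so of noncovalent stabilization that nobody can quite account for.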
OK, enough politics around here for a while. It's time to talk about fat rats. When I last wrote about fructose around here, it was to highlight a paper that suggested that it had effects on satiety signaling in the brain. The hypothesis was that fructose could lead to an abnormal drop in ATP levels in the hypothalamus, leading to an inappropriate hunger signal. This is partially borne out by the results of infusing various sugars directly into the brains of rats: if you do that trick with glucose, the rats stop eating - their cells have detected abundant glucose, which is a signal that they've been fed recently. On the other hand, if you use fructose, the rodents actually eat more.
Some of the big questions, though, have been whether fructose does this under normal conditions in rats (that is, without the power-drill route of administration into the brain), and whether that result carries over to humans. There's a new paper from a group at Princeton that's sure to add fuel to the debate. They studied the effects on rats of access to high-fructose corn syrup (8% in water) versus 10% sucrose, with unlimited access to normal rat chow, and looked at whether it made a difference if you allowed access for half the day versus the whole 24 hours.
Over an 8-week period, the groups diverged significantly. The half-day corn syrup rats put on significantly more weight than the half-day sucrose rats did, even though (most interestingly) the corn syrup group turned out to be ingesting fewer calories from the added corn syrup than the sucrose rats were getting from their sugar water. That is, the difference in caloric intake (and thus the excess weight) was all coming from eating more chow.
When the study was extended to six months, it turned out that it didn't matter much if the rats had 12-hour or 24-hour access to the high-fructose corn syrup - by week 3, the weights of both groups had diverged from the controls. (Looking at the graphs, it appears that the 24-hour group may have done somewhat worse, but I don't think they reached statistical significance versus the 12-hour group). But that result is in male rats. The females showed what seems to be a much less dramatic effect. Only the 24-hour-HFCS group showed a significant weight difference from the controls.
Looking at the fat deposits the rats had laid down during this time shows another gender difference, although it doesn't help clear things up any. The males show a tendency for more fat pad mass, although the only measurement that reached significance was the abdominal fat for the 12-hour-a-day group. The females, although they didn't show nearly as wide a difference in weight gain, had much more significant differences in their fat mass (but only for the 24-hour-a-day HFCS group). Finally, in blood chemistry, none of the groups showed differences in insulin levels. But both the male HFCS groups had elevated triglycerides, as did the 24-hour-HFCS females.
Taken together, it appears that rats (especially males) are able to adjust their caloric intake when given access to small amounts of sucrose, but not so much when given equivalent amounts of HFCS. Earlier work has shown that access to higher levels of sucrose or other sugars, though, will indeed cause rats to gain weight. But not everyone, it seems, even sees these effects. A study from last December looked at a variety of sweetened waters, given to rats 12 hours/day for ten weeks, but only three days out of each week. No differences in weight were seen, although it should be noted that in head-to-head tests, the rats preferred HFCS to agave or Stevia sweeteners. (I wish this group had run sucrose in this experiment, too).
So does this effect even apply across the board in rodents? And if it does, is it operating in humans as well? So far, no one has been able to find any short-term differences in satiety or blood chemistry when comparing HFCS with sucrose in humans. That alone (as mentioned in the earlier post here linked in the first paragraph) makes you wonder if that fructose/brain hypothesis can hold up in people. But what about long-term effects, which may or may not have anything to do with that CNS-based mechanism?
As far as I can tell, we have no controlled data for that, which isn't surprising, considering the sort of experiment you'd have to run. Most people aren't in a position to have their food and liquid intake completely monitored for two or three months. But short of that, I'm not sure how we're ever going to straighten all this out.
One of the giants of medicinal chemistry has died today at the age of 85 - Sir James Black, who pioneered beta-adrenoceptor antagonists and many other areas in drug discovery. Keep in mind that earlier in his career, many people thought of the concept of a "receptor" as an abstract placeholder, not necessarily something with any physical meaning. We've come a long way since then, and his work is one of the big reasons why.
He was part of the "pure medicinal chemistry" Nobel Prize award of 1988, along with George Hitchings and Gertrude Elion. There's a good interview with him at that Nobel site, and here's a tribute to him on YouTube.
I mentioned Benford's Law in passing in this post (while speculating on how long people report their reactions to have run when publishing their results). That's the rather odd result that many data sets don't show a uniform distribution of leading digits - rather, 1 is the first digit around 30% of the time, 2 leads off about 18% of the time, and so on down.
For data that come from some underlying exponential growth process, this actually makes some sense. In that case, the data points spend more time being collected in the "lag phase" when they're more likely to start with a 1, and proportionally less and less time out in the higher-number-leading areas. The law only holds up when looking at distributions that cover several orders of magnitude - but all the same, it also seems to apply to data sets where there's no obvious exponential growth driving the numbers.
Deviations from Benford's Law have even been accepted as corroborative evidence of financial fraud. Now a group from Astellas reports that several data sets used in drug discovery (such as databases of water solubility values) obey the expected distribution. What's more, they're suggesting that modelers and QSAR people check their training data sets to see whether those follow Benford's Law as well, as a way of confirming that the data have been randomly selected.
Is anyone willing to try this out on a bunch of raw clinical data to see what happens? Could this be a way to check the integrity of reported data from multiple trial centers? You'd have to pick your study set carefully - a lot of the things we look for don't cover a broad range - but it's worth thinking about. . .
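For the curious, here's a minimal sketch of the kind of check the Astellas group describes. The sample data below are synthetic (a roughly log-uniform series, which should follow Benford closely), not anything from their paper, and the simple chi-square statistic is just one reasonable way to score the fit:

```python
import math
from collections import Counter

def benford_expected(d):
    """Benford's predicted frequency for leading digit d (1-9)."""
    return math.log10(1 + 1 / d)

def leading_digit(x):
    """First significant digit of a nonzero number."""
    return int(f"{abs(x):e}"[0])  # scientific notation: '3.270000e-03' -> 3

def benford_check(values):
    """Compare observed leading-digit fractions with Benford's Law.
    Returns (digit, observed, expected) rows and a chi-square statistic."""
    digits = [leading_digit(v) for v in values if v != 0]
    n = len(digits)
    counts = Counter(digits)
    rows, chi2 = [], 0.0
    for d in range(1, 10):
        obs = counts.get(d, 0) / n
        exp = benford_expected(d)
        rows.append((d, obs, exp))
        chi2 += n * (obs - exp) ** 2 / exp
    return rows, chi2

# Synthetic, roughly log-uniform data spanning three orders of magnitude;
# a set like this should track Benford closely, while a narrow or
# hand-fabricated set generally won't.
sample = [10 ** ((i * 0.037) % 3) for i in range(1000)]
rows, chi2 = benford_check(sample)
```

A large chi-square against the Benford expectation doesn't prove anything by itself - it just flags a data set (or a trial center) as worth a closer look.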
And it's an interesting decision. As mentioned here, the company's last stand was on questions of patentability, specifically the written description requirement. Well, the appeals court has ruled this morning, and Ariad's '516 patent does, finally, appear to be invalidated. There's more at Patently Obvious, who seem to be among the first with this story.
From what I can see, the court's decision makes it clear that there really has to be a description sufficient for one skilled in the art to reproduce an invention, and that stating your hypothesis isn't enough to meet this requirement. So in Ariad's case, claiming all sorts of (not yet existing) things to modulate NF-kB function doesn't fly, because they don't actually tell anyone how to do that, just how they wanted to own it if and when someone does. The written description requirement, the court holds, doesn't mean that you have to actually reduce something to practice (although I'd have to say, from my own perspective, that it most certainly would be a good idea to do so if you can), but you have to show how that could be done. "Patents are not awarded for academic theories, no matter how groundbreaking or necessary to the later patentable inventions of others" is a key quote.
Since I've written occasionally about the current health care reform efforts here, I feel as if I should say something now that a bill has passed the House. To be honest, though, I'm having a bit of trouble getting my thoughts in order, although I do feel the need to vent. Readers who aren't in the mood for my political opinions can skip this one.
Here goes: first off, it's rather hard for me to get past my anger at being told (repeatedly, by both the President and members of Congress) that this bill will "bend the cost curve" and on top of that, actually reduce the deficit. This is, in this case, such a transparent lie that it indicates actual contempt for their audience on the part of those repeating it. We can start with history and general principles: I have yet to hear of a state or federal health care system in this country that has not ended up costing hugely more than it was ever slated to.
I can get more specific in this case, though, since the entire bill was carefully structured to show a spurious deficit reduction (in order for it to be pushed through the budget reconciliation process, without which it could not have passed at all). Costs are pushed out past the Congressional Budget Office's ten-year time horizon, offloaded onto the states (whose Attorneys-General are now frantically trying to figure out what to do), or just blatantly left out. In the last category is the "doc fix", the adjustment to Medicare reimbursement rates that had to be dropped from the current bill in order to hocus the CBO numbers. The firm understanding between the interested parties is that the House will quietly pass that in the near future when not so many people are paying attention, and damn the numbers anyway. As I said above, "contempt" is the word that keeps coming to mind.
To my mind, this bill will indeed manage to provide health insurance to a portion of those now uninsured, but at a ferocious cost. And to that point, I was unhappy with the amount of money the Bush administration spent, but had I only known what was coming, I would have enjoyed the fiscal restraint while I could. I believe that we're spending entirely too much money that we don't have, and not getting that much in return for it (other than lots of warm, heartfelt favors to friendly constituencies that can be expected to support the current administration).
And here's my last point: my own industry's trade association, PhRMA, believes itself to be in that last category. Whether you felt like it or not, if you work in the drug industry, you spent a lot of money to help get this bill passed. I haven't heard the details of the quid pro quo deals for our business, but no doubt there are some nice ones hidden in the recesses of the bill (or just outside it, like the doc fix). My worry, though, is that dealing with the government on this level is like dealing with a hungry bear. Sooner than we think, the costs of this bill will kick in. At that point, I predict that we will find ourselves in yet another Health Care Crisis, having failed to bend any cost curves whatsoever. Then the bear will turn its head to us again, but this time, with a new look in its eyes.
Here's the sort of thing we'll be seeing more and more of - on the whole, I think it's a good development, but it's certainly possible that one's mileage could vary:
Ginkgo’s BioBrick Assembly Kit includes the reagents for constructing BioBrick parts, which are nucleic acid sequences that encode a specific biological function and adhere to the BioBrick assembly standard. The kit, which includes the instructions for putting those parts together, sells for $235 through New England Biolabs, an Ipswich, MA-based supplier of reagents for the life sciences industry.
Shetty didn’t release any specific sales figures for the kit, but said its users include students, researchers, and industrial companies. The kit was also intended to be used in the International Genetically Engineered Machine competition (iGEM), in Cambridge, MA. The undergraduate contest, co-launched by Knight, challenges student teams to use the biological parts to build systems and operate them in living cells.
I realize that there have been about one Godzillion of these videos by now (the Michael Jackson one is a particular favorite), but at the same time, I'd be lying if I said that I didn't have similar thoughts the last time I had a manuscript rejected:
The movie is, of course, "Downfall" (Der Untergang), and probably many more people have been motivated to see the whole thing thanks to these YouTube clips. I have to turn the sound down a lot to really get into the spirit of the parodies; my German is still sufficient to make the reworked subtitles clash. Which reminds me of a distinctly weird sensation, watching a World War II documentary at home with my father after returning from my German post-doc stint. When archival footage of Hitler came up on the screen I suddenly found that I could follow his speech, and watched in amazement as he pounded the podium, hitting the compound verbs at the end of his sentence.
Update, March 19: I've added a few more suppliers to the list, and broken out a third category for the mixed reviews. And I note in the comments that someone claiming to be Kathy Yu from 3B Chemicals is threatening me with legal action. The IP address resolves only to AT&T Internet Services, but there does appear to be someone from that name who works at 3B. I hope, for her sake and that of the company, that this is someone impersonating her, because whoever is leaving these comments is doing 3B no favors.
And since I am reporting opinions, both my own and those of other contributors that I have no reason to doubt, and am doing so without malicious intent, I will cheerfully ignore all legal threats.
OK, here are the lists of good companies and not-so-good companies, based on my experience and those of readers. I've had some personal communications, too, which I've added to the data set. As more reports come in, this will be the post that's updated, so it can serve as a reference.
I should note up front that I'm not listing the Big Guys, since (while they can have their ups and downs), you generally know that they're going to send you something. What we're looking at are the companies that you might not have dealt with, but want to know if they're reliable. And that brings us to the lists.
The Good:
ABCR: good prices and hit rate on orders. Very professional.
Activate: expensive, but what's there is there, and it's the right stuff.
Adesis: not cheap, but very reliable and willing to work with customers to deliver similar compounds.
Advanced Chem Tech: recommended for peptide/amino acid stuff.
AK Scientific: several good reports on availability and purity.
Alinda: have ordered one thing from them, which was fine.
Anaspec: good reports on reliability.
Apollo: good stuff, but catalog needs to be a bit more in line with their real stock.
Array: very pricey, but it's all there.
Astatech: good experience reported.
Bionet: interesting catalog, doesn't back-order you.
Chembridge: a big catalog, but it's all real. Occasional purity problem.
Chem/Impex: good hit rate on availability. Some questions on their chiral purities.
Combi-Blocks: good list of useful intermediates, delivers on them.
Enamine: similar to ChemBridge in many ways. Big catalog. Not the fastest out there.
Florida Center for Heterocyclics: occasional purity issues, but they do deliver.
Frontier: great source for boronic acids and the like.
Life Chemicals: have had good experiences with compound purity here.
Lu: good source for custom peptides.
Matrix: interesting catalog, which they will really ship to you.
Maybridge: on the border of being one of the big guys. Very reliable.
Midwest: good reports on reliability.
Netchem: custom synthesis, but (for once!) with good turnaround and purity.
Oakwood/Fluorochem: good prices and reliability.
Peptide Protein Research: good for custom peptides.
Pharmacore: good stock of intermediates.
Rieke: reliable, only game in town for many odd reagents.
Strem: well known for quality inorganics and organometallics.
Synquest: used to be PCR. Good customer service.
Synthonix: stuff is in stock, customer service is responsive.
TCI: has always delivered, and quickly.
Transworld: very reliable and responsive.
Tyger: have never had a problem with them.
Waterstone Chemicals: good experience on pricing and availability.
The Mixed:
American Custom Chemicals (ACC): several tales of bad purity and customer service, but others have had nothing but good experiences with them.
3B Chemicals: "will lead you on for months". Several bad experiences reported. On the other hand, I've just heard directly from a colleague who's had good luck with them.
J&W Pharmlab: bad experience reported (delays and purity), but others OK.
Ontario: one good report, but others complain of availability and lead times.
SPECS: mixed reports, but overall positive.
The Not So Good:
Ambinter: seems to source a lot of stuff from mystery suppliers. Many delays.
Any supplier, sad to say, with "Hangzhou" or "Shanghai" in the name. Tend to have absolutely nothing on the shelf, and if there's even a listed price, it's science fiction.
Anichem: very bad experience here with unexplained delays.
Beta Pharma: bad experience reported.
ChemMaker: very negative report on customer service and responsiveness.
City Chemicals: several bad experiences reported.
Combi-Phos: several reports of purity problems.
Rarechem: haven't come across anyone with a good report here.
UK Green: a bad experience reported.
Uorsy: nothing ever seems to be in stock.
Zelinsky: several bad experiences reported.
I'm not sure that the term will catch on, but this new paper proposes "antedrug" to describe a compound that's deliberately designed to be cleaved quickly to something inactive. I see where they're coming from - reverse of "prodrug" - but in spoken English it's too close to "anti-drug". Hasn't someone come up with this concept before? Perhaps they didn't bother to name it. . .
Is it just me, or is the fine chemicals supply business getting even more out of hand than usual? I was just talking with a colleague who'd sourced an interesting intermediate, at the (steep!) price of about $900 for a gram. She placed the order and. . .you guessed it, the supplier immediately back-ordered it, saying the price had changed. It took someone from Purchasing to drag the new quote out of them (they apparently wouldn't give it over the phone). Now (to no one's surprise, I'm sure) the material is over $3000/gram, and will have a lead time of weeks.
This sort of thing has gone on for a long time, of course. But my impression is that there's more of it than ever. When the Chinese and ex-Soviet suppliers began to appear some years ago, they were often a pretty cheap source of some unusual compounds. But that's changed.
My belief - and I'll be glad to hear from people who do more compound purchasing than I do - is that the Chinese outfits especially have decided in recent years that they have some real pricing power, and are pushing it to see how far they can get. Add that to the hand-waving don't-you-worry-now aspect of many of their product lists, and you have a recipe for irritation and wasted time. (Another colleague described some of these online catalogs as "things they wish they sold".)
A previous comment on a post like this listed some suppliers that had been found to be reliable, and I'll reproduce that here, in no particular order: Maybridge, Enamine, Asinex, Key Organics, ChemBridge, Specs, ASDI Biosciences, InterBioScreen, Vitas M Labs, Life Chemicals, Labotest, and TimTec. Suppliers of weirdo outlier compounds that nonetheless tend to come through were Albany Molecular, Chem T&I, Florida Center for Heterocyclic Compounds, and Princeton Biomolecular. I've used many of those folks myself (and have had particular success with Life Chemicals and Specs, as far as availability and purity). Some of these companies are faster to ship than others. But the thing that stands out with all of them is that they have what they say that they have, and what's more, it costs what it says that it costs.
For intermediates, as opposed to final-compound-like structures, I'd say that I've had good dealings with Apollo, Synthonix, Matrix, Pharmacore, Adesis, Tyger, Fluorochem, Oakwood, and Astatech. There are, I'm sure, several other suppliers in this category, and I'd be glad to list more of them after seeing the comments.
But now let's reverse the polarity. What's a blog for if you can't say what you really think? Here, then, is a preliminary blacklist of suppliers. These people either have product listings that overlay poorly with reality, try to jerk you around on the price, take much longer to deliver than their initial estimates, or (lucky you) can do all of these at once. My personal recommendation is to be quite careful with ChemPacific, Uorsy, CTI, Zelinsky, and everyone with the words "Hangzhou" or "Shanghai" in the company name.
Please feel free to add others to the lists. I'll do a consolidated post reflecting everyone's experience - that way, we can give business to deserving companies you might not have worked with before, and we can perhaps shame some others into acting more reasonably.
Here's another addition: A New Merck, Reviewed, which is someone's attempt to dig through everything about the new Merck/Schering-Plough hybrid. I'm not sure that all the info is reliable, of course, and whoever writes this has a strange way with italics, but it's worth a look.
Update: this turns out to be the Wordpress backup site for Shearlings Got Plowed, which I've mentioned here before. With the merger, the blog's author is covering his bases.
I'm a complete sucker for dense but well-presented information, and this one isn't bad at all: here's a chart of nutritional supplements by the strength of the evidence for them in human trials. I haven't cross-checked the data, but the authors appear to have done some homework in PubMed, at least, and haven't included any non-human or in vitro data. The interactive version at the link is particularly fun to mess around with. (Thanks to a reader and commenter here who put me on to this).
$75 million worth of antipsychotics - that's a lot of pills, and I'm not surprised to see that the thieves used a tractor-trailer to haul everything off. You'd have to assume that there's a well-worked-out pathway to unload all of these things, and that no one's going to go to all this trouble on "spec".
Glad to see that my industry's products are so much in demand. . .
A small company called BioTime has gotten a lot of attention in the last couple of days after a press release about cellular aging. To give you an idea of the company's language, here's a quote:
"Normal human cells were induced to reverse both the "clock" of differentiation (the process by which an embryonic stem cell becomes the many specialized differentiated cell types of the body), and the "clock" of cellular aging (telomere length)," BioTime reports. "As a result, aged differentiated cells became young stem cells capable of regeneration."
Hey, that sounds good to me. But when I read their paper in the journal Regenerative Medicine, it seems to be interesting work that's a long way from application. Briefly - and since I Am Not a Cell Biologist, it's going to be brief - what they're looking at is telomere length in various stem cell lines. Telomere length is famously correlated with cellular aging - below a certain length, senescence sets in and the cells don't divide any more.
What's become clear is that a number of "induced pluripotent" cell lines have rather short telomeres as compared to their embryonic stem cell counterparts. You can't just wave a wand and get back the whole embryonic phenotype; their odometers still show a lot of wear. The BioTime people induced in such cells a number of genes thought to help extend and maintain telomeres, in an attempt to roll things back. And they did have some success - but only by brute force.
The exact cocktail of genes you'd want to induce is still very much in doubt, for one thing. And in the cell line that they studied, five of their attempts quickly shed telomere length back to the starting levels. One of them, though, for reasons that are completely unclear, maintained a healthy telomere length over many cell divisions. So this, while a very interesting result, is still only that. It took place in one particular cell line, in ways that (so far) can't be controlled or predicted, and the practical differences between this one clone and other similar cell lines still aren't clear (although you'd certainly expect some). It's worthwhile early-stage research, absolutely - but not, to my mind, worth this.
The entries I've done on the "open-plan" Biochemistry building at Oxford (see also Jim Hu) generated a lot of comments from people who've worked in poorly designed science facilities. I've heard from Linda Wang, a reporter at C&E News, who's writing an article on this very subject. She's looking for chemists who are willing to talk about both good and bad experiences working in various building designs, so if you fit that description, feel free to email her at l_wang-at-acs.org (email address de-spammified, just substitute the usual symbol) or give her a call at 202-872-4579.
Now here's something that I don't think anyone expected. A recent paper in PLoS One makes the case that beta-amyloid, the protein that has been fingered for decades as a major player in Alzheimer's disease, is actually part of the body's antimicrobial defenses.
Well, it's good to hear that it's doing something. Many people had hypothesized that it was a useless (indeed, harmful) byproduct, a waste stream from aberrant processing of the amyloid precursor protein (APP). Still, there have been reports over the years that beta-amyloid was a substrate for active transport pumps, might be a ligand for various receptors, etc., but not everyone was willing to take these results seriously.
But it turns out that some of A-beta's properties are similar to those of innate host defense peptides. When this latest team checked the amyloid protein's antimicrobial activity, it turned out to be substantial. The prototype peptide in this area, LL-37, appears to have a broader spectrum of activity, but A-beta beats it against several organisms, most notably the yeast C. albicans. And as it turns out, brain homogenates from Alzheimer's patients are much more active against yeast in vitro than samples from age-matched controls without the disease. But that only holds true for parts of the brain (like the temporal lobe) that are known to be high in amyloid. Samples from the cerebellum (which doesn't usually show Alzheimer's pathology) had no activity. (One has to wonder if this is the first time - or at least the first time in a very long while - that anyone's evaluated human brain homogenates for their microbicidal activity).
This could lead to a complete rethink of Alzheimer's pathology. It's been known for a long time that there's a big inflammation component to the disease - perhaps the problem (or at least the trigger) is an underlying infection that sets off the innate immune system in the brain. Larger than normal amounts of beta-amyloid are produced in response, but it starts to precipitate out.
The more familiar adaptive immune system has limited access to the CNS, although that's not stopping people from trying to use it. But that approach (and many others) presume that beta-amyloid is a cause of the disease. Perhaps it isn't. Maybe it's the body's attempt at a solution - and if that's true, we need to look elsewhere for the cause, and soon. This is one of the most thought-provoking looks at Alzheimer's that I've seen in a long time. Here's hoping it leads to something new.
I was thinking the other day about the sheer number of reasonable chemical structures that have never been made. Chemical space is famously roomy - that's how we make a living in the drug industry, since we prefer to make things that have never been made before. And it still surprises non-chemists when I tell them that I make new compounds all the time - the feeling, I think, is that anything that's reasonably easy to make surely must have been mined out long ago. Not so. (It's worth remembering, though, that just because something's never been reported doesn't always mean that you can't buy it).
What brought this to mind was a steroid structure that I saw during a presentation. Looking at it like a medicinal chemist, I wondered idly if the carbons in the famous steroid backbone had ever been swapped out much with oxygen or nitrogen atoms. And in a few cases they have (more for oxygen, in some natural products), but for the most part, no. You can drop a tertiary amine into some spots on the steroid framework and immediately come up with no literature hits whatsoever. Many others yield only a handful.
It's worth noting that the partially-aromatized steroids have had some of this kind of work done on them - for example here and here. The aromatic rings give you a bit more of a handle to work with, but even here it's not like the literature is always packed with examples.
So there's as bioactive a scaffold as you could ask for, but many of the simple analogs still haven't been described. To be fair, these azasteroids aren't simple to make, and probably wouldn't have steroid-like activities in many cases. (Their natural receptors sure aren't expecting a basic amine in those spots). But many azasteroids do show biological activities, and I'd be quite surprised if these unknown compounds were pharmacologically inert. It's just that there's been no particular reason to make any of them yet. Chemical space is so huge, and our ability to explore it has been with us for such a relatively short time, that we just haven't gotten around to them yet.
There have been complaints that something is going wrong in the publication of stem cell research. This isn't my field, so I don't have a lot of inside knowledge to share, but there appear to have been a number of researchers charging that journals (and their reviewers) are favoring some research teams over others:
The journal editor decides to publish the research paper usually when the majority of reviewers are satisfied. But professors Lovell-Badge and Smith believe that increasingly some reviewers are sending back negative comments or asking for unnecessary experiments to be carried out for spurious reasons.
In some cases they say it is being done simply to delay or stop the publication of the research so that the reviewers or their close colleagues can be the first to have their own research published.
"It's hard to believe except you know it's happened to you that papers have been held up for months and months by reviewers asking for experiments that are not fair or relevant," Professor Smith said.
You hear these sorts of complaints a lot - everyone who's had a paper turned down by a high-profile journal is a potential customer for the idea that there's some sort of backroom dealing going on for the others who've gotten in. But just because such accusations are thrown around frequently doesn't mean that they're never true. I hate to bring the topic up again, but the "Climategate" leaks illustrate just how this sort of thing can be done. Groups of researchers really can try to keep competing work from being published. I just don't know if it's happening in the stem cell field or not.
It's easy to lose sight of what a drug is supposed to do. Many conditions come on so slowly that we have to use blood chemistry or other markers to see the progress of therapy in a realistic time. And over time, that blood marker can get confused with the disease itself.
To pick one famous example, try cholesterol. Everyone you stop on the street will know that "high cholesterol is bad for you". But the first thing you have to do is distinguish between LDL and HDL cholesterol - if the latter is a large enough fraction of the total, the aggregate number doesn't matter as much. And fundamentally, there's not a disease called "high cholesterol" - that's a symptom of some other cluster of metabolic processes that have gone subtly off. And the endpoint of any therapy in that field isn't really to lower the number in a blood test: it's to prevent heart attacks and to extend healthy lifetimes - to reduce mortality and morbidity. As we're seeing with Vytorin, it may be possible to drop the numbers in a blood test but not see the benefit that's supposed to be there.
Another example of this came up over the weekend. The fibrates are a class of drugs that change lipid levels, although the way they work is still rather obscure. They're supposed to be ligands for the PPAR-alpha nuclear receptor, but they're not very potent against it when you study that closely. At any rate, they do lower triglycerides and have some other effects, which should be beneficial in patients whose lipids are off and are at risk for cardiac problems.
But are they? Type II diabetics tend to be people who fit that last category well, and that's where a lot of fenofibrate is prescribed (as Abbott's Tricor in the US, and under a number of other names around the world). A five-year study in over five thousand diabetic patients, though, has just shown no difference versus placebo. Again, there's no doubt that the drug lowers triglycerides and changes the HDL/LDL/VLDL ratios. It's just that, for reasons unknown, doing so with fenofibrate doesn't seem to actually help diabetic patients avoid cardiac trouble.
Mortality and morbidity: lowering them is a very tough test for any drug, but if you can't, then what's the point of taking something in the first place? This is something to keep in mind as the push for biomarkers delivers more surrogate endpoints. Some of them will, inevitably, turn out not to mean as much as they're supposed to mean.
The discoverer of the prostate-specific antigen (Richard Ablin) has a most interesting Op-Ed in the New York Times. He's pointing out what people should already know: that using PSA as a screen for prostate cancer is not only useless, but actually harmful.
The numbers just aren't there, and Ablin is right to call it a "hugely expensive public health disaster". Some readers will recall the discussion here of a potential Alzheimer's test, which illustrates some of the problems that diagnostic screens can have. But that was for a case where a test seemed as if it might be fairly accurate (just not accurate enough). In the case of PSA, the link between the test and the disease hardly exists at all, at least for the general population. The test appears to have very little use in detecting prostate cancer, and early detection itself is notoriously unreliable as a predictor of outcomes in this disease.
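For readers who want to see why a marginal test aimed at a low-prevalence disease is such a bad combination, here's a back-of-the-envelope Bayes calculation. The sensitivity, specificity, and prevalence figures below are illustrative assumptions only, not PSA's actual performance numbers:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: the fraction of positive test results that are true positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers only (NOT PSA's real figures): a test that is
# 90% sensitive and 90% specific, screening a population where 2% of
# people actually have the disease.
ppv = positive_predictive_value(0.90, 0.90, 0.02)
```

With those made-up numbers, fewer than one positive result in six reflects actual disease; the rest are false alarms, each one a candidate for an unnecessary follow-up procedure. And that's for a hypothetical test far better correlated with disease than PSA appears to be.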
The last time I had blood work done, I made a point of telling the nurse that she could check the PSA box if she wanted to, but I would pay no attention to the results. (I'd already come across Donald Berry's views on the test, and he's someone whose word I trust on biostatistics). I'd urge other male readers to do the same.
Freeman Dyson has written about his belief that molecular biology is becoming a field where even basement tinkerers can accomplish things. Whether we're ready for it or not, biohacking is on its way. The number of tools available (and the amount of surplus equipment that can be bought) have him imagining a "garage biotech" future, with all the potential, for good and for harm, that that entails.
Well, have a look at this garage, which is said to be somewhere in Silicon Valley. I don't have any reason to believe the photos are faked; you could certainly put your hands on this kind of equipment very easily in the Bay area. The rocky state of the biotech industry just makes things that much more available. From what I can see, that's a reasonably well-equipped lab. If they're doing cell culture, there needs to be some sort of incubator around, and presumably a -80 degree freezer, but we don't see the whole garage, do we? I have some questions about how they do their air handling and climate control (although that part's a bit easier in a California garage than it would be in a Boston one). There's also the issue of labware and disposables. An operation like this does tend to run through a goodly amount of plates, bottles, pipet tips and so on, but I suppose those are piled up on the surplus market as well.
But what are these folks doing? The blog author who visited the site says that they're "screening for anti-cancer compounds". And yes, it looks as if they could be doing that, but the limiting reagent here would be the compounds. Cells reproduce themselves - especially tumor lines - but finding compounds to screen, that must be hard when you're working where the Honda used to be parked. And the next question is, why? As anyone who's worked in oncology research knows, activity in a cultured cell line really doesn't mean all that much. It's a necessary first step, but only that. (And how many different cell lines could these people be running?)
The next question is, what do they do with an active compound when they find one? The next logical move is activity in an animal model, usually a xenograft. That's another necessary-but-nowhere-near-sufficient step, but I'm pretty sure that these folks don't have an animal facility in the basement, certainly not one capable of handling immunocompromised rodents. So put me down as impressed, but puzzled. The cancer-screening story doesn't make sense to me, but is it then a cover for something else? What?
If this post finds its way to the people involved, and they feel like expanding on what they're trying to accomplish, I'll do a follow-up. Until then, it's a mystery, and probably not the only one of its kind out there. For now, I'll let Dyson ask the questions that need to be asked, from that NYRB article linked above:
If domestication of biotechnology is the wave of the future, five important questions need to be answered. First, can it be stopped? Second, ought it to be stopped? Third, if stopping it is either impossible or undesirable, what are the appropriate limits that our society must impose on it? Fourth, how should the limits be decided? Fifth, how should the limits be enforced, nationally and internationally? I do not attempt to answer these questions here. I leave it to our children and grandchildren to supply the answers.
The Daily Telegraph in the UK has a story today claiming that a 1951 outbreak of hallucinations and dementia in the French village of Pont-Saint-Esprit was not (as everyone thought) an example of ergot poisoning. No, according to some guy who's writing a book, it was. . .a secret LSD experiment.
Now, there most certainly were secret LSD experiments during the 1950s and 1960s. (The book Storming Heaven has a good account of them, as well as of the history of LSD in general). But it's rather hard to see why the CIA should decide to dose some village in the Auvergne, especially when the symptoms (burning sensations in the extremities as well as hallucinations) seem to match ergotism quite well.
But no matter. I think we can dispose of this new book and its author pretty quickly. Just take a look at some of his scoop:
However, H P Albarelli Jr., an investigative journalist, claims the outbreak resulted from a covert experiment directed by the CIA and the US Army's top-secret Special Operations Division (SOD) at Fort Detrick, Maryland.
The scientists who produced both alternative explanations, he writes, worked for the Swiss-based Sandoz Pharmaceutical Company, which was then secretly supplying both the Army and CIA with LSD.
Mr Albarelli came across CIA documents while investigating the suspicious suicide of Frank Olson, a biochemist working for the SOD who fell from a 13th floor window two years after the Cursed Bread incident. One note transcribes a conversation between a CIA agent and a Sandoz official who mentions the "secret of Pont-Saint-Esprit" and explains that it was not "at all" caused by mould but by diethylamide, the D in LSD.
Laughter may now commence. For the non-chemists in the audience, diethylamide isn't a separate compound; it's the name of a chemical group. And LSD isn't some sort of three-component mixture, it's the diethylamide derivative of the parent compound, lysergic acid. (I'd like to hear this guy explain to me what the "S" stands for). Diethylamides have no particular hallucinogenic properties; they're too small and common a chemical group for anything like that. DEET, the insect repellent, is a common one, and there are plenty of others.
In short, neither the author of this new book, nor the people at the Telegraph, nor the supposed scientific "source" of this quote, know anything about chemistry. This is like saying that the secret of TNT is a compound called "Tri". Nonsense.
Update: see the comments section. Not everyone's buying my line of thought here. . .
If you want to know why people continue to speculate in biotech stocks, just take a look at the stairsteppy last few days of trading in Intermune (ITMN). Last Thursday it was at $15; now it's at $38. And all you have to do to cash in on these moves is read the FDA's mind!
That's not a money-making proposition, in case anyone thinks I'm advocating it. There are just too many surprises. But Intermune's good fortune started last week, when the FDA briefing documents came out on the application for the company's pirfenidone for idiopathic pulmonary fibrosis and were characterized as "not as bad as they could have been". (The company's history of overzealous PR wasn't helping it at this point). And if you still don't think that the moves in the stock have been surprising, consider that two ITMN executives sold shares after the first jump, missing out on the second one completely when the FDA advisory panel gave the drug a favorable recommendation.
Pirfenidone, by the way, is another structure entry in the so-simple-I-can't-believe-it drug sweepstakes. If approved, it would be the first specific therapy for IPF, which can be a nasty disease. I certainly hope it helps out the patients involved (a few hundred thousand in the US), but that small patient population means that the drug isn't going to be cheap. Intermune's investors certainly don't think so.
But as has been clear for some time, we're in a rather tricky environment for expensive health care options. If pirfenidone makes it, I'd guess that it will be picked up widely, but cautiously, by health insurance. No one knows how it'll perform in the real world, and if little benefit is seen, it'll be hard to justify reimbursing for it. (It made one Phase III trial's endpoint, but missed another one, so there's room to wonder). The more cost-conscious European regulatory agencies will be a good place to watch this argument play out. One correspondent of mine refers to the drug as the next Iressa. That's not a compliment.
The Supreme Court has agreed to hear a vaccine-liability case, in an attempt to untangle conflicting lower court rulings. This all turns on the 1986 act that shields manufacturers from liability suits and a followup law that establishes a separate compensation system for injuries. A Georgia Supreme Court ruling has recently held that such suits can go on in state court, which seems to contradict other court decisions (and the intent of the 1986 law as well, you'd think).
I agree with Jim Edwards of BNET that although this particular case involves the DPT vaccine, the vaccines-cause-autism crowd will be watching this one very closely. Lawsuits will no doubt be ready to fly later this year in case the Supreme Court breaks that way - which seems to me unlikely, but I'm no judge. . .
We haven't had a How Not to Do It around here in a while, so here's a companion piece to the famous Sealed-Up Liquid Nitrogen Tank. This incident happened (as far as I can tell) about ten years ago. It's been used in a number of safety presentations since then, thanks to the Airgas Corp., whose safety officer assembled a number of photos (and this is the time to emphasize that they had nothing to do with the accident itself, because people who work for a pressurized-gas company actually know how to handle pressure vessels).
As opposed to the two guys who scavenged a liquid oxygen Dewar from a scrap metal yard and decided to put it back into service. According to the most detailed report, they tried to rig up a connection to refill the cylinder, but found that it vented immediately through the pressure-relief valve. So. . .well, yeah, you know what's coming next: they took the darn thing off and plugged it shut. No more pesky venting! They filled up their cylinder, which was loaded on the back of their pickup truck, and went rolling down the interstate at lunchtime. Whereupon they had a flat tire, and pulled over for a while to fix things. . .
OK, you can look out from behind your hands now. Although I can't imagine how, neither of these two cowboys managed to get themselves killed, nor did they take out anyone else, through what appears to be sheer blind luck. According to the report, one member of the Cylinder Kings ended up being blown across five lanes of traffic, while his partner was launched forty feet in another direction. You can see from the photo how the truck weathered things. I can't imagine that a pressure wave of straight oxygen hitting a tank of gasoline can end well; it's a perfectly reasonable mixture to put a payload into low-earth orbit.
Which is a good note on which to take inventory here. We have the owners of the oxygen cylinder accounted for, and their truck. What about the cylinder itself? Well, similar to the nitrogen tank referenced above, it had failed at the bottom weld and thus departed the scene of the accident like an artillery shell. It re-entered the affairs of the world a quarter of a mile away, plunging through the roof of an apartment, completely trashing the place (and severing a natural gas line in the process). As I said, how a dozen people didn't end up killed by all this is a complete mystery to me. (The red circle in that photo is where the pressure-relief device used to be.)
So the moral of this story is, I suppose, that Pressure Relief Devices Are There For A Reason. Or maybe it's "don't scrounge gas cylinders from the scrap yard and try to get them to work". Or perhaps "just because you haven't seen a pressure vessel explode yet, it doesn't mean that they can't". Or "Mit der Dummheit kämpfen Götter selbst vergebens" ("Against stupidity, the gods themselves contend in vain"). Or something.
Nature Biotechnology weighs in on the GSK/Sirtris controversy. They have a lot of good information, and I'm not just saying that because someone there has clearly read over the comments that have shown up to my posts on the subject. The short form:
The controversy over Sirtris drugs reached a tipping point in January with a publication by Pfizer researchers led by Kay Ahn showing that resveratrol activates SIRT1 only when linked to a fluorophore. Although Ahn declined to be interviewed by Nature Biotechnology, a statement issued by Pfizer says the group's findings “call into question the mechanism of action of resveratrol and other reported activators of the SIRT1 enzyme.”
Most experts, however, say it's too soon to write off Sirtris' compounds altogether, assuming they're clinically useful by mechanisms that don't involve sirtuin binding. And for its part, GSK won't concede that Sirtris' small molecules don't bind the targets. In an e-mailed statement, Ad Rawcliffe, head of GSK's WorldWide Business Development group, says, “There is nothing that has happened to date, including the publication [by Pfizer,] that suggests otherwise.”
We'll see if GSK and Sirtris have some more publications ready to silence their detractors. But what will really do that, and what we'll all have to wait for, are clinical results.
Well, it takes all kinds to make a market. And the collapse in Medivation's shares after their disastrous Phase III results the other day seems to have brought out some hopeful buyers. Take this guy:
. . .I'm telling you right now, I believe that sell-off has gone twice as deep as good sense can justify. At least, that's the way I see it.
First off, we should understand that drug trials are Medivation's business. Clinical trials are what the company does. This failed phase 3 study isn't to be considered a crash into a brick wall. It's not a crippling lawsuit. It's not the loss of a major customer account. It's simply a sudden downshift, a temporary change of gears. In many ways, for Medivation, it's just one facet of business as usual.
As I look at Medivation's one-year and three-year performance charts, the opening to invest is just screaming at me. . .
All I can say is "Go for it, chief!" I might just add, very quietly, that early-stage drug discovery is not really the kind of business where one-year and three-year stock performance is much of a guide. And it's also worth remembering that although clinical trials are indeed what drug companies do, we try not to do big honking Phase III face-plants. You don't start clinical trials that you think are going to end that way, so a crash into a brick wall is actually not a bad analogy.
But hey - the dented hubcaps have just about finished wobbling around into the dust, and who knows, the stock might actually bounce back up a little bit, thanks to the brave and the foolhardy. But if Medivation is ever to make it back to where it was, I don't see how it's going to be because of Dimebon.
Via RJAlvarez on Twitter, who says "Tough call, but this is perhaps the worst post recommending a biotch stock I've ever read."
I'm hearing from more than one source that Exelixis has laid off about 40% of their work force, which is somewhere around 250 people (the numbers I get don't all agree). This seems to be across the board, all departments, and most everyone is being asked to leave today.
The Bay area biotech scene doesn't seem to be at its healthiest these days (although it's still in better shape than San Diego, from the sound of it), but this isn't going to help it one bit. . .
A discussion at work the other day got me to thinking: what structures do you medicinal chemists out there just refuse to work on? Any? We all have our own prejudices - in fact, if you get enough chemists into one conference room, one or another of them will probably rule out just about any structure you propose. Try that sometime, and be sure to sneak a few marketed drugs in there to tick people off. Don't like organoazides? Michael acceptors? Nitroaromatics? Epoxides? Chloromethyl ketones? They're out there working in the real world and making real money.
Now, I'm not saying that you should concentrate on these things. The success rate for (say) chloromethyl ketones is surely lower than for a lot of other compound classes, and there's only so much time and money available. That's why I have personal rules like "No Naphthyls". If someone shows me a structure with a raw naphthalene hanging off it that works, well, good for them, and I guess I'd work on it on that basis. But I won't contribute any myself, because I think the odds are too low.
But I have even more deep-seated prejudices. There are some structures that I just don't think have a chance, even if it looks like they work at first. I'd rather kill them immediately than take the (grave) chance of wasting everyone's time. The first thing I can think of on such a list would be quinones and their ilk. There are just too many other bad things that they're capable of. Now that I've said this, I feel sure that someone is going to come up in the comments with an example of a quinone that's making five hundred million dollars a year or something. But I sure can't think of one myself, and I just don't see the point of trying to make a drug out of such a structure (unless their lively reactivity is part of some nasty mechanism all its own, in which case, good luck to you).
Kim Girard, a reporter from CBS Interactive, is doing a story on how people are coping with the merger - if you'd like to speak with her, she's at (de-spammified): kim.berg30-at-gmail.com. I didn't have any direct knowledge to help her, so I offered to post this to see if she gets something useful. . .
Here's another outside the field - in fact, it's outside of a lot of people's fields. Where Is Everybody? presents fifty possible solutions to the Fermi Paradox: if there are a lot of planets in the galaxy, and if life is pretty easy to get going, and if it's possible to travel or just communicate between solar systems. . .why haven't we seen anything? Enrico Fermi, in his typically disconcerting way, ran the math on this question during a lunchtime conversation in 1950, and realized that at least one of the common assumptions behind it must be off, and by a great deal.
I was thinking about this last night, because this weekend I'll have swarms of fourth graders and their parents looking through my telescope (if the weather cooperates), under the auspices of the Amateur Telescope Makers of Boston. And it's impossible to look at the night sky without wondering what life might exist out there and what form it might take. That Wikipedia article is quite good, but if you find it interesting, this book goes into the question in greater detail. I should note that a new book, The Eerie Silence, has just come out on the same topic, but I haven't seen that one yet.
There's a report in Nature on the bacteria found in the human gut that's getting a lot of press today (especially for a paper about, well, bacteria in the human gut). A team at the Beijing Genomics Institute, with many collaborators, has done a large shotgun sequencing effort on gut flora and identified perhaps one thousand different species.
I can well believe it. The book I recommended the other day on bacteria field marks has something to say about that, pointing out that if you're just counting cells, the cells of our body are far outnumbered by the bacteria we're carrying with us. Of course, the bacteria have an advantage, being a thousand times smaller (or more) than our eukaryotic cells, but there's no doubt that we're never alone. In case you're wondering, the average European subject of the study probably carries between 150 and 200 different types of bacteria, so there's quite a bit of person-to-person variability. Still, a few species (mostly Bacteroides varieties) were common to all 124 patients in the study, while the poster child for gut bacteria (E. coli) is only about halfway down the list of the 75 most common organisms. We have some Archaea, too, but they're outnumbered about 100 to 1.
What's getting all the press is the idea that particular mixtures of intestinal bacteria might be contributing to obesity, cancer, Crohn's disease and other conditions. This isn't a new idea, although the new study does provide more data to shore it up (which was its whole purpose, I should add). It's very plausible, too: we already know of an association between Helicobacter and stomach cancer, and it would be surprising indeed if gut bacteria weren't involved with conditions like irritable bowel syndrome or Crohn's. This paper confirms earlier work that such patients do indeed have distinctive microbiota, although it certainly doesn't solve the cause-or-effect tangle that such results always generate.
The connection with obesity is perhaps more of a stretch. You can't argue with thermodynamics. Clearly, people are obese because they're taking in a lot more calories than they're using up, and doing that over a long period. So what do bacteria have to do with that? The only thing I can think of is perhaps setting off inappropriate food cravings. We're going to have to be careful with that cause and effect question here, too.
One problem I have with this work, though, is the attitude of the lead author on the paper, Wang Jun. In an interview with Reuters, he makes a very common mistake for an academic: assuming that drug discovery and treatment is the easy part. After all, the tough work of discovery has been done, right?
"If you just tackle these bacteria, it is easier than treating the human body itself. If you find that a certain bug is responsible for a certain disease and you kill it, then you kill the disease," Wang said.
For someone who's just helped sequence a thousand of them, Wang doesn't have much respect for bacteria. But those of us who've tried to discover drugs against them know better. Where are these antibiotics that kill single species of bacteria? No such thing exists, to my knowledge. To be sure, we mostly haven't looked, since the need is for various broader-spectrum agents, but it's hard to imagine finding a compound that would kill off one Clostridium species out of a bunch. And anyway, bacteria are tough. Even killing them off wholesale in a human patient can be very difficult.
Even if we magically could do such things, there's the other problem that we have no idea of which bacterial strains we'd want to adjust up or down. The Nature paper itself is pretty good on this topic, emphasizing that we really don't know what a lot of these bacteria are doing inside us and how they fit into what is clearly a very complex and variable ecosystem. A look at the genes present in the samples shows the usual common pathways, then a list that seems to be useful for survival in the gut (adhesion proteins, specific nutrient uptake), and then a massive long tail of genes that do we know not what nor why. Not only do we not know what's happening on other planets, or at the bottom of our own oceans, we don't even know what's going on in our own large intestines. It's humbling.
Dr. Wang surely realizes this; I just wish he'd sound as if he does.
Robert Langreth, an editor at Forbes, points to a possible way that Dimebon could get approval for Alzheimer's: for its behavioral effects, not anything to do with amyloid or memory.
I'm not buying it, I have to say. Even Langreth's source admits that behavioral numbers didn't reach statistical significance. I don't see how this will be enough to rescue this one, even if one of the ongoing trials does use a behavioral score as an endpoint.
Update: Langreth has an earlier piece on how Dimebon appears to have been overhyped from the beginning, a viewpoint I concur with. The same thing happens with any drug for Alzheimer's, and is a constant problem in cancer and obesity, too.
Some blogs run pictures of cats to give the readers a break from the ordinary. Around here, I thought that this might be appropriate. Here are the alkali metals, from top to bottom, differentiated in the most basic way possible. No, not by tasting them, sheesh: by tossing them into a dish of water:
(Courtesy of the Open University site in the UK). One thing they don't go into is the effect of density. Up to potassium, the metals are still light enough to float. But cesium drops like the rock it is, with depth-charge results.
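The float-or-sink part comes straight from the densities. A quick script (densities in g/cm³ from standard reference tables) shows where the crossover happens:

```python
# Room-temperature densities of the alkali metals (g/cm^3),
# from standard reference tables.
densities = {
    "Li": 0.534,
    "Na": 0.971,
    "K": 0.862,
    "Rb": 1.532,
    "Cs": 1.93,
}

WATER = 1.00  # density of water, g/cm^3

for metal, d in densities.items():
    buoyancy = "floats" if d < WATER else "sinks"
    print(f"{metal}: {d:.3f} g/cm^3 -> {buoyancy}")
```

Lithium, sodium, and potassium all come in under water; rubidium and cesium don't, which is why cesium heads straight for the bottom before it detonates.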
I will consider running a photo of a cat, as long as he's working up a reaction.
I've written both here and elsewhere about flow chemistry, the technique where you pump your reactions through a reaction tube of some sort rather than mixing them up in a flask. And I freely admit that I have a fondness for the idea, but it's definitely not the answer to every problem.
For one thing, I tend to like the idea of sending reactants over a bed of catalyst or solid-supported reagent (what I call Type II or Type III flow reactions in that 2008 link above). Type I reactions, in my scheme, are the ones where you just use a plain tube or channel, and all the reactants are present in solution. A big advantage of those, as far as I can tell, is to handle tricky intermediates that you wouldn't want to have large amounts of or to control potential runaway exothermic reactions. There's also the possibility of running the reaction stream through some solid-phase purifications and scavengers, the way Steve Ley and his group like to work, which is convenient since you're already pumping the stuff along anyway.
But the sorts of reactions that you often see in the flow-chemistry equipment brochures. . .well, that's something else again. More than one outfit has earnestly tried to sell me a machine based on how well it did a Fischer esterification. My problem wasn't that the reaction was discovered almost in Neanderthal times - it was that Thag run reaction in round bottom flask, work fine, not need flow reactor. I mean, really, this is a nonexistent problem and needs no solution.
So I read this new paper in Angewandte Chemie with interest. The authors are looking at some standard catalytic organic transformations and comparing them carefully between batch mode and a flow setup. They stipulate at the beginning that flow chemistry has the advantages mentioned above, but they're wondering about what it can do for more ordinary chemistry:
"In addition to these developments, general and rather sweeping claims have been made that microreactor systems accelerate organic reactions and that lower catalyst loadings and higher yields can routinely be achieved in these systems compared to those of reactions carried out in flasks. Despite these potential advantages, examples of successful implementation of microflow reaction technologies in either academic organic synthesis or industrial process research and manufacturing remain more isolated than these reports would suggest. However, the implication is that it is only a matter of time before microflow reactors will dominate laboratory studies aimed at both fundamental research and practical applications of complex organic reactions, with our current mode of operation in reaction flasks ultimately becoming a relic of the past. It seems therefore worthwhile to examine the assumptions behind this viewpoint to provide a critical analysis of “flask versus flow” as a means for effecting reactions."
What they find is that there's very little difference. A catalyzed aldol reaction that was studied under flow conditions by the Seeberger lab is shown to perform identically to a batch reaction, if you make sure to run them at the same temperature and with the same catalyst loading. The paper then looks at asymmetric addition of diethyl zinc to benzaldehyde, a model reaction that I often wish would disappear from human consciousness so it would afflict us no more. But here, too, under more challenging heat-transfer conditions, flow showed no differences from batch. The authors point out that this reaction is, in fact, run under industrial conditions, but not in a flow apparatus. Rather, it's done in batch mode, but through good old slow addition of reagent, which also gives you control over exotherms.
The authors specifically exempt all supported-reagent chemistry from their analysis, so that preserves what I like about flow systems. But for homogeneous reactions, the only time they can see an advantage for the flow reactors is when there's a potential for a dangerous rise in temperature. So now we'll see what some of the more flow-oriented people have to say in reply. . .
Earlier this month I wrote about Medivation and their Russian-derived clinical candidate for Alzheimer's disease, Dimebon (latrepirdine). At the time, I wrote that "A lot of eye-catching numbers from small Phase II trials tend to flatten out in the wider world of Phase III, and if forced, that's the way I'd bet here."
Unfortunately, that's just what appears to have happened. The results are out today, and Dimebon has not shown any efficacy at all versus placebo. From the data given in the press release, the comparison is just absolutely flat; you could have been giving the study patients breath mints and seen the same numbers. Since the design of this trial was similar to the smaller Phase II trials that showed such interesting results, there's clearly something going on that we don't understand. But that's the motto for all central nervous system research, isn't it?
I'm really not sure if there's a way forward for this drug. When you go to a larger, more well-controlled trial and revert back to baseline, it's hard to make a case for continued development. Pfizer (Medivation's partner here) still has a lot of money and a lot of desire to find a good Alzheimer's drug. But I don't think they'll be in the mood to spend much more of it here.
Well, here's a brow-furrowing paper, courtesy of PNAS. The authors, from the National Institute on Aging, contend that most laboratory rodents are overfed, under-stimulated, and are (to use their phrase) "metabolically morbid". This affects their suitability as control and experimental animals for a wide variety of assays.
There seem to be effects across the board - the immune system, glucose and lipid handling, cardiovascular numbers, susceptibility to tumors, cognitive performance. The list is a long one, and the root causes seem to be ad libitum feeding and lack of exercise. The beneficial effects of some drugs in rodent models, the authors propose, could be due (at least in part) to their ability to reverse the artificial conditions that the animals are maintained under, and the application of these results to the real world could be doubtful. (The same concerns don't apply nearly as much to larger animals such as dogs and primates. They're handled differently, and their physiologies don't seem to be altered, or at least nowhere near as much).
Of course, some people live similar lifestyles, as far as the lack of activity and ad libitum feeding goes, so perhaps the rodents are better models of those patients than one might wish. But overall, this seems like a useful wake-up call to the animal testing community, especially in some therapeutic areas. On a domestic level, I'm thinking through the implications of this for the two guinea pigs my children have - they seem to sit around and eat all the time. The guinea pigs, I mean, not the kids.
OK, I think the company has made an official announcement on this. They're getting out of schizophrenia, bipolar disease, depression, anxiety, acid reflux, thrombosis, ovarian and bladder cancers, systemic scleroderma and hepatitis C. (So much for some of the company's current and recent big-selling areas. . .)
As for facilities, they're shutting down early R&D in Wilmington and in Lund (Sweden), and the Charnwood site in the UK is closing. I haven't heard if there are other cuts going on in the sites (or therapeutic areas) that are remaining, though. Details in the comments. . .
I was just talking about greasy compounds the other day, and reasons to avoid them. Right on cue, there's a review article in Expert Opinion on Drug Discovery on lipophilicity. It has some nice data in it, and I wanted to share a bit of it here. It's worth noting that you can make your compounds too polar, as well as too greasy. Check these out - the med-chem readers will find them interesting, and who knows, others might, too:
So, what are these graphs? They show how well compounds cross the membranes of Caco-2 cells, a standard assay for permeability. These cells (derived from human colon tissue) have various active-transport pumps going (in both directions), and you can grow them in a monolayer, expose one side to a solution of drug substance, and see how much compound appears on the other side and how quickly. (Of course, good old passive diffusion is operating, too - a lot of compounds cross membranes by just soaking on through them).
Now, I have problems with extrapolating Caco-2 data too vigorously to the real world - if you have five drug candidates from the same series and want to rank order them, I'd suggest getting real animal data rather than relying on the cell assay. The array of active transport systems (and their intrinsic activity) may well not match up closely enough to help you - as usual, cultured cell lines don't necessarily match reality. But as a broad measure of whether a large set of compounds has a reasonable chance of getting through cell membranes, the assay's not so bad.
First, we have a bunch of compounds with molecular weights between 350 and 400 (a very desirable space to occupy). The Y axis is the partitioning between the two sides of the cells, and the X axis is LogD, a standard measure of compound greasiness. That thin blue line is the cutoff for 100 nanomoles/sec of compound transport, so the green compounds above it travel across the membrane well, and the red ones below it don't cross so readily. You'll note that as you go to the left (more and more polar, as measured by LogD), the proportion of green compounds gets smaller and smaller. They'd rather hang out in the water than dive through any cell membranes, thanks.
So if you want a 50% chance of hitting that 100 nmol/sec transport level, then you don't want to go much more polar than a LogD of 2. But that's for compounds in the 350-400 weight range - how about the big heavyweights? Those are shown in the second graph, for compounds greater than 500. Note that the distribution has scrunched disturbingly. Now almost everything is lousy, and if you want that 50% chance of good penetration, you're going to have to get up to a LogD of at least 4.5.
That's not too good, because you're always fighting a two-front war here. If you make your compounds that greasy (or more) to try to improve their membrane-crossing behavior, you're opening yourself up (as I said the other day) to more metabolic clearance and more nonspecific tox, as your sticky compounds glop onto all sorts of things in vivo. (They'll be fun to formulate, too). Meanwhile, if you dip down too far into that really-polar left-hand side, crossing your fingers for membrane crossing, you can slide into the land of renal clearance, as the kidneys vacuum out your water-soluble wonder drug and give your customers very expensive urine.
But in general, you have more room to maneuver in the lower molecular weight range. The humungous compounds tend not to get through membranes at reasonable LogD values. And if you try to fix that by moving to higher LogD, they tend to get chewed up or do unexpectedly nasty things in tox. Stay low and stay happy.
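To make the rule of thumb concrete, here's a toy sketch in Python. The LogD thresholds are my rough reading of the graphs described above (roughly 2 for the 350-400 weight range, 4.5 for the heavyweights) - they're illustrative, not validated cutoffs, and the function name is my own invention:

```python
def permeability_odds(mol_weight, logd):
    """Very rough call on a compound's chance of decent Caco-2 flux,
    per the trends in the two graphs. Thresholds are approximate
    readings of the plots (MW 350-400 vs. MW > 500), not official
    cutoffs; the gap in between is lumped with the heavyweights here."""
    if mol_weight <= 400:
        threshold = 2.0   # ~50% chance of good transport at LogD 2
    else:
        threshold = 4.5   # big compounds need to be much greasier
    return "decent" if logd >= threshold else "poor"

print(permeability_odds(380, 2.5))  # small-ish, moderately greasy: decent
print(permeability_odds(550, 2.5))  # same greasiness, much bigger: poor
```

Which is the two-front war in miniature: the second compound only gets to "decent" by climbing to a LogD that brings its own clearance and tox headaches.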
From Nature comes word of a brainlessly restrictive new law that's about to pass in Turkey. The country started out trying to get in line with EU regulations on genetically-modified crops, and ended up with a bill that forbids anyone from modifying the DNA of any organism at all - well, unless you submit the proper paperwork, that is:
. . .Every individual procedure would have to be approved by an inter-ministerial committee headed by the agriculture ministry, which is allowed 90 days to consider each application with the help of experts.
The committee would be responsible for approving applications to import tonnes of GM soya beans for food — but also for every experiment involving even the use of a standard plasmid to transfer genes into cells. Work with universally used model organisms, from mice and zebrafish to fruitflies and bacteria, would be rendered impossible. Even if scientists could afford to wait three months for approval of the simplest experiment, the committee would be overwhelmed by the number of applications. One Turkish scientist who has examined the law estimates that his lab alone would need to submit 50 or so separate applications in a year.
It's no doubt coming as a surprise to them that biologists modify the DNA of bacteria and cultured mammalian cells every single day of the week. Actually, it might come as a surprise to many members of the public, too - we'll see if this becomes a widespread political issue or not. . .
Just a quick note to say that traffic here broke all the house records last month - over 440,000 page views (partly thanks to a late surge in interest in the wonderful properties of dioxygen difluoride). The number of people interested in this sort of thing continues to exceed my estimates. . .!
I've been involved in a mailing list discussion that I wanted to open up to a wider audience in drug discovery, so here goes. We spend our time (well, a lot of it, when we're not filling out forms) trying to get compounds to bind well to our targets. And that binding is, of course, all about energy: the lower the overall energy of the system when your compound binds, relative to the starting state, the tighter the binding.
That energy change can be broken down (as can all chemical free energy changes) into an enthalpic part and an entropic part (that latter one depends on temperature, but we'll assume that everything's being done at a constant T and ignore that part). Roughly speaking, the enthalpic component is where you see effects of hydrogen bonds, pi-pi stacking, and other such "productive" interactions, and the entropic part is where you're pushing water molecules and side chains around - hydrophobic interactions and such.
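For the bookkeeping-minded, here's a minimal sketch of that decomposition in Python. The two standard relationships (dG = dH - T*dS, and dG = RT ln Kd for the dissociation constant) are textbook thermodynamics; the two example compounds and their numbers are invented purely for illustration:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)
T = 298.0     # assume a constant temperature of 298 K throughout

def delta_g(delta_h, delta_s):
    """Binding free energy (kcal/mol) from its enthalpic and entropic
    parts: dG = dH - T*dS."""
    return delta_h - T * delta_s

def kd_from_dg(dg):
    """Dissociation constant (molar) from dG = RT ln Kd."""
    return math.exp(dg / (R * T))

# Two hypothetical compounds with the same overall dG of -10 kcal/mol:
# one enthalpy-driven (pays an entropy penalty, wins on hydrogen bonds),
# one entropy-driven (hydrophobic-style binding).
dg_enthalpic = delta_g(-12.0, -2.0 / T)
dg_entropic  = delta_g(-2.0, 8.0 / T)
print(kd_from_dg(dg_enthalpic))  # ~46 nM
print(kd_from_dg(dg_entropic))   # ~46 nM - identical affinity
```

The point of the sketch: the assay can't tell these two apart, since Kd only sees the total dG. Pulling the enthalpy and entropy terms apart takes calorimetry, which is where the discussion below comes in.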
That's a gross oversimplification, but it's a place to start. It's important to remember that these things are all tangled together in most cases. If you come in with a drug molecule and displace a water molecule that was well-attached to your binding pocket, you've broken some hydrogen bonds - for which you'll pay in enthalpy. But you may well have formed some, too, to your molecule - so you'll get some enthalpy term back. And by taking a bound water and setting it free, you'll pick up some good entropy change, too. But not all waters are so tightly bound - there are a few cases where they're actually at a lower entropy state in a protein pocket than they are out in solution, so displacing one of those actually hurts you in entropy. Hmm.
And as I mentioned here, you have the motion of your drug molecule to consider. If it goes from freely rotating to stuck when it binds (as it may well), then you're paying entropy costs. (That's one reason why tying down your structure into a ring can help so dramatically, when it helps at all). And don't forget the motion of the protein overall - if it's been flopping around until it folds over and clenches down on your molecule, there's another entropy penalty for you, which you'd better be able to make up in enthalpy. And so on.
There's been a proposal, spread most vigorously by Ernesto Freire of Johns Hopkins, that drug researchers should use calorimetry to pick compounds that have the biggest fraction of their binding from enthalpic interactions. (That used to be a terrible pain to do, but recent instruments have made it much more feasible). His contention is that the "best in class" drugs in long-lived therapeutic categories tend to move in that direction, and that we can use this earlier in our decision-making process. People doing fragment-based drug discovery are also urged to start with enthalpically-biased fragments, so that the drug candidate that grows out from them will have a better chance of ending up in the same category.
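To make that selection idea concrete, here's a toy ranking along the lines Freire suggests. The compound names and ITC-style numbers are invented for illustration; the only real content is the arithmetic of asking what fraction of the binding free energy comes from the enthalpy term:

```python
# Hypothetical calorimetry results: (name, dH, -T*dS), all in kcal/mol.
# All three have the same total dG of -10 kcal/mol, i.e. equal affinity.
compounds = [
    ("A", -9.0, -1.0),   # mostly enthalpy-driven
    ("B", -3.0, -7.0),   # mostly entropy (hydrophobic) driven
    ("C", -6.0, -4.0),   # in between
]

def enthalpy_fraction(dh, minus_tds):
    """Share of the binding free energy supplied by enthalpy."""
    dg = dh + minus_tds
    return dh / dg

# Rank by how enthalpy-driven the binding is - the early decision
# Freire proposes making, since potency alone can't distinguish them:
for name, dh, mtds in sorted(compounds,
                             key=lambda c: -enthalpy_fraction(c[1], c[2])):
    print(name, round(enthalpy_fraction(dh, mtds), 2))
```

Run as written, compound A comes out on top even though all three are equally potent - which is exactly why the potency assay alone can't make this call for you.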
One possible reason for all this is that drugs that get most of their binding from sheer greasiness, fleeing the water to dive into a protein's sheltering cave, might not be so picky about which cave they pick. There's a persistent belief, which I think is correct, that very hydrophobic compounds tend to have tox problems, because they're often just not selective enough about where they bind. And then they tend to get metabolized and chewed up more, too, which adds to the problem.
And all that's fine. . .except for one thing: is anyone actually doing this? That's the question that came up recently, and (so far), for what it's worth, no one's willing to speak up and say that they are. Perhaps all this is a new enough consideration that all the work is still under wraps. But it will be interesting to see if it holds up or not. We need all the help we can get in drug discovery, so if this is real, then it's welcome. But we also don't need to run more assays that only confuse things, either, so it would be worth knowing if drug-candidate calorimetry falls into that roomy category, too. Opinions?
I hate to start the week out like this, but I have a report that Lilly is looking to cut quite a few chemistry positions (and maybe others), with word to come on Friday, March 12. Anyone else have anything on this?
Update: to clarify, these appear to be the layoffs that were announced last fall, not a new series. It's just that we finally seem to be finding out who stays and who doesn't. . .