Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship on his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
Here's an interesting list (PDF) of patent activity for life-science firms in the New England area. What I can't quite work out is how the numbers were generated. For example, it's very strange that Pfizer shows up with 3 patents from last year. A quick look through the databases shows more issued patents than that, although many of the ones I'm seeing are probably continuations-in-part of older patents.
But at any rate, if there's any consistent method of evaluating patents that shows Pfizer with fewer patents over the last few years than the likes of Neurogen and Nitromed - and that's what this one shows - then something's quite odd. I've emailed the people at MassHighTech.com to ask what's up.
Update: an email from the magazine says that the patent count is supposed to represent just the ones from the New England area. That does clear out a lot of Pfizer ones which originated from Sandwich, St. Louis, and other exotic ports. But I still can't get the numbers to come out right. I looked through about a quarter of the 2009 US patents assigned to Pfizer, and found five or six out of Groton/New London just in that group. I've emailed the magazine again about this. . .
Well, now, this is a disappointment. In a new Angewandte Chemie paper, a French team reports synthesizing trinitropyrazole. And it's. . .well, it's well-behaved. Surprisingly insensitive. Not that touchy. Might actually be useful as a storable high-energy material that could actually be handled.
The fools! Don't they realize that Angewandte is the place to unload the barely-in-our-plane-of-existence compounds, the sweat-starting, nostril-flaring "How could it blow up? It's in liquid nitrogen!" stuff? Surely there's a better home for things with actual utility, the Journal of Not So Horrible Once You've Made Them, Really, or "That Wasn't So Bad Now, Was It" Communications. Sheesh.
While we're on the subject of patents, PatentBaristas has a good summing-up of the Ariad decision I mentioned here last week. There is indeed a written description requirement for a patent, and it's separate from enablement, and it had better be good.
I haven't commented so far on the decision yesterday in the Myriad Genetics case involving their breast cancer assay gene patents. This is surely going to be appealed, and we're not going to really know what's up here until the CAFC has a say. And who knows? This is the sort of case that might go even further than that.
That's what the folks at Patently Obvious think, at any rate. They note that this decision is rather far out of the usual range of case law on patentability, and will likely be reversed on appeal. And then?
A new paper in PLoS Biology looks at animal model studies reported for the treatment of stroke. The authors use statistical techniques to try to estimate how many have gone unreported. From a database with 525 sources, covering 16 different attempted therapies (which together come to 1,359 experiments and 19,956 animals), they find that only a very small fraction of the publications (about 2%) report no significant effects, which strongly suggests that there is a publication bias at work here. The authors estimate that there may well be around 200 experiments that showed no significant effect and were never reported, whose absence would account for around one-third of the efficacy reported across the field. In case you're wondering, the therapy least affected by publication bias was melatonin, and the one most affected seems to be administering estrogens.
I hadn't seen this sort of study before, and the methods they used to arrive at these results are interesting. If you plot the precision of the studies (Y axis) versus the effect size (X axis), you should (in theory) get a triangular cloud of data. As the precision goes down, the spread of measurements across the X-axis increases, and as the precision goes up, the studies should start to converge on the real effect of the treatment, whatever that might be. (In this study, the authors looked only at reported changes in infarct size as a measure of stroke efficacy). But in many of the reported cases, the inverted-funnel shape isn't symmetrical - and every single time that happens, it turns out that the gaps are in the left-hand side of the triangle, the not-as-precise and negative-effect regions of the plots. This doesn't appear to be just due to less-precise studies tending to show positive effects for some reason - it strongly suggests that there are negative studies that just haven't been reported.
The authors point out that applying their statistical techniques to reported human clinical studies is more problematic, since smaller (and thus less precise) trials may well involve unrepresentative groups of patients. But animal studies are much less prone to this problem.
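As a rough illustration of the censoring mechanism the authors infer, here's a toy simulation (all the numbers below are invented for illustration, not taken from the paper): studies of a modest true effect are run at various sizes, only the "significant" ones survive to publication, and the surviving literature overstates the effect.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2  # assumed true treatment effect (arbitrary units)

def simulate_study(n):
    """One animal study: observed mean effect over n animals (sd = 1 per animal)."""
    obs = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)]
    mean = statistics.fmean(obs)
    se = statistics.stdev(obs) / n ** 0.5
    significant = mean / se > 1.96  # crude one-sided significance test
    return mean, significant

# Run many studies at a mix of (mostly underpowered) sample sizes
studies = [simulate_study(random.choice([5, 10, 20, 40])) for _ in range(2000)]

all_effects = [m for m, _ in studies]
published = [m for m, sig in studies if sig]  # the publication filter

# The published literature overestimates the true effect
bias = statistics.fmean(published) - statistics.fmean(all_effects)
```

Plotting precision against effect size for `published` alone reproduces the asymmetric funnel the authors describe: the imprecise, small-or-negative studies are exactly the ones the filter removes.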
The loss of experiments that showed no effect shouldn't surprise anyone - after all, it's long been known that publishing such papers is just plain harder than publishing ones that show something happening. There's an obvious industry bias toward only showing positive data, but there's an academic one, too, which affects basic research results. As the authors put it:
These quantitative data raise substantial concerns that publication bias may have a wider impact in attempts to synthesise and summarise data from animal studies and more broadly. It seems highly unlikely that the animal stroke literature is uniquely susceptible to the factors that drive publication bias. First, there is likely to be more enthusiasm amongst scientists, journal editors, and the funders of research for positive than for neutral studies. Second, the vast majority of animal studies do not report sample size calculations and are substantially underpowered. Neutral studies therefore seldom have the statistical power confidently to exclude an effect that would be considered of biological significance, so they are less likely to be published than are similarly underpowered “positive” studies. However, in this context, the positive predictive value of apparently significant results is likely to be substantially lower than the 95% suggested by conventional statistical testing. A further consideration relating to the internal validity of studies is that of study quality. It is now clear that certain aspects of experimental design (particularly randomisation, allocation concealment, and the blinded assessment of outcome) can have a substantial impact on the reported outcome of experiments. While the importance of these issues has been recognised for some years, they are rarely reported in contemporary reports of animal experiments.
And there's an animal-testing component to these results, too, of course. But lest activists seize on the part of this paper that suggests that some animal testing results are being wasted, they should consider the consequences (emphasis below mine):
The ethical principles that guide animal studies hold that the number of animals used should be the minimum required to demonstrate the outcome of interest with sufficient precision. For some experiments, this number may be larger than those currently employed. For all experiments involving animals, nonpublication of data means those animals cannot contribute to accumulating knowledge and that research syntheses are likely to overstate biological effects, which may in turn lead to further unnecessary animal experiments testing poorly founded hypotheses.
This paper is absolutely right about the obligation to have animal studies mean something to the rest of the scientific community, and it's clear that this can't happen if the results are just sitting on someone's hard drive. But it's also quite possible that for even some of the reported studies to have meant anything, they would have had to use more animals in the first place. Nothing's for free.
Another promising Phase II oncology idea goes into the trench in Phase III: GenVec has been working on a gene-therapy approach ("TNFerade") to induce TNF-alpha expression in tumors. That's not a crazy idea, by any means, although (as with all attempts at gene therapy) getting it to work is extremely tricky.
And so it has proved in this case. It's been a long, hard process finding that out, too. Over the years, the company has looked at TNFerade for metastatic melanoma, soft tissue sarcoma, and other cancers. They announced positive data back in 2001, and had some more encouraging news on pancreatic cancer in 2006 (here's the ASCO abstract on that one). But last night, the company announced that an interim review of the Phase III trial data showed that the therapy was not going to make any endpoint, and the trial was discontinued. Reports are that TNFerade is being abandoned entirely.
This is bad news, of course. I'd very much like gene therapy to turn into a workable mode of treatment, and I'd very much like for people with advanced pancreatic cancer to have something to turn to. (It's truly one of the worst diagnoses in oncology, with a five-year survival rate of around 5%). A lot of new therapeutic ideas have come up short against this disease, and as of yesterday, we can add another one to the list. And we can add another Promising in Phase II / Nothing in Phase III drug to the list, too, the second one this week. . .
We get reminded again and again that interesting Phase II results are only that: interesting, and no guarantee of anything. Antisoma (and their partner Novartis) is the latest company to illustrate that painful reality - their drug ASA404 (vadimezan) looked in Phase II as if it might be a useful addition to oncology treatments, but has completely missed its endpoints in the bigger, more realistic world of Phase III. The trial was halted after an interim analysis showed basically no hope of it showing benefit if things continued.
There are many reasons for why these things happen. Phase II trials are typically smaller, and their patient populations are more carefully selected. And they're quite susceptible to wishful thinking. They're designed to keep things going, to show some reason to proceed, and they often do. If your drug candidate makes it through Phase II, that may say more about how you designed the trial than it says about the compound.
That's not to say that getting past Phase II is meaningless. Compared to having no efficacy data at all, it's a big step. But Phase III, when a compound goes out to a larger and more diverse patient population, is a much bigger one. And plenty of candidates aren't up to it.
For the medicinal chemists in the audience, I wanted to strongly recommend a new paper from a group at Roche. It's a tour through the various sorts of interactions between proteins and ligands, with copious examples, and it's a very sensible look at the subject. It covers a number of topics that have been discussed here (and throughout the literature in recent years), and looks to be an excellent one-stop reference.
In fact, read the right way, it's a testament to how tricky medicinal chemistry is. Some of the topics are hydrogen bonds (and why they can be excellent keys to binding or, alternatively, of no use whatsoever), water molecules bound to proteins (and why disturbing them can account for large amounts of binding energy, or, alternatively, kill your compound's chances of ever binding at all), halogen bonds (which really do exist, although not everyone realizes that), interactions with aryl rings (some of which can be just as beneficial coming in 90 degrees to where you might imagine), and so on.
And this is just to get compounds to bind to their targets, which is the absolute first step on the road to a drug. Then you can start worrying about how to have your compounds not bind to things you don't want (many of which you probably don't even realize are out there). And about how to get them to decent blood levels, for a decent amount of time, and into the right compartments of the body. And at that point, it's nearly time to see if they do any good for the disease you're trying to target!
As we slowly attack the major causes of disease, and necessarily pick the low-hanging fruit in doing so, it can get harder and harder to see the effects of the latest advances. Nowhere, I'd say, is that more true than for cardiovascular disease, which is now arguably the best-served therapeutic area of them all. It's not that there aren't things to do (or do better) - it's that showing the benefit of them is no easy task.
Robert Fortner has a good overview of the problem here. The size of the trials needed in this area is daunting, but they have to be that size to show the incremental improvements that we're down to now. He also talks about oncology, but that one's a bit of a different situation, to my mind. There's plenty of room to show a dramatic effect in a lot of oncology trials, it's just that we don't know how to cause one. In cardiovascular, on the other hand, the space in which to show something amazing has flat-out decreased. This is a feature, by the way, not a bug. . .
Technical book author (and occasional commenter here) Robert Bruce Thompson has a channel on YouTube called "The Home Scientist" that's quite interesting. Many of these seem to be companion videos for his book, The Illustrated Guide to Home Chemistry Experiments. This is real, well-done chemistry with reagents that can be easily purchased and manipulated by a competent non-chemist. Well worth sending on to people who would like to get a feel for what the science is like!
The discussion of "privileged scaffolds" in drugs here the other day got me to thinking. A colleague of mine mentioned that there may well be structures that don't hit nearly as often as you'd think. The example that came to his mind was homopiperazine, and he might have a point; I've never had much luck with those myself. That's not much of a data set, though, so I wanted to throw the question out for discussion.
We'll have to be careful to account for Commercial Availability Bias (which at least for homopiperazines has decreased over the years) and Synthetic Tractability Bias. Some structures don't show up much because they just don't get made much. And we'll also have to be sure that we're talking about the same things: benzo-fused homopiperazines (and other fused seven-membered rings) hit like crazy, as opposed to the monocyclic ones, which seem to be lower down the scale, somehow.
It's not implausible that there should be underprivileged scaffolds. The variety of binding sites is large, but not infinite, and I'm sure that it follows a power-law distribution like so many other things. The usual tricks (donor-acceptor pairs spaced about so wide apart, pi-stacking sandwiches, salt bridges) surely account for much more than their random share of the total amount of binding stabilization out there in the biosphere. And some structures are going to match up with those motifs better than others.
So, any nominations? Have any of you had structural types that seem as if they should be good, but always underperform?
. . .now's a heck of a time to buy. I just noticed this in my e-mail - thanks to Pfizer, and their appetite for closing down research buildings, you now have the opportunity to buy massive piles of once-useful instruments at auction.
Don't let the fact that a massive drug company has no need for all this equipment put you off. You might be able to find a use for it! It's good stuff: high-field NMRs, LC/mass spec machines of all sorts, liquid handlers, robotics platforms, cell culture apparatus, spectrophotometers, microscopes, centrifuges. . .the list is a long one. Removal of evil spirits is, as far as I can tell, not included. But otherwise, there's everything you'd need to start a productive research company, except the employees. And there are plenty of those on the market, too, you know.
Nature has a review of a new book on the anti-aging field, Eternity Soup by Greg Critser, and I found this part very instructive. The same things apply to several other therapeutic areas where people see fast money to be made:
Critser's methodical portrayal of a host of anti-ageing practitioners reveals some fascinating people who seek to convince others that they can purchase longer and healthier lives like any other commodity. He makes clear that many anti-ageing treatments are based more on faith healing than on science, and that the industry defends them and presents them to the public with evangelical zeal. Scientific gerontologists who point out the lack of empirical evidence behind the claims are shouted down, sued for libel or made fun of as lab technicians or statisticians with no experience in treating patients.
Critser became aware during his research of why the ridiculed scientific gerontologists find the anti-ageing industry so aggravating. The industry closely monitors the field for any advances, and when it spots something that might be turned into a commercial enterprise, the product is repackaged, branded and sold to the public as the next great breakthrough of its own invention. . .
It's interesting, though, that the cancer-cure quacks tend not to ride so much on the current research. A lot of that stuff seems just to be completely made up, without even a connection to something in the scientific literature. Perhaps that's because there are occasional spontaneous remissions from cancer, but none from old age. . .
In recent years, readers of the top-tier journals have been bombarded with papers on nanotechnology as a possible means of drug delivery. At the same time, there's been a tremendous amount of time and money put into RNA-derived therapies, trying to realize the promise of RNA interference for human therapies. Now we have what I believe is the first human data combining both approaches.
Nature has a paper from Caltech, UCLA, and several other groups with the first data on a human trial of siRNA delivered through targeted nanoparticles. This is only the second time siRNA has been tried systemically on humans at all. Most of the previous clinical work has involved direct injection of various RNA therapies into the eye (which is a much less hostile environment than the bloodstream), but in 2007, a single Gleevec-resistant leukaemia patient was dosed in a nontargeted fashion.
In this study, metastatic melanoma patients, a population that is understandably often willing to put themselves out at the edge of clinical research, were injected with engineered nanoparticles from Calando Pharmaceuticals, containing siRNA against the ribonucleotide reductase M2 (RRM2) target, which is known to be involved in malignancy. The outside of the particles contained a protein ligand to target the transferrin receptor, an active transport system known to be upregulated in tumor cells. And this was to be the passport to deliver the RNA.
A highly engineered system like this addresses several problems at once: how do you keep the RNA you're dosing from being degraded in vivo? (Wrap it up in a polymer - actually, two different ones in spherical layers). How do you deliver it selectively to the tissue of interest? (Coat the outside with something that tumor cells are more likely to recognize). How do you get the RNA into the cells once it's arrived? (Make that recognition protein something that gets actively imported across the cell membrane, dragging everything else along with it). This system had been tried out in models all the way up to monkeys, and in each case the nanoparticles could be seen inside the targeted cells.
And that was the case here. The authors report biopsies from three patients, pre- and post-dosing, that show uptake into the tumor cells (and not into the surrounding tissue) in two of the three cases. What's more, they show that a tissue sample has decreased amounts of both the targeted messenger RNA and the subsequent RRM2 protein. Messenger RNA fragments showed that this reduction really does seem to be taking place through the desired siRNA pathway (there's been a lot of argument over this point in the eye therapy clinical trials).
It should be noted, though, that this was only shown for one of the patients, in which the pre- and post-dosing samples were collected ten days apart. In the other responding patient, the two samples were separated by many months (making comparison difficult), and the patient that showed no evidence of nanoparticle uptake also showed, as you'd figure, no differences in their RRM2. Why Patient A didn't take up the nanoparticles is as yet unknown, and since we only have these three patients' biopsies, we don't know how widespread this problem is. In the end, the really solid evidence is again down to a single human.
But that brings up another big question: is this therapy doing the patients any good? Unfortunately, the trial results themselves are not out yet, so we don't know. That two-out-of-three uptake rate, although a pretty small sample, could well be a concern. The only between-the-lines inference I can get is this: the best data in this paper is from patient C, who was the only one to do two cycles of nanoparticle therapy. Patient A (who did not show uptake) and patient B (who did) had only one cycle of treatment, and there's probably a very good reason why. These people are, of course, very sick indeed, so any improvement will be an advance. But I very much look forward to seeing the numbers.
Here's a new article on the concept of "privileged scaffolds", the longstanding idea that there seem to be more biologically active compounds built around some structures than others. This doesn't look like it tells me anything I didn't know, but it's a useful compendium of such structures if you're looking for one. Overall, though, I'm unsure of how far to push this idea.
On the one hand, it's certainly true that some structural motifs seem to match up with binding sites more than others (often, I'd say, because of some sort of donor-acceptor pair motif that tends to find a home inside protein binding sites). But in other cases, I think that the appearance of what looks like a hot scaffold is just an artifact of everyone ripping off something that worked - others might have served just as well, but people ran with what had been shown to work. And then there are other cases, where I think that the so-called privileged structure should be avoided for everyone's good: our old friend rhodanine makes an appearance in this latest paper, for example. Recall that this one has been referred to as "polluting the literature", with which judgment I agree.
I've spoken about fragment-based drug design and ligand efficiency here a few times. There's a new paper in J. Med. Chem. that puts some numbers on that latter concept. (Full disclosure - I've worked with its author, although I had nothing to do with this particular paper).
For the non-chemists in the crowd who want to know what I'm talking about, fragment-based methods are an attempt to start with smaller, weaker-binding chemical structures than we usually work with. But if you look at how much affinity you're getting for the size of the molecules, you find that some of these seemingly weaker compounds are actually doing a great job for their size. Starting from these and building out, with an eye along the way toward keeping that efficiency up, could be a way of making better final compounds than you'd get by starting from something larger.
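For the curious, the ligand efficiency being discussed here is just binding free energy divided by heavy atom count. A minimal sketch (the Kd and heavy-atom numbers below are my own rough illustrations, not taken from the paper):

```python
import math

def ligand_efficiency(kd_molar, heavy_atoms, temp_k=300.0):
    """Binding free energy per heavy atom, in kcal/mol per heavy atom.

    LE = -RT * ln(Kd) / N_heavy, with R = 0.0019872 kcal/(mol*K).
    """
    rt = 0.0019872 * temp_k  # ~0.6 kcal/mol at 300 K
    delta_g = rt * math.log(kd_molar)  # negative for sub-molar Kd
    return -delta_g / heavy_atoms

# Hypothetical comparison: a weak, large lead vs. a potent, compact drug.
# (Heavy-atom counts here are assumptions for illustration.)
large_lead = ligand_efficiency(kd_molar=53e-6, heavy_atoms=45)  # ~0.13
compact    = ligand_efficiency(kd_molar=1e-9,  heavy_atoms=30)  # ~0.41
```

The often-quoted rule of thumb is that something around 0.3 kcal/mol per heavy atom is respectable, which is why a 53 micromolar lead weighing over 600 looks so grim by this metric.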
Looking over a number of examples where the starting compounds can be compared to the final drugs (not a trivial data set to assemble, by the way), this work finds that drugs, compared to their corresponding leads, tend to have similar to slightly higher binding efficiencies, although there's a lot of variability. They also tend to have similar logP values, which is a finding that doesn't square with some previous analyses (which showed things getting worse during development). But drugs are almost invariably larger than their starting points, so no matter what, one of the keys is not to make the compounds greasier as you add molecular weight. (My "no naphthyls" rule comes from this, actually).
There are a few examples of notably poor ligand-efficient starting structures that have nonetheless been developed into drugs. Interestingly, several of these are the HIV protease inhibitors, with Reyataz (atazanavir) coming in as the least ligand-efficient drug in the whole data set. A look at its structure will suffice. The wildest one on the list appears to be no-longer-marketed amprenavir, whose original lead was 53 micromolar and weighed over 600, nasty numbers indeed. I would not recommend emulating that one. In case you're wondering, the most ligand efficient drug in the set is Chantix (varenicline).
In the cases where ligand efficiency actually went down along the optimization route, inspection of the final structures shows that in many cases, the discovery team was trading efficiency for some other property (PK, solubility, etc.) To me, that's another good argument to make things as efficient as you can, because that gives you something to trade. A big, chunky, lashed-together structure doesn't give you much room to maneuver.
You know, you'd think that we'd understand the way things bind to proteins well enough to be able to explain why biotin sticks so very, very tightly to avidins. That's one of the most impressive binding events in all of biology, short of pushing electrons and forming a solid chemical bond - biotin's stuck in there at femtomolar levels. It's so strong and so reliable that this interaction is the basis for untold numbers of laboratory and commercial assays - just hang a biotin off one thing, expose it to something else that has an avidin (most often streptavidin) coated on it, and it'll stick, or else something is Very Wrong. So we have that all figured out.
Wrong. Turns out that there's a substantial literature given to arguing about just why this binding is so tight. One group holds out for hydrophobic interactions (which seems rather weird to me, considering that biotin's rather polar by most standards). Another group has a hydrogen-bonding explanation, which (on the surface) seems more feasible to me. Now a new paper says that the computational methods applied so far can't handle electrostatic factors well, and that those are the real story.
I'm not going to take a strong position on any of these; I'll keep my head down while the computational catapults launch at each other. But it's definitely worth noting that we apparently can't explain the strongest binding site interaction that we know of. It's the sort of thing that we'd all like to be able to generate at will in our med-chem programs, but how can we do that when we don't even know what's causing it?
OK, enough politics around here for a while. It's time to talk about fat rats. When I last wrote about fructose around here, it was to highlight a paper that suggested that it had effects on satiety signaling in the brain. The hypothesis was that fructose could lead to an abnormal drop in ATP levels in the hypothalamus, leading to an inappropriate hunger signal. This is partially borne out by the results of infusing various sugars directly into the brains of rats: if you do that trick with glucose, the rats stop eating - their cells have detected abundant glucose, which is a signal that they've been fed recently. On the other hand, if you use fructose, the rodents actually eat more.
Some of the big questions, though, have been whether fructose does this under normal conditions in rats (that is, without the power-drill route of administration into the brain), and whether that result carries over to humans. There's a new paper from a group at Princeton that's sure to add fuel to the debate. They studied the effects on rats of access to high-fructose corn syrup (8% in water) versus 10% sucrose, with unlimited access to normal rat chow, and looked at whether it made a difference if you allowed access for half the day versus the whole 24 hours.
Over an 8-week period, the groups diverged significantly. The half-day corn syrup rats put on significantly more weight than the half-day sucrose rats did, even though (most interestingly) the corn syrup group turned out to be ingesting fewer calories from the added corn syrup than the sucrose rats were getting from their sugar water. That is, the difference in caloric intake (and thus the excess weight) was all coming from eating more chow.
When the study was extended to six months, it turned out that it didn't matter much if the rats had 12-hour or 24-hour access to the high-fructose corn syrup - by week 3, the weights of both groups had diverged from the controls. (Looking at the graphs, it appears that the 24-hour group may have done somewhat worse, but I don't think they reached statistical significance versus the 12-hour group). But that result is in male rats. The females showed what seems to be a much less dramatic effect. Only the 24-hour-HFCS group showed a significant weight difference from the controls.
Looking at the fat deposits the rats had laid down during this time shows another gender difference, although it doesn't help clear things up any. The males show a tendency for more fat pad mass, although the only measurement that reached significance was the abdominal fat for the 12-hour-a-day group. The females, although they didn't show nearly as wide a difference in weight gain, had much more significant differences in their fat mass (but only for the 24-hour-a-day HFCS group). Finally, in blood chemistry, none of the groups showed differences in insulin levels. But both of the male HFCS groups had elevated triglycerides, as did the 24-hour-HFCS females.
Taken together, it appears that rats (especially males) are able to adjust their caloric intake when given access to small amounts of sucrose, but not so much when given equivalent amounts of HFCS. Earlier work has shown that access to higher levels of sucrose or other sugars, though, will indeed cause rats to gain weight. But not everyone, it seems, even sees these effects. A study from last December looked at a variety of sweetened waters, given to rats 12 hours/day for ten weeks, but only three days out of each week. No differences in weight were seen, although it should be noted that in head-to-head tests, the rats preferred HFCS to agave or Stevia sweeteners. (I wish this group had run sucrose in this experiment, too).
So does this effect even apply across the board in rodents? And if it does, is it operating in humans as well? So far, no one has been able to find any short-term differences in satiety or blood chemistry when comparing HFCS with sucrose in humans. That alone (as mentioned in the earlier post linked in the first paragraph) makes you wonder if that fructose/brain hypothesis can hold up in people. But what about long-term effects, which may or may not have anything to do with that CNS-based mechanism?
As far as I can tell, we have no controlled data for that, which isn't surprising, considering the sort of experiment you'd have to run. Most people aren't in a position to have their food and liquid intake completely monitored for two or three months. But short of that, I'm not sure how we're ever going to straighten all this out.
One of the giants of medicinal chemistry has died today at the age of 85 - Sir James Black, who pioneered beta-adrenoceptor antagonists and many other areas in drug discovery. Keep in mind that earlier in his career, many people thought of the concept of a "receptor" as an abstract placeholder, not necessarily something with any physical meaning. We've come a long way since then, and his work is one of the big reasons why.
He was part of the "pure medicinal chemistry" Nobel Prize award of 1988, along with George Hitchings and Gertrude Elion. There's a good interview with him at that Nobel site, and here's a tribute to him on YouTube.
I mentioned Benford's Law in passing in this post (while speculating on how long people report their reactions to have run when publishing their results). That's the rather odd result that many data sets don't show a random distribution of leading digits - rather, 1 is the first digit around 30% of the time, 2 leads off about 18% of the time, and so on down.
For data that come from some underlying power-law distribution, this actually makes some sense. In that case, the data points spend more time being collected in the "lag phase" when they're more likely to start with a 1, and proportionally less and less time out in the higher-number-leading areas. The law only holds up when looking at distributions that cover several orders of magnitude - but all the same, it also seems to apply to data sets where there's no obvious exponential growth driving the numbers.
Deviations from Benford's Law have been used as corroborating evidence of financial fraud. Now a group from Astellas reports that several data sets used in drug discovery (such as databases of water solubility values) obey the expected distribution. What's more, they're suggesting that modelers and QSAR people check their training data sets to make sure that those follow Benford's Law as well, as a way to make sure that the data have been randomly selected.
Is anyone willing to try this out on a bunch of raw clinical data to see what happens? Could this be a way to check the integrity of reported data from multiple trial centers? You'd have to pick your study set carefully - a lot of the things we look for don't cover a broad range - but it's worth thinking about. . .
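For anyone who wants to play with the idea, here's a quick illustrative sketch (my own hypothetical code, not anything from the Astellas paper): generate data spanning several orders of magnitude, tally the leading digits, and compare against the Benford probabilities.

```python
import math
import random

def benford_expected():
    # Benford's Law: P(d) = log10(1 + 1/d) for leading digit d = 1..9
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x):
    # Normalize |x| into [1, 10) and take the integer part
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def digit_frequencies(values):
    counts = {d: 0 for d in range(1, 10)}
    for v in values:
        if v != 0:
            counts[leading_digit(v)] += 1
    total = sum(counts.values())
    return {d: counts[d] / total for d in counts}

# Data spread log-uniformly over six orders of magnitude follows
# Benford's Law closely; a narrow-range data set generally won't.
random.seed(0)
data = [10 ** random.uniform(0, 6) for _ in range(100_000)]
observed = digit_frequencies(data)
expected = benford_expected()
```

Running a narrow-range set (say, logP values between 1 and 5) through the same tally shows how quickly the distribution falls apart, which is exactly why you'd have to pick a clinical study set carefully.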
And it's an interesting decision. As mentioned here, the company's last stand was on questions of patentability, specifically the written description requirement. Well, the appeals court has ruled this morning, and Ariad's '516 patent does, finally, appear to be invalidated. There's more at Patently Obvious, who seem to be among the first with this story.
From what I can see, the court's decision makes it clear that there really has to be a description sufficient for one skilled in the art to reproduce an invention, and that stating your hypothesis isn't enough to meet this requirement. So in Ariad's case, claiming all sorts of (not yet existing) things to modulate NF-kB function doesn't fly, because they don't actually tell anyone how to do that, just how they wanted to own it if and when someone does. The written description requirement, the court holds, doesn't mean that you have to actually reduce something to practice (although I'd have to say, from my own perspective, that it most certainly would be a good idea to do so if you can), but you have to show how that could be done. "Patents are not awarded for academic theories, no matter how groundbreaking or necessary to the later patentable inventions of others" is a key quote.
Since I've written occasionally about the current health care reform efforts here, I feel as if I should say something now that a bill has passed the House. To be honest, though, I'm having a bit of trouble getting my thoughts in order, although I do feel the need to vent. Readers who aren't in the mood for my political opinions can skip this one.
Here goes: first off, it's rather hard for me to get past my anger at being told (repeatedly, by both the President and members of Congress) that this bill will "bend the cost curve" and on top of that, actually reduce the deficit. This is, in this case, such a transparent lie that it indicates actual contempt for their audience on the part of those repeating it. We can start with history and general principles: I have yet to hear of a state or federal health care system in this country that has not ended up costing hugely more than it was ever slated to.
I can get more specific in this case, though, since the entire bill was carefully structured to show a spurious deficit reduction (in order for it to be pushed through the budget reconciliation process, without which it could not have passed at all). Costs are pushed out past the Congressional Budget Office's ten-year time horizon, offloaded onto the states (whose Attorneys-General are now frantically trying to figure out what to do), or just blatantly left out. In the last category is the "doc fix", the adjustment to Medicare reimbursement rates that had to be dropped from the current bill in order to hocus the CBO numbers. The firm understanding between the interested parties is that the House will quietly pass that in the near future when not so many people are paying attention, and damn the numbers anyway. As I said above, "contempt" is the word that keeps coming to mind.
To my mind, this bill will indeed manage to provide health insurance to a portion of those now uninsured, but at a ferocious cost. And to that point, I was unhappy with the amount of money the Bush administration spent, but had I only known what was coming, I would have enjoyed the fiscal restraint while I could. I believe that we're spending entirely too much money that we don't have, and not getting that much in return for it (other than lots of warm, heartfelt favors to friendly constituencies that can be expected to support the current administration).
And here's my last point: my own industry's trade association, PhRMA, believes itself to be in that last category. Whether you felt like it or not, if you work in the drug industry, you spent a lot of money to help get this bill passed. I haven't heard the details of the quid pro quo deals for our business, but no doubt there are some nice ones hidden in the recesses of the bill (or just outside it, like the doc fix). My worry, though, is that dealing with the government on this level is like dealing with a hungry bear. Sooner than we think, the costs of this bill will kick in. At that point, I predict that we will find ourselves in yet another Health Care Crisis, having failed to bend any cost curves whatsoever. Then the bear will turn its head to us again, but this time, with a new look in its eyes.
Here's the sort of thing we'll be seeing more and more of - on the whole, I think it's a good development, but it's certainly possible that one's mileage could vary:
Ginkgo’s BioBrick Assembly Kit includes the reagents for constructing BioBrick parts, which are nucleic acid sequences that encode a specific biological function and adhere to the BioBrick assembly standard. The kit, which includes the instructions for putting those parts together, sells for $235 through New England BioLabs, an Ipswich, MA-based supplier of reagents for the life sciences industry.
Shetty didn’t release any specific sales figures for the kit, but said its users include students, researchers, and industrial companies. The kit was also intended to be used in the International Genetically Engineered Machine competition (iGEM), in Cambridge, MA. The undergraduate contest, co-launched by Knight, challenges student teams to use the biological parts to build systems and operate them in living cells.
I realize that there have been about one Godzillion of these videos by now (the Michael Jackson one is a particular favorite), but at the same time, I'd be lying if I said that I didn't have similar thoughts the last time I had a manuscript rejected:
The movie is, of course, "Downfall" (Der Untergang), and probably many more people have been motivated to see the whole thing thanks to these YouTube clips. I have to turn the sound down a lot to really get into the spirit of the parodies; my German is still sufficient to make the reworked subtitles clash. Which reminds me of a distinctly weird sensation, watching a World War II documentary at home with my father after returning from my German post-doc stint. When archival footage of Hitler came up on the screen I suddenly found that I could follow his speech, and watched in amazement as he pounded the podium, hitting the compound verbs at the end of his sentence.
Update, March 19: I've added a few more suppliers to the list, and broken out a third category for the mixed reviews. And I note in the comments that someone claiming to be Kathy Yu from 3B Chemicals is threatening me with legal action. The IP address resolves only to AT&T Internet Services, but there does appear to be someone from that name who works at 3B. I hope, for her sake and that of the company, that this is someone impersonating her, because whoever is leaving these comments is doing 3B no favors.
And since I am reporting opinions, both my own and those of other contributors that I have no reason to doubt, and am doing so without malicious intent, I will cheerfully ignore all legal threats.
OK, here are the lists of good companies and not-so-good companies, based on my experience and those of readers. I've had some personal communications, too, which I've added to the data set. As more reports come in, this will be the post that's updated, so it can serve as a reference.
I should note up front that I'm not listing the Big Guys, since (while they can have their ups and downs), you generally know that they're going to send you something. What we're looking at are the companies that you might not have dealt with, but want to know if they're reliable. And that brings us to the:
The Good:
ABCR: good prices and hit rate on orders. Very professional.
Activate: expensive, but what's there is there, and it's the right stuff.
Adesis: not cheap, but very reliable and willing to work with customers to deliver similar compounds.
Advanced Chem Tech: recommended for peptide/amino acid stuff.
AK Scientific: several good reports on availability and purity.
Alinda: have ordered one thing from them, which was fine.
Anaspec: good reports on reliability.
Apollo: good stuff, but catalog needs to be a bit more in line with their real stock.
Array: very pricey, but it's all there.
Astatech: good experience reported.
Bionet: interesting catalog, doesn't back-order you.
Chembridge: a big catalog, but it's all real. Occasional purity problem.
Chem/Impex: good hit rate on availability. Some questions on their chiral purities.
Combi-Blocks: good list of useful intermediates, delivers on them.
Enamine: similar to ChemBridge in many ways. Big catalog. Not the fastest out there.
Florida Center for Heterocyclics: occasional purity issues, but they do deliver.
Frontier: great source for boronic acids and the like.
Life Chemicals: have had good experiences with compound purity here.
Lu: good source for custom peptides.
Matrix: interesting catalog, which they will really ship to you.
Maybridge: on the border of being one of the big guys. Very reliable.
Midwest: good reports on reliability.
Netchem: custom synthesis, but (for once!) with good turnaround and purity.
Oakwood/Fluorochem: good prices and reliability.
Peptide Protein Research: good for custom peptides.
Pharmacore: good stock of intermediates.
Rieke: reliable, only game in town for many odd reagents.
Strem: well known for quality inorganics and organometallics.
Synquest: used to be PCR. Good customer service.
Synthonix: stuff is in stock, customer service is responsive.
TCI: has always delivered, and quickly.
Transworld: very reliable and responsive.
Tyger: have never had a problem with them.
Waterstone Chemicals: good experience on pricing and availability.
The Mixed Reviews:
American Custom Chemicals (ACC): several tales of bad purity and customer service, but others have had nothing but good experiences with them.
3B Chemicals: "will lead you on for months". Several bad experiences reported. On the other hand, I've just heard directly from a colleague who's had good luck with them.
J&W Pharmlab: bad experience reported (delays and purity), but others OK.
Ontario: one good report, but others complain of availability and lead times.
SPECS: mixed reports, but overall positive.
The Not So Good:
Ambinter: seems to source a lot of stuff from mystery suppliers. Many delays.
Any supplier, sad to say, with "Hangzhou" or "Shanghai" in the name. Tend to have absolutely nothing on the shelf, and if there's even a listed price, it's science fiction.
Anichem: very bad experience here with unexplained delays.
Beta Pharma: bad experience reported.
ChemMaker: very negative report on customer service and responsiveness.
City Chemicals: several bad experiences reported.
Combi-Phos: several reports of purity problems.
Rarechem: haven't come across anyone with a good report here.
UK Green: a bad experience reported.
Uorsy: nothing ever seems to be in stock.
Zelinsky: several bad experiences reported.
I'm not sure that the term will catch on, but this new paper proposes "antedrug" to describe a compound that's deliberately designed to be cleaved quickly to something inactive. I see where they're coming from - reverse of "prodrug" - but in spoken English it's too close to "anti-drug". Hasn't someone come up with this concept before? Perhaps they didn't bother to name it. . .
Is it just me, or is the fine chemicals supply business getting even more out of hand than usual? I was just talking with a colleague who'd sourced an interesting intermediate, at the (steep!) price of about $900 for a gram. She placed the order and. . .you guessed it, the supplier immediately back-ordered it, saying the price had changed. It took someone from Purchasing to drag the new quote out of them (they apparently wouldn't give it over the phone). Now (to no one's surprise, I'm sure) the material is over $3000/gram, and will have a lead time of weeks.
This sort of thing has gone on for a long time, of course. But my impression is that there's more of it than ever. When the Chinese and ex-Soviet suppliers began to appear some years ago, they were often a pretty cheap source of some unusual compounds. But that's changed.
My belief - and I'll be glad to hear from people who do more compound purchasing than I do - is that the Chinese outfits especially have decided in recent years that they have some real pricing power, and are pushing it to see how far they can get. Add that to the hand-waving don't-you-worry-now aspect of many of their product lists, and you have a recipe for irritation and wasted time. (Another colleague described some of these online catalogs as "things they wish they sold".)
A previous comment on a post like this listed some suppliers that had been found to be reliable, and I'll reproduce that here, in no particular order: Maybridge, Enamine, Asinex, Key Organics, ChemBridge, Specs, ASDI Biosciences, InterBioScreen, Vitas M Labs, Life Chemicals, Labotest, and TimTec. Suppliers of weirdo outlier compounds that nonetheless tend to come through were Albany Molecular, Chem T&I, Florida Center for Heterocyclic Compounds, and Princeton Biomolecular. I've used many of those folks myself (and have had particular success with Life Chemicals and Specs, as far as availability and purity). Some of these companies are faster to ship than others. But the thing that stands out with all of them is that they have what they say that they have, and what's more, it costs what it says that it costs.
For intermediates, as opposed to final-compound-like structures, I'd say that I've had good dealings with Apollo, Synthonix, Matrix, Pharmacore, Adesis, Tyger, Fluorochem, Oakwood, and Astatech. There are, I'm sure, several other suppliers in this category, and I'd be glad to list more of them after seeing the comments.
But now let's reverse the polarity. What's a blog for if you can't say what you really think? Here, then, is a preliminary blacklist of suppliers. These people either have product listings that overlay poorly with reality, try to jerk you around on the price, take much longer to deliver than their initial estimates, or (lucky you) can do all of these at once. My personal recommendation is to be quite careful with ChemPacific, Uorsys, CTI, Zelinsky, and everyone with the words "Hangzhou" or "Shanghai" in the company name.
Please feel free to add others to the lists. I'll do a consolidated post reflecting everyone's experience - that way, we can give business to deserving companies you might not have worked with before, and we can perhaps shame some others into acting more reasonably.
Here's another addition: A New Merck, Reviewed, which is someone's attempt to dig through everything about the new Merck/Schering-Plough hybrid. I'm not sure that all the info is reliable, of course, and whoever writes this has a strange way with italics, but it's worth a look.
Update: this turns out to be the Wordpress backup site for Shearlings Got Plowed, which I've mentioned here before. With the merger, the blog's author is covering his bases.
I'm a complete sucker for dense but well-presented information, and this one isn't bad at all: here's a chart of nutritional supplements by the strength of the evidence for them in human trials. I haven't cross-checked the data, but the authors appear to have done some homework in PubMed, at least, and haven't included any non-human or in vitro data. The interactive version at the link is particularly fun to mess around with. (Thanks to a reader and commenter here who put me on to this).
$75 million worth of antipsychotics - that's a lot of pills, and I'm not surprised to see that the thieves used a tractor-trailer to haul everything off. You'd have to assume that there's a well-worked-out pathway to unload all of these things, and that no one's going to go to all this trouble on "spec".
Glad to see that my industry's products are so much in demand. . .
A small company called BioTime has gotten a lot of attention in the last couple of days after a press release about cellular aging. To give you an idea of the company's language, here's a quote:
"Normal human cells were induced to reverse both the "clock" of differentiation (the process by which an embryonic stem cell becomes the many specialized differentiated cell types of the body), and the "clock" of cellular aging (telomere length)," BioTime reports. "As a result, aged differentiated cells became young stem cells capable of regeneration."
Hey, that sounds good to me. But when I read their paper in the journal Regenerative Medicine, it seems to be interesting work that's a long way from application. Briefly - and since I Am Not a Cell Biologist, it's going to be brief - what they're looking at is telomere length in various stem cell lines. Telomere length is famously correlated with cellular aging - below a certain length, senescence sets in and the cells don't divide any more.
What's become clear is that a number of "induced pluripotent" cell lines have rather short telomeres as compared to their embryonic stem cell counterparts. You can't just wave a wand and get back the whole embryonic phenotype; their odometers still show a lot of wear. The BioTime people induced in such cells a number of genes thought to help extend and maintain telomeres, in an attempt to roll things back. And they did have some success - but only by brute force.
The exact cocktail of genes you'd want to induce is still very much in doubt, for one thing. And in the cell line that they studied, five of their attempts quickly shed telomere length back to the starting levels. One of them, though, for reasons that are completely unclear, maintained a healthy telomere length over many cell divisions. So this, while a very interesting result, is still only that. It took place in one particular cell line, in ways that (so far) can't be controlled or predicted, and the practical differences between this one clone and other similar cell lines still aren't clear (although you'd certainly expect some). It's worthwhile early-stage research, absolutely - but not, to my mind, worth this.
The entries I've done on the "open-plan" Biochemistry building at Oxford (see also Jim Hu) generated a lot of comments from people who've worked in poorly designed science facilities. I've heard from Linda Wang, a reporter at C&E News, who's writing an article on this very subject. She's looking for chemists who are willing to talk about both good and bad experiences working in various building designs, so if you fit that description, feel free to email her at l_wang-at-acs.org (email address de-spammified, just substitute the usual symbol) or give her a call at 202-872-4579.
Now here's something that I don't think anyone expected. A recent paper in PLoS One makes the case that beta-amyloid, the protein that has been fingered for decades as a major player in Alzheimer's disease, is actually part of the body's antimicrobial defenses.
Well, it's good to hear that it's doing something. Many people had hypothesized that it was a useless (indeed, harmful) byproduct, a waste stream from aberrant processing of the amyloid precursor protein (APP). Still, there have been reports over the years that beta-amyloid was a substrate for active transport pumps, might be a ligand for various receptors, etc., but not everyone was willing to take these results seriously.
But it turns out that some of A-beta's properties are similar to those of innate host defense peptides. When this latest team checked the amyloid protein's antimicrobial activity, it turned out to be substantial. The prototype peptide in this area, LL-37, appears to have a broader spectrum of activity, but A-beta beats it against several organisms, most notably the yeast C. albicans. And as it turns out, brain homogenates from Alzheimer's patients are much more active against yeast in vitro than samples from age-matched controls without the disease. But that only holds true for parts of the brain (like the temporal lobe) that are known to be high in amyloid. Samples from the cerebellum (which doesn't usually show Alzheimer's pathology) had no activity. (One has to wonder if this is the first time - or at least the first time in a very long while - that anyone's evaluated human brain homogenates for their microbicidal activity).
This could lead to a complete rethink of Alzheimer's pathology. It's been known for a long time that there's a big inflammation component to the disease - perhaps the problem (or at least the trigger) is an underlying infection that sets off the innate immune system in the brain. Larger than normal amounts of beta-amyloid are produced in response, but it starts to precipitate out.
The more familiar adaptive immune system has limited access to the CNS, although that's not stopping people from trying to use it. But that approach (and many others) presume that beta-amyloid is a cause of the disease. Perhaps it isn't. Maybe it's the body's attempt at a solution - and if that's true, we need to look elsewhere for the cause, and soon. This is one of the most thought-provoking looks at Alzheimer's that I've seen in a long time. Here's hoping it leads to something new.
I was thinking the other day about the sheer number of reasonable chemical structures that have never been made. Chemical space is famously roomy - that's how we make a living in the drug industry, since we prefer to make things that have never been made before. And it still surprises non-chemists when I tell them that I make new compounds all the time - the feeling, I think, is that anything that's reasonably easy to make surely must have been mined out long ago. Not so. (It's worth remembering, though, that just because something's never been reported doesn't always mean that you can't buy it).
What brought this to mind was a steroid structure that I saw during a presentation. Looking at it like a medicinal chemist, I wondered idly if the carbons in the famous steroid backbone had ever been swapped out much with oxygen or nitrogen atoms. And in a few cases they have (more for oxygen, in some natural products), but for the most part, no. You can drop a tertiary amine into some spots on the steroid framework and immediately come up with no literature hits whatsoever. Many others yield only a handful.
It's worth noting that the partially-aromatized steroids have had some of this kind of work done on them - for example here and here. The aromatic rings give you a bit more of a handle to work with, but even here it's not like the literature is always packed with examples.
So there's as bioactive a scaffold as you could ask for, but many of the simple analogs still haven't been described. To be fair, these azasteroids aren't simple to make, and probably wouldn't have steroid-like activities in many cases. (Their natural receptors sure aren't expecting a basic amine in those spots). But many azasteroids do show biological activities, and I'd be quite surprised if these unknown compounds were pharmacologically inert. It's just that there's been no particular reason to make any of them yet. Chemical space is so huge, and our ability to explore it has been with us for such a relatively short time, that we just haven't gotten around to them yet.
There have been complaints that something is going wrong in the publication of stem cell research. This isn't my field, so I don't have a lot of inside knowledge to share, but there appear to have been a number of researchers charging that journals (and their reviewers) are favoring some research teams over others:
The journal editor decides to publish the research paper usually when the majority of reviewers are satisfied. But professors Lovell-Badge and Smith believe that increasingly some reviewers are sending back negative comments or asking for unnecessary experiments to be carried out for spurious reasons.
In some cases they say it is being done simply to delay or stop the publication of the research so that the reviewers or their close colleagues can be the first to have their own research published.
"It's hard to believe except you know it's happened to you that papers have been held up for months and months by reviewers asking for experiments that are not fair or relevant," Professor Smith said.
You hear these sorts of complaints a lot - everyone who's had a paper turned down by a high-profile journal is a potential customer for the idea that there's some sort of backroom dealing going on for the others who've gotten in. But just because such accusations are thrown around frequently doesn't mean that they're never true. I hate to bring the topic up again, but the "Climategate" leaks illustrate just how this sort of thing can be done. Groups of researchers really can try to keep competing work from being published. I just don't know if it's happening in the stem cell field or not.
It's easy to lose sight of what a drug is supposed to do. Many conditions come on so slowly that we have to use blood chemistry or other markers to see the progress of therapy in a realistic time. And over time, that blood marker can get confused with the disease itself.
To pick one famous example, try cholesterol. Everyone you stop on the street will know that "high cholesterol is bad for you". But the first thing you have to do is distinguish between LDL and HDL cholesterol - if the latter is a large enough fraction of the total, the aggregate number doesn't matter as much. And fundamentally, there's not a disease called "high cholesterol" - that's a symptom of some other cluster of metabolic processes that have gone subtly off. And the endpoint of any therapy in that field isn't really to lower the number in a blood test: it's to prevent heart attacks and to extend healthy lifetimes - in short, to reduce mortality and morbidity. As we're seeing with Vytorin, it may be possible to drop the numbers in a blood test but not see the benefit that's supposed to be there.
Another example of this came up over the weekend. The fibrates are a class of drugs that change lipid levels, although the way they work is still rather obscure. They're supposed to be ligands for the PPAR-alpha nuclear receptor, but they're not very potent against it when you study that closely. At any rate, they do lower triglycerides and have some other effects, which should be beneficial in patients whose lipids are off and are at risk for cardiac problems.
But are they? Type II diabetics tend to be people who fit that last category well, and that's where a lot of fenofibrate is prescribed (as Abbott's Tricor in the US, and under a number of other names around the world). A five-year study in over five thousand diabetic patients, though, has just shown no difference versus placebo. Again, there's no doubt that the drug lowers triglycerides and changes the HDL/LDL/VLDL ratios. It's just that, for reasons unknown, doing so with fenofibrate doesn't seem to actually help diabetic patients avoid cardiac trouble.
Mortality and morbidity: lowering them is a very tough test for any drug, but if you can't, then what's the point of taking something in the first place? This is something to keep in mind as the push for biomarkers delivers more surrogate endpoints. Some of them will, inevitably, turn out not to mean as much as they're supposed to mean.
The discoverer of the prostate-specific antigen (Richard Ablin) has a most interesting Op-Ed in the New York Times. He's pointing out what people should already know: that using PSA as a screen for prostate cancer is not only useless, but actually harmful.
The numbers just aren't there, and Ablin is right to call it a "hugely expensive public health disaster". Some readers will recall the discussion here of a potential Alzheimer's test, which illustrates some of the problems that diagnostic screens can have. But that was for a case where a test seemed as if it might be fairly accurate (just not accurate enough). In the case of PSA, the link between the test and the disease hardly exists at all, at least for the general population. The test appears to have very little use in detecting prostate cancer, and early detection itself is notoriously unreliable as a predictor of outcomes in this disease.
The last time I had blood work done, I made a point of telling the nurse that she could check the PSA box if she wanted to, but I would pay no attention to the results. (I'd already come across Donald Berry's views on the test, and he's someone whose word I trust on biostatistics). I'd urge other male readers to do the same.
Freeman Dyson has written about his belief that molecular biology is becoming a field where even basement tinkerers can accomplish things. Whether we're ready for it or not, biohacking is on its way. The number of tools available (and the amount of surplus equipment that can be bought) have him imagining a "garage biotech" future, with all the potential, for good and for harm, that that entails.
Well, have a look at this garage, which is said to be somewhere in Silicon Valley. I don't have any reason to believe the photos are faked; you could certainly put your hands on this kind of equipment very easily in the Bay area. The rocky state of the biotech industry just makes things that much more available. From what I can see, that's a reasonably well-equipped lab. If they're doing cell culture, there needs to be some sort of incubator around, and presumably a -80 degree freezer, but we don't see the whole garage, do we? I have some questions about how they do their air handling and climate control (although that part's a bit easier in a California garage than it would be in a Boston one). There's also the issue of labware and disposables. An operation like this does tend to run through a goodly amount of plates, bottles, pipet tips and so on, but I suppose those are piled up on the surplus market as well.
But what are these folks doing? The blog author who visited the site says that they're "screening for anti-cancer compounds". And yes, it looks as if they could be doing that, but the limiting reagent here would be the compounds. Cells reproduce themselves - especially tumor lines - but finding compounds to screen, that must be hard when you're working where the Honda used to be parked. And the next question is, why? As anyone who's worked in oncology research knows, activity in a cultured cell line really doesn't mean all that much. It's a necessary first step, but only that. (And how many different cell lines could these people be running?)
The next question is, what do they do with an active compound when they find one? The next logical move is activity in an animal model, usually a xenograft. That's another necessary-but-nowhere-near-sufficient step, but I'm pretty sure that these folks don't have an animal facility in the garage, certainly not one capable of handling immunocompromised rodents. So put me down as impressed, but puzzled. The cancer-screening story doesn't make sense to me, but is it then a cover for something else? What?
If this post finds its way to the people involved, and they feel like expanding on what they're trying to accomplish, I'll do a follow-up. Until then, it's a mystery, and probably not the only one of its kind out there. For now, I'll let Dyson ask the questions that need to be asked, from that NYRB article linked above:
If domestication of biotechnology is the wave of the future, five important questions need to be answered. First, can it be stopped? Second, ought it to be stopped? Third, if stopping it is either impossible or undesirable, what are the appropriate limits that our society must impose on it? Fourth, how should the limits be decided? Fifth, how should the limits be enforced, nationally and internationally? I do not attempt to answer these questions here. I leave it to our children and grandchildren to supply the answers.
The Daily Telegraph in the UK has a story today claiming that a 1951 outbreak of hallucinations and dementia in the French village of Pont-Saint-Esprit was not (as everyone thought) an example of ergot poisoning. No, according to some guy who's writing a book, it was. . .a secret LSD experiment.
Now, there most certainly were secret LSD experiments during the 1950s and 1960s. (The book Storming Heaven has a good account of them, as well as of the history of LSD in general). But it's rather hard to see why the CIA would decide to dose some village in the Auvergne, especially when the symptoms (burning sensations in the extremities as well as hallucinations) seem to match ergotism quite well.
But no matter. I think we can dispose of this new book and its author pretty quickly. Just take a look at some of his scoop:
However, H P Albarelli Jr., an investigative journalist, claims the outbreak resulted from a covert experiment directed by the CIA and the US Army's top-secret Special Operations Division (SOD) at Fort Detrick, Maryland.
The scientists who produced both alternative explanations, he writes, worked for the Swiss-based Sandoz Pharmaceutica