Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke, then spent time in Germany as a post-doc on a Humboldt Fellowship. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
Walensky and Bird have a Miniperspective out in J. Med. Chem. on stapled peptides, giving advice on how to increase one's chances of success in the area. Worth checking out, unless you're at Genentech or WEHI, of course. The authors might say that it's especially worth reading in those cases, come to think of it. I await the day when this dispute gets resolved, although a lot of people awaited the day that the nonclassical carbocation controversy got resolved, too, and look how long that took.
And in Science, Tehshik Yoon has a review on visible-light catalyzed photochemistry. I like these reactions a lot, and have run a few myself. The literature has been blowing up all over the place in this field, and it's good to have an overview like this to keep things straight.
There's an interesting report from the Buchwald group using the Fujita "molecular sponge" crystallography technique. The last report on this was a correction, amid reports that the method was not as widely applicable as had been hoped, so I'm very happy to see it being used here.
They're revising the structure of a new reagent (from the Lu and Shen groups in Shanghai) for introducing the SCF3 group. It was proposed to be a hypervalent iodine (similar to other reagents in this class), but Buchwald's group found some NMR data and reactivity trends that suggested the structure might be in the open form, rather than the five-membered iodine ring one.
Soaking this reagent into the MOF crystal provided a structure, although if you read the supporting information, it wasn't easy. The compound was still somewhat disordered in the MOF lattice, and there were still nitrobenzene and cyclohexane solvent molecules present. The SCF3 reagent showed up in two crystallographically independent sites, one of them associated with residual nitrobenzene. After a good deal of work, though, they did show that the open-form structure was present. (The Shen et al. paper's conclusions on its synthetic uses, though, are all still valid; it's just that the structure doesn't fall into the same series as expected).
So the MOF crystallography method lives, although I've yet to hear of it giving a structure with a nitrogen-containing compound (which rather limits its use in drug discovery work, as you might imagine).
Just Like Cooking has an overview of some interesting new chemistry from the Hartwig group. They're using a rhodium catalyst to directly functionalize aryl rings with silyl groups (which can be used in a number of transformations downstream). One nice thing is that the selectivities are basically the opposite of the direct borylation reactions, so this could open up some isomers that are otherwise difficult to come by.
See Arr Oh makes a good point about the paper, too - it has a lot of detail in it and a lot of information. If you check out the Supplementary Information, there are about thirty pages of further details, and about sixty pages of spectral data. I particularly like the tables of various reaction conditions, hydrogen acceptors, and ligands. The main paper shows the conditions that work the best, but this gives you a chance to see under the hood at everything else that was tried. Every new methods paper should do this - in fact, every new methods paper should be required to do this. Good stuff.
I've received word that well-known organic chemist Alan Katritzky has passed away. He's famous for his work on the use of benzotriazole compounds, and a great deal of other heterocyclic chemistry besides (2,170 papers!).
I first heard him speak in the early 1990s at the Heterocycles Gordon Conference, back in its old location in New Hampshire. And although I'd been warned to sit near the back of the conference room, I still wasn't ready for the. . .vigor he brought to his presentation. Katritzky had clearly honed his lecturing style in large, unamplified halls, and could be easily heard outside on the lawn. The next day, Stuart McCombie opened the morning program by thanking him for ". . .sharing with me the last secret of benzotriazole. He sprinkled some down my throat AND I NEVER NEED A MICROPHONE AGAIN!"
Katritzky was a link to another era of chemistry (he studied under Sir Robert Robinson), but he leaves behind a huge legacy of work for the modern researcher. He may well have been too productive for his accomplishments to be easily categorized, at least for now (those 2,170 papers. . .), but there's no doubt that his name will live on.
The thesis is miserable. One and a half years of new substances prepared like baker’s bread rolls… and in addition, lots of negative results just where I was looking for significant results, and further, results that I cannot even publish because I fear that a competent chemist will find them and prove to me that the camel is missing its humps. One learns to be modest.
Now, Haber was definitely someone to take seriously. He's showing up in "The Chemistry Book", for sure, both for his historic ammonia process and his work in chemical warfare. He was a good enough chemist to know that his doctoral work was not all that great, although he seems to have followed my own recommended path to get that degree as soon as is consistent with honor and not making enemies.
The post's author, MB, wonders what this says about organic synthesis in general. How much of it is just baking bread rolls, and how bad is that? My own take is that the sort of thing that Haber was regretting is the lowest form of synthesis. We've all seen the sorts of papers - here is a heterocyclic core, of no particular interest that anyone has ever been able to show. Here it has an amine. Here are twenty-five amides of that amine. Here is our paper telling you about them. Part fourteen in a series. In six months, the sulfonamides. This sort of thing gets published, when it does, in the lowest tiers of the journals, and rightly so. There's nothing wrong with it (well, not usually, although this stuff isn't always the most careful work in the world). But there's nothing right with it either. It's reference data. Someone, someday, might stumble into this area of chemical space again, and when they do, they'll find a name scratched onto the wall and below it, a yellowing pile of old spectral data.
I've wondered before about what to do with those sorts of papers. There are so many compounds in the world of organic chemistry that the marginal utility of describing new random ones, while clearly not zero, is very, very close to it, especially if they're not directed towards any known use other than to make a manuscript. So if this is what's meant by baking rolls, then it's not too useful.
But I'm a medicinal chemist. When I start working on a new hit structure, I will most likely turn around and put the biggest pan of bread rolls into the biggest oven I can find. This, though, is chemistry with a purpose - there's some activity that I'm seeking, and if cranking out compounds is the best and/or fastest way to move in on it, then crank away. I'm not going to turn that blast of analogs into a paper; most (maybe all) of them will be tested, found wanting, and make their way into our compound archives. Their marginal utility is pretty low, too, given the numbers of compounds already in there, but it's still by far the best thing to do with them. Any that show activity, though, will get more attention.
I really don't mind that aspect of the synthesis I do. Setting up a row of easy reactions is actually kind of pleasant, because I know that (1) they're likely to work, and (2) they're going to tell me something I really want to know after I send them off for testing. Maybe they aren't bread rolls after all - they're bricks, and I can just possibly build something from them.
I can strongly recommend this article by Carmen Drahl in C&E News on the way that we chemists pick fights over nomenclature. She has examples of several kinds of disagreement (competing terms for the same thing, terms that overlap but are still different, competing ways to measure the same quantity, and terms that are fuzzy enough that some want to eliminate them entirely).
As several of the interviewees note, these arguments are not (always) petty, and certainly not always irrational. Humans are good at reification - turning something into a "thing". Name a concept well, and it sort of shimmers into existence, giving people a way to refer to it as if it were a solid object in the world of experience. This has good and bad aspects. It's crucial to the ability to have any sort of intellectual discussion and progress, since we have to be able to speak of ideas and other entities that are not actual physical objects. But a badly fitting name can do real harm, obscuring the most valuable or useful parts of an idea and diverting thoughts about it unproductively.
My own favorite example is the use of "agonist" and "antagonist" to describe the actions of nuclear receptor ligands. This (to my way of thinking) is not only useless, but does real harm to the thinking of anyone who approaches nuclear receptors having first learned about GPCRs. Maybe the word "receptor" never should have been used for these things in the first place, although realizing that would have required supernatural powers of precognition.
There are any number of examples outside chemistry, of course. One of my own irritants is when someone says that something has been "taken to the next level". You would probably not survive watching a sports channel if that phrase were part of a drinking game. But it presupposes that some activity comes in measurable chunks, and that everyone agrees on what order they come in. I'm reminded of the old blenders with their dials clicking between a dozen arbitrary "levels", labeled with tags like "whip", "chop", and "liquify". Meaningless. It's an attempt to quantify - to reify - what should have been a smooth rheostat knob with lines around it.
OK, I'll stop before I bring up Wittgenstein. OK, too late. But he was on to something when he told people to be careful about the way language is used, and to watch out when you get out onto the "frictionless ice" of talking about constructions of thought. His final admonition in his Tractatus Logico-Philosophicus, that if we cannot speak about something, we must pass over it in silence, has been widely quoted and widely unheeded, since we're all sure that we can, of course, speak about what we're speaking about. Can't we?
Origin-of-life studies have been a feature of chemistry for a long time, and over the years some key questions have become clear. It's clear from astronomical and planetary science data that the common molecules of organic chemistry are more or less soaking the universe. Amino acids and simple carbohydrates are apparently part of the cloud of gunk that makes up a new solar system, with more forming all the time. But a major open question is how (and why) molecules would have organized themselves into gradually more complex systems. Some parts of the process may have been modeled already; there are a number of interesting ways that primitive membranes might have formed, which would seem to be a necessary step in distinguishing the relatively concentrated inside of a proto-cell from the more watery outside.
But a new paper (discussed here as well) has a theory that says this might have been flat-out inevitable:
From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life. . .
. . .“This means clumps of atoms surrounded by a bath at some temperature, like the atmosphere or the ocean, should tend over time to arrange themselves to resonate better and better with the sources of mechanical, electromagnetic or chemical work in their environments,” England explained.
Self-replication would be an excellent way of doing this, and if England is right, then the development of self-organizing and replicating systems would be "baked in" to thermodynamics under the right conditions. Combine that with the organic chemistry that seems to obtain under astrophysical conditions, and we should, in theory, not be a bit surprised to find living creatures hopping around, full of amino acids and carbohydrates, using sunlight and chemical energy to do their thing.
England's theory is still fairly speculative, but he seems to be moving right along in applying it to living systems, at least on paper. What I like about this idea is that it would seem to be testable, in both living and nonliving systems. Perhaps something can be done at the level of bacteria, yeast, or even viruses or bacteriophages. I look forward to seeing some data!
Well, just after blasting antioxidant supplements for cancer patients (and everyone else) comes this headline: "Vitamin C Injections Ease Ovarian Cancer Treatments". Here's the study, in Science Translational Medicine. So what's going on here?
A closer look shows that this, too, appears to fit into the reactive-oxygen-species framework that I was speaking about:
Drisko and her colleagues, including cancer researcher Qi Chen, who is also at the University of Kansas, decided that the purported effects of the vitamin warranted a closer look. They noticed that earlier trials had partially relied on intravenous administration of high doses of vitamin C, or ascorbate, whereas the larger follow-up studies had used only oral doses of the drug.
This, they reasoned, could be an important difference: ascorbate is processed by the body in different ways when administered orally versus intravenously. Oral doses act as antioxidants, protecting cells from damage caused by reactive compounds that contain oxygen. But vitamin C given intravenously can have the opposite effect by promoting the formation of one of those compounds: hydrogen peroxide. Cancer cells are particularly susceptible to damage by such reactive oxygen-containing compounds.
Drisko, Chen and their colleagues found that high concentrations of vitamin C damaged DNA and promoted cell death in ovarian cancer cells grown in culture. In mice grafted with human ovarian cancer cells, treatment with intravenous vitamin C combined with conventional chemotherapy slowed tumour growth, compared to chemotherapy treatment alone.
The concentrations attained by the intravenous route are apparently necessary to get these effects, and you can't reach those by oral dosing. This 2011 review goes into the details - i.v. ascorbate reaches at least 100x the blood concentrations provided by the maximum possible oral dose, and at those levels it serves, weirdly, as a precursor of hydrogen peroxide (and a much safer one than trying to give peroxide directly, as one can well imagine). There's a good amount of evidence from animal models that this might be a useful adjunct therapy, and I'm glad to see it being tried out in the clinic.
So does this mean that Linus Pauling was right all along? Not exactly. This post at Science-Based Medicine provides an excellent overview of that question. It reviews the earlier work on intravenous Vitamin C, and also Pauling's earlier advocacy. Unfortunately, Pauling was coming at this from a completely different angle. He believed that oral Vitamin C could prevent up to 75% of cancers (his words, sad to say). His own forays into the clinic with this idea were embarrassing, and more competently run trials (several of them) have failed to turn up any benefit. Pauling had no idea that for Vitamin C to show any efficacy, it would have to be run up to millimolar concentrations in the blood, and he certainly had no idea that it would work by actually promoting reactive oxygen species. (He had several other mechanisms in mind, such as inhibition of hyaluronidase, which do not seem to be factors in the current studies at all). In fact, Pauling might well have been horrified. Promoting rampaging free radicals throughout the bloodstream was one of the last things he had in mind; he might have seen this as no better than traditional chemotherapy (since it's also based on a treatment that's slightly more toxic to tumor cells than it is to normal ones). At the same time, he also showed a remarkable ability to adapt to new data (or to ignore it), so he might well have claimed victory, anyway.
This brings up another topic - not Vitamin C, but Pauling himself. As I've been writing "The Chemistry Book" (coming along fine, by the way), one of the things I've enjoyed is a chance to re-evaluate some of the people and concepts in the field. And I've come to have an even greater appreciation of just what an amazing chemist Linus Pauling was. He seems to show up all over the 20th century, and in my judgment could have been awarded a second science Nobel, or part of one, without controversy. I mean, you have The Nature of the Chemical Bond (a tremendous accomplishment by itself), the prediction that noble gas fluorides were possible, the alpha-helix and beta-pleated sheet structures of proteins, the mechanism of sickle cell anemia (and the concept of a "molecular disease"), the suggestion that enzymes work by stabilizing transition states, and more. Pauling shows up all over the place - encouraging the earliest NMR work ("Don't listen to the physicists"), taking a good cut at working out the structure of DNA, all sorts of problems. He was the real deal, and accomplished about four or five times as much as anyone would consider a very good career.
But that makes it all the more sad to see what became of him in his later years. I well remember his last hurrah, which was being completely wrong about quasicrystals, from when I was in graduate school. But naturally, I'd also heard of his relentless advocacy for Vitamin C, which gradually (or maybe not so gradually) caused people to think that he had slightly lost his mind. Perhaps he had; there's no way of knowing. But the way he approached his Vitamin C work was a curious (and sad) mixture of the same boldness that had served him so well in the past, but now with a messianic strain that would probably have proven fatal to much of his own earlier work. Self-confidence is absolutely necessary for a great scientist, but too much of it is toxic. The only way to find out where the line stands is to cross it, but you won't realize it when you have (although others will).
We remember Isaac Newton for his extraordinary accomplishments in math and physics, not for his alchemical and religious calculations (to which he devoted much time, and which shocked John Maynard Keynes when he read Newton's manuscripts). Maybe in another century or two, Pauling will be remembered for his accomplishments, rather than for the times he went off the rails.
This morning I heard reports of formaldehyde being found in Charleston, West Virginia water samples as a result of the recent chemical spill there. My first thought, as a chemist, was "You know, that doesn't make any sense". A closer look confirmed that view, and led me to even more dubious things about this news story. Read on - there's some chemistry for a few paragraphs, and then near the end we get to the eyebrow-raising stuff.
The compound that spilled was (4-methylcyclohexane)methanol, abbreviated as 4-MCHM. That's its structure over there.
For the nonchemists in the audience, here's a chance to show how chemical nomenclature works. Those lines represent bonds between atoms, and if the atom isn't labeled with its own letter, it's a carbon (this compound has one labeled atom, that O for oxygen). These sorts of carbons take four bonds each, and that means that there are a number of hydrogens bonded to them that aren't shown. You'd add one, two, or three hydrogens as needed to bring each carbon up to four bonds.
The six-membered ring in the middle is "cyclohexane" in organic chemistry lingo. You'll note two things coming off it, at opposite ends of the ring. The small branch is a methyl group (one carbon), and the other one is a methyl group substituted with an alcohol (OH). The one-carbon alcohol compound (CH3OH) is methanol, and the rules of chemical naming say that the "methanol-like" part of this structure takes priority, so it's named as a methanol molecule with a ring stuck to its carbon. And that ring has another methyl group, which means that its position needs to be specified. The ring carbon that has the "methanol" gets numbered as #1 (priority again), so the one with the methyl group, counting over, is #4. So this compound's full name is (4-methylcyclohexane)methanol.
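For readers who like to check these things, here's a minimal back-of-the-envelope sketch in Python, with no cheminformatics library needed. The atom counts below were tallied by hand from the structure just described (six ring carbons, one methyl carbon, one "methanol" carbon, hydrogens filling each carbon to four bonds plus the one on the OH), so treat them as the assumption here:

```python
# Back-of-the-envelope check of 4-MCHM's molecular formula (C8H16O)
# and weight. Atom counts were tallied by hand from the structure
# described in the text -- this is the stated assumption, not output
# from any structure-parsing software.

ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}

# 6 ring C + 1 methyl C + 1 "methanol" C = 8 carbons;
# 16 hydrogens (filling carbons to four bonds, plus the O-H); 1 oxygen
mchm_atoms = {"C": 8, "H": 16, "O": 1}

mol_weight = sum(ATOMIC_WEIGHTS[el] * n for el, n in mchm_atoms.items())
print(f"4-MCHM: C{mchm_atoms['C']}H{mchm_atoms['H']}O, "
      f"MW = {mol_weight:.2f} g/mol")
```

That works out to a molecular weight of about 128 g/mol, which matches the published value for the compound.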
I went into that naming detail because it turns out to be important. This spill, needless to say, was a terrible thing that never should have happened. Dumping a huge load of industrial solvent into a river is a crime in both the legal and moral senses of the word. Early indications are that negligence had a role in the accident, which I can easily believe, and if so, I hope that those responsible are prosecuted, both for justice to be served and as a warning to others. Handling industrial chemicals involves a great deal of responsibility, and as a working chemist it pisses me off to see people doing it so poorly. But this accident, like any news story involving any sort of chemistry, also manages to show how little anyone outside the field understands anything about chemicals at all.
I say that because among the many lawsuits being filed, there are some that show (thanks, Chemjobber!) that the lawyers appear to believe that the chemical spill was a mixture of 4-methylcyclohexane and methanol. Not so. This is a misreading of the name, a mistake that a non-chemist might make because the rest of the English language doesn't usually build up nouns the way organic chemistry does. Chemical nomenclature is way too logical and cut-and-dried to be anything like a natural language; you really can draw a complex compound's structure just by reading its name closely enough. This error is a little like deciding that a hairdryer must be a device made partly out of hair.
I'm not exaggerating. The court filing, by the law firm of Thompson and Barney, says explicitly:
30. The combination chemical 4-MCHM is artificially created by combining methylclyclohexane (sic) with methanol.
31. Two component parts of 4-MCHM are methylcyclohexane and methanol which are both known dangerous and toxic chemicals that can cause latent dread disease such as cancer.
Sure thing, guys, just like the two component parts of dogwood trees are dogs and wood. Chemically, this makes no sense whatsoever. Now, it's reasonable to ask if 4-MCHM can chemically degrade to methanol and 4-methylcyclohexane. Without going into too much detail, the answer is "No". You don't get to break carbon-carbon bonds that way, not without a lot of energy. If you ran the chemical (at high temperature) through some sort of catalytic cracking reactor at an oil refinery, you might be able to get something like that to happen (although I'd expect other things as well, probably all at the same time), but otherwise, no. For the same sorts of reasons, you're not going to be able to get formaldehyde out of this compound, either, not without similar conditions. Air and sunlight and water aren't going to do it, and if bacteria and fungi metabolize it, I'd expect things like (4-methylcyclohexane)carboxaldehyde and (4-methylcyclohexane)carboxylic acid, among others. I would not expect them to break off that single-carbon alcohol as formaldehyde.
So where does all this talk of formaldehyde come from? Well, one way that formaldehyde shows up is from oxidation of methanol, as shown in that reaction (this time I've drawn in all the hydrogens). This is, in fact, one of the reasons that methanol is toxic. In the body, it gets oxidized to formaldehyde, and that gets oxidized right away to formic acid, which shuts down an important enzyme. Exposure to formaldehyde itself is a different problem. It's so reactive that most cancers associated with exposure to it are in the upper respiratory tract; it doesn't get any further.
As that methanol oxidation reaction pathway shows, the body actually has ways of dealing with formaldehyde exposure, up to a point. In fact, it's found at low levels (around 20 to 30 nanograms/milliliter) in things like tomatoes and oranges, so we can assume that these exposure levels are easily handled. I am not aware of any environmental regulations on human exposure to orange juice or freshly cut tomatoes. So how much formaldehyde did Dr. Scott Simonton find in his Charleston water sample? Just over 30 nanograms per milliliter. Slightly above the tomato-juice level (27 ng/mL). For reference, the lowest amount that can be detected is about 6 ng/mL. Update: and the amount of formaldehyde in normal human blood is about 1 microgram/mL, which is over thirty times the levels that Simonton says he found in his water samples. This is produced by normal human metabolism (enzymatic removal of methyl groups and other reactions). Everyone has it. And another update: the amount of formaldehyde in normal human saliva can easily be one thousand times that in Simonton's water samples, especially in people who smoke or have cavities. If you went thousands of miles away from this chemical spill, found an untouched wilderness and had one of its natives spit in a collection vial, you'd find a higher concentration of formaldehyde.
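To make those comparisons concrete, here's a quick sketch of the arithmetic. The figures are the approximate ones quoted above (rounded, not precision measurements), so the ratios are rough:

```python
# Rough arithmetic behind the formaldehyde comparisons above.
# All concentrations in ng/mL; values are the approximate figures
# quoted in the text, not precision measurements.

water_sample = 30.0      # reported in the Charleston water sample
tomato_juice = 27.0      # level cited for fresh tomato juice
detection_limit = 6.0    # lowest detectable amount
normal_blood = 1000.0    # ~1 microgram/mL in normal human blood

print(f"Sample vs. tomato juice: {water_sample / tomato_juice:.2f}x")
print(f"Sample vs. detection limit: {water_sample / detection_limit:.1f}x")
print(f"Normal blood vs. sample: {normal_blood / water_sample:.0f}x")
```

The sample comes out barely above tomato juice, and normal human blood carries over thirty times as much.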
But Simonton is a West Virginia water quality official, is he not? Well, not in this capacity. As this story shows, he is being paid in this matter by the law firm of Thompson and Barney to do water analysis. Yes, that's the same law firm that thinks that 4-MCHM is a mixture with methanol in it. And the water sample that he obtained was from the Vandalia Grille in Charleston, the owners of which are defendants in that Thompson and Barney lawsuit that Chemjobber found.
So let me state my opinion: this is a load of crap. The amounts of formaldehyde that Dr. Simonton states he found are within the range of ozonated drinking water as it is, and just above those of fresh tomato juice. These are levels that have never been shown to be harmful in humans. His statements about cancer and other harm coming to West Virginia residents seem to me to be irresponsible fear-mongering. The sort of irresponsible fear-mongering that someone might do if they're being paid by lawyers who don't understand any chemistry and are interested in whipping up as much panic as they can. Just my freely offered opinions. Do your own research and see what you think.
Update: I see that actual West Virginia public health officials agree.
Another update: I've had people point out that the mixture that spilled may have contained up to 1% methanol. But see this comment for why this probably doesn't have any bearing on the formaldehyde issue. Update, Jan 31: Here's the MSDS for the "crude MCHM" that was spilled. The other main constituent (4-methoxymethylcyclohexane)methanol is also unlikely to produce formaldehyde, for the same reasons given above. The fact remains that the levels reported (and sensationalized) by Dr. Simonton are negligible by any standard.
Here's a long article from the Raleigh News and Observer (part one and part two) on the Eaton/Feldheim/Franzen dispute in nanoparticles, which some readers may already be familiar with (I haven't covered it on the blog myself). The articles are clearly driven by Franzen's continued belief that research fraud has been committed, and the paper makes the most of it.
The original 2004 publication in Science claimed that RNA solutions could influence the crystal form of palladium nanoparticles, which opened up the possibility of applying the tools of molecular biology to catalysts and other inorganic chemistry applications. Two more papers in JACS extended this to platinum and looked at in vitro evolutionary experiments. But even by 2005, Franzen's lab, which had been asked to join the collaboration (Eaton and Feldheim were by then at Colorado and a startup company), was generating disturbing data: the original hexagonal crystals (a very strange and interesting form for palladium) weren't pure palladium at all - on an elemental basis, they were mostly carbon. (Later work showed that they were unstable crystals of (roughly) Pd(dba)3, with solvated THF.) And they were produced just as well in the negative control experiments, with no RNA added at all.
N. C. State investigated the matter, and the committee agreed that the results were spurious. But they found Feldheim guilty of sloppy work, rather than fraud, saying he should have checked things out more thoroughly. Franzen continued to feel as if justice hadn't been done, though:
In fall 2009, he spent $1,334 of his own money to hire Mike Tadych, a Raleigh lawyer who specializes in public records law and who has represented The News & Observer. In 2010, the university relented and allowed Franzen into the room where the investigation records were locked away.
Franzen found the lab notebooks, which track experiments and results. As he turned the pages, he recognized that Gugliotti kept a thorough and well-organized record.
“I found an open-and-shut case of research fraud,” Franzen said.
The aqueous solution mentioned in the Science article? The experiments routinely used 50 percent solvent. The experiments only produced the hexagonal crystals when there was a high level of solvent, typically 50 percent or more. It was the solvent creating the hexagonal crystals, not the RNA.
On Page 43 of notebook 3, Franzen found what he called a “smoking gun.”
(Graduate student Lina) Gugliotti had pasted four images of hexagonal crystals, ragged around the edges. The particles were degrading at room temperature. The same degradation was present in other samples, she noted.
The Science paper claimed the RNA-templated crystals were formed in aqueous solution with 5% THF and were stable. NC State apparently offered to revoke Gugliotti's doctorate (and another from the group), but the article says that the chemistry faculty objected, saying that the professors involved should be penalized, not the students. The university isn't commenting, saying that an investigation by the NSF is still ongoing, but Franzen points out that it's been going on for five years now, a delay that has probably set a record. He's published several papers characterizing the palladium "nanocrystals", though, including this recent one with one of Eaton and Feldheim's former collaborators and co-authors. And there the matter stands.
It's interesting that Franzen pursued this all the way to the newspaper (known when I lived in North Carolina by its traditional nickname of the Nuisance and Disturber). He's clearly upset at having joined what looked like an important and fruitful avenue of research, only to find out - rather quickly - that it was based on sloppy, poorly-characterized results. And I think what really has him furious is that the originators of the idea (Feldheim and Eaton) have tried, all these years, to carry on as if nothing was wrong.
I think, though, that Franzen is having his revenge whether he realizes it or not. It's coming up on ten years now since the original RNA nanocrystal paper. If this work were going to lead somewhere, you'd think that it would have led somewhere by now. But it doesn't seem to be. The whole point of the molecular-biology-meets-materials-science aspect of this idea was that it would allow a wide variety of new materials to be made quickly, and from the looks of things, that just hasn't happened. I'll bet that if you went back and looked up the 2005 grant application for the Keck foundation that Eaton, Feldheim (and at the time, Franzen) wrote up, it would read like an alternate-history science fiction story by now.
The Baran group has published a neat olefin-coupling reaction which looks like something pretty useful. Building on heteroatom/olefin couplings from Boger, Carreira, and others, they use an iron catalyst and a silane to form carbon-carbon bonds between olefins, inter- or intra-molecularly. As long as you've got one olefin with an electron-withdrawing group on it, things seem to fall into place (no homocoupling of the other olefin, for example). Update: here are more details from the Baran group blog about how this reaction came to be.
I like several things about this setup: the reagents are easy to come by, for one thing (no nine-step glovebox procedure to make the catalyst). And they've taken care to run it on larger scales (by bench standards) to see if it holds up (that reaction of 14 to 15 was done on gram scale, for example). They've also checked and found that the reaction doesn't mind if it's under nitrogen or not, and that you don't have to dry the solvents. These are exactly the questions that people ask every time a spiffy new reaction comes up, and all too often the answers are "We don't know" or "Well, yeah, about that. . ."
The only thing that worries me, looking over the tables of reactions, is that there's only one with a basic nitrogen (where 3-vinylpyridine was used). Boc-nitrogen seems to be OK, but a lot of the examples are rather alkane-ish. I've no doubt that people will be testing the limits of the system soon, because it looks like a reaction worth running.
Organic synthesis is, as many have put it, a victim of its own success. Synthetic chemists can, it's true, pretty much make whatever plausible structures you can draw on the board, or whatever product some tropical fungus or toxic sponge thinks is a good idea. But we can make those only if constraints on time and money are removed. "Give me enough postdocs and I will move the Earth".
Those aren't realistic conditions, though. There are many types of compounds, some of them quite simple, for which no good synthetic routes are known. Under infinite-postdoc conditions, many of these can be worked out for specific cases (step 43 of the total synthesis of shootmenowicene), but (and here's my industrial bias showing) a good synthetic route is one that works on a variety of substrates, with readily available reagents, in reliably useful yields, under non-strenuous conditions. We're missing a lot of those.
But it looks like one might have been crossed off the list. This paper in Science, from UT-Southwestern and Brigham Young, reports a new method to make aziridines, including NH ones, in one step under mild conditions. There are quite a few methods to make aziridines, but most of them deliver N-substituted products, particularly N-Boc and N-tosyl ones. A direct reaction analogous to epoxidation to give you an NH aziridine is pretty rare, but this seems to be the answer. It's a rhodium-catalyzed route that has been applied to a range of olefins, and it looks pretty mild and pretty general.
This should simplify routes to a number of natural products with this motif, but it should also prompt some new chemistry as we get easier access to that functional group. Congratulations to the authors!
From Monash University comes this colorful (and doubtless extremely useful) chart of Smells of Chemistry. See if you agree with its assessments - I think it's broadly correct, but I might be a bit more descriptive in some of the boxes. Although "Unique and Unpleasant" does sum up some of them pretty well, and I do like the boxes marked "Old People", "Seaweed", and "Dead Animals".
A reader sends along this mysterious glassware set, which was donated to a nonprofit that he's working with. They're thinking of selling it on eBay, if they can figure out how to list it and what it is.
Looking at it, the lack of ground-glass joints makes you think "diazomethane kit", but I don't think that's quite right. (What are those gas impinger tubes doing in there, for example?) Kjeldahl apparatus? I haven't seen one in so long that I'm not sure about that, either. If anyone has any ideas, please feel free to take a crack.
The Danishefsky group has published their totally synthetic preparation of erythropoietin. This is a work that's been in progress for ten years now (here's the commentary piece on it), and it takes organic synthesis into realms that no one's quite experienced yet:
The ability to reach a molecule of the complexity of 1 by entirely chemical means provides convincing testimony about the growing power of organic synthesis. As a result of synergistic contributions from many laboratories, the aspirations of synthesis may now include, with some degree of realism, structures hitherto referred to as “biologics”— a term used to suggest accessibility only by biological means (isolation from plants, fungi, soil samples, corals, or microorganisms, or by recombinant expression). Formidable as these methods are for the discovery, development, and manufacturing of biologics, one can foresee increasing needs and opportunities for chemical synthesis to provide the first samples of homogeneous biologics. As to production, the experiments described above must be seen as very early days. . .
I can preach that one both ways, as the old story has it. I take the point about how synthesis can provide these things in more homogeneous form than biological methods can, and it can surely provide variations on them that biological systems aren't equipped to produce. At the same time, I might put my money on improving the biological methods rather than stretching organic synthesis to this point, at least in its present form. I see the tools of molecular biology as hugely powerful, but in need of customization, whereas organic synthesis can be as custom as you like, but can (so far) only reach this sort of territory by all-out efforts like Danishefsky's. In other words, I think that molecular biology has to improve less than organic chemistry has to get the most use out of such molecules.
That said, I think that the most impressive part of this impressive paper is the area where we have the fewest molecular biology tools: the synthesis of the polysaccharide side chains. Assembling the peptide part was clearly no springtime stroll (and if you read the paper, you find that they experienced the heartbreak of having to go back and redesign things when the initial assembly sequence failed). But polyglycan chemistry has been a long-standing problem (and one that Danishefsky himself has been addressing for years). I think that chemical synthesis really has a much better shot at being the method of choice there. And that should tell you what state the field is in, because synthesis of those things can be beastly. If someone manages to tame the enzymatic machinery that produces them, that'll be great, but for now, we have to make these things the organic chemistry way when we dare to make them at all.
Here's some good news for open (free) access to chemical information. A company called SureChem was trying to make a business out of chemical patent information, but had to fold. They've donated their database to the EMBL folks, and now we have SureChEMBL. At the moment, that link is taking me to the former SureChem site, but no doubt that's changing shortly.
This will give access to millions of chemical structures in patents, a resource that's been hard to search without laying out some pretty noticeable money. This isn't just the database dump, either - the software has been donated, too, so things will stay up to date:
SureChEMBL takes feeds of full text patents, identifies chemical objects from either the in-line text or from images and adds 2-D chemical structures. This is then loaded into a database and is searchable by chemical structure, so you can do substructure, similarity searching and so forth - all the good things you'd expect from a chemical database. This chemical search functionality is unavailable from the public, published patent documents, and is really essential for anyone seriously using the patent literature. Oh, and the system does this live, so as patents are published, they are processed and added to the system - the delay between publication and structures being available in SureChEMBL is about a day when converted from text, and a few days when converted from image sources.
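For anyone curious what "similarity searching" means under the hood, it usually comes down to ranking structures by Tanimoto similarity between molecular fingerprints. Here's a deliberately toy Python sketch - the hand-made feature sets below are stand-ins for real fingerprints, which SureChEMBL and every other cheminformatics system compute far more cleverly:

```python
# Tanimoto (Jaccard) similarity: |A intersect B| / |A union B|.
# Real systems hash substructure fragments into bit vectors; these
# small hand-made feature sets are purely illustrative.

def tanimoto(a, b):
    """Similarity between two feature sets, 0.0 (disjoint) to 1.0 (identical)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Toy "fingerprints": imagined fragment features for two related molecules
aspirin_like = {"benzene", "ester", "carboxylic_acid"}
salicylate_like = {"benzene", "hydroxyl", "carboxylic_acid"}

print(tanimoto(aspirin_like, salicylate_like))  # 2 shared / 4 total = 0.5
```

A similarity search then just computes this score against every structure in the database and returns the hits above some cutoff (0.7 or so is a common default).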
Chemical Abstracts, Reaxys, and the others in that business should take note: if they want people to keep paying for their systems, they'll need to keep providing more value for the money. Good news all around.
Chemjobber has a good post on a set of papers from Pfizer's process chemists. They're preparing filibuvir, and a key step along the way is a Dieckmann cyclization. Well, no problem, say the folks who've never run one of these things - just hit the diester compound with some base, right?
But which base? The example in CJ's post is a good one to show how much variation you can get in these things. As it turned out, LiHMDS was the base of choice, much better than NaHMDS or KHMDS. Potassium t-butoxide was just awful. But the lithium hexamethyldisilazide was also much better than LDA, and those two are normally pretty close. There were even finer distinctions to be made: it turned out that the reaction was (reproducibly) slightly better or slightly worse with LiHMDS from different suppliers. The difference came down to two processes used to prepare the reagent - via n-BuLi or via lithium metal - and the Pfizer team still isn't sure what the difference is that's making all the difference (see the link for more details).
That's pure, 100-proof process chemistry for you, chasing down these details. It's a good thing for people who don't do that kind of work at all, though, to read some of these papers, because it'll give you an appreciation of variables that otherwise you might not think of at all. When you get down to it, a lot of our reactions are balancing on some fairly wobbly tightropes strung across the energy-surface landscape, and it doesn't take much of a push to send them sliding off in different directions. Choice of cation, of Lewis acid, of solvent, of temperature, order of addition - these and other factors can be thermodynamic and kinetic game-changers. We really don't know too many details about what happens in our reaction flasks.
And a brief med-chem note, for context: filibuvir, into which all this work was put, was dropped from development earlier this year. Sometimes you have to do all the work just to get to the point where you can drop these things - that's the business.
Here's a roundup of the top chem-blog posts of the year, as picked by Nature's Sceptical Chymist blog. I made the list, but a lot of other good stuff did, too - have a look. Edit - link fixed now - sorry!
Pick an empirical formula. Now, what's the most stable compound that fits it? Not an easy question, for sure, and it's the topic of this paper in Angewandte Chemie. Most chemists will immediately realize that the first problem is the sheer number of possibilities, and the second one is figuring out their energies. A nonscientist might think that this is the sort of thing that would have been worked out a long time ago, but that definitely isn't the case. Why think about these things?
What is this “Guinness” molecule isomer search good for? Some astrochemists think in such terms when they look for molecules in interstellar space. A rule with exceptions says that the most stable isomers have a higher abundance (Astrophys. J. 2009, 696, L133), although kinetic control undoubtedly has a say in this. Pyrolysis or biotechnology processes, for example, in anaerobic biomass-to-fuel conversions, may be classified on the energy scale of their products. The fate of organic aerosols upon excitation with highly energetic radiation appears to be strongly influenced by such sequences because of ion-catalyzed chain reactions (Phys. Chem. Chem. Phys. 2013, 15, 940). The magic of protein folding is tied to the most stable atomic arrangement, although one must keep in mind that this is a minimum-energy search with hardly any chemical-bond rearrangement. We should rather not think about what happens to our proteins in a global search for their minimum-energy structure, although the peptide bond is not so bad in globally minimizing interatomic energy. Regularity can help and ab initio crystal structure prediction for organic compounds is slowly coming into reach. Again, the integrity of the underlying molecule is usually preserved in such searches.
Things get even trickier when you don't restrict yourself to single compounds. It's pointed out that the low-energy form of the hexose empirical formula (C6H12O6) might well be a mixture of methane and carbon dioxide (which sounds like the inside of a comet to me). That brings up another reason this sort of thinking is useful: if you want to sequester carbon dioxide, what's the best way to do it? What molecular assemblies are most energetically favorable, at what temperatures do they exist, and at what level of complexity? At larger scales, we'll also need to think about such things in the making of supramolecular assemblies for nanotechnology.
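That methane-plus-carbon-dioxide point checks out on paper, by the way: the hexose formula disproportionates cleanly as C6H12O6 -> 3 CH4 + 3 CO2. Here's a quick (and admittedly overkill) atom-balance check in Python, with a minimal formula parser written just for this illustration:

```python
import re
from collections import Counter

def parse_formula(formula):
    """Count atoms in a simple formula like 'C6H12O6' (no parentheses)."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:
            counts[element] += int(number or "1")
    return counts

def combine(terms):
    """Total atom counts for a list of (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in terms:
        for element, n in parse_formula(formula).items():
            total[element] += coeff * n
    return total

left = combine([(1, "C6H12O6")])
right = combine([(3, "CH4"), (3, "CO2")])
print(left == right)  # True: C6H12O6 balances exactly against 3 CH4 + 3 CO2
```

Whether that mixture really sits at the bottom of the energy landscape is the hard question, of course - the stoichiometry is the easy part.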
The author, Martin Suhm of Göttingen, calls for a database of the lowest-energy species for each given formula as an invitation for people to break the records. I'd like to see someone give it a try. It would provide challenges for synthesis, spectroscopy and (especially) modeling and computational chemistry.
A look back at the way it used to be, courtesy of ChemTips. What did you do without NMR, without LC-mass spec? You tried all kinds of tricks to get solids that you could recrystallize, and liquids that you could distill. I missed out on that era of chemistry, and most readers here can say the same. But it's a good mental exercise to picture what things used to be like.
Here's a very surprising idea that looks like it can be put to an experimental test. Mao-Sheng Miao (of UCSB and the Beijing Computational Sciences Research Center) has published a paper suggesting that under high-pressure conditions, some elements could show chemical bonding behavior involving their inner-shell electrons. Specific predictions include high-pressure forms of cesium fluoride - not just your plain old CsF, but CsF3 and CsF5, and man, do I feel odd writing down those formulae.
These have completely different geometries, and should be readily identifiable should they actually form. I'm thinking of this as cesium giving up its lone valence electron, and then you're left with a xenon-like arrangement. And xenon, as Neil Bartlett showed the world in 1962, can certainly go on to form fluorides. Throw in some pressure, and (perhaps) the deed is done in cesium's case. So I very much look forward to an experimental test of this idea, which I would imagine we'll see pretty shortly.
Double Nobelist Frederick Sanger has died at 95. He is, of course, the pioneer in both protein and DNA sequencing, and he lived to see these techniques, revised and optimized beyond anyone's imagining, become foundations of modern biology.
When he and his team determined the amino acid sequence of insulin in the 1950s, no one was even sure if proteins had definite sequences or not. That work, though, established the concept for sure, and started off the era of modern protein structural studies, whose importance to biology, medicine, and biochemistry is completely impossible to overstate. The amount of work needed to sequence a protein like insulin was ferocious - this feat was just barely possible given the technology of the day, and that's even with Sanger's own inventions and insights (such as Sanger's reagent) along the way. He received a well-deserved Nobel in 1958 for having accomplished it.
In the 1970s, he made fundamental advances in sequencing DNA, such as the dideoxy chain-termination method, again with effects which really can't be overstated. This led to a share of a second chemistry Nobel in 1980 - he's still the only double laureate in chemistry, and every bit of that recognition was deserved.
Here's something that you don't see every day: an article in the New York Times praising the sophomore organic chemistry course. It's from the Education section, and it's written from the author's own experience:
Contemplating a midlife career change from science writer to doctor, I spent eight months last year at Harvard Extension School slogging through two semesters of organic chemistry, or orgo, the course widely known for weeding out pre-meds. At 42, I was an anomaly, older than most of my classmates (and both professors), out of college for two decades and with two small children. When I wasn’t hopelessly confused, I spent my time wondering what the class was actually about. Because I’m pretty sure it wasn’t just about organic chemistry. For me, the overriding question was not “Is this on the test?” but rather “What are they really testing?”
That's a worthwhile question. Organic chemistry is a famous rite of passage for pre-med students, but it's safe to say that its details don't come up all that often in medical practice, at least not in the forms one finds them in most second-year courses. Of course, there's a lot to the viewpoint expressed by Chemjobber on Twitter, that if you can't understand sophomore organic, there are probably a lot of other topics in medical science you're going to have trouble understanding, too. The article touches on this, too:
But the rules have many, many exceptions, which students find maddening. The same molecule will behave differently in acid or base, in dark or sunlight, in heat or cold, or if you sprinkle magic orgo dust on it and turn around three times. You can’t memorize all the possible answers — you have to rely on intuition, generalizing from specific examples. This skill, far more than the details of every reaction, may actually be useful for medicine.
“It seems a lot like diagnosis,” said Logan McCarty, Harvard’s director of physical sciences education, who taught the second semester. “That cognitive skill — inductive generalization from specific cases to something you’ve never seen before — that’s something you learn in orgo.”
Or it's something you should learn, anyway. Taught poorly (or learned poorly), it's a long string of reactions to be memorized - this does that, that thing goes to this thing, on and on. Now, there are subjects that have to be given this treatment - the anatomy that those med students will end up studying is a good example - but you'd think that students would want to put off as much brute-force memorization as possible, in favor of learning some general principles. But sometimes those principles don't come across, and sometimes a student's natural response to new material is just to stuff it as it comes into the hippocampus. That's not a good solution, but in some cases organic chemistry gets to be the course that teaches that lesson. I don't suppose that knowing the Friedel-Crafts reaction helps out many physicians, but having to learn it might.
There's still a case for (future) physicians to know organic chemistry for the sake of knowing organic chemistry, though. You can't have much of a grasp of biochemistry without learning organic, and it comes in rather handy for pharmacology and toxicology, too. Depending on what kind of medicine a person's practicing, these may vary in utility. But I'd rather not have anyone as a physician who doesn't give them a thought.
Medicinal chemists have long been familiar with the "magic methyl" effect. That's the dramatic change in affinity that can be seen (sometimes) with the addition of a single methyl group in just the right place. (Alliteration makes that the phrase of choice, but there are magic fluoros, magic nitrogens, and others as well). The methyl group is also particularly startling to a chemist, because it's seen as electronically neutral and devoid of polarity - it's just a bump on the side of the molecule, right?
Some bump. There's a very useful new paper in Angewandte Chemie that looks at this effect, and I have to salute the authors. They have a number of examples from the recent literature, and it couldn't have been easy to round them up. The methyl groups involved tend to change rotational barriers around particular bonds, alter the conformation of saturated rings, and/or add what is apparently just the right note of nonpolar interaction in some part of a binding site. It's important to remember just how small the energy changes need to be for things like this to happen.
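To put numbers on "small": a tenfold change in binding affinity corresponds to a free-energy difference of only RT ln(10), about 1.4 kcal/mol at room temperature, well within reach of a single well-placed methyl group. The arithmetic, as a quick sketch:

```python
import math

R_KCAL = 1.987e-3  # gas constant in kcal/(mol K)

def ddg_for_affinity_ratio(ratio, temp_k=298.0):
    """Free-energy difference (kcal/mol) corresponding to a given
    fold-change in binding affinity: ddG = RT * ln(ratio)."""
    return R_KCAL * temp_k * math.log(ratio)

print(round(ddg_for_affinity_ratio(10), 2))   # ~1.36 kcal/mol for a 10-fold change
print(round(ddg_for_affinity_ratio(100), 2))  # ~2.73 kcal/mol for a 100-fold change
```

A methyl that picks up a few tenths of a kcal/mol from a better hydrophobic contact, plus a conformational bias worth as much again, can easily add up to an order of magnitude in potency.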
The latter part of the paper summarizes the techniques for directly introducing methyl groups (as opposed to going back to the beginning of the sequence with a methylated starting material). And the authors call for more research into such reactions: wouldn't it be useful to be able to just staple a methyl group in next to the nitrogen of a piperidine, for example, rather than having to redo the whole synthesis? There are ways to methylate aryl rings, via metal-catalyzed couplings or lithium chemistry, but alkyl methylations are thin on the ground. (The ones that exist tend to rely on those same sorts of mechanisms).
Methyl-group reagents of the same sort that have been found for trifluoromethyl groups in recent years would be welcome - the sorts of things you could expose a compound to and have it just methylate the most electrophilic or nucleophilic site(s) to see what you'd get. This is part of a general need for alkyl C-H activation chemistries, which people have been working on for quite a while now. It's one of the great unsolved problems in synthetic chemistry, and I hope that progress gets made. Otherwise I might have to break into verse again, and no one wants that.
I'm actually going to ignore the headline on this article at Chemistry World, although coming up with it must have made someone's day. Once I'd gotten my head back up out of my hands and read the rest of the piece, it was quite interesting.
It's a summary of this paper in Nature Chemistry, which used the ingenious system shown to measure what the alkyl-chain interactions are worth in different solvents.
The team has now used a synthetic molecular balance to measure the strength of van der Waals interactions between apolar alkyl chains in more than 30 distinct organic, fluorous and aqueous solvent environments. The balance measurements show that the interaction between alkyl chains is an order of magnitude smaller than estimates of dispersion forces derived from measurements of vaporisation enthalpies and dispersion-corrected calculations. Moreover, the team found that van der Waals interactions between the alkyl chains were strongly attenuated by competitive dispersion interactions with the surrounding solvent molecules.
There are two ways to look at this, and they're not mutually exclusive. One, which the Chemistry World article takes (in a quote from lead author Scott Cockroft), is that this could simplify computational approaches to compound interactions, because calculating van der Waals forces is a much more intensive process. If solvent interactions are just going to cancel them out, why spend the resources? And that's true, but it brings up the other question: why did we think that vdW forces were so strong in the first place? As the quote above indicates, a lot of the experimental evidence is from gas-phase measurements, and the addition of solvent molecules clearly means that those values aren't as generalizable as had been thought. But that brings up the next question: why haven't computational methods shown before now that the gas-phase experimental data could be leading things astray?
I don't know the literature of the field well enough to answer that question, but given the sorts of exchanges that were taking place back in that recent Nobel Prize post, I'll bet that there are some people out there who can. Have there been computational methods that pointed toward the experimental data? Or have some of those efforts been directed more towards just seeing if the gas-phase data could be reproduced?
As a medicinal chemist, naturally, I'm wondering how we need to be thinking about binding of molecules to the active sites of enzymes. That's certainly not a solvent-filled environment, but inside a protein, water molecules are the bridge between being in a vacuum and being in solution. It'll depend on how many you have to worry about, what their roles are interacting with the protein and the ligand, how defined they are spatially, and how much of the molecule will be exposed to solvent. These things we already knew - will these new experimental results help us to get better at it?
Update: see the comments - lead author Scott Cockroft says that his group is looking for computational collaborators for some of these very purposes.
Here's a neat paper from Oliver Kappe's group on diazomethane flow chemistry. They're using the gas-permeable tube-in-tube technique (as pioneered by Steve Ley's group). Flow systems have been described for using diazomethane before, but this looks like a convenient lab-scale method.
Diazomethane is the perfect reagent to apply flow chemistry to. It's small, versatile, reactive, does a lot of very interesting and useful chemistry (generally quickly and in high yields). . .and it's also volatile, extremely toxic, and a really significant explosion hazard. Generating it as you use it (and reacting it quickly) is a very appealing thought. In this system, the commercial diazomethane precursor Diazald is mixed with a KOH flow in one tube, while a THF solution of the reactants flows past it on the other side of a gas-permeable membrane. Methyl ester formation, cyclopropanation, and Arndt–Eistert reactions all worked well.
Aldrich or someone should work this up into a small commercial apparatus, a bespoke diazomethane generator for general use. I think it would sell. I suggest, free of charge, the brand name "Diazoflow".
I wanted to mention a new blog, Totally Microwave, that's set up to cover all sorts of developments in microwave-assisted chemistry. Full disclosure: it's from a former colleague of mine. I don't know of another site that's working this area, so it could be a good addition to the list - have a look!
If you're in the mood for some truly 100-proof synthetic organic chemistry, this post from Mark Peczuh at UConn is going to be just what you need. He's going through the 2002 synthesis of ingenol from the Winkler group, line by line, in an effort to show his own students how to read such highly compressed reports. Here's a bit of it, to give you the idea:
Paragraph 7: The payment for using 6 in place of 5 has come due. In paragraph 7, the authors quickly move through a series of transformations that convert 16 to 22. The key player that enables these transformations is the hydroxymethyl group attached to C6. Oxidation of that group to the corresponding aldehyde allows sequential eliminations that create the diene in 22. The authors report flatly in this paragraph that the seven steps reported are “to introduce the A ring functionality present in ingenol”. They don’t put emphasis on it, but it’s logical to think that they’d have preferred to carry an oxygenated C3 up to this point and done only one or two steps to be in a much better position than they presently are. So it goes.
That is indeed exactly the sort of thinking that you have to do to follow one of these things, and it definitely requires time and effort. This chemistry is the sort of work that I don't do (and have, in fact, questioned the utility of), but there's no doubt that it's an extreme intellectual and practical challenge, which this view can really make you appreciate.
The 2013 Nobel Prize in Chemistry has gone to Martin Karplus of Harvard, Michael Levitt of Stanford, and Arieh Warshel of USC. This year's prize is one of those that covers a field by recognizing some of its most prominent developers, and this one (for computational methods) has been anticipated for some time. It's good to see it come along, though, since Karplus is now 83, and his name has been on the "Could easily win a Nobel" lists for some years now. (Anyone who's interpreted an NMR spectrum of an organic molecule will know him for a contribution the Nobel committee didn't even cite: the relationship between coupling constants and dihedral angles).
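For anyone who hasn't run into it, that Karplus relationship expresses a vicinal coupling constant as a function of the H-C-C-H dihedral angle: J(theta) = A cos^2(theta) + B cos(theta) + C. The coefficients in the sketch below are illustrative round numbers only - real parameterizations are fitted empirically and vary with substituents:

```python
import math

def karplus_j(dihedral_deg, a=7.0, b=-1.0, c=1.0):
    """Estimated 3J H-H coupling (Hz) from a dihedral angle (degrees).
    Coefficients here are illustrative; published parameter sets differ."""
    cos_theta = math.cos(math.radians(dihedral_deg))
    return a * cos_theta**2 + b * cos_theta + c

# The qualitative shape every organic chemist relies on: large couplings
# near 0 and 180 degrees, small ones near 90 degrees.
for angle in (0, 60, 90, 120, 180):
    print(f"{angle:3d} deg -> {karplus_j(angle):.2f} Hz")
```

That's why anti protons (dihedral near 180 degrees) show bigger couplings than gauche ones (near 60 degrees) - which is exactly the conformational information chemists extract from a spectrum.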
Here's the Nobel Foundation's information on this year's subject matter, and it's a good overview, as usual. This one has to cover a lot of ground, though, because the topic is a large one. The writeup emphasizes (properly) the split between classical and quantum-mechanical approaches to chemical modeling. The former is easier to accomplish (relatively!), but the latter is much more relevant (crucial, in fact) as you get down towards the scale of individual atoms and bonds. Computationally, though, it's a beast. This year's laureates pioneered some very useful techniques to try to have it both ways.
This started to come together in the 1970s, and the methods used were products of necessity. The computing power available wouldn't let you just brute-force your way past many problems, so a lot of work had to go into figuring out where best to deploy the resources you had. What approximations could you get away with? How did you use your quantum-mechanical calculations to give you classical potentials to work with? Where should the boundaries between the two be drawn? Even with today's greater computational power these are still key questions, because molecular dynamics calculations can still eat up all the processor time you can throw at them.
That's especially true when you apply these methods to biomolecules like proteins and DNA, and one thing you'll notice about all three of the prize winners is that they went after these problems very early. That took a lot of nerve, given the resources available, but that's what distinguishes really first-rate scientists: they go after hard, important problems, and if the tools to tackle such things don't exist, they invent them. How hard these problems are can be seen by what we can (and still can't) do by computational simulations here in 2013. How does a protein fold, and how does it end up in the shape it has? What parts of it move around, and by how much? What forces drive the countless interactions between proteins and ligands, other proteins, DNA and RNA molecules, and all the rest? What can we simulate, and what can we predict?
I've said some critical things about molecular modeling over the years, but those have mostly been directed at people who oversell it or don't understand its limitations. People like Karplus, Levitt, and Warshel, though, know those limitations in great detail, and they've devoted their careers to pushing them back, year after year. Congratulations to them all!
More coverage: Curious Wavefunction and C&E News. The popular press coverage of this award will surely be even worse than usual, because not many people charged with writing the headlines are going to understand what it's about.
Addendum: for almost every Nobel awarded in the sciences, there are people that miss out due to the "three laureate" rule. This year, I'd say that it was Norman Allinger, whose work bears very much on the subject of this year's prize. Another prominent computational chemist whose name comes up in Nobel discussions is Ken Houk, whose work is directed more towards mechanisms of organic reactions, and who might well be recognized the next time computational chemistry comes around in Sweden.
Second addendum: for a very dissenting view of my "Kumbaya" take on today's news, see this comment, and scroll down for reactions to it. I think its take is worth splitting out into a post of its own shortly!
Does anyone know what the MIT Press Office is getting at with this intro?
In all the centuries that humans have studied chemical reactions, just 36 basic types of reactions have been found. Now, thanks to the work of researchers at MIT and the University of Minnesota, a 37th type of reaction can be added to the list.
I don't think I've ever heard of any scheme quite like that. Looking over the paper itself, it's an interesting piece of computational work on low-temperature oxidation pathways. It shows that gamma-keto hydroperoxides (as had been hypothesized) can form a cyclic peroxide intermediate, which then fragments into a carboxylic acid and an aldehyde. This would seem to clear up some discrepancies in the production of COOH compounds in several oxidation and combustion pathways, where they show up much more often than theory predicts.
And that's all fine, but what's this 36 reaction business? It appears nowhere in the paper, which makes me wonder if someone who worked on the press release got something tangled up. Or is there some classification scheme that I haven't heard of?
(Noted on the "Ask Science" section of Reddit, where a baffled reader of the press release tried to find out what was going on.)
Element shortages are in the news these days. The US has been talking about shutting down its strategic helium reserve, and there are plenty of helium customers worried about the prospect. The price of liquid helium, not a commodity that you usually hear quoted on the afternoon financial report, has apparently more than tripled in the last year.
I think that this is more of a gap problem than a running-out-of-helium one, though. There's still a lot of helium in the world, and the natural gas boom of recent years has made even more of it potentially available. Trapping it, though, is not cheap - this is something that has to be done on a large scale to work at all, and substantial investment is needed. Air Liquide has a liquefaction plant starting up in Qatar, but that won't be running at full capacity for a while yet, it appears. I think, though, that this plant and other such efforts will end up providing enough helium for industry and research, at a price. We aren't running out of helium, but the cheap helium is going to be in short supply for a few years.
At the other end of the periodic table, though, it looks like we really are running out of plutonium-238. One's first impulse is to say "Good!", because the existing stockpiles are largely the result of nuclear weapons production in years past. But it's an excellent material to power radiothermal generators, since it has a reasonable half-life (87.7 years), a high decay energy, and is an alpha emitter (thus needing less heavy shielding). Note this picture of a pellet of the oxide glowing under its own heat. There are a number of proposed deep space missions that will only launch if they can use Pu-238 that no one seems to have. Russia sold about 16 kilos to the US in the early 1990s, but just a few years ago they backed out of a deal for another 10. No one's sure - or no one's saying - if that's because they would rather hold on to it themselves, or if they don't really have that amount left any more. To give you an idea, the proposed Europa mission to Jupiter would need about 22 kilos.
There are efforts to restart Pu-238 production, but as you would imagine, this is not the work of a moment. As opposed to helium, which is sitting around in natural gas underground, you're not going to be mining any plutonium. It has to be made from neptunium-237, which you only get from spent nuclear fuel rods, and the process is expensive, no fun, and hot as blazes in every sense of the word. Even if the proposed restart gets going, it'll only produce about 1.5kg per year. So if you have any plans that involve large amounts of plutonium - and they'd better involve space exploration, dude - you should take this into account.
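For a rough sense of why Pu-238 is such an attractive power source, here's a back-of-the-envelope sketch. The 87.7-year half-life is from the post; the specific thermal power of roughly 0.57 W per gram is a commonly quoted figure that I'm assuming here, not something from the articles above:

```python
PU238_HALF_LIFE_Y = 87.7        # half-life, from the post
SPECIFIC_POWER_W_PER_G = 0.57   # assumed thermal output of fresh Pu-238, W/g

def thermal_power_watts(mass_g: float, years_elapsed: float) -> float:
    """Thermal power of a Pu-238 heat source after some years of decay."""
    decay_factor = 0.5 ** (years_elapsed / PU238_HALF_LIFE_Y)
    return mass_g * SPECIFIC_POWER_W_PER_G * decay_factor

# The proposed Europa mission's ~22 kg load:
at_launch = thermal_power_watts(22_000, 0)    # ~12.5 kW thermal
after_20y = thermal_power_watts(22_000, 20)   # ~10.7 kW after a 20-year cruise
```

That slow decay is the whole appeal: a radiothermal generator only converts a few percent of the heat to electricity, but it keeps doing so for decades with no moving parts and no sunlight required.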
Well, nearly nothing. That's the promise of a technique that's been published by the Ernst lab from the University of Basel. They first wrote about this in 2010, in a paper looking for ligands to the myelin-associated glycoprotein (MAG). That doesn't sound much like a traditional drug target, and so it isn't. It's part of a group of immunoglobulin-like lectins, and they bind things like sialic acids and gangliosides, and they don't seem to bind them very tightly, either.
One of these sialic acids was used as their starting point, even though its affinity is only 137 micromolar. They took this structure and hung a spin label off it, with a short chain spacer. The NMR-savvy among you will already see an application of Wolfgang Jahnke's spin-label screening idea (SLAPSTIC) coming. That's based on the effect of an unpaired electron in NMR spectra - it messes with the relaxation time of protons in the vicinity, and this can be used to figure out what might be nearby. With the right pulse sequence, you can easily detect protons on other molecules or residues out to about 15 or 20 Angstroms from the spin label.
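That 15-to-20-Angstrom range comes from the steep distance dependence of the effect: paramagnetic relaxation enhancement falls off roughly as r to the minus sixth power. A minimal sketch (the 10 Angstrom reference point is an arbitrary choice for illustration):

```python
def relative_pre(r_angstrom: float, r_ref: float = 10.0) -> float:
    """Relaxation enhancement relative to a proton at r_ref, using the
    r^-6 distance dependence of the electron-nucleus interaction."""
    return (r_ref / r_angstrom) ** 6

for r in (10, 15, 20, 25):
    print(f"{r:>3} A -> {relative_pre(r):.3f}")
# prints 1.000, 0.088, 0.016, 0.004
```

An order of magnitude of the effect is gone by 15 Angstroms and most of the rest by 20, which is why anything the pulse sequence picks up has to be sitting quite close to the label.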
Jahnke's group at Novartis attached spin labels to proteins and used these to find ligands by NMR screening. The NMR field has a traditional bias towards bizarre acronyms, which sometimes calls for ignoring a word or two, so SLAPSTIC stands for "Spin Labels Attached to Protein Side chains as a Tool to identify Interacting Compounds". Ernst's team took their cue from yet another NMR ligand-screening idea, the Abbott "SAR by NMR" scheme. That one burst on the scene in 1996, and caused a lot of stir at the time. The idea was that you could use NMR of labeled proteins, with knowledge of their structure, to find sets of ligands at multiple binding sites, then chemically stitch these together to make a much more potent inhibitor. (This was fragment-based drug discovery before anyone was using that phrase).
The theory behind this idea is perfectly sound. It's the practice that turned out to be the hard part. While fragment linking examples have certainly appeared (including Abbott examples), the straight SAR-by-NMR technique has apparently had a very low success rate, despite (I'm told by veterans of other companies) a good deal of time, money, and effort in the late 1990s. Getting NMR-friendly proteins whose structure was worked out, finding multiple ligands at multiple sites, and (especially) getting these fragments linked together productively has not been easy at all.
But Ernst's group has brought the idea back. They did a second-site NMR screen with a library of fragments and their spin-labeled sialic thingie, and found that 5-nitroindole was bound nearby, with the 3-position pointed towards the label. That's an advantage of this idea - you get spatial and structural information without having to label the protein itself, and without having to know anything about its structure. SPR experiments showed that the nitroindole alone had affinity up in the millimolar range.
They then did something that warmed my heart. They linked the fragments by attaching a range of acetylene and azide-containing chains to the appropriate ends of the two molecules and ran a Sharpless-style in situ click reaction. I've always loved that technique, partly because it's also structure-agnostic. In this case, they did a 3x4 mixture of coupling partners, potentially forming 24 triazoles (syn and anti). After three days of incubation with the protein, a new peak showed up in the LC/MS corresponding to a particular combination. They synthesized both possible candidates, and one of them was 2 micromolar, while the other was 190 nanomolar.
That molecule is shown here - the percentages in the figure are magnetization transfer in STD experiments, with the N-acetyl set to 100% as reference. And that tells you that both ends of the molecule are indeed participating in the binding, as that greatly increased affinity would indicate. (Note that the triazole appears to be getting into the act, too). That affinity is worth thinking about - one part of this molecule was over 100 micromolar, and the other was millimolar, but the combination is 190 nanomolar. That sort of effect is why people keep coming back to fragment linking, even though it's been a brutal thing to get to work.
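If you want to run those affinity numbers yourself: under simple free-energy additivity (the idealized fragment-linking limit, assuming 298 K, a 1 M standard state, and a linker that costs nothing), the Kd of the linked compound is just the product of the fragment Kds in molar units. A sketch:

```python
import math

RT = 0.593  # kcal/mol at 298 K

def binding_dG(kd_molar: float) -> float:
    """Binding free energy from a dissociation constant (1 M standard state)."""
    return RT * math.log(kd_molar)

kd_sugar = 137e-6   # the sialic acid starting point, 137 micromolar
kd_indole = 1e-3    # 5-nitroindole, roughly millimolar

# Perfect additivity: the linked compound's dG is the sum of the parts,
# so its Kd is the product of the fragment Kds.
kd_linked_ideal = math.exp((binding_dG(kd_sugar) + binding_dG(kd_indole)) / RT)
print(f"ideal linked Kd ~ {kd_linked_ideal * 1e9:.0f} nM")  # ~137 nM
```

The ideal limit works out to about 137 nM, so the observed 190 nM means the triazole linkage gave back nearly all of the theoretical additivity - which, as the track record of fragment linking shows, almost never happens.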
When I read this paper at the time, I thought that it was very nice, and I filed it in my "Break Glass in Case of Emergency" section for interesting and unusual screening techniques. One thing that worried me, as usual, was whether this was the only system this had ever worked on, or ever would. So I was quite happy to see a new paper from the Ernst group this summer, in which they did it again. This time, they found a ligand for E-selectin, another one of these things that you don't expect to ever find a decent small molecule for.
In this case, it's still not what an organic chemist would be likely to call a "decent small molecule", because they started with something akin to sialyl Lewis, which is already a funky tetrasaccharide. Their trisaccharide derivative had roughly 1 micromolar affinity, with the spin label attached. A fragment screen against E-selectin had already identified several candidates that seemed to bind to the protein, and the best guess was that they probably wouldn't be binding in the carbohydrate recognition region. Doing the second-site screen as before gave them, as fate would have it, 5-nitroindole as the best candidate. (Now my worry is that this technique only works when you run it with 5-nitroindole. . .)
They worked out the relative geometry of binding from the NMR experiments, and set about synthesizing various azide/acetylene combinations. In this case, the in situ Sharpless-style click reactions did not give any measurable products, perhaps because the wide, flat binding site wasn't able to act as much of a catalyst to bring the two compounds together. Making a library of triazoles via the copper-catalyzed route and testing those, though, gave several compounds with affinities between 20x and 50x greater than the starting structure, and with dramatically slower off-rates.
They did try to get rid of the nitro group, recognizing that it's only an invitation to trouble. But the few modifications they tried really lowered the affinity, which tells you that the nitro itself was probably an important component of the second-site binding. That, to me, is argument enough to consider not having those things in your screening collection to start with. It all depends on what you're hoping for - if you just want a ligand to use as a biophysical tool compound, then nitro on, if you so desire. But it's hard to stop there. If it's a good hit, people will want to put it into cells, into animals, into who knows what, and then the heartache will start. If you're thinking about these kinds of assays, you might well be better off not knowing about some functionality that has a very high chance of wasting your time later on. (More on this issue here, here, here, and here. Update: here's more on trying to get rid of nitro groups.)
This work, though, is the sort of thing I could read about all day. I'm very interested in ways to produce potent compounds from weak binders, ways to attack difficult low-hit-rate targets, in situ compound formation, and fragment-based methods, so these papers push several of my buttons simultaneously. And who knows, maybe I'll have a chance to do something like this all day at some point. It looks like work well worth taking seriously.
I mentioned ChemDraw for the iPad earlier this year, but the folks at PerkinElmer tell me that they've released a new version of it, and of Chem3D. (They're here and here at Apple, respectively). They've added text annotation, which seems to have been a highly requested feature, adjustable arrows, and a number of other features. Worth a look for the chemist-on-the-go.
Here's a paper from the Carreira group at the ETH, in collaboration with Roche, that falls into a category I've always enjoyed. I put these under the heading of "Synthetic routes into cute functionalized ring systems", and you can see my drug-discovery bias showing clearly.
Med-chem people like these kinds of molecules. (I have a few of them drawn here, but all the obvious variations are in the paper, too). They aren't in all the catalogs (yet), they're in no one's screening collection, and they have a particular kind of shape that might not be covered by anything else we already have in our files. There's no reason why something like this might not be the core of a bunch of useful compounds - small saturated nitrogen heterocycles fused to other rings sure do show up all over the place.
And the purpose of this sort of paper matches a drug discovery person's worldview exactly: here's a reasonable way into a large number of good-looking compounds that no one's ever screened, so go to it. (Here's an earlier paper from Carreira in the same area). The chemistry involved in making these things is good, solid stuff: it's not cutting-edge, but it doesn't have to be. It's done on a reasonable scale, and it certainly looks like it would work just fine. I can understand why readers from other branches of organic chemistry would skip over a paper like this. No theoretical concerns are addressed in the syntheses, no natural products are produced, no new catalysts are developed, and no new reactions are discovered. But new scaffolds are being made, and for a medicinal chemist, that's more than enough right there. This is chemistry that does just what it needs to do, quickly, and gets out of the way, and I wouldn't mind seeing a paper or two like this every time I open up my RSS feeds.
Acetate is used in vivo as a starting material for all sorts of ridiculously complex natural products. So here's a neat idea: why not hijack those pathways with fluoroacetate and make fluorinated things that no one's ever seen before? That's the subject of this new paper in Science, from Michelle Chang's lab at Berkeley.
There's the complication that fluoroacetate is a well-known cellular poison, so this is going to be synthetic biology all the way. (It gets processed all the way to fluorocitrate, which is a tight enough inhibitor of aconitase to bring the whole citric acid cycle to a shuddering halt, and that's enough to do the same thing to you). There's a Streptomyces species that has been found to use fluoroacetate without dying (just barely), but honestly, I think that's about it for organofluorine biology.
The paper represents a lot of painstaking work. Finding enzymes (and enzyme variants) that look like they can handle the fluorinated intermediates, expressing and purifying them, and getting them to work together ex vivo are all significant challenges. They eventually worked their way up to 6-deoxyerythronolide B synthase (DEBS), which is a natural goal since it's been the target of so much deliberate re-engineering over the years. And they've managed to produce compounds like the ones shown, which I hope are the tip of a larger fluorinated iceberg.
It turns out that you can even get away with doing this in living engineered bacteria, as long as you feed them fluoromalonate (a bit further down the chain) instead of fluoroacetate. This makes me wonder about other classes of natural products as well. Has anyone ever tried to see if terpenoids can be produced in this way? Some sort of fluorinated starting material in the mevalonate pathway, maybe? Very interesting stuff. . .
We chemists have always looked at the chemical machinery of living systems with a sense of awe. A billion years of ruthless pruning (work, or die) have left us with some bizarrely efficient molecular catalysts, the enzymes that casually make and break bonds with a grace and elegance that our own techniques have trouble even approaching. The systems around DNA replication are particularly interesting, since that's one of the parts you'd expect to be under the most selection pressure (every time a cell divides, things had better work).
But we're not content with just standing around envying the polymerase chain reaction and all the rest of the machinery. Over the years, we've tried to borrow whatever we can for our own purposes - these tools are so powerful that we can't resist finding ways to do organic chemistry with them. I've got a particular weakness for these sorts of ideas myself, and I keep a large folder of papers (electronic, these days) on the subject.
So I was interested to have a reader send along this work, which I'd missed when it came out in PLOS ONE. It's from Pehr Harbury's group at Stanford, and it's in the DNA-linked-small-molecule category (which I've written about, in other cases, here and here). Here's a good look at the pluses and minuses of this idea:
However, with increasing library complexity, the task of identifying useful ligands (the "needles in the haystack") has become increasingly difficult. In favorable cases, a bulk selection for binding to a target can enrich a ligand from non-ligands by about 1000-fold. Given a starting library of 10^10 to 10^15 different compounds, an enriched ligand will be present at only 1 part in 10^7 to 1 part in 10^12. Confidently detecting such rare molecules is hard, even with the application of next-generation sequencing techniques. The problem is exacerbated when biologically-relevant selections with fold-enrichments much smaller than 1000-fold are utilized.
Ideally, it would be possible to evolve small-molecule ligands out of DNA-linked chemical libraries in exactly the same way that biopolymer ligands are evolved from nucleic acid and protein libraries. In vitro evolution techniques overcome the "needle in the haystack" problem because they utilize multiple rounds of selection, reproductive amplification and library re-synthesis. Repetition provides unbounded fold-enrichments, even for inherently noisy selections. However, repetition also requires populations that can self-replicate.
That it does, and that's really the Holy Grail of evolution-linked organic synthesis - being able to harness the whole process. In this sort of system, we're talking about using the DNA itself as a physical prod for chemical reactivity. That's also been a hot field, and I've written about some examples from the Liu lab at Harvard here, here, and here. But in this case, the DNA chemistry is being done with all the other enzymatic machinery in place:
The DNA brings an incipient small molecule and suitable chemical building blocks into physical proximity and induces covalent bond formation between them. In so doing, the naked DNA functions as a gene: it orchestrates the assembly of a corresponding small molecule gene product. DNA genes that program highly fit small molecules can be enriched by selection, replicated by PCR, and then re-translated into DNA-linked chemical progeny. Whereas the Lerner-Brenner style DNA-linked small-molecule libraries are sterile and can only be subjected to selective pressure over one generation, DNA-programmed libraries produce many generations of offspring suitable for breeding.
The scheme below shows how this looks. You take a wide variety of DNA sequences, and have them each attached to some small-molecule handle (like a primary amine). You then partition these out into groups by using resins that are derivatized with oligonucleotide sequences, and you plate these out into 384-well format. While the DNA end is stuck to the resin, you do chemistry on the amine end (and the resin attachment lets you get away with stuff that would normally not work if the whole DNA-attached thing had to be in solution). You put a different reacting partner in each of the 384 wells, just like in the good ol' combichem split/pool days, but with DNA as the physical separation mechanism.
In this case, the group used 240-base-pair DNA sequences, two hundred seventeen billion of them. That sentence is where you really step off the edge into molecular biology, because without its tools, generating that many different species, efficiently and in usable form, is pretty much out of the question with current technology. That's five different coding sequences, in their scheme, with 384 different ones in each of the first four (designated A through D), and ten in the last one, E. How diverse was this, really? Get ready for more molecular biology tools:
We determined the sequence of 4.6 million distinct genes from the assembled library to characterize how well it covered "genetic space". Ninety-seven percent of the gene sequences occurred only once (the mean sequence count was 1.03), and the most abundant gene sequence occurred one hundred times. Every possible codon was observed at each coding position. Codon usage, however, deviated significantly from an expectation of random sampling with equal probability. The codon usage histograms followed a log-normal distribution, with one standard deviation in log-likelihood corresponding to two-to-three fold differences in codon frequency. Importantly, no correlation existed between codon identities at any pair of coding positions. Thus, the likelihood of any particular gene sequence can be well approximated by the product of the likelihoods of its constituent codons. Based on this approximation, 36% of all possible genes would be present at 100 copies or more in a 10 picomole aliquot of library material, 78% of the genes would be present at 10 copies or more, and 4% of the genes would be absent. A typical selection experiment (10 picomoles of starting material) would thus sample most of the attainable diversity.
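The arithmetic behind that "217 billion" figure, and the copy numbers available in a 10 picomole aliquot, are easy to check. Everything here comes from the numbers in the post (plus Avogadro's number); note that the mean is only a mean - the log-normal codon skew described above spreads actual copy numbers well above and below it:

```python
AVOGADRO = 6.022e23

# Codon choices at each of the five coding positions (A-D, then E):
codons_per_position = [384, 384, 384, 384, 10]
library_size = 1
for n in codons_per_position:
    library_size *= n
print(f"{library_size:.3e} possible genes")   # ~2.17e11, the "217 billion"

# A typical 10 picomole selection aliquot:
molecules = 10e-12 * AVOGADRO
mean_copies = molecules / library_size
print(f"~{mean_copies:.0f} copies per gene on average")   # ~28
```

About 6e12 molecules spread over 2.17e11 sequences gives a mean of roughly 28 copies each, which squares nicely with the paper's estimate that most (but not all) of the attainable diversity is sampled in a single 10 picomole experiment.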
The group had done something similar before with 80-codon DNA sequences, but this system has 1546, which is a different beast. But it seems to work pretty well. Control experiments showed that the hybridization specificity remained high, and that the micro/meso fluidic platform being used could return products with high yield. A test run also gave them confidence in the system: they set up a run with all the codons except one specific dropout (C37), and also prepared a "short gene", containing the C37 codon, but lacking the whole D area (200 base pairs instead of 240). They mixed that in with the drop-out library (at a ratio of 1 to 384) and split the mixture out onto a C-codon-attaching array of beads. They then did the chemical step, attaching one peptoid piece onto all of them except the C37 binding well - that one got biotin hydrazide instead. Running the lot of them past streptavidin took the ratio of the C37-containing ones from 1:384 to something over 35:1, an enhancement of at least 13,000-fold. (Subcloning and sequencing of 20 isolates showed they all had the C37 short gene in them, as you'd expect).
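The enrichment arithmetic there is simple enough to sketch:

```python
def fold_enrichment(ratio_before: float, ratio_after: float) -> float:
    """Enrichment of a species, from its target:background ratio
    before and after a round of selection."""
    return ratio_after / ratio_before

# The spiked-in C37 "short gene": 1:384 before streptavidin, >35:1 after
print(f"{fold_enrichment(1/384, 35):.0f}-fold")   # 13440-fold
```

A 35:1 final ratio against a 1:384 starting ratio is 35 x 384 = 13,440, hence "at least 13,000-fold" - and since 35:1 was a lower bound, so is that number.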
They then set up a three-step coupling of peptoid building blocks on a specific codon sequence, and this returned very good yields and specificities. (They used a fluorescein-tagged gene and digested the product with PDE1 before analyzing them at each step, which ate the DNA tags off of them to facilitate detection). The door, then, would now seem to be open:
Exploration of large chemical spaces for molecules with novel and desired activities will continue to be a useful approach in academic studies and pharmaceutical investigations. Towards this end, DNA-programmed combinatorial chemistry facilitates a more rapid and efficient search process over a larger chemical space than does conventional high-throughput screening. However, for DNA-programmed combinatorial chemistry to be widely adopted, a high-fidelity, robust and general translation system must be available. This paper demonstrates a solution to that challenge.
The parallel chemical translation process described above is flexible. The devices and procedures are modular and can be used to divide a degenerate DNA population into a number of distinct sub-pools ranging from 1 to 384 at each step. This coding capacity opens the door for a wealth of chemical options and for the inclusion of diversity elements with widely varying size, hydrophobicity, charge, rigidity, aromaticity, and heteroatom content, allowing the search for ligands in a "hypothesis-free" fashion. Alternatively, the capacity can be used to elaborate a variety of subtle changes to a known compound and exhaustively probe structure-activity relationships. In this case, some elements in a synthetic scheme can be diversified while others are conserved (for example, chemical elements known to have a particular structural or electrostatic constraint, modular chemical fragments that independently bind to a protein target, metal chelating functional groups, fluorophores). By facilitating the synthesis and testing of varied chemical collections, the tools and methods reported here should accelerate the application of "designer" small molecules to problems in basic science, industrial chemistry and medicine.
Anyone want to step through? If GSK is getting some of their DNA-coded screening to work (or at least telling us about the examples that did?), could this be a useful platform as well? Thoughts welcome in the comments.
Fragment-based screening comes up here fairly often (and if you're interested in the field, you should also have Practical Fragments on your reading list). One of the complaints both inside and outside the fragment world is that there are a lot of primary hits that fall into flat/aromatic chemical space (I know that those two don't overlap perfectly, but you know the sort of things I mean). The early fragment libraries were heavy in that sort of chemical matter, and the sort of collections you can buy still tend to be.
The UK-based 3D Fragment Consortium has a paper out now in Drug Discovery Today that brings together a lot of references to work in this field. Even if you don't do fragment-based work, I think you'll find it interesting, because many of the same issues apply to larger molecules as well. How much return do you get for putting chiral centers into your molecules, on average? What about molecules with lots of saturated atoms that are still rather squashed and shapeless, versus ones full of aromatic carbons that carve out 3D space surprisingly well? Do different collections of these various molecular types really have differences in screening hit rates, and do these vary by the target class you're screening against? How much are properties (solubility, in particular) shifting these numbers around? And so on.
The consortium's site is worth checking out as well for more on their activities. One interesting bit of information is that the teams ended up crossing off over 90% of the commercially available fragments due to flat structures, which sounds about right. And that takes them where you'd expect it to:
We have concluded that bespoke synthesis, rather than expansion through acquisition of currently available commercial fragment-sized compounds is the most appropriate way to develop the library to attain the desired profile. . .The need to synthesise novel molecules that expand biologically relevant chemical space demonstrates the significant role that academic synthetic chemistry can have in facilitating target evaluation and generating the most appropriate start points for drug discovery programs. Several groups are devising new and innovative methodologies (i.e. methyl activation, cascade reactions and enzymatic functionalisation) and techniques (e.g. flow and photochemistry) that can be harnessed to facilitate expansion of drug discovery-relevant chemical space.
And as long as they stay away from the frequent hitters/PAINS, they should end up with a good collection. I look forward to future publications from the group to see how things work out!
The Baran group at Scripps has a whopper of a total synthesis out in Science. They have a route to the natural product ingenol, which is isolated from a Euphorbia species, a genus that produces a lot of funky diterpenoids. A synthetic ester of the compound has recently been approved to treat actinic keratosis, a precancerous skin condition brought on by exposure to sunlight.
The synthesis is 14 steps long, but that certainly doesn't qualify it for the "whopper" designation that I used. There are far, far longer total syntheses in the literature, but as organic chemists are well aware, a longer synthesis is not a better one. The idea is to make a compound as quickly and elegantly as possible, and for a compound like ingenol, 14 steps is pretty darn quick.
I'll forgo the opportunity for chem-geekery on the details of the synthesis itself (here's a write-up at Chemistry World). It is, of course, a very nice approach to the compound, starting from the readily available natural product (+)-3-carene, which is a major fraction of turpentine. There's a pinacol rearrangement as a key step, and from this post at the Baran group blog, you can see that it was a beast. Most of 2012 seems to have been spent on that one reaction, and that's just what high-level total synthesis is like: you have to be prepared to spend months and months beating on reactions in every tiny, picky variation that you can imagine might help.
Let me speak metaphorically, for those outside the field or who have never had the experience. Total synthesis of a complex natural product is like. . .it's like assembling a huge balloon sculpture, all twists and turns, knots and bulges, only half of the balloons are rubber and half of them are made of blown glass. And you can't just reach in and grab the thing, either, and they don't give you any pliers or glue. What you get is a huge pile of miscellaneous stuff - bamboo poles, cricket bats, spiral-wound copper tubing, balsa-wood dowels, and several barrels of even more mixed-up junk: croquet balls, doughnuts, wadded-up aluminum foil, wobbly Frisbees, and so on.
The balloon sculpture is your molecule. The piles of junk are the available chemical methods you use to assemble it. Gradually, you work out that if you brace this part over here in a cradle of used fence posts, held together with turkey twine, you can poke this part over here into it in a way that makes it stick if you just use that right-angled metal doohicky to hold it from the right while you hit the top end of it with a thrown tennis ball at the right angle. Step by step, this is how you proceed. Some of the steps are pretty obvious, and work more or less the way you pictured them, using things that are on top of one of the junk piles. Others require you to rummage through the whole damn collection, whittling parts down and tying stuff together to assemble some tool that you don't have, maybe something that no one has ever made at all.
What I like most about this new synthesis is that it's done on a real scale. LEO Pharmaceuticals is the company that sells the ingenol gel, and they're interested in seeing if there's something better. That post from Baran's group shows people holding flasks with grams of material in them. Mind you, that's what you need to get all these reactions figured out; I can only imagine how much material they must have burned off trying to get some of these steps optimized. But now that it's worked out, real quantities of analogs can be produced. Everyone who does total synthesis talks about making analogs for testing, but the follow-through is sometimes lacking. This one looks like it'll be more robust. Congratulations to everyone involved - with any luck, you'll never have to do something like this again, unless it's by choice!
Since I was talking about microwave heating of reactions here the other week, I wanted to mention this correspondence in Angewandte Chemie. Oliver Kappe is the recognized expert on microwave heating in chemistry, and recently published an overview of the topic. One of the examples he cited was a report of some Friedel-Crafts reactions that were accelerated by microwave heating. The authors did not take this very well, and fired back with a correspondence in Ang. Chem., clearly feeling that their work had been mistreated in Kappe's article. They never claimed to be seeing some sort of nonthermal microwave effect, they say, and resent the implication that they were.
Kappe himself has replied now, and seems to feel that Dudley et al. are trying to have things both ways:
In their Correspondence, Dudley and co-workers have suggested that we attempt to impugn their credibility by associating their rationalization for the observed effect with the concept of nonthermal microwave effects. This is clearly not the case. On the contrary, we specifically state in the Essay that “The proposed effect perhaps can best be classified as a specific microwave effect involving selective heating of a strongly microwave-absorbing species in a homogeneous reaction mixture (‘molecular radiators’).” As we have already pointed out, our Essay was mainly intended to provide an overview on the current state-of-affairs regarding microwave chemistry and microwave effects research. Not surprisingly, therefore, out of the incriminated 22 uses of the word “nonthermal” in our Essay, this word was used only twice in reference to the Dudley chemistry, and in both of these instances in conjunction with the term “specific microwave effect”.
The confusion perhaps arises since in the original publication by Dudley, the authors provide no clear-cut classification (thermal, specific, nonthermal) of the microwave effect that they have observed. In fact, they do not unequivocally state that they believe the effect is connected to a purely thermal phenomenon, but rather invoke arguments about molecular collisions and the pre-exponential factor A in the Arrhenius equation (for example: “Chemical reactions arise from specific molecular collisions, which typically increase as a function of temperature but also result from incident microwave irradiation”). Statements like this that appear to separate a thermal phenomenon from a microwave irradiation event clearly invite speculation by non-experts about the involvement of microwave effects that are not purely thermal in nature. This is very apparent by the news feature in Chemistry World following the publication of the Dudley article entitled: “Magical microwave effects revived. Microwaves can accelerate reactions without heating”
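For reference (my gloss on the argument, not Kappe's words), the Arrhenius equation at the center of this exchange is

```latex
k = A \, e^{-E_a/RT}
```

where $k$ is the rate constant, $A$ the pre-exponential factor, $E_a$ the activation energy, $R$ the gas constant, and $T$ the absolute temperature. Any purely thermal rate enhancement, local superheating included, acts through $T$; a genuinely nonthermal microwave effect would have to change $A$ or $E_a$ instead. That's why invoking the pre-exponential factor, without saying which camp you're in, invites exactly the speculation Kappe describes.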
Based on his own group's study of the reaction, Kappe believes that what's going on is local superheating of the solvent, not something more involved and/or mysterious. His reply is a lengthy, detailed schooling in microwave techniques - why the stated power output of a microwave reactor is largely meaningless, the importance (and difficulty) of accurate temperature measurements, and the number of variables that can influence solvent superheating. The dispute here seems to be largely a result of the original paper trying to sound coy about microwave effects - if they'd played things down a bit, I don't think this whole affair would have blown up.
But outside of this work, on the general topic of nonthermal microwave reaction effects, I side with Kappe (and, apparently, so do Dudley and co-authors). I haven't seen any convincing evidence for microwave enhancement of reactions that doesn't come down to heating (steep gradients, localized superheating, etc.).
Here it is 2013, and the last shot has just now been fired in the norbornyl cation debate. I'm too young to have lived through that one, although it was still echoing around as I learned chemistry. But now we have a low-temperature crystal structure of the ion itself, and you know what? It's nonclassical. Winstein was right, Olah and Schleyer were right, and H. C. Brown was wrong.
Everyone's been pretty sure of that for a long time now, of course. But that article from Chemistry World has a great quote from J. D. Roberts, who is now an amazing 95 years old and goes back a long, long way in this debate. He's very happy to see this new structure, but says that it still wouldn't have fazed H. C. Brown: "Herb would be Herb no matter what happened", he says, and from everything I've heard about him, that seems accurate.
Here's a paper in Nature Chemistry that addresses something that isn't explicitly targeted as often as it should be: the robustness of new reactions. The authors, I think, are right on target with this:
We believe a major hurdle to the application of a new chemical methodology to real synthetic problems is a lack of information regarding its application beyond the idealized conditions of the seminal report. Two major considerations in this respect are the functional group tolerance of a reaction and the stability of specific chemical motifs under reaction conditions. . .
Taking into account the limitations of the current methods, we propose that a lack of understanding regarding the application of a given reaction to non-idealized synthetic problems can result in a reluctance to apply new methodology. Confidence in the utility of a new reaction develops over time—often over a number of years—as the reaction is gradually applied within total syntheses, follow-up methodological papers are published, or personal experience is developed. Unfortunately, even when this information has evolved, it is often widely dispersed, fragmented and difficult to locate. To address this problem, both the tolerance of a reaction to chemical functionality and of the chemical functionality to the reaction conditions must be established when appropriate, and reported in an easily accessible manner, preferably alongside the new methodology.
This is as opposed to the current standard of one or two short tables of different substrates, and then a quick application to some natural product framework. Even those papers, I have to say, are better than some of the stuff in the literature, but we still could be doing better. This paper proposes an additional test: running the reaction in the presence of various added compounds, and reporting the % product that forms under these conditions, the % starting material remaining, and the % additive remaining as well. (The authors suggest using a simple, robust method like GC to get these numbers, which is good advice). This technique will give an idea of the tolerance of the reagents and catalysts to other functional groups, without incorporating them into new substrates, and can tell you if the reaction is just slowed down, or if something about the additive stops everything dead.
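The actual bookkeeping behind such a screen is simple. Here's a minimal, hypothetical sketch of it; the function names and the internal-standard arithmetic are my own illustration (the paper just recommends a simple, robust assay like GC), not anything taken from the authors:

```python
# Hypothetical sketch of the robustness-screen bookkeeping: convert GC peak
# areas to moles via an internal standard, then report each species as a
# percentage of the amount initially charged.

def percent_remaining(area, area_istd, response_factor, mol_istd, mol_initial):
    """A GC peak area, converted to moles against an internal standard,
    expressed as a percentage of the amount initially charged."""
    mol = (area / area_istd) * response_factor * mol_istd
    return 100.0 * mol / mol_initial

def robustness_entry(areas, rfs, mol_istd, mol_sm_0, mol_add_0):
    """The three numbers proposed for each additive experiment:
    % product formed, % starting material left, % additive surviving."""
    return {
        "% product":  percent_remaining(areas["product"],  areas["istd"],
                                        rfs["product"],  mol_istd, mol_sm_0),
        "% SM":       percent_remaining(areas["sm"],       areas["istd"],
                                        rfs["sm"],       mol_istd, mol_sm_0),
        "% additive": percent_remaining(areas["additive"], areas["istd"],
                                        rfs["additive"], mol_istd, mol_add_0),
    }
```

One entry per additive, and the pattern across entries tells you whether the reaction merely slows down (product down, starting material up, additive intact) or whether the additive itself is being consumed.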
Applying this setup to a classic Buchwald amination reaction shows that free aliphatic and aromatic alcohols and amines kill the reaction. Esters and ketones are moderately tolerated. Extraneous heterocycles can slow things down, but not in all cases. But alkynes, nitriles, and amides come through fine: the product forms, and the additives aren't degraded.
I like this idea, and I hope it catches on. But I think that the only way it will is if editors and reviewers start asking for it. Otherwise, it'll be put in the "More work" category, which is easy for authors to ignore. If something like this became the standard, though, all of us synthetic chemists would be better off.
Here's a neat bit of reaction optimization from the Aubé lab at Kansas. (Update: left the link out before - sorry!) They're trying to make one of their workhorse reactions, the intramolecular Schmidt, a bit less nasty by cutting down on the amount of acid catalyst. The problem with that is product inhibition: the amide that's formed in the reaction tends to vacuum up any Lewis acid around, so you've typically had to use that reagent in excess, which is not a lot of fun on scale.
By varying a number of conditions, they've found a new catalyst/solvent system that's quite a bit friendlier. I keep meaning to try some of these reactions out (they make some interesting molecular frameworks), and maybe this is my entry into them. But the general problem here is one that every working organic chemist has faced: reactions that, for whatever reason, stop partway through. In this situation, there's at least a reasonable hypothesis for why things grind to a halt, and there's always been a less-than-elegant way around it (dump in more Lewis acid).
I'm sure, though, that everyone out there at the bench has had reactions that just. . .stop, for reasons unknown, and can't be pushed forward by addition of more anything. I've always wondered what's going on in those situations (probably a lot of things, from case to case), and they're always a reminder of just how little we sometimes really understand about what's going on inside our reaction flasks. Aggregates or other supramolecular complexes? Solubility problems? Adsorption onto heterogeneous reactants? Getting a handle on these things isn't easy, and most people don't bother doing it, unless they're full-out process chemists in industry.
ChemBark has an interesting question here: who's the most respected and influential chemist, among chemists? He was taking nominations on Twitter, and has settled on Roald Hoffmann as his choice. Other strong contenders included Nocera, Corey, Whitesides, Sharpless, Kroto, Grubbs, Gray, Herschbach, Zare, and Stoddart. Anyone over here have names to add to the list? Note again that we're talking influence and fame inside the field, because if you go to "among the general public", you pretty much cut everyone out right there, unfortunately. . .
Ionic liquids (molten salts at relatively low temperatures) have been a big feature of the chemical literature for the last ten or fifteen years - enough of a feature to have attracted a few disparaging comments here, from me and from readers. There's a good article out now that talks about the early days of the field and how it grew, and it has some food for thought in it.
The initial reports in the field didn't get much attention (as is often the case). What seems to have made things take off was the possibility of replacing organic solvents with reusable, non-volatile, and (relatively) non-toxic alternatives. "Green chemistry" was (and to an extent still is) a magnet for funding, and it was the combination of this with ionic liquid (IL) work that made the field. But not all of this was helpful:
The link with green chemistry during the development of the IL field propelled both fields forward, but at times the link was detrimental to both fields when overgeneralizations eroded confidence. ILs were originally considered as green since many of these liquid salts possess a negligible vapor pressure and might replace the use of volatile organic solvents known to result in airborne chemical contamination. The reported water stability and non-volatility led to the misconception that these salts were inherently safe and environmentally friendly. This was exacerbated by the many unsubstantiated claims that ILs were ‘green’ in introductions meant to provide the motivation for the study, even if the study itself had nothing to do with green chemistry. While it is true that the replacement of a volatile organic compound (VOC) might be preferred, proper knowledge of the chemistry of the ions must also be taken into account before classifying anything as green. Nonetheless, the statement “Ionic Liquids are green” was widely published (and can still be found in papers published today). Given the number and nature of the possible ions comprising ILs, these statements are similar to “Water is green, therefore all solvents are green.”
There were many misunderstandings at the chemical level as well:
However, just as the myriad of molecular solvents (or any compounds) can have dramatic differences in chemical, physical, and biological properties based on their chemical identity, so too can ILs. With the potential for 10^18 ion combinations, a single crystal structure of one compound is not a good representation of the chemistry of the entire class of salts which melt below 100 °C and would be analogous to considering carbon tetrachloride as a model system for all known molecular solvents.
The realization that hexafluorophosphate counterions can indeed generate HF under the right conditions helped bring a dose of reality back to the field, although (as the authors point out), not without a clueless backlash that decided, for a while, that all ionic liquids were therefore intrinsically toxic and corrosive. The impression one gets is that the field has settled down, and that its practitioners are more closely limited to people who know what they are talking about, rather than having quite so many who are doing it because it's hot and publishable. And that's a good thing.
The topic of making hit compounds, leads, and drug candidates that are less flat/aromatic has come up several times around here, and constantly around the industry. A reader sent along the following question: supposing that you wanted to obtain a decent collection of molecules with a greater-than-normal number of nonaromatic carbons and chiral centers, where would you find them?
Are there some suppliers that have done a better job than others of rising to the demand for this sort of thing? If anyone has nominations for good sources, or for places that are at least showing signs of moving in that direction, they'd be welcome. My guess is that fragment-sized molecules would be a good place to start, since they're (presumably) more synthetically accessible, and have advantages in the amount of chemical space that can be covered per number of compounds, but all comers will be considered. . .
Chiral catalyst reactions seem to show up on both lists when you talk about new reactions: the list of "Man, we sure do need more of those" and the "If I see one more paper on that I'm going to do something reckless" list.
I sympathize with the latter viewpoint, but the former is closer to reality. What we don't need are more lousy chiral catalyst papers, though, on that I think we can all agree. So I wanted to mention a good one, from Erick Carreira's group at the ETH. They're trying something that we're probably going to be seeing more of in the future: a "dual-catalyst" approach:
In a conceptually different construct aimed at the synthesis of compounds with a pair of stereogenic centers, two chiral catalysts employed concurrently could dictate the configuration of the stereocenters in the product. Ideally, these would operate independently and set both configurations in a single transition state with minimal matched/mismatched interactions. Herein, we report the realization of this concept in the development of a method for the stereodivergent dual-catalytic α-allylation of aldehydes.
Shown is a typical reaction scheme. They're doing iridium-catalyzed allylation reactions, which are already known via the work of Hartwig and others, but with a chiral catalyst to activate the nucleophilic end of the reaction and a separate one for the electrophilic end. That lets you more or less dial in the stereocenters you want in the product. It looks like the allyl alcohol needs some sort of aryl group, although they can get it to work with a variety of those. The aldehyde component can vary more widely.
You'd expect a scheme like this to have some combinations that work great, but other mismatched ones that struggle a bit. But in this case the yields stay at 60 to 80%, and the ee values are >99% across the board as they switch things around, which is why we're reading this in Science rather than in, well, you can fill in the names of some other journals as well as I can. Making a quaternary chiral center next to a tertiary one in whatever configuration you want is not something you see every day.
I think that chiral multi-catalytic systems will be taking up even more journal pages than ever in the future. It really seems like a way to get things to perform, and there's certainly enough in the idea to keep a lot of people occupied for a long time. Those of us doing drug discovery should resist the urge to flip the pages too quickly, too, because if we really mean all that stuff about making more three-dimensional molecules, we're going to have to do better with chirality than "Run it down an SFC and throw half of it away".
If you're an iPad sort of chemist (one of Baran's customers?), you may well already know that app versions of ChemDraw and Chem3D came out yesterday for that platform. I haven't tried them out myself, not (yet) being a swipe-and-poke sort of guy, but at $10 for the ChemDraw app (and Chem3D for free), it could be a good way to get chemical structures going on your own tablet.
Andre the Chemist has a writeup on his experiences here. As an inorganic chemist, he's run into difficulties with text labels, but for good ol' organic structures, things should be working fine. I'd be interested in hearing hands-on reviews of the software in the comments: how does the touch-screen interface work out for drawing? Seems like it could be a good fit. . .
I see that Neil Withers is trying to start up a new discussion in that "Kudzu of Chemistry" comment thread. The main topic is what reactions and chemistry we see too much of, but he's wondering what we should see more of. It's a worthwhile question, but I wonder if it'll be hard to answer. Personally, I'd like to see more reactions that let me attach primary and secondary amines directly onto unactivated alkyl CH bonds, but I'm not going to arrange my schedule around waiting for them.
So maybe we should stick with reactions (or reaction types) that have been reported, but don't seem to be used as much as they should. What are the unsung chemistries that should be more famous? What reactions have you seen that you can't figure out why no one's ever followed up on them? I'll try to add some of my own in the comments as the day goes on.
Chemistry, like any other human-run endeavor, goes through cycles and fads. At one point in the late 1970s, it seemed as if half the synthetic organic chemists in the world had made cis-jasmone. Later on, a good chunk of them switched to triquinane synthesis. More recently, ionic liquids were all over the literature for a while, and while it's not like they've disappeared, they're past their publishing peak (which might be a good thing for the field).
So what's the kudzu of chemistry these days? One of my colleagues swears that you can apparently get anything published these days that has to do with a BODIPY ligand, and looking at my RSS journal feeds, I don't think I have enough data to refute him. There are still an awful lot of nanostructure papers, but I think that it's a bit harder, compared to a few years ago, to just publish whatever you trip over in that field. The rows of glowing fluorescent vials might just have eased off a tiny bit (unless, of course, that's a BODIPY compound doing the fluorescing!) Any other nominations? What are we seeing way too much of?
For those who are into total synthesis of natural products, Arash Soheili has a Twitter account (Total_Synthesis) that keeps track of all the reports in the major journals. He's emailed me with a link to a searchable database of all these, which brings a lot of not-so-easily-collated information together into one place. Have a look! (Mostly, when I see these, I'm very glad that I'm not still doing them, but that's just me).
It's molecular imaging week! See Arr Oh and others have sent along this paper from Science, a really wonderful example of atomic-level work. (For those without journal access, Wired and PhysOrg have good summaries).
As that image shows, what this team has done is take a starting (poly) phenylacetylene compound and let it cyclize to a variety of products. And they can distinguish the resulting frameworks by direct imaging with an atomic force microscope (using a carbon monoxide molecule as the tip, as in this work), in what is surely the most dramatic example yet of this technique's application to small-molecule structure determination. (The first use I know of, from 2010, is here). The two main products are shown, but they pick up several others, including exotica like stable diradicals (compound 10 in the paper).
There are some important things to keep in mind here. For one, the only way to get a decent structure by this technique is if your molecules can lie flat. These are all sitting on the face of a silver crystal, but if a structure starts poking up, the contrast in the AFM data can be very hard to interpret. The authors of this study had this happen with their compound 9, which curls up from the surface and whose structure is unclear. Another thing to note is that the product distribution is surely altered by the AFM conditions: a molecule in solution will probably find different things to do with itself than one stuck face-on to a metal surface.
But these considerations aside, I find this to be a remarkable piece of work. I hope that some enterprising nanotechnologists will eventually make some sort of array version of the AFM, with multiple tips splayed out from each other, with each CO molecule feeding to a different channel. Such an AFM "hand" might be able to deconvolute more three-dimensional structures (and perhaps sense chirality directly?) Easy for me to propose - I don't have to get it to work!
Here's a question for the organic chemists in the crowd, and not just those in the drug industry, either. Over the last few years, there's been a lot of discussion about how drug company compound libraries have too many compounds with too many aromatic rings in them. Here are some examples of just the sort of thing I have in mind. As mentioned here recently, when you look at real day-to-day reactions from the drug labs, you sure do see an awful lot of metal-catalyzed couplings of aryl rings (and the rest of the time seems to be occupied with making amides to link more of them together).
Now, it's worth remembering that some of the studies on this sort of thing have been criticized for stacking the deck. But at the same time, it's undeniable that the proportion of "flat stuff" has been increasing over the years, to the point that several companies seem to be openly worried about the state of their screening collections.
So here's the question: if you're trying to break out of this, and go to more three-dimensional structures with more saturated rings, what are the best ways to do that? The Diels-Alder reaction has come up here as an example of the kind of transformation that doesn't get run so often in drug research, and it has to be noted that it provides you with instant 3-D character in the products. What we could really use are reactions that somehow annulate pyrrolidines or tetrahydropyrans onto other systems in one swoop, or reliably graft on spiro systems where there was a carbonyl, say.
I know that there are some reactions like these out there, but it would be worthwhile, I think, to hear what people think of when they think of making saturated heterocyclic ring systems. Forget the indoles, the quinolines, the pyrazines and the biphenyls: how do you break into the tetrahydropyrans, the homopiperazines, and the saturated 5,5 systems? Embrace the stereochemistry! (This impinges on the topic of natural-product-like scaffolds, too).
My own nomination, for what it's worth, is to use D-glucal as a starting material. If you hydrogenate that double bond, you now have a chiral tetrahydropyran triol, with differential reactivity, ready to be functionalized. Alternatively, you can go after that double bond to make new fused rings, without falling back into making sugars. My carbohydrate-based synthesis PhD work is showing here, but I'm not talking about embarking on a 27-step route to a natural product here (one of those per lifetime is enough, thanks). But I think the potential for library synthesis in this area is underappreciated.
There's a new paper out today in Nature on a very unusual way to determine the chirality of organic molecules. It uses an exotic effect of microwave spectroscopy, and I will immediately confess that the physics is (as of this morning, anyway) outside my range.
This is going to be one of those posts that comes across as gibberish to the non-chemists in the audience. Chirality seems to be a concept that confuses people pretty rapidly, even though the examples of right and left shoes or gloves (or right and left-handed screw threads) are familiar from everyday objects, and exactly the same principles apply to molecules. But the further you dig into the concept, the trickier it gets, and when you start dragging the physics of it in, you start shedding your audience quickly. Get a dozen chemists together and ask them how, exactly, chiral compounds rotate plane-polarized light and see how that goes. (I wouldn't distinguish myself by the clarity of my explanation, either).
But this paper is something else again. Here, see how you do:
Here we extend this class of approaches by carrying out nonlinear resonant phase-sensitive microwave spectroscopy of gas phase samples in the presence of an adiabatically switched non-resonant orthogonal electric field; we use this technique to map the enantiomer-dependent sign of an electric dipole Rabi frequency onto the phase of emitted microwave radiation.
The best I can do with this is that the two enantiomers have the same dipole moment, but that the electric field interacts with them in a manner that gives different signs. This shows up in the phase of the emitted microwaves, and (as long as the sample is cooled down, to cut back on the possible rotational states), it seems to give a very clear signal. This is a completely different way to determine chirality from the existing polarized-light ones, or the use of anomalous dispersion in X-ray data (although that one can be tricky).
Here's a rundown on this new paper from Chemistry World. My guess is that this is going to be one of those techniques that will be used rarely, but when it comes up it'll be because nothing else will work at all. I also wonder if, possibly, the effect might be noticed on molecules in interstellar space under the right conditions, giving us a read on chirality from a distance?
Put this one in the category of "reactions you probably wouldn't have thought of". There's a new paper in Organic Letters on cleaving a carbon-carbon triple bond, yielding the two halves as their own separate nitriles.
It seems to be a reasonable reaction, and someone may well find a use for it. I just enjoyed it because it was totally outside the way that I think about breaking and forming bonds. And it makes me wonder about the reverse: will someone find a way to take two nitriles and turn them into a linked alkyne? Formally, that gives off nitrogen, so you'd think that there would be some way to make it happen. . .
Speaking about open-source drug discovery (such as it is) and sharing of data sets (such as they are), I really should mention a significant example in this area: the GSK Published Kinase Inhibitor Set. (It was mentioned in the comments to this post). The company has made 367 compounds available to any academic investigator working in the kinase field, as long as they make their results publicly available (at ChEMBL, for example). The people at GSK doing this are David Drewry and William Zuercher, for the record - here's a recent paper from them and their co-workers on the compound set and its behavior in reporter-gene assays.
Why are they doing this? To seed discovery in the field. There's an awful lot of chemical biology to be done in the kinase field, far more than any one organization could take on, and the more sets of eyes (and cerebral cortices) that are on these problems, the better. So far, there have been about 80 collaborations, mostly in Europe and North America, all the way from broad high-content phenotypic screening to targeted efforts against rare tumor types.
The plan is to continue to firm up the collection, making more data available for each compound as work is done on them, and to add more compounds with different selectivity profiles and chemotypes. Now, the compounds so far are all things that have been published on by GSK in the past, obviating concerns about IP. There are, though, a multitude of other compounds in the literature from other companies, and you have to think that some of these would be useful additions to the set. How, though, does one get this to happen? That's the stage that things are in now. Beyond that, there's the possibility of some sort of open network to optimize entirely new probes and tools, but there's plenty that could be done even before getting to that stage.
So if you're in academia, and interested in kinase pathways, you absolutely need to take a look at this compound set. And for those of us in industry, we need to think about the benefits that we could get by helping to expand it, or by starting similar efforts of our own in other fields. The science is big enough for it. Any takers?
I wanted to mention a new reaction that's come out in a paper in Science. It's from the Betley lab at Harvard, and it's a new way to make densely substituted saturated nitrogen heterocycles (pyrrolidines, in particular).
You start from a four-carbon chain with an azide at one end, and you end up with a Boc-protected pyrrolidine, by direct activation/substitution of the CH bond at the other end of the chain. Longer chains give you mixtures of different ring sizes (4, 5, and 6), depending on where the catalyst feels like inserting the new bond. I'd like to see how many other functional groups this chemistry is compatible with (can you have another tertiary amine in there somewhere, or a hydroxy?) But we have a huge lack of carbon-hydrogen functionalization reactions in this business, and this is a welcome addition to a rather short list.
There was a paper last year from the Groves group at Princeton on fluorination of aliphatic CH bonds using a manganese porphyrin complex. These two papers are similar in my mind - they're modeling themselves on the CYP enzymes, using high-valent metals to accomplish things that normally we wouldn't think of being able to do easily. The more of this sort of thing, the better, as far as I'm concerned: new reactions will make us think of entirely new things.
Over at the Baran group's "Open Flask" blog, there's a post on the number of total synthesis papers that show up in the Journal of the American Chemical Society. I'm reproducing one of the figures below, the percentage of JACS papers with the phrase "total synthesis" in their title.
You can see that the heights of the early 1980s have never been reached again, and that post-2000 there has been a marked drought. As the post notes, JACS seems to have begun publishing many more papers in total around that time (anyone notice this or know anything about it?), and they certainly didn't fill the new pages with total synthesis. 2013, though, already looks like an outlier, and it's only May.
My own feelings about total synthesis are a matter of record, and have been for some time, if anyone cares. So I'm not that surprised to see the trend in this chart, if trend it is.
But that said, it would be worth running the same analysis on a few other likely journal titles. Has the absolute number of total synthesis papers gone down? Or have they merely migrated (except for the really exceptional ones) to the lower-impact journals? Do fewer papers put the phrase "Total synthesis of. . ." in their titles as compared to years ago? Those are a few of the confounding variables I can think of, and there are probably more. But I think, overall, that the statement "JACS doesn't publish nearly as much total synthesis as it used to" seems to be absolutely correct. Is this a good thing, a bad thing, or some of each?
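For anyone tempted to run that comparison on other journals, the counting itself is the easy part. Here's a minimal sketch (entirely my own illustration with hypothetical data; the Open Flask post's actual method isn't described here) of tallying what percentage of titles per year contain a given phrase:

```python
# Given (year, title) records scraped from any journal's archive, compute the
# percentage of titles each year that contain a phrase. Matching is
# case-insensitive, so "Total Synthesis of..." and "total synthesis" both count.

from collections import defaultdict

def phrase_share_by_year(records, phrase="total synthesis"):
    totals = defaultdict(int)   # titles published per year
    hits = defaultdict(int)     # titles containing the phrase per year
    p = phrase.lower()
    for year, title in records:
        totals[year] += 1
        if p in title.lower():
            hits[year] += 1
    return {year: 100.0 * hits[year] / totals[year] for year in totals}
```

Of course, the hard part is the confounder I mention above: papers that are total syntheses but don't say so in the title slip right through a phrase match like this.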
I wanted to mention a project of Prof. Phil Baran of Scripps and his co-authors, Yoshihiro Ishihara and Ana Montero. It's called the Portable Chemist's Consultant, and it's available for iPads here. And here's a web-based look at its features. Baran was good enough to send me an evaluation copy, so I've had a chance to look through it in detail.
It's clearly based on his course in heterocyclic chemistry, and the chapters on pyridines and other heterocycles read like very well-thought-out review articles. But they also take advantage of the iPad's interface, in that specific transformations are shown in detail (with color and animation), and each of these can be expanded to a wider presentation and a thorough list of references (which are linked in their turn). The "Consumer Reports" style tables of recommended synthetic methods at the end of each section seem very useful, too, although they might need some notation for how much experimental support there is for each combination. For an overview of these topics, though, I doubt if anyone could do this better; I became a more literate heterocyclic chemist just by flipping through things. (Here's a video clip of some of these features in action).
So, do I have any reservations? A few. One of the bigger ones (which I'm told that Baran and his team are addressing) might sound trivial: I'm not sure about the title. As it stands, "The Portable Heterocyclic Chemistry Consultant" would be a much more accurate one, because there are large swaths of chemistry that fall within its current subtitle ("A Survival Guide for Discovery, Process, and Radiolabeling") which are not even touched on. For example, scale-up chemistry is mentioned on the cover, but in the current version of the book I didn't really see anything that was of particular relevance to actual scale-up work (things like the feasibility of solvent switching, heat transfer effects and reaction thermodynamics, run-to-run variability and potential purification methods, reagent sourcing, etc.) For medicinal chemists, I can say that the focus is completely on just the synthetic organic end of things; there's nothing on the behavior of any of the heterocyclic systems in vivo (pharmacokinetic trends, routes of metabolism, known toxicity problems, and so on). There's also nothing on spectral characterization, or any analytical chemistry of any sort, and I found no mention of radiolabeling (although I'd be glad to be corrected on that).
So for these reasons, it's a very academic work, but a very good one of its type. And Prof. Baran tells me that it's being revised constantly (at no charge to previous purchasers), and that these sorts of topics are in the works for later versions. If this book is indeed one of those gifts that keeps on giving, then it's a bargain as it stands, but (at the same time) I think that potential buyers should be aware of what they're getting in the current version.
My second reservation is technological. The book is only available on the iPad, and I'm not completely sure that this is a good idea. There's no way that it could be as useful in print, but a web-based interface would still be fine. (Managing ownership and sales is a lot easier in Apple's ecosystem, to be sure). And I'm not sure how many organic chemists own iPads yet. Baran himself seemed a bit surprised when he found out that I don't own one myself (I borrowed a colleague's to have a look). The most common reaction I've had when I tell people about the "PCC" is to say that they don't own an iPad, either, and to ask if there's any other way they can read it. Another problem is that the people who do have iPads certainly don't take them to the lab bench, which is where a work like this would be most useful. On the other hand, plain old computers are ubiquitous at the bench, thanks to electronic lab notebooks and the like.
All this said, though, if you do own an iPad and need to know about heterocyclic chemistry, you should have a look at this work immediately. If not, well, it's well worth keeping an eye on - these are early days.
There's another paper in the Nature Chemical Biology special issue that I wanted to mention, this one on "Translational Synthetic Chemistry". I can't say that I like the title, which seems to me to have a problem with reification (the process of trying to make something a thing which isn't necessarily a thing at all). I'm not so sure that there is a separate thing called "Translational Synthetic Chemistry", and I'm a bit worried that it might become a catch phrase all its own, which I think might lead to grief.
But that said, I still enjoyed the article. The authors are from H3 Biomedicine in Cambridge, which as I understand it is an offshoot of the Broad Institute and has several Schreiber-trained chemists on board. That means Diversity-Oriented Synthesis, of course, which is an area that I've expressed reservations about before. But the paper also discusses the use of natural product scaffolds as starting materials for new chemical libraries (a topic that's come up here and here), and the synthesis of diverse fragment collections beyond what we usually see. "Fragments versus DOS" has been set up before as a sort of cage match, but I don't think that has to be the case. And "Natural products versus DOS" has also been taken as a showdown, but I'm not so sure about that, either. These aren't either/or cases, and I don't think that the issues are illuminated by pretending that they are.
The authors end up calling for more new compound libraries, made by more new synthetic techniques, and assayed by newer and better high-throughput screens. Coming out against such recommendations makes a person feel as if they're standing up to make objections against motherhood and apple pies. And it's not that I think that these are bad ideas, but I just wonder if they're sufficient. Chemical space, as we were discussing the other day, is vast - crazily, incomprehensibly vast. Trying to blast off into it at random (which is what the pure DOS approaches have always seemed like to me) looks like something that a person could do for a century or two without seeing much return.
So if there are ways to increase the odds, I'm all for them. Natural-product-like molecules look like as good a way as any to do this, since they at least have the track record of evolution on their side. Things that are in roughly these kinds of chemical space, but which living organisms haven't gotten around to making, are still part of a wildly huge chemical space, but one that might have somewhat higher hit rates in screening. So Paul Hergenrother at Illinois might have the right idea when he uses natural products themselves as starting materials and makes new compound libraries from them.
So, who else is doing something like that? And what other methods do we have to make "natural-product-like" structures? Suggestions are welcome, and I'll assemble them and any ideas I have into another post.
Here's another look at the vast universe of things that no chemist has ever made. Estimates of the number of compounds with molecular weights under 500 run as high as ten to the sixtieth, which is an incomprehensibly huge number. We're not going to be able to put any sort of dent in that figure even if we convert the whole mass of the solar system into compound sample vials, so the problem remains: what's out there in that territory, and how do we best approach it?
Well, numbers of that magnitude are going to need some serious computational paring-down before we can take a crack at them, and that's what this latest paper tries to do. I'll refer interested readers to it (and to its supplementary information) for the details, but in brief, it takes a seed structure or two, adds atoms to them, goes through rounds of mutations and parings (according to filters that can be set for functional groups, properties, etc.) and then sends the whole set back around for more. This is going to rapidly explode in size, naturally, so at each stage the program picks a maximally diverse subset to go on with and discards the rest.
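For the curious, that grow-mutate-prune loop is easy to caricature in code. What follows is a toy sketch, not the paper's actual implementation: the "fingerprints" are just sets of string features standing in for real molecular descriptors, the mutation step flips a single feature rather than editing a chemical graph, and the diversity selection is the standard greedy MaxMin picker on Tanimoto distance. All the function names here are mine.

```python
import random

def tanimoto_distance(a, b):
    # Tanimoto (Jaccard) distance between two feature sets (toy fingerprints).
    inter, union = len(a & b), len(a | b)
    return 1.0 - (inter / union if union else 0.0)

def maxmin_subset(pool, k):
    # Greedy MaxMin diversity picking: start with the first structure, then
    # repeatedly add the candidate farthest from everything already chosen.
    chosen = [pool[0]]
    while len(chosen) < k and len(chosen) < len(pool):
        best = max((c for c in pool if c not in chosen),
                   key=lambda c: min(tanimoto_distance(c, s) for s in chosen))
        chosen.append(best)
    return chosen

def mutate(fp, feature_space, rng):
    # Toy "mutation": flip one feature on or off. A real program would be
    # adding atoms and editing bonds here.
    fp = set(fp)
    fp.symmetric_difference_update({rng.choice(feature_space)})
    return frozenset(fp)

def enumerate_space(seeds, feature_space, rounds=3, children=20, keep=10, seed=0):
    rng = random.Random(seed)
    current = [frozenset(s) for s in seeds]
    for _ in range(rounds):
        # Grow: generate mutated children from each survivor. Structural and
        # property filters (functional groups, MW, etc.) would be applied here.
        grown = {mutate(fp, feature_space, rng)
                 for fp in current for _ in range(children)}
        # Prune: keep only a maximally diverse subset to stop the explosion,
        # then send it back around for another round.
        current = maxmin_subset(sorted(grown, key=sorted), keep)
    return current
```

The essential point survives even in the toy version: without the pruning step, the candidate set grows geometrically each round, and the diverse-subset selection is what keeps the search tractable.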
Here are some of the compounds that come out, just to give you the idea. And they're right; I never would have thought of some of these, and I hope some of them never cross my mind again. I presume that this set has been run with rather permissive structural filters, because there are things there that (1) I don't know how to make, (2) I'm not sure if anyone else knows how to make yet, and (3) I'm not sure how stable and isolable they'd be even if anyone did. My first reaction is that there sure are a lot of acetals, ketals, hemithioketals and so on in this set, but I'm sure that's an artifact of some sort. Any selection from a set of 10^60 compounds is an artifact of some sort.
So my next question is, what might people use such a program for? Ideas that they wouldn't have come up with, something to stir the imagination? Synthetic challenges to try for, to realize some of these compounds? The authors point out that neither nature nor man has ever really taken advantage of chemical diversity, not compared to what's possible. And that's true, but the possible numbers of compounds are still so terrifying that I wonder what we'll accomplish with drops in the bucket. (There's another paper that bears on this that I'll comment on later this week; this theme will return shortly!)
Now here's one that I didn't know about: a reader sends along word that the former clinical candidate GW501516 is enjoying some popularity on the black market among cyclists and other athletes.
I remember that compound well from the days when I did PPAR nuclear receptor research. It's the very model of a PPAR-delta ligand - GlaxoSmithKline had it in the clinic for some time, until it slowly disappeared from their roster. In 2007, the Evans lab at the Salk Institute published a paper suggesting that the compound increased endurance, and that sent it right into the athletic underworld. I have no idea if it does what its users want, but I do know that I wouldn't touch the stuff. The PPAR compounds have a very, very wide range of effects, and unraveling those proved to be very difficult indeed. Long-term effects of a compound like this one are unknown - all we know is that GSK dropped it from the clinic, and that could well have been for tox. Taking this stuff to gain some time in a bicycle race is sheer foolhardiness.
And since that last post was about sirtuins, here's a new paper in press at J. Med. Chem. from the Sirtris folks (or the Sirtris folks that were, depending on who's making the move down to PA). They report a number of potent new sirtuin inhibitor compounds, which certainly do look drug-like, and there are several X-ray structures of them bound to SIRT3. It seems that they're mostly SIRT1/2/3 pan-inhibitors; if they have selective compounds, they're not publishing on them yet.
I should also note, after this morning's post, that the activities of these compounds were characterized by a modified mass spec assay! I would expect sirtuin researchers in other labs to gladly take up some of these compounds for their own uses. . .
Note: I should make it clear that these are more compounds produced via the DNA-encoded library technology, and that they represent yet another chemotype from this work.
Over the years, I've probably had more hits on my "Sand Won't Save You This Time" post than on any other single one on the site. That details the fun you can have with chlorine trifluoride, and believe me, it continues (along with its neighbor, bromine trifluoride) to be on the "Things I Won't Work With" list. The only time I see either of them in the synthetic chemistry literature is when a paper by Shlomo Rozen pops up (for example), but despite his efforts on its behalf, I still won't touch the stuff.
And if anyone needs any more proof as to why, I present this video, made at some point by some French lunatics. You may observe the mild reactivity of this gentle substance as it encounters various common laboratory materials, and draw your own conclusions. We have Plexiglas, a rubber glove, clean leather, not-so-clean leather, a gas mask, a piece of wood, and a wet glove. Some of this, under ordinary circumstances, might be considered protective equipment. But not here.
The reaction discovery field continues to increase its throughput, on ever-smaller amounts of material. (That link has several previous discussions here embedded in it). The latest report uses laser-assisted mass spec to analyze aliquots (less than a microliter each) of 696 different reactions and controls, pulled directly from the 96-well plates with no purification. That took the MALDI-TOF machine about two hours, in case you're wondering - setting up the experiments definitely took a lot longer (!)
The key to getting this to work was having a pyrene moiety attached to the back end of the substrate(s) for reaction discovery. This serves as a mass spec label - it ionizes very efficiently under the laser conditions, and allows excellent signal/noise coming out of all the other reaction gunk that might be in there. You can monitor the disappearance of the starting material and/or the appearance of new products, as you wish. In this case, the test bed was an electron-rich alkyne starting material, exposed to a variety of reacting partners and various metal catalysts. The screen picked up two previously unknown annulations, which were then optimized in a second round of experiments.
I continue to think that this sort of work has the potential to remake synthetic chemistry. Whenever there's some potential for new reactions to be found (and metal-catalyzed systems are a prime example) these techniques will let us survey the landscape much more quickly. There's no reason to think that we've managed to find even a good fraction of the useful chemistry out there.
Looks like the long-running ChemBark blog is shutting down. Paul Bracher has a new academic position to get off the ground, a move to another part of the country, labs to set up, and grant applications to write, all of which are good enough reasons by themselves. I hope that once he gets settled into the new position that he returns to the chem-blogging world under a new banner - his site has been well worth reading over the years.
Update: I see Paul has come clean this afternoon in the comments section to his post. Congratulations to him on a very convincing April 1 job - I now, naturally, retract all the nice things I said about him (!)
I wrote here about DNA-barcoding of huge (massively, crazily huge) combichem libraries, a technology that apparently works, although one can think of a lot of reasons why it shouldn't. This is something that GlaxoSmithKline bought by acquiring Praecis some years ago, and there are others working in the same space.
For outsiders, the question has long been "What's come out of this work?" And there is now at least one answer, published in a place where one might not notice it: this paper in Prostaglandins and Other Lipid Mediators. It's not a journal whose contents I regularly scan. But this is a paper from GSK on a soluble epoxide hydrolase inhibitor, and therein one finds:
sEH inhibitors were identified by screening large libraries of drug-like molecules, each attached to a DNA “bar code”, utilizing DNA-encoded library technology  developed by Praecis Pharmaceuticals, now part of GlaxoSmithKline. The initial hits were then synthesized off of DNA, and hit-to-lead chemistry was carried out to identify key features of the sEH pharmacophore. The lead series were then optimized for potency at the target, selectivity and developability parameters such as aqueous solubility and oral bioavailability, resulting in GSK2256294A. . .
That's the sum of the med-chem in the article, which certainly compresses things, and I hope that we see a more complete writeup at some point from a chemistry perspective. Looking at the structure, though, this is a triaminotriazine-derived compound (as in the earlier work linked to in the first paragraph), so yes, you apparently can get interesting leads that way. How different this compound is from the screening hit is a good question, but it's noteworthy that a diaminotriazine's worth of its heritage is still present. Perhaps we'll eventually see the results of the later-generation chemistry (non-triazine).
I've written several times about flow chemistry here, and a new paper in J. Med. Chem. prompts me to return to the subject. This, though, is the next stage in flow chemistry - more like flow med-chem:
Here, we report the application of a flow technology platform integrating the key elements of structure–activity relationship (SAR) generation to the discovery of novel Abl kinase inhibitors. The platform utilizes flow chemistry for rapid in-line synthesis, automated purification, and analysis coupled with bioassay. The combination of activity prediction using Random-Forest regression with chemical space sampling algorithms allows the construction of an activity model that refines itself after every iteration of synthesis and biological result.
Now, this is the point at which people start to get either excited or fearful. (I sometimes have trouble telling the difference, myself). We're talking about the entire early-stage optimization cycle here, and the vision is of someone topping up a bunch of solvent reservoirs, hitting a button, and leaving for the weekend in the expectation of finding a nanomolar compound waiting on Monday. I'll bet you could sell that to AstraZeneca for some serious cash, and to be fair, they're not the only ones who would bite, given a sufficiently impressive demo and slide deck.
But how close to this Lab of the Future does this work get? Digging into the paper, we have this:
Initially, this approach mirrors that of a traditional hit-to-lead program, namely, hit generation activities via, for example, high-throughput screening (HTS), other screening approaches, or prior art review. From this, the virtual chemical space of target molecules is constructed that defines the boundaries of an SAR heat map. An initial activity model is then built using data available from a screening campaign or the literature against the defined biological target. This model is used to decide which analogue is made during each iteration of synthesis and testing, and the model is updated after each individual compound assay to incorporate the new data. Typically the coupled design, synthesis, and assay times are 1–2 h per iteration.
Among the key things that already have to be in place, though, are reliable chemistry (fit to generate a wide range of structures) and some clue about where to start. Those are not givens, but they're certainly not impossible barriers, either. In this case, the team (three UK groups) is looking for BCR-Abl inhibitors, a perfectly reasonable test bed. A look through the literature suggested coupling hinge-binding motifs to DFG-loop binders through an acetylene linker, as in Ariad's ponatinib. This, while not a strategy that will earn you a big raise, is not one that's going to get you fired, either. Virtual screening around the structure, followed by eyeballing by real humans, narrowed down some possibilities for new structures. Further possibilities were suggested by looking at PDB structures of homologous binding sites and seeing what sorts of things bound to them.
So already, what we're looking at is less Automatic Lead Discovery than Automatic Patent Busting. But there's a place for that, too. Ten DFG pieces were synthesized, in Sonogashira-couplable form, and 27 hinge-binding motifs with alkynes on them were readied on the other end. Then they pressed the button and went home for the weekend. Well, not quite. They set things up to try two different optimization routines, once the compounds were synthesized, run through a column, and through the assay (all in flow). One will be familiar to anyone who's been in the drug industry for more than about five minutes, because it's called "Chase Potency". The other one, "Most Active Under Sampled", tries to even out the distributions of reactants by favoring the ones that haven't been used as often. (These strategies can also be mixed). In each case, the model was seeded with binding constants of literature structures, to get things going.
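Stripped of the chemistry, the two selection routines are a classic exploit-versus-coverage pair, and they're easy to sketch. The code below is my own reconstruction under loud assumptions: the real platform used Random-Forest regression as its activity model, which I've replaced with a crude "average activity of compounds sharing a building block" predictor; the under-sampling penalty weight is arbitrary; and every function name is hypothetical.

```python
import random
from collections import Counter

def chase_potency(candidates, predict):
    # Exploit: pick the untested combination the model currently rates best.
    return max(candidates, key=predict)

def most_active_under_sampled(candidates, predict, usage):
    # Coverage: penalize combinations whose building blocks have already been
    # used often, so the reactant distribution evens out over the campaign.
    def score(pair):
        dfg, hinge = pair
        return predict(pair) - 0.1 * (usage[dfg] + usage[hinge])  # weight is an assumption
    return max(candidates, key=score)

def run_campaign(dfg_blocks, hinge_blocks, assay, iterations=20,
                 strategy="mixed", seed=0):
    rng = random.Random(seed)
    candidates = [(d, h) for d in dfg_blocks for h in hinge_blocks]
    results, usage = {}, Counter()

    def predict(pair):
        # Stand-in activity model: average measured activity of tested compounds
        # sharing either building block; random guess when nothing is known yet.
        related = [v for (d, h), v in results.items()
                   if d == pair[0] or h == pair[1]]
        return sum(related) / len(related) if related else rng.random()

    for i in range(iterations):
        untested = [c for c in candidates if c not in results]
        if not untested:
            break
        # "mixed" alternates between the two strategies, as the paper allows.
        use_potency = strategy == "potency" or (strategy == "mixed" and i % 2 == 0)
        pick = (chase_potency(untested, predict) if use_potency
                else most_active_under_sampled(untested, predict, usage))
        results[pick] = assay(pick)   # synthesize, purify, and assay in flow
        usage.update(pick)
    return results
```

The point of the sketch is the shape of the loop, not the model: each assay result goes straight back into the predictor before the next compound is chosen, which is exactly the tight design-synthesize-test cycle that distinguishes this from the usual batchwise workflow.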
The first run, which took about 30 hours, used the "Under Sampled" algorithm to spit out 22 new compounds (there were six chemistry failures) and a corresponding SAR heat map. Another run was done with "Chase Potency" in place, generating 14 more compounds. That was followed by a combined-strategy run, which cranked out 28 more compounds (with 13 failures in synthesis). Overall, there were 90 loops through the process, producing 64 new products. The best of these were nanomolar or below.
But shouldn't they have been? The deck already has to be stacked to some degree for this technique to work at all in the present stage of development. Getting potent inhibitors from these sorts of starting points isn't impressive by itself. I think the main advantage here is the reduction in the time needed to generate the compound and the assay data. Having the synthesis, purification, and assay platform all right next to each other, with compound being pumped right from one to the other, is a much tighter loop than the usual drug discovery organization runs. The usual, if you haven't experienced it, is more like "Run the reaction. Work up the reaction. Run it through a column (or have the purification group run it through a column for you). Get your fractions. Evaporate them. Check the compound by LC/MS and NMR. Code it into the system and get it into a vial. Send it over to the assay folks for the weekly run. Wait a couple of days for the batch of data to be processed. Repeat."
The science-fictional extension of this is when we move to a wider variety of possible chemistries, and perhaps incorporate the modeling/docking into the loop as well, when it's trustworthy enough to do so. Now that would be something to see. You come back in a few days and find that the machine has unexpectedly veered off into photochemical 2+2 additions with a range of alkenes, because the Chase Potency module couldn't pass up a great cyclobutane hit that the modeling software predicted. And all while you were doing something else. And that something else, by this point, is. . .what, exactly? Food for thought.
I wanted to mention that Phil Baran's group at Scripps now has a blog, where group members are putting up posts on several topics, from recent syntheses to jelly beans. Well worth keeping an eye on. And I'd like to take the opportunity to welcome Baran and his group to the blogging world.
Which reminds me: the blogroll at left is (as usual) wildly overdue for updating. I've got a list of sites that I'll put up there, but I'd be glad to take recommendations, because it wouldn't surprise me if I've missed some, too. Please add some to the comments, and I promise to actually clean things up this week (!)
I wanted to point out what looks like the resolution of the Blog Syn story about IBX oxidations. See Arr Oh seems to have discovered the discrepancy that's been kicking the results around all over the place: water in the IBX itself. So it looks like this whole effort has ended up discovering something important that we didn't know about the reaction, and nailed down yet another variable. Congratulations!
While I'm on the subject of editorials, Takashi Tsukamoto of Johns Hopkins has one out in ACS Medicinal Chemistry Letters. Part of it is a follow-up to my own trumpet call in the journal last year (check the top of the charts here; the royalties are just flowing in like a river of gold, I can tell you). Tsukamoto is wondering, though, if we aren't exploring chemical space the way that we should:
One of the concerns is that the likelihood of identifying drug-like ligands for a given therapeutic target, the so-called “druggability” of the target, has been defined by these compounds, representing a small section of drug-like chemical space. Are aminergic G protein coupled receptors (GPCRs) actually more druggable than other types of targets? Or are we simply overconcentrating on the area of chemical space which contains compounds likely to hit aminergic GPCRs? Is it impossible to disrupt protein–protein interactions with a small molecule? Or do we keep missing the yet unexplored chemical space for protein–protein interaction modulators because we continue making compounds similar to those already synthesized?
. . .If penicillin-binding proteins are presented as new therapeutic targets (without the knowledge of penicillin) today, we would have a slim chance of discovering β-lactams through our current medicinal chemistry practices. Penicillin-binding proteins would be unanimously considered as undruggable targets. I sometimes wonder how many other potentially significant therapeutic targets have been labeled as undruggable just because the chemical space representing their ligands has never been explored. . .
Good questions. I (and others) have had similar thoughts. And I'm always glad to see people pushing into under-represented chemical space (macrocycles being a good example).
The problem is, chemical space is large, and time (and money) are short. Given the pressures that research has been under, it's no surprise that everyone has been reaching for whatever will generate the most compounds in the shortest time - which trend, Tsukamoto notes, makes the whole med-chem enterprise that much easier to outsource to places with cheaper labor. (After all, if there's not so much skill involved in cranking out amides and palladium couplings, why not?)
My advice in the earlier editorial about giving employers something they can't buy in China and India still holds, but (as Tsukamoto says), maybe one of those things could (or should) be "complicated chemistry that makes unusual structures". Here's a similar perspective from Derek Tan at Sloan-Kettering, also referenced by Tsukamoto. It's an appealing thought, that we can save medicinal chemistry by getting back to medicinal chemistry. It may even be true. Let's hope so.
Last year I mentioned the "good ol' Diels-Alder reaction", and talked about how it doesn't get used as much in drug discovery and industrial chemistry as one might think.
Now Stefan Abele from Actelion (in Switzerland) sends along this new paper, which will tell you pretty much all you need to know about the reaction's industrial side. The scarcity of D-A chemistry on scale that I'd noticed was no illusion (links below added by me):
According to a survey by Dugger et al. in 2005 of the type of reaction scaled in a research facility at Pfizer, and an analysis of the reactions used for the preparation of drug candidate molecules by Carey et al. in 2006, the DA reaction falls into the “miscellaneous” category that accounts for only 5 to 11 % of C-C bond-forming reactions performed under Good Manufacturing Practice. This observation mirrors the finding that C-C bond-forming reactions account for 11.5% of the entire reaction repertoire used by medicinal chemists in the pursuit of drug candidates. In this group, palladium-catalyzed reactions represent about 60% of the occurrences, while the “other” category, into which the DA reaction falls, represents only 1.8% of the total number of reactions. Careful examination of the top 200 pharmaceutical products by US retail sales in 2010 revealed that only one marketed drug, namely Buprenorphine, is produced industrially by using the DA reaction. Two other drugs were identified in the top 200 generic drugs of US retail sales in 2008: Calcitriol and its precursor Calciferol. Since 2002, Liu and co-workers have been compiling the new drugs introduced each year to the market. From 2002 to 2010, 174 new chemical entities were reported. Among them, two examples (Varenicline from Pfizer in 2006 and Peramivir by Shionogi in 2010) have been explicitly manufactured through a DA reaction. Similarly, and not surprisingly, our consultation with a large corpus of peers, colleagues, and experts in industry and academia worldwide revealed that the knowledge of such examples of the DA reaction run on a large scale is scarce, except perhaps in the field of fragrance chemistry.
But pretty much every reaction that has been run on large scale is in this review, so if you're leaning that way, this is the place to go. It doesn't shy away from the potential problems (chief among them being potential polymerization of one or both of the starting materials, which would really ruin your afternoon). But it's a powerful enough reaction that it really would seem to have more use than it gets.
The topic of compound purity has come up here before, as well it should. Every experienced medicinal chemist knows that when you have an interesting new hit compound, one of the first things to do is go back and make sure that it really is what it says on the label. Re-order it from the archive (in both powder and DMSO stock), re-order it if it's from a commercial source, and run it through the LC/MS and the NMR. (And as one of those links above says, if you have any thought that metal reagents were used to make the compound, check for those, too - they can be transparent to LC and NMR).
Recently, we selected a random set of commercial fragment compounds for analysis, and closely examined those that failed to better understand the reasons behind it. The most common reason for QC failure was insolubility (47%), followed by degradation or impurities (39%), and then spectral mismatch (17%) [Note: Compounds can acquire multiple QC designations, hence total incidences > 100% ]. Less than 4% of all compounds assayed failed due to solvent peak overlap or lack of non-exchangeable protons, both requirements for NMR screening. Failure rates were as high as 33% per individual vendor, with an overall average of 16%. . .
I very much wish that they'd identified that 33% failure rate vendor. But overall, they're suggesting that 10 to 15% of compounds will wipe out, regardless of source. Now, you may not feel that solubility is a key criterion for your work, because you're not doing NMR assays. (That's one that will only get worse as you move out of fragment-sized space, too). But that "degradation or impurities" category is still pretty significant. What are your estimates for commercial-crap-in-a-vial rates?
Here's a paper at the intersection of two useful areas: natural products and fragments. Dan Erlanson over at Practical Fragments has a good, detailed look at the work. What the authors have done is tried to break down known natural product structures into fragment-sized pieces, and cluster those together for guidance in assembling new screening libraries.
I'm sympathetic to that goal. I like fragment-based techniques, and I think that too many fragment libraries tend to be top-heavy with aromatic and heteroaromatic groups. Something with more polarity, more hydrogen-bonding character, and more three-dimensional structures would be useful, and natural products certainly fit that space. (Some of you may be familiar with a similar approach, the deCODE/Emerald "Fragments of Life", which Dan blogged about here). Synthetically, these fragments turn out to be a mixed bag, which is either a bug or a feature depending on your point of view (and what you have funding for or a mandate to pursue):
The natural-product-derived fragments are often far less complex structurally than the guiding natural products themselves. However, their synthesis will often still require considerable synthetic effort, and for widespread access to the full set of natural-product-derived fragments, the development of novel, efficient synthesis methodologies is required. However, the syntheses of natural-product-derived fragments will by no means have to meet the level of difficulty encountered in the multi-step synthesis of genuine natural products.
But take a look at Dan's post for the real downside:
Looking at the structures of some of the phosphatase inhibitors, however, I started to worry. One strong point of the paper is that it is very complete: the chemical structures of all 193 tested fragments are provided in the supplementary information. Unfortunately, the list contains some truly dreadful members; 17 of the worst are shown here, with the nasty bits shown in red. All of these are PAINS that will nonspecifically interfere with many different assays.
Boy, is he right about that, as you'll see when you take a look at the structures. They remind me of this beast, blogged about here back last fall. These structures should not be allowed into a fragment screening library; there are a lot of other things one could use instead, and their chances of leading only to heartbreak are just too high.
I linked recently to the latest reaction check at Blog Syn, benzylic oxidation by IBX. Now Prof. Baran (a co-author on the original paper, from his Nicolaou days) has written See Arr Oh with a detailed repeat of the experiment. He gets it to work, so I think it's fair to say that (1) the reaction is doable, but (2) it's not as easy to reproduce right out of the box as it might be.
I'd like to congratulate him for responding like this. The whole idea of publicly rechecking literature reactions is still fairly new, and (as the comments here have shown), there's a wide range of opinion on it. Getting a detailed, prompt, and civil response from the Baran lab is the best outcome, I think. After all, the point of a published procedure - the point of science - is reproducibility. The IBX reaction is now better known than it was, the details that could make it hard to run are now there for people who want to try it, and Prof. Baran's already high reputation as a scientist actually goes up a bit among the people who've been following this story.
Public reproducibility is an idea whose time, I think, has come, and Blog Syn is only one part of it. When you think about the increasingly well-known problems with reproducing big new biological discoveries, things that could lead to tens and hundreds of millions being spent on clinical research, reproducing organic chemistry reactions shouldn't be controversial at all. As they say to novelists, if you're afraid of bad reviews, there's only one solution: don't show anyone your book.
I wanted to mention that there are two more entries up on Blog Syn: one of them covering this paper on alkenylation of pyridines. (It's sort of like a Heck reaction, only you don't have to have an iodo or triflate on the pyridine; it just goes right into the CH bond). The short answer: the reaction works, but there are variables that seem crucial for its success that were under-reported in the original paper (and have been supplied, in part, by responses from the original author to the Blog Syn post). Anyone thinking about running this reaction definitely needs to be aware of this information.
The latest is a re-evaluation of an older paper on the use of IBX to (among many other things) oxidize arylmethane centers. It's notable for a couple of reasons: it's been claimed that this particular reaction completely fails across multiple substrates, and the reaction itself is from the Nicolaou lab (with Phil Baran as a co-author). Here's the current literature situation:
A day in the library can save you a week in the lab, so let’s examine this paper’s impact using SciFinder: it's been cited 179 times from 2002-2013. Using the “Get Reactions” tool, coupled with SciFinder’s convenient new “Group by Transformation” feature, we identified 54 reactions from the citing articles that can be classified as “Oxidations of Arylmethanes to Aldehydes/Ketones” (the original reaction's designation). Of these 54 reactions, only four (4) use the conditions reported in this paper, and all four of those come from one article: Binder, J. T.; Kirsch, S. F. Org. Lett. 2006, 8, 2151–2153, which describes IBX as “an excellent reagent for the selective oxidation to generate synthetically useful 5-formylpyrroles.” Kirsch's yields range from 53-79% for relatively complex substrates, not too shabby.
I'll send you over to Blog Syn for the further details, but let's just say that not many NMR peaks are being observed around 10 ppm. Phil Baran himself makes an appearance with more details about his recollection of the work (to his credit). Several issues remain, well, unresolved. (If any readers here have ever tried the reaction, or have experience with IBX in general, I'm sure comments would be very welcome over there as well).
Well, Cambridge is quiet today, as are many workplaces across the US. My plan is to go out for some good Chinese food and then spend the afternoon at the museum with my family; my kids haven't been there for at least a couple of years now.
And that brings up a thought that I know many chemists have had: how ill-served chemistry is by museums, science centers, and so on. Physics has a better time of it, or at least some parts of it. You can demo Newtonian mechanics with a lot of hands-on stuff, and there's plenty to do with light, electricity and magnetism and so on. (Quantum mechanics and particle physics, well, not so much). Biology at least can have some live creatures (large and small), and natural-history type exhibits, but its problems for public display really kick in when it shades over to biochemistry.
Chemistry, though, is a tough sell. Displays of the elements aren't bad, but many of them are silvery metals that can't be told apart by the naked eye. Crystals are always good, so perhaps we can claim some of the mineral displays for our own. But physical chemistry, organic chemistry, and analytical chemistry are difficult to show off. The time scales tend to be either too fast or too slow for human perception, or the changes aren't noticeable except with the help of instrumentation. There are still some good demonstrations, but many of these have to be run with freshly prepared materials, and by a single trained person. You can't just turn everyone loose with the stuff, and it's hard to come up with an automated, foolproof display that can run behind glass (and still attract anyone's interest). An interactive "add potassium to water to see what happens" display would be very popular, but rather hard to stage, both practically and from an insurance standpoint. You'd also run through a lot of potassium, come to think of it.
Another problem is that chemistry tends to deal with topics that people either don't see, or don't notice. Cooking food, for example, is sheer chemistry, but no one thinks of it like that - well, except Harold McGee and now the molecular gastronomy people. (Speaking of which, if any of you are crazy enough to order this from Amazon, I'll be very impressed indeed). Washing with soap or detergent, starting a fire, using paint or dye - there are plenty of everyday processes that illustrate chemistry, but they're so familiar that it's hard to use them as demonstrations. Products as various as distilled liquor, plastic containers, gasoline, and (of course) drugs of all sorts are pure examples of all sorts of chemical ideas, but again, it's hard to show them as such. They're either too well-known (think of Dustin Hoffman being advised to go into plastics), or too esoteric (medicinal chemistry, for most people).
So I started asking myself, what would I do if I had to put up some chemistry exhibits in a museum? How would I make them interesting? For med-chem, I'm imagining some big video display that starts out with a molecule and lets people choose from some changes they can make to it (oxidation, adding a fluorine, changing a carbon to nitrogen, etc.) The parts of the molecule where these changes are allowed could glow or something when an option is chosen; then, when you make the change, the structure snazzily shifts and the display tells you if you made a better drug, a worse one, something inactive, or a flat-out poison. You'd have to choose your options and structures carefully, but you might be able to come up with something.
But other things would just have to be seen and demonstrated, which is tricky. Seen on a screen, the Belousov-Zhabotinskii reaction just looks like a special effect, and a rather cheap one at that. Seeing it done by mixing up real chemicals and solutions right in front of you is much more impressive, but it's hard for me to think of a way to do that often enough (and on a large enough scale) for people to see it that wouldn't cost too much to run (supplies, staff, flippin' insurance, etc.)
If you had to build out the chemistry hallway at the museum, then, what would you fill it with? Suggestions welcome.
My grad school work was chiral-pool synthesis: trying to make a complex natural product from carbohydrate starting materials. There was quite a bit of that around in those days, but you have to wonder about its place in the world by now. It's true that everyone likes to be able to buy their chiral centers, especially if they're from the naturally occurring series (nobody's keen to use L-glucose as their starting material if they can avoid it!) We certainly love to do that in the drug industry, and we can often get away with such syntheses, since our compounds generally don't have too many chiral centers.
But how developed are the multicenter methods? I certainly did not enjoy manipulating the multiple chiral centers on a sugar molecule, although you can (with care and attention) do some interesting gymnastics on that framework. But I think that asymmetric synthesis, especially catalytic variations, is more widely used today to build things up, rather than starting with a lot of chirality and working it around to what you want. The synthetic difficulties of that latter method often seem to get out of hand, and the methods aren't as general as the build-up-your-chirality ones.
Is my impression correct? And if so, is this the way things should be? My tendency is to say "yes" to both questions, but I'd like to see what the general opinions are.
I wrote here and here about Luca Turin's theory that our perception of smell is partly formed by sensing vibrational modes. (Turin is the author of an entertaining book on the subject of olfaction, The Secret of Scent, and also co-author of Perfumes: The A-Z Guide). His theory is still controversial, to say the least, but Turin and co-workers have a new paper out trying to shore it up.
A previous report from Vosshall and Keller at Rockefeller University had shown that human subjects were unable to distinguish acetophenone from its deuterated analog, which is not what you'd expect if we were sensing bond vibrations. Interestingly, the new paper confirms this result. (References to all these studies are in the original paper, which is open-access, being in PLoS ONE):
In principle, odorant isotopomers provide a possible test of shape vs. vibration mechanisms: replacing, for example, hydrogen with deuterium in an odorant leaves the ground-state conformation of the molecule unaltered while doubling atomic mass and so altering the frequency of all its vibrational modes to a greater or lesser extent. To first order, deuteration should therefore have little or no effect on the smell character of an odorant recognized by shape, whereas deuterated isotopomers should smell different if a vibrational mechanism is involved.
The experimental evidence on this question to date is contradictory. Drosophila appears able to recognize the presence of deuterium in odorant isotopomers by a vibrational mechanism. Partial deuteration of insect pheromones reduces electroantennogram response amplitudes. Fish have been reported to be able to distinguish isotopomers of glycine by smell. However, human trials using commercially available deuterated odorants [benzaldehyde and acetophenone] have yielded conflicting results, both positive and negative. Here, using GC-pure samples and a different experimental technique, we fully confirm Keller and Vosshall’s finding that humans, both naive and trained subjects, are unable to discriminate between acetophenone isotopomers.
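As a back-of-the-envelope check on the size of that vibrational shift: treating a bond as a harmonic oscillator, the stretch frequency scales as the inverse square root of the reduced mass. A quick sketch (the ~2900 cm-1 C–H stretch value is a standard textbook number, not taken from the paper):

```python
from math import sqrt

def stretch_freq_ratio(m_heavy, m_light, m_partner=12.0):
    """Harmonic-oscillator estimate: stretch frequency scales as
    1/sqrt(reduced mass), so the H-to-D shift is the square root of
    the ratio of the reduced masses (partner atom here: carbon-12)."""
    mu = lambda m1, m2: m1 * m2 / (m1 + m2)
    return sqrt(mu(m_heavy, m_partner) / mu(m_light, m_partner))

ratio = stretch_freq_ratio(2.0, 1.0)  # C-D vs. C-H, about 1.36
cd_stretch = 2900 / ratio             # lands near 2100 cm-1
```

That puts the C–D stretch in the 2100 cm-1 region, which is indeed where deuterated compounds show up in IR spectra - a substantial shift, and exactly the sort of thing a vibration-sensing receptor would be expected to notice.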
But the paper goes on to show that humans apparently are able to discriminate deuterated musk compounds from their H-analogs. Cyclopentadecanone, for example, was deuterated to >95% next to the carbonyl, and to 90% at the other methylenes. It and three other commercial musks were purified and checked versus their native forms:
After silica gel purification, aliquots of the deuterated musks were diluted in ethanol and their odor character assessed on smelling strips. The parent compounds have classic powerful musk odor characters, with secondary perfumer descriptors as follows: animalic [Exaltone], sweet [Exaltolide], oily [Astrotone] and sweet [Tonalid]. In all the deuterated musks, the musk character, though still present was much reduced, and a new character appeared, variously described by the trained evaluators [NR, DG, LT and Christina Koutsoudaki, Vioryl SA] as “burnt,” “roasted,” “toasted,” or “nutty.” Naive subjects most commonly described the additional common character as “burnt.”
They found, by stopping the deuterium exchange early, that this smell showed up even at around 50% D-exchange or less. For more rigorous tests, they went to a "smelling GC", and double-blinded the tests. This gave clean compound peaks, and they were able to diminish the need to keep a memory of the previous smell in mind by capturing the eluted peak vapors in Eppendorf tubes for side-by-side comparison.
This protocol showed that people are indeed unable to discriminate deuterated acetophenone from undeuterated - the Keller and Vosshall paper stands up, which will come as a relief to the author of the unusually celebratory editorial in Nature Neuroscience that accompanied it. To be sure, it also makes moot Turin's own objections to their work at the time, which questioned its experimental design and rigor.
But the deuterated musk experiments done this way are quite interesting. I'm going to just quote the entire section here:
All trials were performed with GC-pure catalytically deuterated [D fraction >90%] cyclopentadecanone [See Methods]. Each trial consisted of the assessment of 4 pairs of odorants, one deuterated and one sham-deuterated. The subjects were presented with a deuterated sample and their attention was drawn to the “burnt, nutty, roasted” character of the deuterated compound. Several other sample pairs were presented until the subjects were sure they could tell the difference between the two sample types.
The Eppendorf tubes were heated in a solid heating block to 50 °C. The samples were arranged in rows according to their type. The experimenter randomized the order of the tubes within the rows by means of two flips of a coin (first flip: first or second two positions, second flip: first or second spot within those). The rows were then mixed randomly by a further coin flip per d/H pair (heads: swap positions, tails: leave in situ).
Subjects were first given a training pair and told which was deuterated and which sham-deuterated. The experimenter then left to watch the experiment through a window. Subjects were then presented with the unlabeled, position-randomized pairs of deuterated and sham-deuterated GC-pure samples and asked to say which was which.
The subject, wearing nitrile gloves to avoid contamination, smelled first one and then the other sample. Multiple sniffs at each sample were allowed. The subject was asked to identify the deuterated sample and to place it to one side. After four trials the tubes were placed under the UV light source and identified. The subject was not informed of the outcome. To avoid habituation, the subject then rested for 15 minutes before attempting the next trial.
The results are shown in table 2. Eleven subjects were used. Two subjects tired before reaching the desired number of 12 trials. Two were able to go beyond to 13 and 17 trials respectively. The binomial p values range between 0.109 [6/8 correct] to 7.62×10−6 [17/17 trials]. These are independent trials, and an aggregate probability for all trials [119/132 correct] can be calculated: it is equal to 5.9×10−23.
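Those p values are straightforward to reproduce with a one-sided binomial test against a 50% guessing rate. A minimal sketch (the pooled calculation treats all 132 trials as independent, as the paper does):

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """One-sided binomial tail: probability of getting at least k
    successes in n trials when each succeeds with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_best = binom_tail(17, 17)      # best subject, 17/17 correct: 0.5**17, about 7.6e-6
p_pooled = binom_tail(119, 132)  # all subjects pooled: 119/132 correct
```

Whatever one makes of the proposed mechanism, the pooled discrimination result is far, far outside what chance guessing would produce.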
As it happens, musks are at nearly the top of the molecular weight range for odorant compounds. The paper mentions a rule of thumb among fragrance chemists that compounds with more than 18 carbons rarely have any perceptible odor, even when heated (and different people's noses can top out even before that). Musks tend to smell quite similar even with rather different structures, which suggests that a small number of receptors are involved in their perception. Here's Turin's unified theory of musk:
We suggest therefore that a musk odor is achieved when three conditions are simultaneously fulfilled: First, the molecule is so large that only one or a very few receptors are activated. Second, one or more of these receptors detects vibrations in the 1380–1550 cm-1 range. Third, the molecule has intense bands in that region, caused either by a few nitro groups or, equivalently, many CH2 groups. A properly quantitative account of musk odor will require better understanding of the shape selectivity of the receptors at the upper end of the molecular weight scale, and of the selection rules of a biological IETS spectrometer to calculate the intensity of vibrational modes.
It's safe to say that this controversy is very much alive, no matter what the explanation might be. Leslie Vosshall of Rockefeller has already commented on this latest paper, wondering if compounds might be enzymatically altered in the nose (which would also be expected to show a large difference with deuterated compounds). I'll await the next round with interest!
We medicinal chemists talk a good game when it comes to the hydrophobic effect. It's the way that non-water-soluble molecules (or parts of molecules) like to associate with each other, right? Sure thing. And it works because of. . .well, van der Waals forces. Or displacement of water molecules from protein surfaces. Or entropic effects. Or all of those, plus some other stuff that's, um, complicated to explain. Something like that.
Here's a paper in Angewandte Chemie that really bears down on the topic. The authors study the binding of simple ligands to thermolysin, a well-worked-out system for which very high-resolution X-ray structures are available. And what they find is, well, that things really are complicated to explain:
In summary, there are no universally valid reasons why the hydrophobic effect should be predominantly “entropic” or “enthalpic”; small structural changes in the binding features of water molecules on the molecular level determine whether hydrophobic binding is enthalpically or entropically driven.
Admittedly, this study reaches the limits of experimental accuracy accomplishable in contemporary protein–ligand structural work. . .Surprising pairwise systematic changes in the thermodynamic data are experienced for complexes of related ligands, and they are convincingly well reflected by the structural properties. The present study unravels small but important details. Computational methods simulate molecular properties at the atomic level, and are usually determined by the summation of many small details. However, details such as those observed here are usually not regarded by these computational methods as relevant, simply because we are not fully aware of their importance for protein–ligand binding, structure–activity relationships, and rational drug design in general. . .
I think that there are a lot of things in this area of which we're not fully aware. There are many others that we treat as unified phenomena, because we've given them names that make us imagine that they are. The hydrophobic effect is definitely one of these - George Whitesides is right when he says that there are many of them. But when all of these effects, on closer inspection, break down into tiny, shifting, tricky arrays of conflicting components, can you blame us for simplifying?
Here's a recent paper in J. Med. Chem. on halogen bonding in medicinal chemistry. I find the topic interesting, because it's an effect that certainly appears to be real, but is rarely (if ever) exploited in any kind of systematic way.
Halogens, especially the lighter fluorine and chlorine, are widely used substituents in medicinal chemistry. Until recently, they were merely perceived as hydrophobic moieties and Lewis bases in accordance with their electronegativities. Much in contrast to this perception, compounds containing chlorine, bromine, or iodine can also form directed close contacts of the type R–X···Y–R′, where the halogen X acts as a Lewis acid and Y can be any electron donor moiety. . .
What seems to be happening is that the electron density around the halogen atom is not as smooth as most of us picture it. You'd imagine a solid cloud of electrons around the bromine atom of a bromoaromatic, but in reality, there seems to be a region of slight positive charge (the "sigma hole") out on the far end. (As a side effect, this gives you more of a circular stripe of negative charge as well). Both these effects have been observed experimentally.
Now, you're not going to see this with fluorine; that one is more like most of us picture it (and to be honest, fluorine's weird enough already). But as you get heavier, things become more pronounced. That gives me (and probably a lot of you) an uneasy feeling, because traditionally we've been leery of putting the heavier halogens into our molecules. "Too much weight and too much hydrophobicity for too little payback" has been the usual thinking, and often that's true. But it seems that these substituents can actually earn out their advance in some cases, and we should be ready to exploit those, because we need all the help we can get.
Interestingly, you can increase the effect by adding more fluorines to the haloaromatic, which emphasizes the sigma hole. So you have that option, or you can take a deep breath, close your eyes, and consider. . .iodos:
Interestingly, the introduction of two fluorines into a chlorobenzene scaffold makes the halogen bond strength comparable to that of unsubstituted bromobenzene, and 1,3-difluoro-5-bromobenzene and unsubstituted iodobenzene also have a comparable halogen bond strength. While bromo and chloro groups are widely employed substituents in current medicinal chemistry, iodo groups are often perceived as problematic. Substituting an iodoarene core by a substituted bromoarene scaffold might therefore be a feasible strategy to retain affinity by tuning the Br···LB (Lewis base) halogen bond to similar levels as the original I···LB halogen bond.
As someone who values ligand efficiency, the idea of putting in an iodine gives me the shivers. A fluoro-bromo combo doesn't seem much more attractive, although almost anything looks good compared to a single atom that adds 127 mass units at a single whack. But I might have to learn to love one someday.
The paper includes a number of examples of groups that seem to be capable of interacting with halogens, and some specific success stories from recent literature. It's probably worth thinking about these things similarly to the way we think about hydrogen bonds - valuable, but hard to obtain on purpose. They're both directional, and trying to pick up either one can cause more harm than good if you miss. But keep an eye out for something in your binding site that might like a bit of positive charge poking at it. Because I can bet that you never thought to address it with a bromine atom!
Update: in the spirit of scientific inquiry, I've just sent in an iodo intermediate from my current work for testing in the primary assay. It's not something I would have considered doing otherwise, but if anyone gives me any grief, I'll tell them that it's 2013 already and I'm following the latest trends in medicinal chemistry.
Here's something I've been following for the last couple of weeks in the chemical blogging world, and now it's up on its own site: "Blog Syn", an initiative of the well-known chemblogger See Arr Oh. The idea here is to take interesting new reactions that appear in the literature, and. . .well, see if they actually work. (A radical concept, I know, but stick with me here).
The first example is this recent paper in JACS, which shows an unusual iron-sulfur reaction that ends up generating a benzazole directly from an active methyl group in one pot. There are now three repeats of the reaction, and the verdict (so far) is that it works, but not quite as well as hoped for. You probably have to be careful to exclude oxygen (the paper itself just says "under argon"), and the yield of the test reaction is not as high as reported. As you'll see, there are spectral data, sources of reagents, photos of experimental setups - everything you'd need to see how this reaction is actually being (re)run.
I like this idea very much, and I look forward to seeing it applied to new reactions as they appear (and I hope to contribute the occasional run myself, when possible). They're accepting nominations for the next reaction to test, so if you have something you've seen that you're wondering about, put it into the hopper.
If you haven't seen it yet, this video tour through the DayGlo company's facilities is quite a sight. For us chemists, be sure to check out things at about the 5:30 mark, where they head into the wet chemistry area. You'll see some of the most well-used batch reactors you can picture (their largest one was bought used in the early 1970s, to give you some idea). As the chemist giving the tour says, "This is not like the pharmaceutical industry. . ."
This looks like an interesting reaction; let's see what gets made of it. David Milstein's group at the Weizmann Institute in Israel reports a new catalytic system to oxidize alcohols to carboxylic acids, with water as the oxygen donor (as shown by labeling experiments). Hydrogen gas bubbles out of the mixture. The catalyst is a ruthenium complex, and although the reaction is not especially fast (18 hour timescale), the turnover numbers seem to be good (0.2% catalyst loading). Interestingly, oxygen actually seems to hurt the catalyst; the system runs better under argon. One possible drawback is that the ruthenium catalyst can also serve as a hydrogenation catalyst - alkenes are reduced, what with all the hydrogen around.
Getting rid of (most of) the metals and the high-valent reagents will be worth the trouble industrially, as will getting rid of the need for pressurized oxygen. As it is now, many carboxylic acids are produced on scale from alkenes (hydroformylation followed by catalytic oxidation of the aldehydes, or carbonylation), from alkanes via nonselective oxidation in air, or from alcohols via carbonylation.
We're still a long way from ditching the current processes, but if this reaction is robust enough, it could open up some new industrial feedstock routes. One that I wonder about is replacing the current route to adipic acid, used in Nylon production - it's made through a rather foul nitric acid process - if there's enough hexanediol in the world. (Not sure if that's feasible, though - it looks like most of the hexanediol is made instead by reducing adipic acid! Makes you wonder if there's a potential biological route, as there is for butanediol. Edit - fixed this part; I dropped some carbons between my brain and the keyboard this morning). Someone may also find a nice use for the hydrogen that's given off, and get some sort of two-for-one process. At the very least, this is a reminder of just how much more metal-catalyzed chemistry there is to be discovered. . .
Update: one of the paper's authors has dropped by the comments section, with interesting further details. . .
Here's an interesting challenge: over at Synthetic Remarks, there's a need for a couple of grams of 3,4-difluorothiophene. But you can't buy that much, and the literature has very little useful to say about how one would make it. So is there a practical route to the stuff (at least on paper) that's worth trying? Note that Dr. Freddy stipulates "No Sandmeyer crap, for heaven's sake", so no Balz-Schiemann chemistry, folks.
The prize? Any chemistry book worth up to $150 from Amazon, sent to your door. (Just think of the possibilities) So if any of you have any bright fluorination ideas, have a crack at it, and good luck!
This is a lesson that everyone should have learned many times before, but those colorful atoms are just so. . .colorful and everything. If anyone knows what element is supposed to be colored "silvery purple", See Arr Oh would like to hear from you.
Here's a funny-looking compound for you - Ivorenolide A. Isolated from mahogany tree bark, it has an 18-membered ring with conjugated acetylenes in it. That makes the 3-D structure quite weird; it's nearly flat. And it has biological activity, too (immunosuppression, as measured by T-cell and B-cell proliferation assays in vitro). Got anything that looks like this in your compound libraries? Me neither.
You've probably seen the story that a substantial quantity (roughly fifty pounds!) of gold dust seems to have gone missing from Pfizer's labs in St. Louis. No report I've seen has any details, though, on just what Pfizer was doing with that much gold dust - the company isn't saying. I can tell you that I've never found a laboratory use for it myself, dang it all.
So let's speculate! Why would a drug company need gold dust on that scale? Buying it in that form makes you think that a large surface area might have been important, unless there was some gold refinery running Double Coupon Wednesday on the stuff. Making a proprietary catalyst? Starting material for functionalized gold nanoparticles? Solid support(s) for some biophysical assay? Classy replacement for Celite for those difficult filtrations? Your ideas are welcome in the comments. . .
Update: out of many good comments, my favorite so far is: "Knowing Pfizer, I'm guessing they were planning on turning it into lead."
Word comes that Fluorous is shutting down. The company had been trying for several years to make a go of it with its polyfluorinated materials, used for purification and reaction partitioning, but the commercial side of the business has apparently been struggling for a while. It's a tough market, and there hasn't, as far as I know, been what the software people would call a "killer app" for fluorous techniques - they're interested, often useful, but it's been hard to persuade enough people to take a crack at them.
The company is still taking orders for its remaining stock, and the link above will allow you to download their database of literature references for fluorous techniques, among other things. I wish the people involved the best, and I wish that things had worked out better.
Here's another next-generation X-ray crystal paper, this time using a free electron laser X-ray source. That's powerful enough to cause very fast and significant radiation damage to any crystals you put in its way, so the team used a flow system, with a stream of small crystals of T. brucei cathepsin B enzyme being exposed in random orientations to very short pulses of extremely intense X-rays. (Here's an earlier paper where the same team used this technique to obtain a structure of the Photosystem I complex). Note that this was done at room temperature, instead of cryogenically. The other key feature is that the crystals were actually those formed inside Sf9 insect cells via baculovirus overexpression, not purified protein that was then crystallized in vitro.
Nearly 4 million of these snapshots were obtained, with almost 300,000 of them showing diffraction. 60% of those were used to refine the structure, which came out at 2.1 Angstroms and clearly showed many useful features of the enzyme. (Like others in its class, it starts out inhibited by a propeptide, which is later cleaved - that's one of the things that makes it a challenge to get an X-ray structure by traditional means).
I'm always happy to see bizarre new techniques used to generate X-ray structures. Although I'm well aware of their limitations, such structures are still tremendous opportunities to learn about protein functions and how our small molecules interact with them. I wrote about the instrument used in these papers here, before it came on line, and it's good to see data coming out of it.
Via Chemjobber, we have here an excellent example of how much detail you have to get into if you're seriously making a drug for the market. When you have to account for every impurity, and come up with procedures that generate the same ones within the same tight limits every time, this is the sort of thing you have to pay attention to: how you dry your compound. And how long. And why. Because if you don't, huge amounts of money (time, lost revenue, regulatory trouble, lawsuits) are waiting. . .
We have a late entry in this year's "Least Soluble Molecule - Dosed In Vivo Division" award. Try feeding that into your cLogP program and see what it tells you about its polarity. (This would be a good ChemDraw challenge, too). What we're looking at, I'd say, is a sort of three-dimensional asphalt, decorated around its edges with festive scoops of lard.
The thing is, such structures are perfectly plausible building blocks for various sorts of nanotechnology. It would not, though, have occurred to me to feed any to a rodent. But that's what the authors of this new paper managed to do. The compound shown is wildly fluorescent (as well you might think), and the paper explores its possibilities as an imaging agent. The problem with many - well, most - fluorescent species is photobleaching. That's just the destruction of your glowing molecule by the light used to excite it, and it's a fact of life for almost all the commonly used fluorescent tags. Beat on them enough, and they'll stop emitting light for you.
But this beast is apparently more resistant to photobleaching. (I'll bet it's resistant to a lot of things). Its NMR spectrum is rather unusual - those two protons on the central triptycene show up at 8.26 and 8.91 ppm, for example. And in case you're wondering, the M+1 peak in the mass spec comes in at a good solid 2429 mass units, a region of the detector that I'm willing to bet most of us have never explored, or not willingly. The melting point is reported as ">300 C", which is sort of disappointing - I was hoping for something in the four figures.
The paper says, rather drily, that "To direct the biological application of our 3D nanographene, water solubilization is necessary", but that's no small feat. They ended up using Pluronic surfactant, which gave them 100 nm particles of the stuff, and they tried these out on both cells and mice. The particles showed very low cytotoxicity (not a foregone conclusion by any means), and were actually internalized to some degree. Subcutaneous injection showed that the compound accumulated in several organs, especially the liver, which is just where you'd expect something like this to pile up. How long it would take to get out of the liver, though, is a good question.
The paper ends with the usual sort of language about using this as a platform for chemotherapy, etc., but I take that as the "insert technologically optimistic conclusion here" macro that a lot of people seem to have loaded into their word processing programs. The main reason this caught my eye is that this is quite possibly the least drug-like molecule I've ever seen actually dosed in an animal. When will we see its like again?
I've decided this year that I'll be posting some recommendations for science-themed gifts, since this is the season that people will be looking around for them. This article at Smithsonian has a look at the history of the good ol' chemistry set. As I mentioned in this old post, I had one as a boy, augmented by a number of extra reagents, some of which (potassium permanganate!) were in rather too high an oxidation state for a ten-year-old. I can't report that I did much in the way of systematic experiments with all my material, but I did have a good time with it. Once in a while some combination of reagents will remind me of the smell of those bottles, and I'm instantly transported back to the early 1970s, out in a corner of the shop building in back of our house. (Elemental sulfur is a component of that smell; the rest I'm not sure about).
The Smithsonian article mentions that Thames and Kosmos chemistry sets get good reviews from people who've seen them. So if you're in the market for a gift for the kids, that might be a line to try! The potassium permanganate I'll leave up to individual discretion. . .
As mentioned the other day, this will be a post for people to ask questions directly to Philip Skinner (SDBioBrit) of Perkin-Elmer/Cambridgesoft. He's doing technical support for ChemDraw, ChemDraw4Excel, E-Notebook, Inventory, Registration, Spotfire, Chem3D, etc., and will be monitoring the comments and posting there. Hope it helps some people out!
Note - he's out on the West Coast of the US, so allow the poor guy time to get up and get some coffee in him!
I don't know how many readers have been following this, but there's been some interesting work over the last few years in using streptavidin (a protein that's an old friend of chemical biologists everywhere) as a platform for new catalyst systems. This paper in Science (from groups at Basel and Colorado State) has some new results in the area, along with a good set of leading references. (One of the authors has also published an overview in Accounts of Chemical Research). Interestingly, this whole idea seems to trace back to a George Whitesides paper from back in 1978, if you can believe that.
(Strept)avidin has an extremely well-characterized binding site, and its very tight interaction with biotin has been used as a sort of molecular duct tape in more experiments than anyone can count. Whitesides realized back during the Carter administration that the site was large enough to accommodate a metal catalyst center, and this latest paper is the latest in a string of refinements of that idea, this time using a rhodium-catalyzed C-H activation reaction.
A biotinylated version of the catalyst did indeed bind streptavidin, but this system showed very low activity. It's known, though, that the reaction needs a base to work, so the next step was to engineer a weakly basic residue nearby in the protein. A glutamate sped things up, and an aspartate even more (with the closely related asparagine showing up just as poorly as the original system, which suggests that the carboxylate really is doing the job). A lysine/glutamate double mutant gave even better results.
The authors then fine-tuned that system for enantioselectivity, mutating other residues nearby. Introducing aromatic groups increased both the yield and the selectivity, as it turned out, and the eventual winner was run across a range of substrates. These varied quite a bit, with some combinations showing very good yields and pretty impressive enantioselectivities for this reaction, which has never until now been performed asymmetrically, but others not performing as well.
And that's the promise (and the difficulty) of enzyme systems. Working on that scale, you're really bumping up against individual parts of your substrates on an atomic level, so results tend, as you push them, to bin into Wonderful and Terrible. An enzymatic reaction that delivers great results across a huge range of substrates is nearly a contradiction in terms; the great results come when everything fits just so. (Thus the Codexis-style enzyme optimization efforts). There's still a lot of brute force involved in this sort of work, which makes techniques to speed up the brutal parts very worthwhile. As this paper shows, there's still no substitute for Just Trying Things Out. The structure can give you valuable clues about where to do that empirical work (otherwise the possibilities are nearly endless), but at some point, you have to let the system tell you what's going on, rather than the other way around.
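Just to put some rough numbers on that brute force: even restricting yourself to a handful of positions near the active site, the mutant count climbs quickly. Here's a back-of-the-envelope sketch (the positions and numbers are generic, not from the paper) of why structural clues about where to mutate are so valuable:

```python
# A back-of-the-envelope look at why mutant screening is "brute force":
# counting single and double point mutants over a few chosen positions.
# The position count is generic, not taken from the paper.
from itertools import combinations

def count_single_mutants(n_positions):
    # At each position, 19 possible substitutions (anything but wild type)
    return n_positions * 19

def count_double_mutants(n_positions):
    # Choose 2 of the positions, with 19 substitution options at each
    return len(list(combinations(range(n_positions), 2))) * 19 * 19

positions_near_site = 10   # say, ten residues lining the binding pocket
print(count_single_mutants(positions_near_site))   # 190 variants
print(count_double_mutants(positions_near_site))   # 16245 variants
```

Ten positions already means over sixteen thousand double mutants, which is why nobody screens them all blindly - the structure tells you which handful to actually make.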
We'll start off with a little extraterrestrial chemistry. As many will have heard, there are all sorts of hints being dropped that the sample analyzing equipment on the Mars Curiosity rover has detected something very interesting. We'll have to wait until the first week of December to find out what it is, but my money is on polycyclic aromatic hydrocarbons or some other complex abiotic organics.
Here's a detailed look at the issue. The Martian surface has a pretty vigorous amount of perchlorate in it, which was not realized for a long time (and rather complicates the interpretation of some of the past experiments on it). But Curiosity's analytical suite was designed to deal with this, and my guess is that these techniques have worked and that organic material has been detected.
I would very much bet against any sort of strong signature of life-as-we-know-it, though. For one thing, finding that in a random sand dune would seem pretty unlikely. Actually, finding good traces anywhere in the top layer of Martian rock and dust seems unlikely (as opposed to deeper underground, where I'm willing to speculate freely on the possible existence/persistence of bacteria and such). And I'm not sure that Curiosity would be well equipped to discriminate abiotic from biotic compounds, anyway.
But organic compounds in general, absolutely. This brings up an interestingly false idea that underlies a lot of casual thinking about Mars (and space in general). Many people have this mental picture of everywhere outside Earth being sort of like the surface of our moon. It leads to a false dichotomy: here we have temperate air, liquid water, life and the byproducts of life (oil and coal, for example). Out there is all cold barren rock directly exposed to vacuum and hard radiation. We associate "space" with clean, barren, surfaces and knife-edge shadows, whereas "down here" it's all wet and messy. Not so.
There's plenty of irradiated rock, true, but there's water all over the outer solar system, in huge amounts. And while what we see out there is frozen, it's a near-certainty that there are massive oceans of the liquid stuff down under the various crusts of the larger outer-planet moons. All those alien-invasion movies, the ones with the extraterrestrials after our planet's water, are fun but ridiculous examples of that false dichotomy in action. There's plenty of organic chemistry, too - I've written before about how the colors of Jupiter's clouds remind me of reaction byproducts, and it's no coincidence that they do. The gas giant planets are absolutely full of organic chemicals of all varieties, and they're getting heated, pressurized, mixed, irradiated, and zapped by huge lightning storms all the hours of their days. What isn't in there?
Everything came that way. The solar system has plenty of hydrocarbons, plenty of small carbohydrates, and plenty of amines and other nitrogen-containing compounds in it. The carbonaceous chondrites are physical evidence that's fallen to Earth - some of these have clearly never been heated since their formation (since they're full of water and volatile organics), so the universe would seem to be awash in small-molecule gorp. There's another false dichotomy, that the materials for life are very rare and precious and only found down here on Earth. But they're everywhere.
Via a reader, here's an excellent YouTube video for those of you who use ChemDraw. I've been using the software since it came out, and there are several useful tricks here that I didn't know were even in the software. Did you know that you could give your common structures nicknames, so that the program would immediately draw them when you typed in the name? Or how to use the "Sprout" tool for drawing bonds without going to the bond-drawing tool? There's also a detailed look at customizing hotkeys, which, for a heavy ChemDraw user, will make you look like you have magic powers. Well worth a look. Update: see the comments for more if you're into this sort of thing!
I'd still like to see how quickly all these would allow you to draw something like this (well, other than giving it a nickname - I'd suggest "Jabba" or "Chemzilla" - and having it appear instantly). Of course, those of us old enough to remember the pre-ChemDraw (or any-other-draw) days will have a different perspective on the whole field. I remember the first time I saw the program being used, which would have been 1986, not an awfully long time after it came out (see the timeline of computers in chemistry here). Like every other practicing organic chemist, as soon as I saw the program I knew that I had to have it. It was, as they say, a "killer app", and ChemDraw sold Macs, albeit on a smaller scale than VisiCalc sold Apple IIs. But it's hard to get across how those programs felt, unless you've actually rubbed Helvetica capital letters from a transfer sheet into an ink-drawn chair-conformation ring to make a drawing of a carbohydrate, or had to go back and manually erase (and write in) half a column of figures because you had to recalculate them. It feels like, instead of hitting "Print", being given instead a slab of hardwood and some sharp tools with which to start carving out a block for inking. Or instead of hitting "Send", having someone bring you a horse.
Here's a paper that I missed in Organic Process Research and Development earlier this year, extolling the virtues of sulfolane as a high-temperature polar solvent. I have to say, I've never used it, although I hear of it being used once in a while, mainly by people who are really having to crank the temperature on some poor reaction.
The only bad thing I've heard about it is its difficulty of removal. The whole high-boiling polar aprotic group has this problem, of course (DMSO is no treat to get out of your sample sometimes, either, although it's so water-soluble that you at least have extraction on your side). But sulfolane is higher-boiling than all the rest (287 °C!), and it also freezes at about 28 °C, which could be a problem, too. (The paper notes that small amounts of water lower the freezing temperature substantially, and that 97:3 sulfolane/water is an article of commerce itself, probably for that reason). It has an unusual advantage, though, from a safety standpoint: it stands out from all the other polar aprotics as having remarkably poor skin penetration (in very sharp contrast to DMSO, for example). It's more toxic than the others, but the poor skin penetration makes up for that, as long as you're not ingesting it some other way, which is Not Advised.
The paper gives a number of examples where this solvent proved to be just the thing, so I'll have to keep it in mind. Anyone out there care to share any hands-on experiences?
For those wanting a timeline of the whole hexacyclinol business, with links to the articles, blogs, and commentary that's surrounded it, allow me to recommend Carmen Drahl's "History of the Hexacyclinol Hoo-Hah". (And no, the whole thing is not written in alliteration; for that, you'll be wanting this).
The retraction has been agreed due to lack of sufficient Supporting Information. In particular, the lack of experimental procedures and characterization data for the synthetic intermediates as well as copies of salient NMR spectra prevents validation of the synthetic claims. The author acknowledges this shortcoming and its potential impact on the community.
Potential? After six years? There were people taking their first undergraduate organic course when this controversy hit who are now thinking about how to start tying together their PhD dissertations. It seems that Angewandte Chemie is very loath to go the full-retraction route (there haven't been many), but that retraction notice doesn't bring up anything that wasn't apparent after the first ten minutes of reading the paper.
Here's a general organic chemistry question for the crowd, inspired by a recent discussion among colleagues. We were whiteboarding around some structures, and the statement was made that "By this time in the history of organic chemistry, unknown heterocycles are probably unknown for a very good reason". So, true or false? Are the rings that we haven't made yet mostly unmade because they're very hard (or impossible), or mostly because no one's ever cared about them (or realized that they'd made them at all)?
Note that this problem was the subject of some thorough theme-and-variations work a few years ago. That paper would suggest that as many as 90% of the unknown heterocycles are simply not feasible to make, but that still leaves you with three thousand or so that are. So the answer to the question above might turn out to be "Both at the same time. . ."
We're getting closer to real-time X-ray structures of protein function, and I think I speak for a lot of chemists and biologists when I say that this has been a longstanding dream. X-ray structures, when they work well, can give you atomic-level structural data, but they've been limited to static time scales. In the old, old days, structures of small molecules were a lot of work, and the structure of a protein took years of hard labor and was obvious Nobel Prize material. As time went on, brighter X-ray sources and much better detectors sped things up (since a lot of the X-rays diffracted from a large compound are of very low intensity), and computing power came along to crunch through the piles of data thus generated. These days, X-ray structures are generated for systems of huge complexity and importance. Working at that level is no stroll through the garden, but more tractable protein structures are generated almost routinely (although growing good protein crystals is still something of a dark art, and is accomplished through what can accurately be called enlightened brute force).
But even with synchrotron X-ray sources blasting your crystals, you're still getting a static picture. And proteins are not static objects; the whole point of them is how they move (and for enzymes, how they get other molecules to move in their active sites). I've heard Barry Sharpless quoted to the effect that understanding an enzyme by studying its X-ray structures is like trying to get to know a person by visiting their corpse. I haven't heard him say that (although it sounds like him!), but whoever said it was correct.
Comes now this paper in PNAS, a multinational effort with the latest on the attempts to change that situation. The team is looking at photoactive yellow protein (PYP), a blue-light receptor protein from a purple sulfur bacterium. Those guys vigorously swim away from blue light, which they find harmful, and this seems to be the receptor that alerts them to its presence. And the inner workings of the protein are known, to some extent. There's a p-coumaric acid in there, bound to a Cys residue, and when blue light hits it, the double bond switches from trans to cis. The resulting conformational change is the signaling event.
But while knowing things at that level is fine (and took no small amount of work), there are still a lot of questions left unanswered. The actual isomerization is a single-photon event and happens in a picosecond or two. But the protein changes that happen after that, well, those are a mess. A lot of work has gone into trying to unravel what moves where, and when, and how that translates into a cellular signal. And although this is a mere purple sulfur bacterium (What's so mere? They've been on this planet a lot longer than we have), these questions are exactly the ones that get asked about protein conformational signaling all through living systems. The rods and cones in your eyes are doing something very similar as you read this blog post, as are the neurotransmitter receptors in your optic nerves, and so on.
This technique, variations of which have been coming on for some years now, uses multiple wavelengths of X-rays simultaneously, and scans them across large protein crystals. Adjusting the timing of the X-ray pulse compared to the light pulse that sets off the protein motion gives you time-resolved spectra - that is, if you have extremely good equipment, world-class technique, and vast amounts of patience. (For one thing, this has to be done over and over again from many different angles).
And here's what's happening: first off, the cis structure is quite weird. The carbonyl is 90 degrees out of the plane, making (among other things) a very transient hydrogen bond with a backbone nitrogen. Several dihedral angles have to be distorted to accommodate this, and it's a testament to the weirdness of protein active sites that it exists at all. It then twangs back to a planar conformation, but at the cost of breaking another hydrogen bond back at the phenolate end of things. That leaves another kind of strain in the system, which is relieved by a shift to yet another intermediate structure through a dihedral rotation, and that one in turn goes through a truly messy transition to a blue-shifted intermediate. That involves four hydrogen bonds and a 180-degree rotation in a dihedral angle, and seems to be the weak link in the whole process - about half the transitions fail and flop back to the ground state at that point. That also lets a crucial water molecule into the mix, which sets up the transition to the actual signaling state of the protein.
If you want more details, the paper is open-access, and includes movie files of these transitions and much more detail on what's going on. What we're seeing is light energy being converted (and channeled) into structural strain energy. I find this sort of thing fascinating, and I hope that the technique can be extended in the way the authors describe:
The time-resolved methodology developed for this study of PYP is, in principle, applicable to any other crystallizable protein whose function can be directly or indirectly triggered with a pulse of light. Indeed, it may prove possible to extend this capability to the study of enzymes, and literally watch an enzyme as it functions in real time with near-atomic spatial resolution. By capturing the structure and temporal evolution of key reaction intermediates, picosecond time-resolved Laue crystallography can provide an unprecedented view into the relations between protein structure, dynamics, and function. Such detailed information is crucial to properly assess the validity of theoretical and computational approaches in biophysics. By combining incisive experiments and theory, we move closer to resolving reaction pathways that are at the heart of biological functions.
Speed the day. That's the sort of thing we chemists need to really understand what's going on at the molecular level, and to start making our own enzymes to do things that Nature never dreamed of.
That title should bring in the hits. But don't get your hopes up! This is medicinal chemistry, after all.
"Can't you just put the group in your molecule that does such-and-such?" Medicinal chemists sometimes hear variations of that question from people outside of chemistry - hopeful sorts who believe that we might have some effective and instantly applicable techniques for fixing selectivity, brain penetration, toxicity, and all those other properties we're always trying to align.
Mostly, though, we just have general guidelines - not so big, not so greasy (maybe not so polar, either, depending on what you're after), and avoid a few of the weirder functional groups. After that, it's art and science and hard work. A recent J. Med. Chem. paper illustrates just that point - the authors are looking at the phenomenon of molecular promiscuity. That shows up sometimes when one compound is reasonably selective, but a seemingly closely related one hits several other targets. Is there any way to predict this sort of thing?
"Probably not", is the answer. The authors looked at a range of matched molecular pairs (MMPs), structures that were mostly identical but varied only in one region. Their data set is list of compounds in this paper from the Broad Institute, which I blogged about here. There are over 15,000 compounds from three sources - vendors, natural product collections, and Schreiber-style diversity-oriented synthesis. The MMPs are things like chloro-for-methoxy on an aryl ring, or thiophene-for-pyridyl with other substituents the same. That is, they're just the sort of combinations that show up when medicinal chemists work out a series of analogs.
The Broad data set yielded 30,954 matched pairs, involving over 8000 compounds and over seven thousand different transformations. Comparing these compounds and their reported selectivity over 100 different targets (also in the original paper) showed that most of these behaved "normally" - over half of them were active against the same targets that their partners were active against. But at the other end of the scale, 829 compounds showed different activity over at least ten targets, and 126 of those compounds differed in activity by fifty targets or more. 33 of them differed by over ninety targets! So there really are some sudden changes out there waiting to be tripped over; they're not frequent, but they're dramatic.
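The matched-pair bookkeeping itself is conceptually simple. Real MMP analyses cut bonds in actual structures (RDKit's rdMMPA module does this, for example); here's a toy sketch where the compounds are already pre-decomposed into hypothetical (core, substituent) tuples, purely to show the group-by-core logic:

```python
# Toy sketch of matched-molecular-pair bookkeeping. Real MMP analyses
# fragment actual structures; here, compounds are pre-decomposed into
# invented (core, substituent) tuples purely for illustration.
from collections import defaultdict
from itertools import combinations

compounds = [
    ("cmpd1", "arylamide_core", "OMe"),
    ("cmpd2", "arylamide_core", "Cl"),     # chloro-for-methoxy pair with cmpd1
    ("cmpd3", "arylamide_core", "CF3"),
    ("cmpd4", "pyrazole_core",  "OMe"),    # different core: pairs with nothing
]

def matched_pairs(cpds):
    """Group by core; every pair within a group is an MMP."""
    by_core = defaultdict(list)
    for name, core, sub in cpds:
        by_core[core].append((name, sub))
    pairs = []
    for members in by_core.values():
        for (n1, s1), (n2, s2) in combinations(members, 2):
            pairs.append((n1, n2, f"{s1}>>{s2}"))
    return pairs

for p in matched_pairs(compounds):
    print(p)   # three pairs among the arylamide compounds, none across cores
```

Once the pairs are enumerated like this, attaching the activity profiles and looking for the big jumps (the "promiscuity cliffs") is just a matter of comparing each pair's hit lists.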
How about correlations between these "promiscuity cliff" compounds and physical properties, such as molecular weight, logP, donor/acceptor count, and so on? I'd have guessed that a change to higher logP would have accompanied this sort of thing over a broad data set, but the matched pairs don't really show that (nor a shift in molecular weight). On the other hand, most of the highly promiscuous compounds are in the high cLogP range, which is reassuring from the standpoint of Received Med-Chem Wisdom. There are still plenty of selective high-logP compounds, but the ones that hit dozens of targets are almost invariably logP > 6.
Structurally, though, no particular substructure (or transformation of substructures) was found to be associated with sudden onset of promiscuity, so to this approximation, there's no actionable "avoid sticking this thing on" rule to be drawn. (Note that this does not, to me at least, say that there are no such things as frequent-hitting structures - we're talking about changes within some larger structure, not the hits you'd get when screening 500 small rhodanine phenols or the like). In fact, I don't think the Broad data set even included many functional groups of that sort to start with.
On the basis of the data available to us, it is not possible to conclude with certainty to what extent highly promiscuous compounds engage in specific and/or nonspecific interactions with targets. It is of course unlikely that a compound might form specific interactions with 90 or more diverse targets, even if the interactions were clearly detectable under the given experimental conditions. . .
. . .it has remained largely unclear from a medicinal chemistry perspective thus far whether certain molecular frameworks carry an intrinsic likelihood of promiscuity and/or might have frequent hitter character. After all, promiscuity is determined for compounds, not their frameworks. Importantly, the findings presented herein do not promote a framework-centric view of promiscuity. Thus, for the evaluation and prioritization of compound series for medicinal chemistry, frameworks should not primarily be considered as an intrinsic source of promiscuity and potential lack of compound specificity. Rather, we demonstrate that small chemical modifications can trigger large-magnitude promiscuity effects. Importantly, these effects depend on the specific structural environment in which these modifications occur. On the basis of our analysis, substitutions that induce promiscuity in any structural environment were not identified. Thus, in medicinal chemistry, it is important to evaluate promiscuity for individual compounds in series that are preferred from an SAR perspective; observed specificity of certain analogs within a series does not guarantee that others are not highly promiscuous.
Point taken. I continue to think, though, that some structures should trigger those evaluations with more urgency than others, although it's important never to take anything for granted with molecules you really care about.
That's the word-for-word title of a provocative article by Rolf Carlson and Tomas Hudlicky in Helvetica Chimica Acta. That journal's usually not quite this exciting, but it is proud of its reputation for compound characterization and experimental accuracy. That probably helped this manuscript find a home there, where it's part of a Festschrift issue in honor of Dieter Seebach's 75th birthday.
The authors don't hold back much (and Hudlicky has not been shy about these issues, either, as some readers will know). So, for the three categories of malfeasance described in the title, the first (hype) includes the overblown titling of many papers:
As long as the foolish use of various metrics continues there is little hope of return to integrity. Young scientists entering academia and competing for resources and recognition are easily infected with the mantra of importance of publishing in 'high-impact journals' and, therefore, strive to make their work as noticeable as possible by employing excess hype.
It is the reader, not the author, of papers describing synthetic method who should evaluate its merits. Therefore, self-promoting words like 'novel', 'new', 'efficient', 'simple', 'high-yielding', 'versatile', 'optimum' should not be used in the title of the paper if such qualities are not covered by the actual content of the paper.
It also includes the inflation of reaction yields (see that link in the second paragraph above for more on that topic). This is another one that's going to be hard to fix:
Unfortunately, the community has chosen and continues to choose the yield values in submitted manuscripts as a measure of overall quality and/or utility of the report. This, of course, encourages the 'adjustment' in the values in order to avoid critique. An additional problem in the reported values is the fact that synthesis is performed on small scales, thanks to advances in NMR and other techniques available for structure determination. On milligram scales it is extremely difficult to accurately determine weight and content of a sample, given the equipment available in typical academic laboratory.
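That last point is easy to quantify. Here's a rough error-propagation sketch (the sample masses and balance tolerance are generic illustrative numbers, not from the paper) showing how much uncertainty the weighing step alone contributes to a milligram-scale yield:

```python
# Rough error propagation for a milligram-scale isolated yield, assuming
# the only error source is balance uncertainty. The masses and tolerance
# below are generic illustrative numbers, not taken from the paper.
import math

def yield_uncertainty(product_mg, sm_mg, balance_mg=0.1):
    """Relative uncertainty (percent) from weighing the product and the
    starting material, each to +/- balance_mg, combined in quadrature."""
    rel = math.sqrt((balance_mg / product_mg) ** 2 + (balance_mg / sm_mg) ** 2)
    return rel * 100

# 3 mg of product from 5 mg of starting material on a +/- 0.1 mg balance:
print(round(yield_uncertainty(3.0, 5.0), 1))    # ~4% from weighing alone
# The same reaction run at 10x scale:
print(round(yield_uncertainty(30.0, 50.0), 2))  # an order of magnitude better
```

And that's before you even get to residual solvent, silica fines, and inseparable impurities inflating the product mass - which is the part that's much harder to put a number on.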
The second category, malpractice, is sloppy work, but not outright fraud:
Malpractice, as explained above, is usually not deliberate and derives primarily from ignorance or professional incompetence. The most frequent cases involve improper experimental protocols, improper methods used in characterization of compounds, and the lack of correct citations to previous work.
For example, the authors point out that very, very rarely are any new synthetic methods given a proper optimization. One-variable-at-a-time changes are worthwhile, but they're not sufficient to explore a reaction manifold, not when these variables can interact with each other. As process chemists in industry know, the only way to explore such landscapes is with techniques such as Design of Experiments (DoE), which try to find out what factors in a multivariate system produce the greatest change in results. Here's an example; the process chemistry literature furnishes many more.
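For readers who haven't run into DoE, the simplest version is a two-level full factorial: code each factor as -1/+1, run every combination, and estimate each factor's main effect by differencing the averages. Here's a minimal sketch - the factor names and the response function are invented for illustration:

```python
# Minimal two-level full-factorial DoE sketch: three factors coded -1/+1,
# with main effects estimated from a toy response. The factors and the
# response function are invented for illustration, not from any real reaction.
from itertools import product

factors = ["temperature", "equivalents", "catalyst_loading"]
design = list(product([-1, +1], repeat=len(factors)))   # 2^3 = 8 runs

def toy_yield(t, e, c):
    # Invented response: temperature matters most, and temperature
    # interacts with catalyst loading (the thing OVAT scans miss).
    return 50 + 8 * t + 2 * e + 4 * c + 3 * t * c

results = [toy_yield(*run) for run in design]

def main_effect(i):
    """Average response at +1 minus average response at -1 for factor i."""
    hi = [y for run, y in zip(design, results) if run[i] == +1]
    lo = [y for run, y in zip(design, results) if run[i] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for i, name in enumerate(factors):
    print(name, main_effect(i))
```

Eight runs gives you all three main effects at once, and (with a little more arithmetic) the interaction terms too - which is exactly the information that changing one variable at a time can never deliver.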
And finally, you have outright scientific misconduct - fraud, poaching of ideas from grant applications and the like, plagiarism in publications, etc. It's hard to get a handle on these - they seem to be increasing, but the techniques to find and expose them are also getting better. Over time, though, these techniques might just have the effect of making fraud more sophisticated; that would be in line with human behavior as I understand it, and with selection pressure as well. The motives for such acts are with us still, and do not seem to be abating much, so I tend to think that determined miscreants will find ways to do what they want to do.
Thoughts? Some of this paper's points could be put in the "grumblings about the good old days" category, but I think that a lot of it is accurate. I'm not sure how good the old days were, myself, since they were also filled with human beings, but the pressures found today do seem to be bringing on a lot of behaviors we could do without.
The recent discussions here about ugly tool compounds have prompted an alert reader to send in this example, a putative agonist of the orphan receptor GPR35. Will someone rise to the defense of this one?
My post the other day on a very unattractive screening hit/tool compound prompted a reader to mention this paper. It's one from industry this time (AstraZeneca), and at first it looks like similarly foul chemical matter. But I think it's worth a closer look, to see how they dealt with what they'd been given by screening.
This team was looking for hits against PIM kinases, and the compound shown was a 160 nM hit from high-throughput screening. That's hard to ignore, but on the other hand, it's another one of those structures that tell you that you have work to do. It's actually quite similar to the hit from the previous post - a similar heterocycle, with alkylidene branching to a polyphenol.
So why am I happier reading this paper than the previous one? For one, this structure does have a small leg up, because this thiazolidinedione heterocycle doesn't have a thioamide in it, and it's actually been in drugs that have been used in humans. TZDs are certainly not my first choice, but they're not at the bottom of the list, either. On the other hand, I can't think of a situation where a thioamide shouldn't set off the warning bells, and not just for a compound's chances of becoming a drug. The chances of becoming a useful tool compound are lower, too, for the same reasons (potential reactivity / lack of selectivity). Note that these compounds are fragment-sized, unlike the diepoxide we were talking about the other day, which means that they're likely to be able to fit into more binding sites.
But there's still that aromatic ring. In this case, though, the very first thing this paper says after stating that they decided to pursue this scaffold is: "We were interested to determine whether or not we could remove the phenol from the series, as phenols often give poor pharmacokinetic and drug-like properties." And that's what they set about doing, making a whole series of substituted aryls with less troublesome groups on them. Basic amines branching off from the ortho position led to very good potency, as it turned out, and they were able to ditch the phenol/catechol functionality completely while getting well into (or below) single-digit nanomolar potency. With these compounds, they also did something else important: they tested the lead structures against a panel of over four hundred other kinases to get an idea of their selectivity. This is just the sort of treatment that I think the Tdp-1 inhibitor from the Minnesota/NIH group needs.
To be fair, that other paper did show a number of attempts to get rid of the thioamide head group (all unsuccessful), and they did try a wide range of aryl substituents (the polyphenols were by far the most potent). And it's not like the Minnesota/NIH group was trying to produce a clinical candidate; they're not a drug company. A good tool compound to figure out what selective Tdp-1 inhibition does is what they were after, and it's a worthy goal (there's a lot of unknown biology there). If that had been a drug company effort, those two SAR trends taken together would have been enough to kill the chemical series (for any use) in most departments. But even the brave groups who might want to take it further would have immediately profiled their best chemical matter in as many assays as possible. Nasty functional groups and lack of selectivity would surely have doomed the series anywhere.
And it would doom it as a tool compound as well. Tool compounds don't have to have good whole-animal PK, and they don't have to be scalable to pilot plant equipment, and they don't have to be checked for hERG and all the other preclinical tox screens. But they do have to be selective - otherwise, how do you interpret their results in an assay? The whole-cell extract work that the group reported is an important first step to address that issue, but it's just barely the beginning. And I think that sums up my thoughts when I saw the paper: if it had been titled "A Problematic Possible Tool Compound for Tdp-1", I would have applauded it for its accuracy.
The authors say that they're working on some of these exact questions, and I look forward to seeing what comes out of that work. I'd have probably liked it better if that had been part of the original manuscript, but we'll see how it goes.
S. Raj Govindarajan makes his case (some of which recapitulates points made by readers here). I doubt if he's convinced anyone who holds the view of the original authors, but I think he's on target with this part:
Should your son be forced to take chemistry? Absolutely. But the curriculum needs to be rethought in a way that would instill practical knowledge, curiosity about the world, and an appetite for at least understanding scientific achievement and its necessity/implications.
People don’t have to become scientists if they don’t want to, but they should have a fundamental understanding of scientific concepts. That way, people like myself need not be terrified that an ignorant public will vote to slash funding for scientific research and understanding. . .
Here's a blog post at The Washington Post in which a parent asks the musical question: "Why Are You Forcing My Son to Take Chemistry?"
It's short, but it can be summarized as My son will not be a chemist. He will not be a scientist. A year of chemistry class will do nothing for him but make him miserable. He could be taking something else that would be doing him more good. And this father is probably right about his son, who's 15, not becoming any sort of scientist. But his argument breaks down a bit after that.
That's because the same objections could apply to most other things that his son could be taking. He says that his son "could be really good at" public speaking, or music, or creative writing, for example. Or not. Perhaps one of them would make him miserable, or simply do nothing for him, and be an opportunity cost as well. The difference is that the boy (and/or his father) are already pretty sure that chemistry will be a waste, and they haven't had the chance to find that out about the others yet.
But again, I take him at his word when he says that his son will be lousy at chemistry (leaving aside the self-fulfilling prophecy aspect, although that's definitely something to consider). This gets back to questions that I wrote about here, namely: how much science should people know? How much should they get in school? How much will do them some good? I think, in this case, that everyone should know that there are such things as chemical elements, and that they combine to form compounds. They should know about reactions like combustion, and a bit about energy and thermodynamics. Knowing an acid from a base would be nice, but the list just keeps on going from there, and where does one draw the line?
I think, after a basic list of facts and concepts, that what I'd like for kids to get out of a science class is the broader idea of experimentation - that the world runs by physical laws which can be interrogated. Isolating variables, varying conditions, generating new hypotheses: these are habits of mind that actually do come in handy in the real world, whether you remember what an s orbital is or not. I'm not sure how well these concepts get across, though.
Do you need a year of high school chemistry to learn these things? I doubt it. A lot of it will be balancing acid-base equations, learning about the columns and rows of the periodic table, oxidations states, Lewis structures, and so on. And the son probably will have no use for any of that - the father has no memory of any of it himself, and although I'd like people to know some of these things, I wonder if not knowing them has harmed him too much. What might have harmed him, though, is a lack of knowledge of those broader points. Or a general attitude that science is That Stuff Those Other People Understand. You make yourself vulnerable to being taken in if you carry that worldview around with you, because claiming scientific backing is a well-used ploy. You should know enough to at least not be taken in easily.
There may be no more R. B. Woodwards, but never let it be said that there's nothing more to be found in organic synthesis. Until we can make natural products the way that they're made in nature, at room temperature, atom by atom, our skills don't stand comparison with what we know is possible. But that's not going to be the work of a single genius, for sure, although applications are always being accepted.
New reactions, though, are always out there. Here's an example of one, in a field (the Diels-Alder reaction) that you'd think would have been pretty well worked over. This will win no Nobels, and only synthetic organic chemists will pay attention. But I'm always glad to see discoveries like this, and to know that they're still out there.
Explaining R. B. Woodward to a non-chemist, via jazz. And they were probably right, too. But that brings up an interesting question, one that applies to organic synthesis as well as every other human activity. At what point does a field become incapable of supporting Titans?
Consider organic chemistry. There were many major discoveries that had to be made before we (as a civilization) even understood what was going on. Atomic theory itself, valences, tetrahedral carbons, spectroscopy: without these and similar foundational work, you're not going to get very far. But at some point you've got enough material for the next genius to come along and make the most out of it, and I'd put Woodward in that category. He had just enough tools to make his work barely possible, and he had to invent quite a few more along the way. This gave him plenty of room to demonstrate just how good he was at organic synthesis. Complex molecules that would have been beyond structural determination in years past were now there (in theory) for the taking, but these were still well out of reach for all but the most inspired.
To switch fields of achievement, I recall reading someone's opinion once that Bach wasn't all that good, all things considered. I didn't care for that when I saw it (I like Bach very much), but the argument was that he got there "firstest with the mostest", as the old saying goes, and did such a thorough job on the musical styles of his day (counterpoint, the fugue form, etc.) that no one could ever stand on his level again. You couldn't, because Bach had already Been There and Composed That. Now, that undermines the author's original point, I think, because only a musical genius could have covered so much ground so well, but his second point stands: once Bach had done it, it was done. Anyone who composes a theme-and-variations in contrapuntal style will be compared to him, and probably unfavorably.
Similar arguments can be made across the arts and sciences. But the sciences have the advantage of not being subject so much to the whims of fashion. Picasso, I've long thought, helped create the art world in which he would be considered a great painter. (It reminds me of the way that successful organisms set up a positive feedback loop with their environment, helping to induce the conditions in which they can thrive). There are catastrophic events in both ecologies, of course, that change the requirements of fitness - Burne-Jones (to pick one example) went so far out of fashion by the 1950s and 60s that people were throwing his paintings away with the trash. But the art world has set itself up with fashion as part of its motor. Styles of painting come and go, because styles of painting have to come and go. But Newton's discoveries stand right where they were when he made them - si monumentum requiris, circumspice. Newtonian mechanics do not go out of fashion.
The only thing that can be done to alter great scientific discoveries of the past is to show how they fit into previously-unrealized larger contexts (as Einstein did with Newton). That, naturally, tends to get harder and harder as time goes on. Once the brush is cleared in science, it tends to stay cleared. That process can uncover new problems, but those are the tougher ones. This line of thought brings on talk of the End Of Science, as John Horgan put it, with which you may contrast Vannevar Bush's "Endless Frontier" (which helped establish the modern funding system for academic science in the US). My own take is that the frontier is endless for practical purposes for the foreseeable future, but not similarly endless in every direction at once.
There will, I'd say, never be another R.B. Woodward. Heraclitus aside, we have stepped into that river already, and crossed it. That's not to say that there are not great challenges in synthetic organic chemistry - there are - but it means that there is much less scope for a sky-filling fireworks display like Woodward's career. Anyone trying to recapitulate it is wasting time and effort that could be better applied.
Update: Wavefunction has thoughts on the issue here.
A deserved Nobel? Absolutely. But the grousing has already started. The 2012 Nobel Prize for Chemistry has gone to Bob Lefkowitz (Duke) and Brian Kobilka (Stanford) for GPCRs, G-protein coupled receptors.
Everyone who's done drug discovery knows what GPCRs are, and most of us have worked on molecules to target them at one point or another. At least a third of marketed drugs, after all, are GPCR ligands, so their importance is hard to overstate. That's why I say that this Nobel is completely deserved (and has been anticipated for some time now). I've written about them numerous times here over the years, and I'm going to forgo the chance to explain them in detail again. For more information I can recommend the Nobel site's popular background and their more detailed scientific background - they've already done the explanatory work.
I will say a bit about where GPCRs fit into the world of drug targets, though, since they've been so important to pharma R&D. Everyone had realized, for decades (more like centuries), that cells had to be able to send signals to each other somehow. But how was this done? No matter what, there had to be some sort of transducer mechanism, because any signal would arrive on the outside of the cell membrane and then (somehow) be carried across and set off activity inside the cell. As it became clear that small molecules (both the body's own and artificial ones from outside) could have signaling effects, the idea of a "receptor" became inescapable. But it's worth remembering that up until the mid-1970s you could find people - in print, no less - warning readers that the idea of a receptor as a distinct physical object was unproven and could be an unwarranted assumption. Everyone knew that molecular signals were being handled somehow, but it was very unclear what (or how many) pieces there were to the process. This year's award recognizes the lifting of that fog.
It also recognizes something else very important, and here I want to rally my fellow chemists. As I mentioned above, the complaints are already starting that this is yet another chemistry prize that's been given to the biologists. But this is looking at things the wrong way around. Biology isn't invading chemistry - biology is turning into chemistry. Giving the prize this year to Lefkowitz and Kobilka takes us from the first cloning of a GPCR (biology, biology all the way) to a detailed understanding of their molecular structure (chemistry!) And that's the story of molecular biology for you, right there. As it lives up to its name, its practitioners have had to start thinking of their tools and targets as real, distinct molecules. They have shapes, they have functional groups, they have stereochemistry and localized charges and conformations. They're chemicals. That's what kept occurring to me at the recent chemical biology conference I attended: anyone who's serious about understanding this stuff has to understand it in terms of chemistry, not in terms of "this square interacts with this circle, which has an arrow to this box over here, which cycles to this oval over here with a name in the middle of it. . ." Those old schematics will only take you so far.
So, my fellow chemists, cheer the hell up already. Vast new territories are opening up to our expertise and our ways of looking at the world, and we're going to be needed to understand what to do next. Too many people are making me think of those who objected to the Louisiana Purchase or the annexation of California, who wondered what we could possibly ever want with those trackless wastelands to the West and how they could ever be part of the country. Looking at molecular biology and sighing "But it's not chemistry. . ." misses the point. I've had to come around to this view myself, but more and more I'm thinking it's the right one.
I wanted to take a moment to mention this conference, coming up on November 6 at Northeastern in Boston. They have a wide-ranging program on drug discovery scheduled, with some people that I know from experience to be good speakers. Worth a look if you're in the area.
I can't resist pointing out this compound, which recently showed up in J. Med. Chem. Now, that's a Bcl-2/Bcl-xl inhibitor, the star of the protein-protein interaction world, and there's probably never going to be a nice-looking compound that does the job in that system. The interacting surfaces are too wide and too shallow; it's a real triumph that people have compounds for this system at all. But people have, and there are compounds in the clinic.
But man, will you look at the things. This is one from Bristol-Myers Squibb and the University of Michigan, and it is a beast in all directions. It weighs a mere 811 daltons, and is actually one of the more svelte compounds in the paper. Solubility, formulation, absorption, clearance. . .it all looks like fun. But we may well have to start learning how to deal with compounds like these, so we'd better steel ourselves.
According to the Wall Street Journal, the periodic table is now cool. It's shown up as a design, uh, element in TV shows, on T-shirts, and so on. (The article even gets quotes from Tom Lehrer, who I'm glad to hear is still with us). And Theodore Gray's coffee-table book The Elements has now sold 650,000 copies (one of them to me - I recommend it). Of course, Gray has the ultimate periodic-table fan item, if you can afford it:
People who lacked patience for a chemistry set can now buy periodic table shower curtains, T-shirts, coffee mugs and even a periodic coffee table. The furniture piece, made of burred oak with samples of inlaid elements, costs $8,550, plus shipping, which gets pricey. For safety reasons, fluorine, chlorine and bromine are forbidden on airplanes, says Max Whitby in London, who produces the table.
I'd add my own, if I had 9 long ones to spend on one of these. Thick-walled ampoules would do the job, although the fluorine would still present a problem (doesn't it always?). But I suppose most of the radioactive ones (except depleted uranium) are still out. Hand-rubbed varnish would probably stop alpha particles, but not much else.
Since we've been talking about the ACS around here recently, I wanted to highlight a decision in a long-running court case the society has been involved in, American Chemical Society v. Leadscope. Rich Apodaca has a summary here of the earlier phases of the suit, which is now in its tenth year in the courts. Basically, three employees of Chemical Abstracts left to form their own chemical information company, and ended up with a patent on a particular variety of software that would display structure-activity and structure-property relationships. The ACS felt that this was too similar to the (discontinued) Pathfinder software they'd developed, and sued.
The ACS lost in a jury trial - in fact, they did more than just lose. The jury found that the society had competed unfairly, filing suit maliciously and defaming Leadscope in the process, and they awarded the latter company $26.5 million in damages. The ACS then lost in the Court of Appeals (and the damages were increased). So they took things all the way to the Ohio Supreme Court, and now they've lost there, too. The defamation ruling (and award) was reversed, and will be vacated by the lower court, but the finding of unfair competition stands. It looks like the society still owes $26.5 million. As this post by an IP lawyer shows, they were going all out:
As for the issue of ACS's subjective intent, the Supreme Court found ample support for the jury's finding that ACS had the intent to injure Leadscope and its founders. It noted that ACS's president had closely monitored Leadscope and had even sent out an email to then-Ohio-Governor Robert Taft to abort a visit by the governor to Leadscope's offices. ACS's former information technology director also provided damaging testimony documenting ACS's president's hostility towards Leadscope. In addition, ACS took actions or made statements that interfered with Leadscope's ability to get funding (for example, by dissuading a venture capitalist interested in investing in Leadscope by telling him that there were legal issues with Leadscope's technology) and took actions in the litigation to disrupt Leadscope's ability to get insurance coverage for the dispute.
As detailed here at ChemBark, it's not like there's been a lot of coverage about this (I've never written about it myself). These are things that every member of the ACS should at least be aware of, but it's not like the ACS is going to do that job, for obvious reasons. One of the main venues for such stories would be. . .Chemical and Engineering News, so that's not going to happen. And it's not a story that resonates much with a general newspaper/magazine readership, so what does that leave us with? Well, mentions like that Nature News article to get the word out, and the blogs to go into the details.
That ChemBark post has a whole series of questions that would be very much worth answering. How did the ACS get into this fix in the first place? Was the original suit ill-advised? How much will that $26.5 million affect the society's finances - is that a big deal, or not? How much further money went down the drain in legal fees along the way? Are there any lessons to be learned from all this, or could the same thing start happening again next month?
And beyond those immediate questions, there are the bigger ones that the ACS (and other scientific societies) should be asking. Can a single entity be (A) a publisher of a large stable of high-profile scientific journals, and (B) the curator and disseminator of the (very profitable) primary database of all the reported chemical matter in the world, and (C) the voice of its own membership, who are simultaneously paying money for access to A and B, and (D) the lobbying organization for chemistry in general, as well as (E) a scientific society dedicated to the spread of knowledge? I'm not sure that all these are possible, at the same time, for the same organization. But sites like ChemBark (and this one, and the rest of the chemical blogworld) are the only places that seem to be available to talk about these things.
There's a paper out in Science from a team led by the IBM-Zürich folks, who have been pushing the capabilities of atomic-force microscopy for some time now. These are the people who published the paper in 2009 with those images of pentacene, and now they're back with even higher resolution.
One of their images is shown here. This is a big polycyclic aromatic hydrocarbon, hexabenzocoronene. One of the things that students note when they first try drawing such things is where the "holes" are. Aromatic benzene rings are special (different electron densities, different bond lengths), and if you connect one to another by a single bond (biphenyl), that connecting bond is of ordinary length. But a structure like this one - is it six benzene rings connected by a network of those ordinary bonds? Or are the electrons spread out over the whole surface in a great big delocalized cloud? Or something in between?
Calculations suggest that "in between, but still different" is the right answer, with some of the bonds having more double-bond character than others. And that's what this paper has determined by reaching down and feeling the bonds with an AFM tip. There's a single CO molecule at the end of the probe, and they've gotten to the point where they can see that they get greater sensitivity if that carbon monoxide molecule is tilted over rather than pointing straight down. I am not making that up. Running this single-molecule finger over the surface of hexabenzocoronene gives you the images shown.
"A" is the structure of the molecule, with the two different kinds of bond (i-bonds and j-bonds) noted. "B" is an AFM image at a constant height of 0.35 angstrom, which is really putting your atomic thumb down. The dark parts of the image correspond to attractive forces (van der Waals), and the light parts correspond to repulsive push-back. In this case, the pushback is due to the Pauli exclusion principle - those electrons cannot occupy the same quantum states, and they are quite adamant about that when you try to force them together. The electron density is highest around the outer part of the structure, but you can clearly see the bonds all the way through the internal structure as well. Take a look at the central aromatic ring - its bonds show up more more clearly than the bonds leading out from it, reflecting the greater electron density in there. "C" is an AFM image at 3.5A height in a "pseudo-3d representation", and "D" is the calculated electron density in between these two heights (at 2.5A above the molecule). Note that the two different kinds of bonds are also apparent in panel C, where some of them are brighter and shorter.
This kind of thing continues to give me a funny feeling when I read about it. Actually using things like Pauli repulsion to make pictures of molecules, well. . .maybe I am living in someone's science fiction novel, at that.
When I was clearing a space on my desk the other day, I came across this paper, which I'd printed several months ago to read later. Later's finally here! A brief look at the manuscript will make clear why I didn't immediately dig into it - it's titled "Modifying Chemical Landscapes by Coupling to Vacuum Fields", and it's about as physics-heavy as anything that Angewandte Chemie would be willing to publish. The scary part is, this is one of a pair of papers from the same group (Thomas Ebbesen's at Strasbourg), and it's the other one that really gets into the physics. (If you can't get the first paper, here's a summary of it, the only mention I've been able to find of this work).
But it's worth a bit of digging, because this is very strange and interesting work. So bear with me for a paragraph - I always thought that someone should write a textbook titled "Quantum Mechanics: A Hand-Waving Approach", and that's what you're about to get here. The theory tells us, among many other weird things, that the vacuum between molecules is not what we might think it is. That's more properly the quantum electrodynamic vacuum, the ground state of electromagnetic fields. Because the Planck constant is not zero - tiny, but crucially not zero - the QED vacuum is not the empty nothingness that we think of classically. It's a dielectric, it's diamagnetic, and its properties can be altered. The theory that tells us such odd things is to be taken very seriously indeed, though, since it has made some of the most detailed and accurate predictions in the history of science.
And the vacuum-field fluctuation part of the theory has to be taken very seriously, too, because these effects have actually been measured. This was first accomplished via the Lamb shift, and the Casimir effect is the latest poster child. The latter relates to the properties of the vacuum between two very closely spaced physical plates, and we're now to the point, technologically, where we can actually make structures of this kind, measure their sizes and compositions, and determine what's going on inside them.
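For a sense of scale, the standard textbook QED result (not from the papers under discussion, just the classic expression) for the attractive Casimir force per unit area between two ideal, parallel, perfectly conducting plates a distance $d$ apart is:

```latex
% Casimir force per unit area, ideal parallel conducting plates,
% separation d. The d^{-4} dependence is why the effect only
% becomes appreciable at very small spacings.
\[
  \frac{F}{A} \;=\; -\,\frac{\pi^{2}\hbar c}{240\,d^{4}}
\]
```

At a separation of 10 nm this works out to roughly one atmosphere of pressure, which is why nanoscale structures are where these measurements live.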
So what, those few of you who are still reading would like to know, does this have to do with chemistry? Well, when a real molecule is placed between such plates, its energy levels behave in strange ways. And this latest paper demonstrates that with a photochemical rearrangement - the reaction rates change completely depending on whether or not the starting material is confined in the right sort of space, and they change exactly as the cavity is tuned more closely to the absorption taking place. In effect, the molecule is now part of a completely new system (molecule-plus-cavity), and this new system has different energy levels - and can do different chemistry.
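That "completely new system with different energy levels" has a standard two-state picture behind it. Treating the molecular transition (energy $E_{\mathrm{mol}}$) and the cavity mode ($E_{\mathrm{cav}}$) as two coupled oscillators - the generic textbook treatment of strong coupling, not the specific model of this paper - the hybrid light-matter states come out at:

```latex
% Upper and lower polariton energies for a molecular transition
% coupled to a cavity mode with Rabi energy hbar*Omega_R.
% At resonance (E_mol = E_cav) the splitting is just hbar*Omega_R.
\[
  E_{\pm} \;=\; \frac{E_{\mathrm{mol}} + E_{\mathrm{cav}}}{2}
  \;\pm\; \frac{1}{2}\sqrt{\left(E_{\mathrm{mol}} - E_{\mathrm{cav}}\right)^{2}
  + \left(\hbar\Omega_{R}\right)^{2}}
\]
```

Tuning the cavity geometry shifts $E_{\mathrm{cav}}$, which moves the hybrid levels around - and that's the knob being turned when the reaction rates change as the cavity is brought into resonance with the absorption.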
The photochemistry shown is not exciting per se, but the fact that it can be altered just by putting the molecule in a very tiny box is exciting indeed:
The rearrangement of the molecular energy levels by coupling to the vacuum field has numerous important consequences for molecular and material sciences. As we have shown here, it can be used to modify chemical energy landscapes and in turn the reaction rates and yields. Strong coupling can either speed up or slow down a reaction depending on the reorganization of specific energy levels relative to the overall energy landscape. Both rates and the thermodynamics of the reaction will be modified. . .The coupling was done here to an electronic transition but it could also be done to a specific vibrational transition for instance to modify the reactivity of a bond. In this way it can be seen as analogous to a catalyst which changes the reaction rate by modifying the energy landscape.
I look forward to seeing how this field develops. If we end up being able to make reactions go the way we want them to by coupling our starting materials to the actual fabric of space, I will officially decide that I am, in fact, living in someone's science fiction novel, and I will be very happy about that. I can picture a vacuum-field flow chemistry machine, pumping reactants through various ridiculously small and convoluted lattices, as someone turns a chrome-plated crank on the side to adjust the geometry of the cavities to change the product distributions. OK, there are perhaps a couple of engineering challenges there, but you get the idea.
And speaking as an organic chemist, I have a few other questions: can these vacuum field effects occur in some of the other confined spaces that we're more used to thinking about? The insides of zeolites, for example? The interior of a cyclodextrin? Between sheets of graphene? Inside the active site of an enzyme? I'm sure that there are reasons why not all of these would be able to show such an effect (irregular geometry, just to pick one), but it does make you wonder.
Well, an alert commenter to this post sent along this link to the Cancer Prevention Research Institute of Texas grant site. And if you search for the phrase "R12KCN", you'll see six million dollars set aside for "Recruitment of Established Faculty", with Nicolaou's name attached.
So if this is going to happen, is it a good idea? I'm not asking if it's a good idea for K. C. Nicolaou; he's more than capable of looking after his own career. Is it a good idea for Rice, and for the CPRIT? The answer to that one depends on what everyone is looking for. If Rice is looking to make a big splash, that'll work just fine. But as another comment a bit further down in that above thread notes, this would be a departure for their chemistry department, because they'd actually de-emphasized organic synthesis a while back. Bringing in KCN will certainly re-emphasize it for them, if that's what they're after.
It's not where I would put my money, but (fortunately) I am not in charge of laying out millions to stock up a chemistry department. I've written several times (most recently here) about what I think of total synthesis at this point in the history of the science. If malevolent aliens suddenly filled our skies, threatening to vaporize the planet if we did not synthesize maitotoxin, I would unhesitatingly vote to give K. C. Nicolaou unlimited funding. That's what he does, and he's damn good at it. I just don't think - without the alien pressure and all - that it provides as much return for the time and money as other areas of science.
There's a rumor making the rounds that K. C. Nicolaou is leaving Scripps (La Jolla), with the most often-mentioned destination being Rice University. That's striking many people as a bit unlikely, unless Rice has decided to really throw the money (and facilities spending) around, and has decided to start off with a big splash. But there is at least a bit of a Scripps-to-South migration going on, with M. G. Finn heading to Georgia Tech. So we shall see. . .anyone heard more?
As we head into Nobel Season, Chembark and Wavefunction have their latest odds up. The biology side of the chemistry prize seems to be getting a lot of betting this year, with nuclear hormone signaling, chaperone proteins, oncogenes, Western/Southern blotting, and various bioinorganic discoveries all being mentioned. I'll do a full post on my own predictions (and what I wouldn't like to see get the prize), but there's a lot of good material in those two posts to start thinking about.
Here you go, from IKA. If you can make it up to about 1:52 or so, that's when the traditional hard-sell starts. But up until then, it's pretty painful, and not least because the model playing a chemist is evaporating a bright green solution (sure thing) and the receiving flask is light blue (oh yeah). More unlikely colors are to be seen in the sales-pitch part of the video that follows, but at least there's no acting, or whatever that's supposed to be. Yikes.
There's an odd retraction in the synthetic chemistry literature. A synthesis of the lundurine alkaloid core from the Martin group at Texas was published last year, and its centerpiece was a double-ring-closing olefin metathesis reaction. (Coincidentally, that reaction was one of the "Black Swan" examples in the paper I blogged about yesterday - the initial reports of it from the 1960s weren't appreciated by the synthetic organic community for many years).
Now the notice says that the paper is being retracted because that RCM reaction is "not reproducible". (The cynical among you will already be wondering when that became a criterion for retraction in the literature - if it works once, it's in, right?)
There are more details at The Heterocyclist, a blog by the well-known synthetic chemist Will Pearson that I've been remiss in not highlighting before now. While you're there, fans of the sorts of chemicals I write about in "Things I Won't Work With" might enjoy this post on the high explosive RDX, and the Michigan chemist (Werner Bachmann) who figured out how to synthesize it on scale during World War II.
What's a Black Swan Event in chemistry? Longtime industrial chemist Bill Nugent has a very interesting article in Angewandte Chemie with that theme, and it's well worth a look. He details several examples of things that all organic chemists thought they knew that turned out not to be so, and traces the counterexamples back to their first appearances in the literature. For example, the idea that gold (and gold complexes) were uninteresting catalysts:
I completed my graduate studies with Prof. Jay Kochi at Indiana University in 1976. Although research for my thesis focused on organomercury chemistry, there was an active program on organogold chemistry, and our perspective was typical for its time. Gold was regarded as a lethargic and overweight version of catalytically interesting copper. Moreover, in the presence of water, gold(I) complexes have a nasty tendency to disproportionate to gold(III) and colloidal gold(0). Gold, it was thought, could provide insight into the workings of copper catalysis but was simply too inert to serve as a useful catalyst itself. Yet, during the decade after I completed my Ph.D. in 1976 there were tantalizing hints in the literature that this was not the case.
One of these was a high-temperature rearrangement reported in 1976, and there was a 1983 report on gold-catalyzed oxidation of sulfides to sulfoxides. Neither of these got much attention, as Nugent's own chart of the literature on the subject shows. (I don't pay much attention when someone oxidizes a sulfide, myself). Apparently, though, a few people had reason to know that something was going on:
However, analytical chemists in the gold-mining industry have long harnessed the ability of gold to catalyze the oxidation of certain organic dyes as a means of assaying ore samples. At least one of these reports actually predates the (1983) Natile publication. Significantly, it could be shown that other precious metals do not catalyze the same reactions; the assays are specific for gold. It is safe to say that the synthetic community was not familiar with this report.
I'll bet not. It wasn't until 1998 that a paper appeared that really got people interested, and you can see the effect on that chart. Nugent has a number of other similar examples of chemistry that appeared years before its potential was recognized. Pd-catalyzed C-N bond formation, monodentate asymmetric hydrogenation catalysts, the use of olefin metathesis in organic synthesis, non-aqueous enzyme chemistry, and many others.
The phrase “Black Swan event” comes from the writings of the statistician and philosopher Nassim Nicholas Taleb. The term derives from a Latin metaphor that for many centuries simply meant something that does not exist. But also implicit in the phrase is the vulnerability of any system of thought to conflicting data. The phrase's underlying logic could be undone by the observation of a single black swan.
In 1697, the Dutch explorer Willem de Vlamingh discovered black swans on the Swan River in Western Australia. Not surprisingly, the phrase underwent a metamorphosis and came to mean a perceived impossibility that might later be disproven. It is in this sense that Taleb employs it. In his view: “What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct an explanation for its occurrence after the fact, making it explainable and predictable.”
Taleb has documented this last point about human nature through historical and psychological evidence. His ideas remain controversial but seem to make a great deal of sense when one attempts to understand the lengthy interludes between the literature antecedents and the disruptive breakthroughs shown. . .At the very least, his ideas represent a heads up as to how we read and mentally process the chemical literature.
I have no doubt that unwarranted assumptions persist in the conventional wisdom of organic synthesis. (Indeed, to believe otherwise would suggest that disruptive breakthroughs will no longer occur in the future.) The goal, it would seem, is to recognize such assumptions for what they are and to minimize the time lag between the appearance of Black Swans and the breakthroughs that follow.
One difference between Nugent's examples and Taleb's is the "extreme impact" part. I think that Taleb has in mind events in the financial industry like the real estate collapse of 2007-2008 (recommended reading here), or the currency events that led to the wipeout of Long-Term Capital Management in 1998. The scientific literature works differently. As this paper shows, big events in organic chemistry don't come on as sudden, unexpected waves that sweep everything before them. Our swans are mute. They slip into the water so quietly that no one notices them for years, and they're often small enough that people mistake them for some other bird entirely. Thus the time lag.
How to shorten that? It'll be hard, because a lot of the dark-colored birds you see in the scientific literature aren't amazing black swans; they're crows and grackles. (And closer inspection shows that some of them are engaged in such unusual swan-like behavior because they're floating inertly on their sides). The sheer size of the literature now is another problem - interesting outliers are carried along in a flood tide of stuff that's not quite so interesting. (This paper mentions that very problem, along with a recommendation to still try to browse the literature - rather than only doing targeted searches - because otherwise you'll never see any oddities at all).
Then there's the way that we deal with such things even when we do encounter them. Nugent's recommendation is to think hard about whether you really know as much as you think you do when you try to rationalize away some odd report. (And rationalizing them away is the usual response). The conventional wisdom may not be as solid as it appears; you can probably put your foot through it in numerous places with a well-aimed kick. As the paper puts it: "Ultimately, the fact that something has never been done is the flimsiest of evidence that it cannot be done."
That's worth thinking about in terms of medicinal chemistry, as well as organic synthesis. Look, for example, at Rule-Of-Five type criteria. We've had a lot of discussions about these around here (those links are just some of the more recent ones), and I'll freely admit that I've been more in the camp that says "Time and money are fleeting, bias your work towards friendly chemical space". But it's for sure that there are compounds that break all kinds of rules and still work. Maybe more time and money should go into figuring out what it is about those drugs, and whether there are any general lessons we can learn about how to break the rules wisely. It's not that work in this area hasn't been done, but we still have a poor understanding of what's going on.
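For readers who haven't seen them spelled out, the Rule-of-Five criteria are simple enough to sketch in a few lines of Python. This is an illustrative toy, not a real filtering pipeline, and the descriptor values in the examples are invented for illustration:

```python
# Minimal Rule-of-Five sketch over precomputed descriptors.
# (Illustrative only; the example property values below are invented.)

def passes_rule_of_five(mw, logp, h_donors, h_acceptors):
    """Return True if at most one Lipinski criterion is violated
    (Lipinski's original formulation tolerates a single violation)."""
    violations = sum([
        mw > 500,          # molecular weight over 500 Da
        logp > 5,          # calculated logP over 5
        h_donors > 5,      # more than 5 hydrogen-bond donors
        h_acceptors > 10,  # more than 10 hydrogen-bond acceptors
    ])
    return violations <= 1

# Hypothetical descriptor sets (MW, logP, donors, acceptors):
print(passes_rule_of_five(350.4, 2.1, 2, 5))   # friendly chemical space -> True
print(passes_rule_of_five(812.0, 6.3, 6, 14))  # rule-breaker territory -> False
```

The interesting drugs, of course, are the ones where that second call returns False and the compound works anyway.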
Over at Chemistry Blog, there's a post by Quintus on the synthesis of a complex natural product, FR-182877. The route is interesting in that it features a key Diels-Alder reaction, and the post mentions that this isn't a reaction that gets used much in industry.
True enough - that one and the Claisen rearrangement are the first reactions I think of in the category of "taught in every organic chemistry course, haven't run one in years". In the case of the Claisen, the number of years is now getting up to. . .hmm, about 26, I think. The Diels-Alder has shown up a bit more often for me, and someone in my lab was running one last year, but it was the first time she'd ever done it (after many years of drug discovery experience).
Why is that? The post I linked to suggested a good reason that one isn't done too often on scale: it can be unpredictably exothermic, and some of the reactants can decide to polymerize instead, which you don't want, either. That can be very exothermic, too, and leaves you with a reactor full of useless plastic gunk which will have to be removed with tools ranging from a scoop to a saw. This is a good time to adduce the benefits of flow chemistry, which has been successfully applied in such cases, and is worth thinking about any time you have a batch reaction that might take off on you.
But to scale something up, you need to have an interest in that structure to start with. There's another reason that you don't see so many Diels-Alders in drug synthesis, and it has to do with the sorts of molecules we tend to make. The cycloaddition gives you a three-dimensional structure with stereocenters, and medicinal chemistry, notoriously, tends to favor flat aromatic rings, sometimes very much to its detriment. Many drug discovery departments have taken the pledge over the years to try to cut back on the flatness and introduce more sp3 carbons, but it doesn't always take. (For one thing, if your leads are coming out of your screening collection, odds are you'll be starting with something on the flat end of the scale, because that's what your past projects filled the files with).
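The usual way to put a number on that flatness is the fraction of sp3 carbons (Fsp3): sp3-hybridized carbons divided by total carbon count. A toy sketch, with hand-written atom lists rather than anything parsed from real structures:

```python
# Toy Fsp3 calculation: sp3 carbons / total carbons.
# Atom lists are hand-written illustrations, not parsed structures.

def fsp3(atoms):
    """atoms: list of (element, hybridization) tuples."""
    carbons = [hyb for elem, hyb in atoms if elem == "C"]
    if not carbons:
        return 0.0
    return sum(1 for hyb in carbons if hyb == "sp3") / len(carbons)

benzene = [("C", "sp2")] * 6                            # flat aromatic ring
cyclohexene = [("C", "sp3")] * 4 + [("C", "sp2")] * 2   # a Diels-Alder-type product

print(fsp3(benzene))      # 0.0
print(fsp3(cyclohexene))  # ~0.67
```

A single Diels-Alder takes you from the benzene end of that scale a good way toward the saturated end, which is exactly the three-dimensionality that screening collections tend to lack.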
I think that fragment-based drug discovery has a better chance of giving you 3-D leads, but only if you pay attention while you're working on it. Those hits can sometimes be prosecuted in the flat-and-aryl style, too, if you insist. And I think it's fair to say that a lot of fragment hits have an aryl (especially a heteroaryl) ring in them, which might reflect the ease of assembling a fragment-sized library of compounds full of such. Even the fragment folks have been talking over the years about the need to get more three-dimensionality into the collections, and vendors have been pitching this as a feature of their offerings.
The other rap on the classic Diels-Alder reaction is that it gives you substituted cyclohexanes, which aren't always the first place you look for drug leads. But the hetero-Diels-Alder reactions can give you a lot of interesting compounds that look more drug-like, and I think that they deserve more play than they get in this business. I'll go ahead and take a public pledge to run a series of them before the year is out!
I didn't even know that you could make those things - no doubt someone will be inspired to try the three-boron version next. Diarylmethanes aren't the most preferred drug structures in the world (that carbon is just waiting to be oxidized), but I can't say that I've always avoided them on those grounds. I was on a project where we made a whole series of the things, actually - didn't work out so well for the intended target, but the compounds went on to hit in a completely different assay, so the company did probably get its money's worth.
This paper from GlaxoSmithKline uses a technology that I find very interesting, but it's one that I still have many questions about. It's applied in this case to ADAMTS-5, a metalloprotease enzyme, but I'm not going to talk about the target at all, but rather, the techniques used to screen it. The paper's acronym for it is ELT, Encoded Library Technology, but that "E" could just as well stand for "Enormous".
That's because they screened a four billion member library against the enzyme. That is many times the number of discrete chemical species that have been described in the entire scientific literature, in case you're wondering. This is done, as some of you may have already guessed, by DNA encoding. There's really no other way; no one has a multibillion-member library formatted in screening plates and ready to go.
So what's DNA encoding? What you do, roughly, is produce a combinatorial diversity set of compounds while they're attached to a length of DNA. Each synthetic step along the way is marked by adding another DNA sequence to the tag, so (in theory) every compound in the collection ends up with a unique oligonucleotide "bar code" attached to it. You screen this collection, narrow down on which compound (or compounds) are hits, and then use PCR and sequencing to figure out what their structures must have been.
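The bookkeeping logic here is simple enough to sketch. This toy does not reflect GSK's actual ELT implementation (the codons, building-block names, and tag lengths are all invented), but it shows the idea: each building block at each synthesis step gets a short DNA codon, the codons concatenate in order, and sequencing the final tag recovers the synthetic history:

```python
# Toy DNA-encoding sketch (invented codons and building blocks;
# not GSK's actual ELT scheme). One codon table per synthesis step.

STEP_CODES = [
    {"AAGT": "acid_01", "CCTA": "acid_02"},       # step 1 building blocks
    {"GGAC": "amine_01", "TTCG": "amine_02"},     # step 2 building blocks
    {"ACAC": "triazine_A", "GTGT": "triazine_B"}, # step 3 building blocks
]
CODON_LEN = 4

def encode(blocks):
    """Turn an ordered list of building-block names into a DNA tag."""
    tag = ""
    for step, name in enumerate(blocks):
        codon = next(c for c, n in STEP_CODES[step].items() if n == name)
        tag += codon
    return tag

def decode(tag):
    """Recover the synthetic history from a sequenced tag."""
    return [STEP_CODES[step][tag[step * CODON_LEN:(step + 1) * CODON_LEN]]
            for step in range(len(STEP_CODES))]

tag = encode(["acid_02", "amine_01", "triazine_B"])
print(tag)          # CCTAGGACGTGT
print(decode(tag))  # ['acid_02', 'amine_01', 'triazine_B']
```

The real versions deal with sequencing errors, degenerate reads, and enormous scale, but the round trip from compound history to tag and back is the heart of it.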
As you can see, the only way this can work is through the magic of molecular biology. There are so many enzymatic methods for manipulating DNA sequences, and they work so well compared with standard organic chemistry, that ridiculously small amounts of DNA can be detected, amplified, sequenced, and worked with. And that's what lets you make a billion member library; none of the components can be present in very much quantity (!)
This particular library comes off of a 1,3,5-triazine, which is not exactly the most cutting-edge chemical scaffold out there (I well recall people making collections of such things back in about 1992). But here's where one of the big questions comes up: what if you have four billion of the things? What sort of low hit rate can you not overcome by that kind of brute force? My thought whenever I see these gigantic encoded libraries is that the whole field might as well be called "Return of Combichem: This Time It Works", and that's what I'd like to know: does it?
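It's worth appreciating how quickly the combinatorics get you to those numbers. With the usual caveat that these are back-of-the-envelope figures and not the paper's actual building-block counts, a three-position scaffold like the triazine gets to four billion with only about 1600 building blocks per position:

```python
# Back-of-the-envelope library combinatorics (illustrative numbers,
# not the paper's actual building-block counts).

blocks_per_position = 1600
positions = 3
library_size = blocks_per_position ** positions
print(f"{library_size:,}")  # 4,096,000,000
```

That's the brute-force proposition in a nutshell: a few thousand reliable reactions per position, multiplied out, and you've screened more compounds than the entire literature describes.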
There are other questions. I've always wondered about the behavior of these tagged molecules in screening assays, since I picture the organic molecule itself as about the size of a window air conditioner poking out from the side of a two-story house of DNA. It seems strange to me that these beasts can interact with protein targets in ways that can be reliably reproduced once the huge wad of DNA is no longer present, but I've been assured by several people that this is indeed the case.
In this example, two particular lineages of compounds stood out as hits, which makes you much happier than a collection of random singletons. When the team prepared a selection of these as off-DNA "real organic compounds", many of them were indeed nanomolar hits, although a few dropped out. Interestingly, none of the compounds had the sorts of zinc-binding groups that you'd expect against the metalloprotease target. The rest of the paper is a more traditional SAR exploration of these, leading to what one has to infer are more tool/target validation compounds rather than drug candidates per se.
I know that GSK has been doing this sort of thing for a while, and from the looks of it, this work itself was done a while ago. For one thing, it's in J. Med. Chem., which is not where anything hot off the lab bench appears. For another, several of the authors of the paper appear with "Present Address" footnotes, so there has been time for a number of people on this project to have moved on completely. And that brings up the last set of questions, for now: has this been a worthwhile effort for GSK? Are they still doing it? Are we just seeing the tip of a large and interesting iceberg, or are we seeing the best that they've been able to do? That's the drug industry for you; you never know how many cards have been turned over, or why.
The controversy I wrote about last week, about whether (some) enzymes work by using extremely fast movements (rather than by putting things into their place and letting them do their thing) may remind some folks of the supposed medieval arguments about angels dancing on the heads of pins. But it also reminds me a bit of some other arguments in organic chemistry over the years. The horrible prototype is, of course, the norbornyl cation.
There was a time when people would simply leave the room when that topic came up, because they knew that they were in for another round of fruitless wrangling. Was its structure that of two rapidly interconverting standard carbocations, or a single bridged "non-classical" one that broke the previously accepted rules? George Olah and H. C. Brown, Nobel laureates both, were on opposite sides of that one, but every physical organic chemist from about 1950 to about 1980 probably had to take a stand one way or the other. (It is commonly accepted that Olah's side won, but the arguments got pretty esoteric by the end.) Update: the battle was first joined by Saul Winstein, who did not live to see his proposal vindicated by Olah's spectroscopic studies.
Another one, which came along a few years later, was the "synchronous / asynchronous" mechanism of the Diels-Alder reaction. Do the new bonds in that one form at the same time, or does one form, and then the other? That one involved the physical organic people again, as well as plenty of computational chemists. I stopped following the debate after a while, but I believe that the final reckoning was that most standard Diels-Alder reactions were synchronous, within the limits of detection, but that messing with the electron density of the two reactants could easily push the reaction into asynchronous (or flat-out stepwise) territory.
So why does this level of detail matter? The problem is, chemistry is all about things like bond formation and bond breaking, and about interactions between individual molecules (and parts of molecules) that change the energies of the systems involved. And those things are nothing but picky details, all the way down. Thermodynamics, which runs chemical reactions and runs the rest of the universe, is the most rigorous branch of accounting there is. Totaling up those energies to see which side of the ledger wins out can easily come down to the fate of single water molecules, or even single protons, and you don't get much pickier than that.
This sort of thing is one argument used against the feasibility of molecular nanotechnology. How are we to harness such fine distinctions, at such levels? But it's worth remembering that we ourselves, and every other living creature, are nanotech machines at heart. Our enzymes are constantly breaking bonds, twisting single molecules, altering reaction rates, and generating specific, defined molecular products. If they weren't, we'd fall right over. We eventually fall over anyway, because none of these machines work perfectly. But they work pretty well, and they make our own chemical efforts look like stone axes and deer-bone hammers.
So we may find getting down to this level of things to be a lot of work, and hard to understand, and frustrating to deal with. But that's where we're going to have to be if we're ever going to do real chemistry, the kind that's indistinguishable from magic.
How do enzymes work? People have been trying to answer that, in detail, for decades. There's no point in trying to do it without running down all those details, either, because we already know the broad picture: enzymes work by bringing reactive groups together under extremely favorable conditions so that reaction rates speed up tremendously. Great! But how do they bring those things together, how does their reactivity change, and what kinds of favorable conditions are we talking about here?
And some of this we know, too. You can see, in many enzyme active sites, that the protein is stabilizing the transition state of the reaction, lowering its energy so it's easier to jump over the hump to product. It wouldn't surprise me to see the energies of some starting materials being raised to effect that same barrier-lowering, although I don't know of any examples of that off the top of my head. But even this level of detail raises still more questions: what interactions are these that lower and raise these energies? How much of a price is paid, thermodynamically, to do these things, and how does that break out into entropic and enthalpic terms?
Some of those answers are known, to some degree, in some systems. But still more questions remain. One of the big ones has been the degree to which protein motion contributes to enzyme action. Now, we can see some big conformational changes taking place with some proteins, but what about the normal background motions? Intellectually, it makes sense that enzymes would have learned, over the millennia, to take advantage of this, since it's for sure that their structures are always vibrating. But proving that is another thing entirely.
Modern spectroscopy may have done the trick. This new paper from groups at Manchester and Oxford reports painstaking studies on B-12 dependent ethanolamine ammonia lyase. Not an enzyme I'd ever heard of, that one, but "enzymes I've never heard of" is a rather roomy category. It's an interesting one, though, partly because it goes through a free radical mechanism, and partly because it manages to speed things up by about a trillion-fold over the plain solution rate.
Just how it does that has been a mystery. There's no sign of any major enzyme conformational change as the substrate binds, for one thing. But using stopped-flow techniques with IR spectroscopy, as well as ultrafast time-resolved IR, there seem to be structural changes going on at the time scale of the actual reaction. It's hard to see this stuff, but it appears to be there - so what is it? Isotopic labeling experiments seem to say that these IR peaks represent a change in the protein, not the B12 cofactor. (There are plenty of cofactor changes going on, too, and teasing these new peaks out of all that signal was no small feat).
So this could be evidence for protein motion being important right at the enzymatic reaction itself. But I should point out that not everyone's buying that. Nature Chemistry had two back-to-back articles earlier this year, the first advocating this idea, and the second shooting it down. The case against this proposal - which would modify transition-state theory as it's usually understood - is that there can be a number of conformations with different reactivities, some of which take advantage of quantum-mechanical tunneling effects, but all of which perform "traditional" transition-state chemistry, each in their own way. Invoking fast motions (on the femtosecond time scale) to explain things is, in this view, a layer of complexity too far.
I realize that all this can sound pretty esoteric - it does even to full-time chemists, and if you're not a chemist, you probably stopped reading quite a while ago. But we really do need to figure out exactly how enzymes do their jobs, because we'd like to be able to do the same thing. Enzymatic reactions are, in most cases, so vastly superior to our own ways of doing chemistry that learning to make them to order would revolutionize things in several fields at once. We know this chemistry can be done - we see it happen, and the fact that we're alive and walking around depends on it - but we can't do it ourselves. Yet.
After mentioning the natural product Shootmenowicene yesterday, I note that See Arr Oh is reporting that the total synthesis of this compound is now down to only 47 steps. I think the purity could be improved with a prep GC of one of the early intermediates (or perhaps a spinning band distillation), but that's about all his synthesis is missing. . .
Here are two papers in Angewandte Chemie on "rewiring" synthetic chemistry. Bartosz Grzybowski and co-workers at Northwestern have been modeling the landscape of synthetic organic chemistry for some time now, looking at how various reactions and families of reactions are connected. Now they're trying to use that information to design (and redesign) synthetic sequences.
This is a graph theory problem, a rather large graph theory problem, if you assign chemical structures to nodes and transformations to the edges connecting them. And it quickly turns into one that is rather computationally demanding, as are all these "find the shortest path" types, but that doesn't mean that you can't run through a lot of possibilities and find a lot of things that you couldn't by eyeballing things. That's especially true when you add in the price and availability of the starting materials, as the second paper linked above does. If you're a total synthetic chemist, and you didn't feel at least a tiny chill running down your back, you probably need to think about the implications of all this again. People have been trying to automate synthetic chemistry planning since the days of E. J. Corey's LHASA program, but we're getting closer to the real deal here:
We first consider the optimization of syntheses leading to one specified target molecule. In this case, possible syntheses are examined using a recursive algorithm that back-propagates on the network starting from the target. At the first backward step, the algorithm examines all reactions leading to the target and calculates the minimum cost (given by the cost function discussed above) associated with each of them. This calculation, in turn, depends on the minimum costs of the associated reactants that may be purchased or synthesized. In this way, the cost calculation continues recursively, moving backward from the target until a critical search depth is reached (for algorithm details, see the Supporting Information, Section 2.3). Provided each branch of the synthesis is independent of the others (good approximation for individual targets, not for multiple targets), this algorithm rapidly identifies the synthetic plan which minimizes the cost criterion.
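The recursion the excerpt describes can be sketched in miniature: work backward from the target, and for each known reaction take the reaction cost plus the cheapest way (purchase or synthesis) to get each reactant. The molecules, reactions, and prices below are all made up for illustration, and the real algorithm operates on a network of millions of reactions with a far more elaborate cost function:

```python
# Toy recursive back-propagation over a reaction network (all data invented).
# Cost of a molecule = min(purchase price, best reaction cost + reactant costs).

PURCHASE = {"A": 5, "B": 2, "C": 8, "D": 1}   # buyable starting materials

# product -> list of (reactant_list, reaction_cost)
REACTIONS = {
    "X": [(["A", "B"], 3), (["C"], 1)],
    "TARGET": [(["X", "D"], 4), (["C", "C"], 10)],
}

def min_cost(mol, depth=5):
    """Cheapest way to obtain mol, searching back up to `depth` steps."""
    best = PURCHASE.get(mol, float("inf"))
    if depth == 0:
        return best
    for reactants, rxn_cost in REACTIONS.get(mol, []):
        cost = rxn_cost + sum(min_cost(r, depth - 1) for r in reactants)
        best = min(best, cost)
    return best

print(min_cost("X"))       # 9  (via C, cheaper than A + B)
print(min_cost("TARGET"))  # 14 (X from C, plus D)
```

As the paper notes, this only works cleanly when the branches of the synthesis are independent; shared intermediates across multiple targets make the optimization considerably harder.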
That said, how well does all this work so far? Grzybowski owns a chemical company (ProChimia), so this work examined 51 of its products to see if they could be made easily and/or more cheaply. And it looks like this optimization worked, partly by identifying new routes and partly by sending more of the syntheses through shared starting materials and intermediates. The company seems to have implemented many of the suggestions.
The other paper linked in the first paragraph is a similar exercise, but this time looking for one-pot reaction sequences. They've added filters for chemical compatibility of functional groups, reagents, and solvents (miscibility, oxidizing versus reducing conditions, sensitivity to water, acid/base reactions, hydride reagents versus protic conditions, and so on). The program tries to get around these problems, when possible, by changing the order of addition, and can also evaluate its suggestions versus the cost and commercial availability of the reagents involved.
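For the curious, the skeleton of such a filter is straightforward, even if the real rule set is not. In this sketch the incompatibility pairs are my own illustrative examples, not the paper's actual rules: a two-step sequence is flagged as a one-pot candidate only if no condition from the first step clashes with a condition from the second:

```python
# Toy functional-group/condition compatibility filter for one-pot sequences.
# (Incompatibility pairs are illustrative, not the paper's rule set.)

INCOMPATIBLE = {
    frozenset({"aqueous", "water_sensitive"}),
    frozenset({"oxidizing", "reducing"}),
    frozenset({"protic_solvent", "hydride_reagent"}),
}

def one_pot_compatible(step1_conditions, step2_conditions):
    """True if no condition pair across the two steps is flagged."""
    for c1 in step1_conditions:
        for c2 in step2_conditions:
            if frozenset({c1, c2}) in INCOMPATIBLE:
                return False
    return True

print(one_pot_compatible({"aqueous"}, {"oxidizing"}))               # True
print(one_pot_compatible({"protic_solvent"}, {"hydride_reagent"}))  # False
```

The published version layers on solvent miscibility, order-of-addition workarounds, and cost data, but the basic move is the same: prune the 1.8 billion candidate sequences down to the chemically sane ones before anyone touches a flask.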
Of course, the true value of any theoretical–chemical algorithm is in experimental validation. In principle, the method can be tested to identify one-pot reactions from among any of the possible 1.8 billion two-step sequences present within the NOC (Network of Organic Chemistry). While our algorithm has already identified over a million (and counting!) possible sequences, such randomly chosen reactions might be of no real-world interest, and so herein we chose to illustrate the performance of the method by “wiring” reaction sequences within classes of compounds that are of popular interest and/or practical importance.
They show a range of reaction sequences involving substituted quinolines and thiophenes, with many combinations of halogenation/amine displacement/Suzuki/Sonogashira reactions. None of these are particularly surprising, but it would have been quite tedious to work out all the possibilities by hand. Looking over the yields (given in the Supporting Information), it appears that in almost every case the one-pot sequences identified by the program are equal to or better than the stepwise yields (sometimes by substantial margins). It doesn't always work, though:
Having discussed the success cases, it is important to outline the pitfalls of the method. While our algorithm has so far generated over a million structurally diverse one-pot sequences, it is clearly impossible to validate all of them experimentally. Instead, we estimated the likelihood of false-positive predictions by closely inspecting about 500 predicted sequences and cross-checking them against the original research describing the constituent/individual reactions. In few percent of cases, the predicted sequences turned out to be unfeasible because the underlying chemical databases did not report, or reported incorrectly, the key reagents or reaction conditions present in the original reports. This result underscores the need for faithful translation of the literature data into chemical database content. A much less frequent source of errors (only few cases we encountered so far) is the algorithm's incomplete “knowledge” of the mechanistic details of the reactions to be wired. One illustrative example is included in the Supporting Information, Section 5, where a predicted sequence failed experimentally because of an unforeseen transformation of Lawesson's reagent into species reactive toward one of the intermediates. We recognize that there is an ongoing need to improve the filters/rules that our algorithm uses; the goal is that such improvements will ultimately render the algorithm on a par with the detailed synthetic knowledge of experienced organic chemists. . .
And you know, I don't see any reason at all why that can't happen, or why it won't. It might be this program, or one of its later versions, or someone else's software entirely, but I truly don't see how this technology can fail. Depending on the speed with which that happens, it could transform the way that synthetic chemistry is done. The software is only going to get better - every failed sequence adds to its abilities to avoid that sort of thing next time; every successful one gets a star next to it in the lookup table. Crappy reactions from the literature that don't actually work will get weeded out. The more it gets used, the more useful it becomes. Even if these papers are presenting the rosiest picture possible, I still think that we're looking at the future here.
Put all this together with the automated random-reaction-discovery work that I've blogged about, and you can picture a very different world, where reactions get discovered, validated, and entered into the synthetic armamentarium with less and less human input. You may not like that world very much - I'm not sure what I think about it myself - but it's looking more and more likely to be the world we find ourselves in.
Now here's one of those structures that you don't see very often in a drug molecule. It wasn't intended to be a drug, though - it's a photolabel tool compound based on the general anesthetic mephobarbital, which is what that trifluoromethyldiazirine group is doing in there. (When those are exposed to light, nitrogen gas takes off, leaving behind a reactive carbene that generally attacks something nearby as quickly as possible).
But when the two enantiomers were tested, it turned out that one of them is about as potent as the best compounds in its class, while the other (the R enantiomer) is ten-fold better. And when used for its intended purpose, as a photolabeling agent, it does show up stuck to specific sites on human GABA receptors, as hoped. So this should provide some interesting information about barbiturate binding, although I sort of doubt that anyone's going to try to develop it into a general anesthetic all on its own.
In a related topic, note that the model for this series, mephobarbital itself, is disappearing from the market. It's one of those ancient compounds that never really went through the modern regulatory process, but the FDA has stated that it's not going to let it be grandfathered in. Its manufacturer, Lundbeck, said earlier this year (PDF) that it saw no path forward other than a completely new NDA filing, which didn't seem feasible, so it was abandoning the product. Existing stocks have expired by now, so mephobarbital is no more, at least in the US.
Looks like my "Things I Won't Work With" series (and John Clark's book "Ignition") has inspired science fiction author Charles Stross - check out this story, and prepare to see several compounds that you never expected to see mixed together (!)
I wrote here about the Cronin lab at Glasgow and their work on using 3-D printing technology to make small chemical reactors. Now there's an article on this research in the Observer that's getting some press attention (several people have e-mailed it to me). Unfortunately, the headline gets across the tone of the whole piece: "The 'Chemputer' That Could Print Out Any Drug".
To be fair, this was a team effort. As the reporter notes, Prof. Cronin "has a gift for extrapolation", and that seems to be a fair statement. I think that such gifts have to be watched carefully in the presence of journalists, though. The whole story is a mixture of wonderful-things-coming-soon! and still-early-days-lots-of-work-to-be-done, and these two ingredients keep trying to separate and form different layers:
So far Cronin's lab has been creating quite straightforward reaction chambers, and simple three-step sequences of reactions to "print" inorganic molecules. The next stage, also successfully demonstrated, and where things start to get interesting, is the ability to "print" catalysts into the walls of the reactionware. Much further down the line – Cronin has a gift for extrapolation – he envisages far more complex reactor environments, which would enable chemistry to be done "in the presence of a liver cell that has cancer, or a newly identified superbug", with all the implications that might have for drug research.
In the shorter term, his team is looking at ways in which relatively simple drugs – ibuprofen is the example they are using – might be successfully produced in their 3D printer or portable "chemputer". If that principle can be established, then the possibilities suddenly seem endless. "Imagine your printer like a refrigerator that is full of all the ingredients you might require to make any dish in Jamie Oliver's new book," Cronin says. "Jamie has made all those recipes in his own kitchen and validated them. If you apply that idea to making drugs, you have all your ingredients and you follow a recipe that a drug company gives you. They will have validated that recipe in their lab. And when you have downloaded it and enabled the printer to read the software it will work. The value is in the recipe, not in the manufacture. It is an app, essentially."
What would this mean? Well for a start it would potentially democratise complex chemistry, and allow drugs not only to be distributed anywhere in the world but created at the point of need. It could reverse the trend, Cronin suggests, for ineffective counterfeit drugs (often anti-malarials or anti-retrovirals) that have flooded some markets in the developing world, by offering a cheap medicine-making platform that could validate a drug made according to the pharmaceutical company's "software". Crucially, it would potentially enable a greater range of drugs to be produced. "There are loads of drugs out there that aren't available," Cronin says, "because the population that needs them is not big enough, or not rich enough. This model changes that economy of scale; it could make any drug cost-effective."
Not surprisingly Cronin is excited by these prospects, though he continually adds the caveat that they are still essentially at the "science fiction" stage of this process. . .
Unfortunately, "science fiction" isn't necessarily a "stage" in some implied process. Sometimes things just stay fictional. Cronin's ideas are not crazy, but there are a lot of details between here and there, and if you don't know much organic chemistry (as many of the readers of the original article won't), then you probably won't realize how much work remains to be done. Here's just a bit; many readers of this blog will have thought of these and more:
First, you have to get a process worked out for each of these compounds, which will require quite a bit of experimentation. Not all reagents and solvents are compatible with the silicone material that these microreactors are being fabricated from. Then you have to ask yourself, where do the reagents and raw materials come in? Printer cartridges full of acetic anhydride and the like? Is it better to have these shipped around and stored than it is to have the end product? In what form is the final drug produced? Does it drip out the end of the microreactor (and in what solvent?), or is it a smear on some solid matrix? Is it suitable for dosing? How do you know how much you've produced? How do you check purity from batch to batch - in other words, is there any way of knowing if something has gone wrong? What about medicines that need to be micronized, coated, or treated in the many other ways that pills are prepared for human use?
And those are just the practical considerations - some of them. Backing up to some of Prof. Cronin's earlier statements, what exactly are those "loads of drugs out there that aren't available because the population that needs them is not big enough, or not rich enough"? Those would be ones that haven't been discovered yet, because it's not like we in the industry have the shelves lined with compounds that work but that we aren't doing anything with, for some reason. (Lots of people seem to think that, though). Even if these microreactors turn out to be a good way to make compounds, though, making compounds has not been the rate-limiting step in discovering new drugs. I'd say that biological understanding is a bigger one, or, short of that, just having truly useful assays to find the compounds you really want.
Cronin has some speculations on that, too - he wonders about the possibility of having these microreactors in some sort of cellular or tissue environment, thus speeding up the whole synthesis/assay loop. That would be a good thing, but the number of steps that have to be filled in to get that to work is even larger than for the drug-manufacture-on-site idea. I think it's well worth working on - but I also think it's well worth keeping out of the newspapers just yet, too, until there's something more to report.
I'm pressed for time this morning, so I wanted to put up a quick link to Adam Feuerstein's thoughts on media embargoes of scientific results (and how they're becoming increasingly useless).
And I also wanted to note this odd bit of news: I'll bet you thought that fluorine, elemental gaseous fluorine, wasn't found in nature. Too reactive, right? But we're all wrong: it's found in tiny cavities in an unusually stinky mineral. And part (or all) of that smell is fluorine itself, which I'll bet that very few people have smelled in the lab. I hope not, anyway.
A lot of natural product structures have been misassigned over the years. In the old days, it was a wonder when you were able to assign a complex one at all. Structure determination, pre-NMR, could be an intellectual challenge at the highest level, something like trying to reconstruct a position on a chess board in the dark, based on acrostic clues in a language you don't speak. The advent of modern spectroscopy turned on the lights, which is definitely a good thing, but many people who'd made their careers under the old system missed the thrill of the old hunt when it was gone.
But even now, it's possible to get structures wrong - even with high-field 2-D NMR, even with X-ray crystallography. Natural products can be startlingly weird by the standards of human chemistry, and I still have a lot of sympathy for anyone who's figuring them out. My sympathy goes only so far, though.
Specifically, this case. I have to agree with the BRSM Blog, which says: "I have to say that I think I could have done a better job myself. Drunk." Think that's harsh? Check out the structures. The proposed structure had two naphthalenes, with two methoxys and four phenols. But the real natural product, as it turns out, has one methoxy and one phenol. And no naphthyls. And four flipping bromine atoms. Why the vengeful spirit of R. B. Woodward hasn't appeared, shooting lightning bolts and breaking Scotch bottles over people's heads, I just can't figure.
It's not anything to shake the earth, but I'm actually happy to see new variations being discovered for ancient reactions like the Friedel-Crafts. It makes sense that an activated amide could participate in the reaction, but it looks like no one's ever quite explored the idea like this.
And yes, I know that a large Friedel-Crafts can be a pain, what with all that aluminum gunk. The biggest one I ever ran used protic conditions (methanesulfonic acid), so I haven't had the complete experience, but I've still managed to work my way through some gooey aluminum milkshakes. But it's still a useful reaction on the bench scale.
And somehow, I can read a paper like this one and be pleased, while a paper on yet another way to dehydrate an oxime to a nitrile makes me roll my eyes. I'm still trying to work out why that might be - a bit broader scope? More possible utility? Just the fact that this was something that no one had quite thought of, as opposed to another way to take the same starting material to the same product? I should figure out what my boundaries are.
Those of you who are fans of high-throughput reaction discovery have another paper to check out - and those who aren't have another reason to grit their teeth. (Previous examples here, here, and here). The authors, a collaboration between the Bellomo lab at Penn and the Merck process group at Rahway, have gotten the reaction screen size down to 20 microliters with 1 mg of compound, which allows you to go through 96-well plates pretty rapidly.
Their test bed was a pyrimidone synthesis reaction. They screened 475 reaction conditions (95 different additives/catalysts in 5 different solvents). Each of them got one hour at 60 °C to show what it could do, and the entire analysis was completed in one day. A phenanthroline/copper bromide catalyst in dioxane showed the best results from that run, so it was taken to a separate series of experiments, to see how low the loading could go, what similar solvents might work out, how low they could push the reaction temperature, and so on. As it turned out, 2-methyl THF at RT overnight with 5% catalyst gave an 84% yield, which already represented a significant improvement over the known conditions (which used no catalyst at 140 °C).
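The bookkeeping for a screen like that is simple enough to sketch. This is purely illustrative - the additive names, the scoring function, and the "winning" combination below are all made up for the example; in the real screen each well's yield comes off an analytical instrument, not a formula:

```python
# Toy model of a 95-additive x 5-solvent condition screen (475 wells),
# picking the best-yielding combination. Yields here are fake stand-ins.
from itertools import product

additives = [f"additive_{i}" for i in range(1, 96)]   # 95 additives/catalysts
solvents = ["dioxane", "DMF", "MeCN", "toluene", "2-MeTHF"]

def run_well(additive, solvent):
    """Stand-in for an actual 20 uL, 1 mg reaction read out analytically."""
    # Arbitrary rule so one combination "wins", as the Cu/phenanthroline
    # in dioxane did in the actual screen.
    return 84.0 if (additive, solvent) == ("additive_17", "dioxane") else 10.0

results = {cond: run_well(*cond) for cond in product(additives, solvents)}
best_cond, best_yield = max(results.items(), key=lambda kv: kv[1])
print(len(results), best_cond, best_yield)
```

The point of doing it this way is that the grid is exhaustive: nothing depends on anyone's intuition about which additive "should" work, which is exactly how the surprises get found.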
Moving to different substrates, they found that these conditions gave product each time, but in varying yields. A further catalyst screen, with 112 phosphine ligands, gave another set of conditions that could also be applied to the diverse substrates. A re-screen of solvents together with the best phosphine gave (along with the initial optimization conditions) high-yielding reactions for each of the new substrates. There was no set of one-size-fits-all conditions, though, which certainly fits with organic synthesis as I know it.
With these in hand, they did some work on the mechanism. It appears that the copper is participating in a single-electron transfer reaction, but further details aren't clear. It's not the sort of thing that you would have been able to think your way to on a blackboard, which (to me) is the whole point of doing chemistry this way. As the authors put it:
We envision that similar, generally useful platform tools will soon become more widely available, thus dramatically impacting chemistry development and enabling increased access to chemical diversity and lower-cost synthesis. Most importantly, we believe that such platforms will lead to the discovery of new and potentially useful chemical reactivity and reaction mechanisms.
Exactly. We should be finding easier ways to make compounds, and new ways to make compounds we've never been able to prepare. I think that searching for them in this way is an efficient way to do that, and will also open up new areas of research as we stumble across things we never realized were even there. If there's a downside here, I'm not seeing it.
Here's an excellent article, with copious references, tracing the history of what we now know as the metal-catalyzed coupling field. Victor Snieckus of Queen's University, Thomas Colacot (Johnson Matthey) and co-authors go back to the Wurtz and Glaser reactions of the 1850s and 60s, up through the Ullmann reaction (1891, and still very much with us) and Kharasch and Cadiot-Chodkiewicz couplings (1940s) before breaking into the world of palladium with the Wacker oxidation.
Along the way, one learns that the discoverer of palladium (Wollaston) could never interest anyone in the metal, and almost all of it that he'd extracted was still sitting on the shelf, unsold, at his death. Time vindicated him, and how - it's now perhaps the most essential catalytic metal in the world. The late 1960s were a turning point:
Entry of Richard Heck: Following post-doctoral studies, Heck accepted a position at Hercules Powder Co where he was afforded freedom that is seldom experienced by the modern industrial chemist. Briefed with the task of “doing something with transition metals,” Heck investigated the chemistry of cobalt carbonyl complexes. Although this work generated many interesting observations, finding profitable applications for his research proved difficult. Inspired by his colleague Pat Henry's work on the Wacker oxidation, Heck's attention turned in the direction of arylpalladium chemistry.
He tried Wacker-type conditions with other reagents around to try to intercept the palladium intermediate, and organomercurials obliged with an immediate reaction. The story from there is a trip through a good swath of the periodic table, and the development of an awful lot of knowledge and expertise in metal complexes. Enter then Mizoroki, Kumada, Sonogashira, Negishi, Stille, Suzuki and many others. It's a long, complex story, but this paper should serve as the definitive overview, and an excellent look at how chemistry (and science in general) go about discovering and developing things.
I've written here before about reaction discovery schemes, and the reaction to those reactions has been, well, mixed. I like them, some other people like them, but some other people are quite offended by the "random search" mentality behind these ideas.
Well, prepare yourselves for another technology for exploring the wild blue yonder. A new paper in Angewandte Chemie from a group at the CEA (Gif sur Yvette, France) outlines an immunological detection scheme. They have antibodies to an imidazole derivative, and antibodies to a phenolic moiety as well. So both structures are attached to a range of functional groups and combined with heat and/or metal catalysts to see if anything happens. A sandwich assay at the end with the different antibodies gives you a yellow color only if a compound has been formed that has both ends present; that is, if a coupling reaction of some sort has occurred.
They ran 3360 reactions, each on a 100 nmol scale (there's the sensitivity of the antibodies for you). Two new reactions were discovered - an isourea synthesis (which can lead to benzoxazoles) and an alkyne reaction leading to thiazole derivatives. Neither of those is going to set the world of organic chemistry ablaze, but as a proof of concept, I'm convinced that this technique can work. So what do you do with it next?
One plan looks to be discovering new bioorthogonal reactions, couplings that can take place either inside or on the surface of living cells. The immunological detection is so sensitive that products can be teased out of all sorts of messy mixtures, apparently even cell lysates. I'd also encourage them to try some other conditions, such as various photochemical setups, to see what might be out there - it's a much less explored field than copper-catalyzed coupling reactions.
Like it or not, I think we're going to be seeing more of this sort of work. We might as well make the most of it!
Now these are the funkiest structures I've seen in quite a while. I won't spoil the surprise; if you're an organic chemist, go ahead and click on the link. This is one of those "No one's made compounds like this, so let's see if they do anything" papers, and I'd say that if you're going to do that sort of thing, you should go pretty far off the beaten path. That they have.
These compounds are - not surprisingly - said to be cytotoxic, with activity against a range of cancer cell lines. A couple of passes through the paper, and I haven't found any normal cells used as controls for all that cytotoxicity. Sad to say, the betting would be that there's no window at all. But at least I've seen a class of compounds that I'll bet has never made it into J. Med. Chem. before.
I'd be interested in hearing people's thoughts on this technology, from the Cronin group at the University of Glasgow. (Here's a press release, and a piece from Chemistry World if you can't get in to Nature Chemistry).
They're adapting 3-D printing technology to make small reaction vessels out of silicone polymer. The design of these can be changed to directly alter the mixing, timing, and stoichiometry of reactions, and they've also gone as far as incorporating palladium catalyst into the walls of the newly formed reactors, making them active for hydrogenation reactions.
I can see this eventually being useful for multistep flow chemistry, a micro-scale analog of the sorts of systems that Steve Ley's group has published on. Perhaps an array of identical vessels could be used in parallel for scale-up if the design is taking advantage of the small size of the chambers (again, as is done in industrial flow applications). The speed with which new doped polymeric materials could be prototyped seems to be a real advantage as well, which should allow experimentation with immobilized reagents and catalysts which would be incompatible with each other in solution. Other ideas?
We've all been hearing for a while about "virtual biotechs". The term usually refers to a company with only a handful of employees and no real laboratory space of its own. All the work is contracted out. That means that what's left back at the tiny headquarters (which in a couple of cases is as small as one person's spare bedroom) is the IP. What else could it be? There's hardly any physical property at all. It's as pure a split as you can get between intellectual property (ideas, skills, actual patents) and everything else. Here's a 2010 look at the field in San Diego, and here's a more recent look from Xconomy. (I last wrote about the topic here).
Obviously, this gets easier to do earlier in the whole drug development process, where less money is involved. That said, there are difficulties at both ends. A large number of these stories seem to involve people who were at a larger company when it ran out of money, but still had some projects worth looking at. The rest of the cases seem to come out of academia. In other words, the ideas themselves (the key part of the whole business) were generated somewhere with more infrastructure and funding. Trying to get one of these off the ground otherwise would be a real bootstrapping problem.
And at the other end of the process, getting something all the way through the clinic like this also seems unlikely. The usual end point is licensing out to someone with more resources, as this piece from Xconomy makes clear:
In the meantime, one biotech model gaining traction is the single asset, infrastructure-lite, development model, which deploys modest amounts of capital to develop a single compound to an early clinical data package which can be partnered with pharma. The asset resides within an LLC, and following the license transaction, the LLC is wound down and distributes the upfront, milestone and royalty payments to the LLC members on a pro rata basis. The key to success in this model is choosing the appropriate asset/indication – one where it is possible to get to a clinical data package on limited capital. This approach excludes many molecules and indications often favored by biotech, and tends to drive towards clinical studies using biomarkers – directly in line with one of pharma’s favored strategies.
This is a much different model, of course, than the "We're going to have an IPO and become our own drug company!" one. But the chances of that happening have been dwindling over the years, and the current funding environment makes it harder than ever, Verastem aside. It's even a rough environment to get acquired in. So licensing is the more common path, and (as this FierceBiotech story says), that's bound to have an effect on the composition of the industry. People aren't holding on to assets for as long as they used to, and they're trying to get by with as little of their own money as they can. Will we end up with a "field of fireflies" model, with dozens, hundreds of tiny companies flickering on and off? What will the business look like after another ten years of this - better, or worse?
Update: for the non-chemists in the audience who are wondering why one doesn't stroll in as advertised, check out what happens when you deal with the nastier end of fluorine chemistry. This new chemistry isn't anything like those examples - thank goodness - but it'll give you some idea of why we respect and fear the fluorine.
C&E News has an article on some of the recent fluorination methods that have been appearing in the literature. (Some of these have come up on this site here, here, and here).
These methods are all quite interesting (I've tried some of them out myself, with success), but what I also found interesting was the sociological angle that the article brought in. Organofluorine chemistry has not, over the years, been the sort of thing that one takes up lightly, for a lot of good reasons. Some of the real advances in the field have come from making it more accessible to more chemists. Very few people will use elemental fluorine other than at near-gunpoint, and some of the other classic reagents are still quite unfriendly, tending to leave chemists cursing and swearing never to touch them again.
But making the field more open makes it, well, more open. And some of the people who've been there a while aren't quite sure what to make of the newcomers. They don't always cite the literature in appropriate depth, which is a real concern, and there can be a general feeling that they haven't paid their fluorine dues. (But the whole point is to keep people from paying those in the first place).
Since I'm not having to make my reputation discovering fluorination conditions, though, I'm just happy to deal with the results of all this work, both from the hardy pioneers as well as from the flashy new immigrants. These are useful reactions, and the rest of us are glad to have 'em.
Courtesy of C&E News, here's an interesting look inside the Chinese labs of HEC Pharm, a company making APIs and generics. The facilities look good. I have to say, that's an awful lot of HPLC capacity, starting at 0:41.
The idea of company housing, though, is a bit harder to get used to. . .
Whoever's behind the Journal of Apocryphal Chemistry is trying to do everyone a good deed before we get into allergy season. After detailing the ever-more-stringent controls on the sale of pseudoephedrine, they propose a synthetic route based on a more readily available starting material: methamphetamine.
A quick search of several neighborhoods of the United States revealed that while pseudoephedrine is difficult to obtain, N-methylmethamphetamine can be procured at almost any time on short notice and in quantities sufficient for synthesis of useful amounts of the desired material. Moreover, according to government statistics, N-methylmethamphetamine is becoming an increasingly attractive starting material for pseudoephedrine, as the availability of N-methylmethamphetamine has remained high while prices have dropped and purity has increased. We present here a convenient series of transformations using reagents which can be found in most well stocked organic chemistry laboratories. . .
Their route, based on a 1985 paper in J. Chem. Soc. Chem. Comm., is not exactly trailer-park chemistry, though. (I note that they have the reference a bit wrong as well; there was no plain J. Chem. Soc. in 1985). It involves a chromium carbonyl complex of the aryl ring, formation of a chiral lithium dianion, and oxidation of that with MoOPH, which would give you pseudoephedrine after decomplexation. There's no way to tell if these reactions have actually been run, of course. Based on the literature precedent, it might work, although I'd be worried about maintaining the chirality of the dianion. (For what it's worth, the authors are also aware of this problem, and claim that the selectivity was unaffected).
Their larger point stands. I look forward to seeing more from this paper's authors, O. Hai and I. B. Hakkenshit. I see less interesting stuff in my RSS feed every day of the week.
Stuart Cantrill has a post on one of those vast dendrimer structures - you know, those mandala-like things that weigh as much as a beer truck. He says that if you can draw the structure on his page in ChemDraw (or the like) in under three hours, you are clearly a wonder-worker.
He's asking on his Twitter feed for examples of the worst chemical structure anyone's had to draw, so I thought I'd throw the same question out to the crowd. You're going to have had to have led an evil past life to be able to beat his dendrimer, though.
I don't know how many of you out there like to form azides, but if you do, you've probably used (or thought about using) imidazole-1-sulfonyl azide hydrochloride. This reagent appeared in Organic Letters a few years ago as a safe-to-handle shelf-stable azide transfer reagent, and seems to have found popularity. (I've used it myself).
So it was with some alarm that I noted this new paper on the stability and handling characteristics of the reagent. It's a collaboration between the University of Western Australia (where the reagent was developed, partly by the guy whose lab bench I took over in grad school back in 1983, Bob Stick), the University of British Columbia, and the Klapötke group at Munich. That last bunch is known to readers of "Things I Won't Work With", as experts in energetic materials, and when I saw that name I knew I'd better read the paper pronto.
As it turns out, the hydrochloride isn't quite as optimal as thought. It's impact-sensitive, for one thing, and not shelf-stable. The new paper mentions that it decomposes with an odor of hydrazoic acid on storage - you don't want odors of hydrazoic acid, believe me - and I thought while reading that, "Hmm. My bottle of the stuff is white crystalline powder; that's strange." But then I realized that I hadn't looked at my bottle for a few months. And as if by magic, there it was, turning dark and gooey. I had the irrational thought that the act of reading this paper had suddenly turned my reagent into hazardous waste, but no, it's been doing that slowly on its own.
So if you have some of this reagent around, take care. The latest work suggests that the hydrogen sulfate salt, and especially the tetrafluoroborate, are less sensitive and more stable alternatives to the hydrochloride, and I guess I'll have to make some at some point. (They also made the perchlorate - just for the sake of science, y'know - and report, to no one's surprise, that it "should not be prepared by those without expertise in handling energetic materials"). But it needs no ghost come from the grave to tell us this.
So, back to my lab and my waste-disposal problem! And here's a note on the literature. We have the original prep of the reagent, a follow-up note on stability problems, and this latest paper on alternatives. But when you go back to the original paper, there is no mention of the later hazard information. Shouldn't there be a note, a link, or something? Why isn't there? Anyone at Organic Letters or the ACS care to comment on that?
Update: I've successfully opened my bottle, with tongs and behind a blast shield, just to be on the safe side, and defanged the stuff by dilution.
Here's a YouTube look at a periodic table, laid out with high-quality samples of the real elements. I want one, although I'm willing to compromise on some of the radioactive items; completeness can be taken a bit too far.
Looking through the literature this morning, I thought about another technique that, although you see it published on, no organic chemist I know has ever actually used: electrochemistry. There are all sorts of odd reactions that can apparently be made to go at electrode surfaces, but what synthetic organic chemist has ever run one, besides someone in a group that concentrates on publishing papers on electrochemical reactions? Since a few inconclusive cyclic voltammetry scans in 1984, I sure haven't.
That's more harsh-sounding than I intended. I definitely don't think that the technique is useless, but it surely doesn't get used much. One problem is that there are so many different conditions - solvents, electrolytes, electrode materials, voltage/current regimens. If you've never done the stuff before, it's hard to know where to start. And that leads to the next problem, which is that so much of the equipment in the field has been home-made. That makes the activation barrier to trying it yourself that much higher: do you want to do this reaction enough to want to build your own apparatus and troubleshoot it? Or do you have something else to do? If someone sold a standard electrochemistry kit (controller box to run different conditions, set of different electrode materials, etc.), that would free some people up to find out what it could do for them, rather than wondering if they've built a decent setup.
Then there's the scale-up problem. When you're working at a surface to do your chemistry, that's always going to be a concern. What's the throughput? Enough to meet your needs? And if not, how exactly are you going to increase it, without having to rebuild the whole apparatus? There's probably a way to integrate flow chemistry with electrochemistry, which might solve that problem. But that mixture is, as yet, still in the realm of a few dedicated tinkerers - which is what one could say, sometimes, of the whole electrochemical field.
For a really stunning electron micrograph of the thinnest possible layer of glass, see here. (If you don't have journal access, here's a release with some details). What's even more striking is that the semi-random arrangement of atoms is basically an exact match of a hypothesis from 1932 by W. H. Zachariasen at Chicago.
And maybe it's just me, but high-resolution images of molecular structure like this still give me the shivers. I mean, I've seen all sorts of electron density maps from X-ray crystallography, but somehow this sort of thing gives one a more direct feeling of looking at the individual atoms. And for some reason, that seems like something Man Was Not Meant to Do - perhaps it's all those old elementary school textbooks that told me that atoms could never be seen. (Then again, philosopher Mortimer Adler made the same assumption, as I found to my surprise when I read his Ten Philosophical Mistakes, on page 184 if you're keeping score at home.)
Noted chem-blogger Milkshake seems to have had a close call with a fire started by a tiny potassium hydride residue. It looks like he made it through without serious injury, but that sort of thing will definitely shake a person up.
I hate potassium hydride. Its relative sodium hydride is a common reagent, but it's much tamer (and even so, can cause interesting fires - I knew someone who ignited a heap of it on the pan of a balance while he was weighing it out, which slowed things down a bit). Sodium hydride is usually sold as a 60% dispersion, a dark grey powder soaked with mineral oil to keep it from deteriorating too quickly (and to keep it from setting everything on fire). You can buy 95% sodium hydride, the dry stuff, and there are people who swear by it, but it tends to make me sweat. You never know if it's been stored properly; you may be adding a slug of sodium hydroxide to your reaction without knowing it. And there's the fire part. You'll want to move briskly if you're using the 95%, and I'd pick a day when the humidity is low.
But potassium hydride, that's another beast entirely. It makes the sodium compound look like corn meal, in terms of how forgiving it is. You can't get away with the clumpy oily powder form at all - traditionally, KH is sold as a gooey dispersion of grey powder sitting under a few inches of mineral oil. If it's well dispersed, it's supposed to be 35%. You shake the stuff up until you think it's evenly mixed, then pipet out the amount of gunk that corresponds to the KH contained therein. Sure you do. What actually happens is that you pipet out the stuff, noticing while you do that it's already settling out inside the pipet, the better to clog it up when you try to transfer it. No fun.
It's becoming available now dispersed in a block of wax, which is not such a bad idea at all. Wax isn't any harder to get out of your reaction than oil is, and you can carve off chunks and weigh them without so many what-am-I-doing moments. But Milkshake worries that this ease of use will lead to more fires during workups (which is where his reaction ran into trouble), and he may well be right. If you're going to use KH, don't let your guard down.
A new paper in Angewandte Chemie tries to open another front in relations between academic and drug industry chemists. It's from several authors at GSK-Stevenage, and it proposes something they're calling "Lead-Oriented Synthesis". So what's that?
Well, the paper itself starts out as a quick tutorial on the state and practice of medicinal chemistry. That's a good plan, since Angewandte Chemie is not primarily a med-chem journal (he said with a straight face). Actually, it has the opposite reputation, as a forum where high-end academic chemistry gets showcased. So the authors start off by reminding the readership of what drug discovery entails. And although we've had plenty of discussions around here about these topics, I think that most people can agree on the main points laid out:
1. Physical properties influence a drug's behavior.
2. Among those properties, logP may well be the most important single descriptor.
3. Most successful drugs have logP values between 1 and perhaps 4 or 5. Pushing the lipophilicity end of things is, generally speaking, asking for trouble.
4. Since optimization of lead compounds almost always adds molecular weight, and very frequently adds lipophilicity, lead compounds are better found in (and past) the low ends of these property ranges, to reduce the risk of making an unwieldy final compound.
As the authors take pains to say, though, there are many successful drugs that fall outside these ranges. But many of those turn out to have some special features - antibacterial compounds (for example) tend to be more polar outliers, for reasons that are still being debated. There is, though, no similar class of successful less-polar-than-usual drugs, to my knowledge. If you're starting a program against a target that you have no reason to think is an outlier, and assuming you want an oral drug for it, then your chances for success do seem to be higher within the known property ranges.
So, overall, the GSK folks maintain that lead compounds for drug discovery are most desirable with logP values between -1 and 3, molecular weights from around 200 to 350, and no problematic functional groups (redox-active and so on). And I have to agree; given the choice, that's where I'd like to start, too. So why are they telling all this to the readers of Angewandte Chemie? Because these aren't the sorts of compounds that academic chemists are interested in making.
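For concreteness, here's what those cutoffs look like written out as a filter. This is just an illustrative sketch: the property ranges are the ones above, but the function name and the example numbers are mine, and a real implementation would compute molecular weight and cLogP from actual structures with a cheminformatics toolkit rather than take them as arguments.

```python
def is_lead_like(mw, clogp, has_flagged_group=False):
    """Lead-likeness by the rough GSK criteria described above:
    molecular weight ~200-350, cLogP between -1 and 3, and no
    problematic (e.g. redox-active) functional groups."""
    return 200 <= mw <= 350 and -1 <= clogp <= 3 and not has_flagged_group

# Hypothetical compounds: (molecular weight, cLogP, flagged group?)
print(is_lead_like(280.3, 1.8))        # True: inside both property ranges
print(is_lead_like(455.5, 4.9))        # False: too big and too greasy
print(is_lead_like(240.2, 0.5, True))  # False: right size, bad functional group
```

The third case is the one that trips people up in practice: a compound can sit squarely in the property ranges and still be disqualified by a structural filter.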
For example, a survey of the 2009 issues of the Journal of Organic Chemistry found about 32,700 compounds indexed with the word "preparation" in Chemical Abstracts, after organometallics, isotopically labeled compounds, and commercially available ones were stripped out. 60% of those are outside the molecular weight criteria for lead-like compounds. Over half the remainder fail cLogP, and most of the remaining ones fail the internal GSK structural filters for problematic functional groups. Overall, only about 2% of the JOC compounds from that year would be called "lead-like". A similar analysis across seven other synthetic organic journals led to almost the same results.
Looking at array/library synthesis, as reported in the Journal of Combinatorial Chemistry and from inside GSK's own labs, the authors quantify something else that most chemists suspected: the more polar structures tend to drop out as the work goes on. This "cLogP drift" seems to be due to incompatible chemistries or difficulties in isolation and purification, and this could also illustrate why many new synthetic methods aren't applied in lead-like chemical space: they don't work as well there.
So that's what underlies the call for "lead-oriented synthesis". This paper is asking for the development of robust reactions which will work across a variety of structural types, will be tolerant of polar functionalities, and will generate compounds without such potentially problematic groups as Michael acceptors, nitros, and the like. That's not so easy, when you actually try to do it, and the hope is that it's enough of a challenge to attract people who are trying to develop new chemistry.
Just getting a high-profile paper of this sort out into the literature could help, because it's something to reference in (say) grant applications, to show that the proposed research is really filling a need. Academic chemists tend, broadly, to work on what will advance or maintain their positions and careers, and if coming up with new reactions of this kind can be seen as doing that, then people will step up and try it. And the converse applies, too, and how: if there's no perceived need for it, no one will bother. That's especially true when you're talking about making molecules that are smaller than the usual big-and-complex synthetic targets, and made via harder-than-it-looks chemistry.
Thoughts from the industrial end of things? I'd be happy to see more work like this being done, although I think it's going to take more than one paper like this to get it going. That said, the intersection with popular fragment-based drug design ideas, which are already having an effect in the purely academic world of diversity-oriented synthesis, might give an extra impetus to all this.
Most readers here will remember the fatal lab accident at UCLA in 2009 involving t-butyllithium, which took the life of graduate student Sheri Sangji. Well, there's a new sequel to that: the professor involved, Patrick Harran, has been charged along with UCLA with a felony: "willfully violating occupational health and safety standards". A warrant has been issued for his arrest; he plans to turn himself in when he returns from out of town. The University could face fines of up to $1.5 million per charge; Harran faces possible jail time.
This is the first time I've heard of such a case going to criminal prosecution, and I'm still not sure what I think about it. It's true that the lab was found to have several safety violations in an inspection before the accident - but, on the other hand, many working labs do, depending on what sort of standards are being applied. But it would also appear that Sangji herself was not properly prepared for handling t-butyllithium, which (as all organic chemists should know) bursts into flames spontaneously on exposure to air. She was wearing flammable clothing and no lab coat; no one should be allowed to start working with t-BuLi under those conditions. Being inexperienced, she should have been warned much more thoroughly than she appears to have been.
So something most definitely went wrong here, and the LA County DA's office has decided to treat it as a criminal matter. Well, negligence can rise to that level, under the law, so perhaps they have a point. Thoughts?
Update: here's a post that rounds up the responses to this across the blogging world.
For some comic relief, here's a list that was going around on Twitter: Chemistry, The Movie. What titles would you suggest? To give you the idea, some of the ones that have already come up include "Boron Free", "The Wizard of Osmium", and "The Bench Connection". More at the link (and on Twitter, #ChemistryTheMovie), if you can stand it. But if you can't take, for example, "Weekend at Swernie's", you'd be advised to click somewhere else (!)
In case people haven't seen it, this trifluoromethylation method from the MacMillan lab looks quite interesting. Now, not everyone loves the idea of sticking CF3 groups all over their molecules, and if you're a medicinal chemist you'll want to exercise restraint, but it's still an inarguably useful group. And the chemistry is interesting, too, using visible-light photoredox chemistry, an area that's been getting a lot of attention recently and seems pretty promising.
There's quite a list of reactions that have been done via this route, usually involving ruthenium or iridium catalysts and either fluorescent light or blue LEDs. (A trivia note: that ruthenium compound linked to looks more like good saffron powder, both in solid form and solution, than anything I've ever seen. It's all that Iranian food I get at home, I guess). Labs to watch include MacMillan's at Princeton, Corey Stephenson's at BU, and Tehshik Yoon's at Wisconsin, among others. Photochemistry has been a neglected field in many ways - perhaps taking it out of the ultraviolet and finding useful new reactions will slowly bring it back into the usual toolkit.
Here's a very nice poster-style presentation of proton NMR and spectral interpretation, courtesy of Jon Chui. I wish I'd had something like it when I was learning the topic, and it's a very useful way to picture it even for those of us who've been taking spectra for years. Recommended.
Sorry about the lack of posting today; it's been a busy one. But I do have something that follows up on one of my less useful chemical bulletins, the one the other day about using uranium catalysts. Ben Warner sends along this paper from his time at Los Alamos, and yes, that means what you think it means. You may have done the Meerwein-Ponndorf-Verley reaction, but if you have, I'll bet that you wimped out with some laid-back aluminum compound.
But you could have used plutonium, and how does that make you feel? Uranium (III), as it turns out, just doesn't cut it. Accept nothing but plutonium, folks; you can't beat it. And I now return you to your regular research, which I hope has nothing to do with this post at all!
The University of Ottawa has it all: colored solutions in their test tubes, thoughtful young scientists to look at them intently, and the absolutely required nonsensical chemical structures on the board in the background. What more do you want from a research department, eh? Throw in some purple spotlights and I'm sold. (Link via Chemjobber and Barney Grubbs on Twitter).
Update: embarrassing spelling fixed. Canadian readers are welcome to email me their complaints about the time they visited Woshington, DC.
How do you find new reactions? I blogged here in September about a very direct way of doing it, from John Hartwig's lab: set up a bunch of things and see what happens. I liked it very much, but opinions in the comments were mixed. Some people found this approach refreshing, while others found it more simplistic than simple.
Well, get ready for some more, courtesy of the MacMillan group at Princeton. This paper has just come out in Science on reaction discovery, and it takes a very similar approach to "accelerated serendipity". They were looking at photoredox catalysts, which have been used for some interesting studies in the past few years. You mostly see iridium and ruthenium catalysts, with variations of tris-bipyridyl ligands on them, but the variety of reactions that they can initiate is extraordinary.
Clearly, there must be a lot of reactions in this area that haven't even been found, and that's what this latest paper sets out to do:
Assuming that serendipity is governed by probability (and thereafter manageable by statistics), performing a large number of random chemical reactions must increase the chances of realizing a serendipitous outcome. However, the volume of reactions required to achieve serendipity in a repetitive fashion is likely unsuitable for traditional laboratory protocols that use singular experiments. Indeed, several combinatorial strategies have previously been used to identify singular chemical reactions (2–11); however, the use of substrate-tagging methods or large collections of substrate mixtures does not emulate the representative constituents of a traditional chemical reaction. On this basis, we posited that an automated, high-throughput method of reaction setup and execution, along with a rapid gas chromatography–mass spectrometry (GC-MS) assay using National Institute of Standards and Technology (NIST) mass spectral library software, might allow about 1000 random transformations to be performed and analyzed on a daily basis (by one experimentalist). Although we recognized that it is presently impossible to calculate the minimum number of experiments that must be performed to achieve “chance discoveries” on a regular basis, we presumed that 1000 daily experiments would be a substantial starting point.
That it would, and by combining a broad selection of interesting starting materials with several plausible photoredox catalysts, and then basically just letting things rip, they found one. Dicyanobenzene, as it turns out, does a radical coupling with tertiary amines, giving you a direct C-C bond formation route that arylates next to the nitrogen. It's a perfectly believable reaction, but there are a lot of perfectly believable reactions that you could draw in this area that don't actually work.
Looking over the paper, it appears that the more time-consuming parts of the experimental setup were avoiding known chemistry in the starting combinations, and looking over the results to see what was worth following up on in more detail. Those are both human-brainpower intensive tasks; the rest was automated as far as possible. Interestingly, it appears that MacMillan had earlier been trying a very similar approach to that Hartwig paper I blogged about in September, doing reaction discovery with transition metals. But they then switched to photochemistry, thinking that this might be a more wide-open field.
It's not like the reaction dropped out of the robotics fully formed. They saw a new product form with an iridium catalyst, dicyanobenzene, and N,N-dimethylaniline, but further optimization gave better (and more general) conditions. That's as it should be; there's no way (yet) to run enough experiments to both find new reactions and the best ways to run them in one shot. But just getting a whiff of something new and useful is enough, and I don't see any reason not to engage in automated searches for such things.
But from the reaction in the comments here to that Hartwig paper, I gather that not everyone agrees. As far as I can tell, one objection is that famous talented organic chemistry professors shouldn't have to engage in such brute-force exercises. The more elegant way to come up with these things, by this opinion, is to use more brainpower up front, rather than just mixing up a bunch of stuff to see what works. I suppose - not being a famous talented organic chemistry professor, myself - that I'm not so proud. But then, John Hartwig and Dave MacMillan are FTOCPs, and they seem to have swallowed their pride enough to find something new. And good for them!
You'd think that Georgia Tech's new undergraduate chemistry buildings would be decorated with chemical structures that (at least) don't violate the most basic rules of chemistry. You would be wrong. Who signed off on this stuff?
Now, I know that I'm not the first to notice this. And in the grand scheme of things, it's pretty trivial. But isn't it true, and hasn't it been true for many years, that the print advertisements of chemical companies are often strange and useless?
Here's an example from a recent issue of Chemical and Engineering News, one that was open on my desk to this very spot. Now, I don't know what a quarter-page goes for these days - probably not as much as the folks at C&E News would like for it to - but this was wasted money for sure. Let's count the ways. For one thing, the purple molecule graphic might be a neat-looking thing in a cosmetics ad, but not when placed in a magazine whose subscriber base is about 98% people with a chemistry degree. The slogan ("Our people make the difference") is such an ancient chunk of corporate goodthink that it can't even support a good covering of mold any more. And are we to infer that the model, a vaguely futuristic Eurofied Joni Mitchell, is one of those people? Not hardly. And what's with the cyber-gizmo dog collar thing she's wearing? One of those invisible-fence zappers, scaled up to human size?
The ad enjoins us to visit them at a conference booth in Geneva, which is at least a place where you're sure to find out what on earth Saltigo does. To be fair, the opposite page in the C&E News issue has another Saltigo ad, which has a couple of chemists in an unexciting but straightforward pitch that lets you know that they're a custom synthesis/process company that you can hire to try to save you money during production. (Interestingly, at least for me, I just now noticed that the first of the two, Andreas Stolle, is an old colleague of mine from my days at the Wonder Drug Factory in Connecticut - hello, Andreas! And tell your ad agency to make sure to spell "throughout" properly next time.)
No, I'm sure that Saltigo's a perfectly good outfit. But their ads aren't doing much to get that across. Nor are they the only company in that position - a glance through any issue of any magazine in the field will yield a rich harvest of ads that are drably functional at best, and baffling at worst. I wouldn't want the job of producing the things, I have to admit - but doesn't someone want to do it better than it's being done?
Well, the field bet won this year - no one had Daniel Shechtman and quasicrystals in their predictions, as far as I know. This is one of those prizes that is not easy to communicate to someone outside the field, but if I had to sum it up in one phrase for a nonscientist, it would be "Discovery of crystals that everyone thought were impossible".
That's because they have five-fold symmetry, among other types. And the problem there is pretty easy to show: if you take a bunch of identical triangles (any triangle at all), you can tile them out and cover a surface evenly - imagine a tabletop mosaic or a bathroom floor. And that works with any rectangle, too, naturally, and it also works with hexagons. But it does not work with regular pentagons (or with any other regular geometric figure). Gaps appear that cannot be closed. You can cheat and tile the plane with two types of bent pentagons or the like, but closer inspection shows that these cases are all really tiles of one of the allowed classes.
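The angle arithmetic behind that is easy to check directly: a regular n-gon has an interior angle of 180(n-2)/n degrees, and identical copies can close up around a vertex only if that angle divides 360 evenly. (This little check covers only regular polygons tiling by themselves, not the any-triangle, rectangle, or two-tile cases mentioned above.)

```python
from fractions import Fraction  # exact arithmetic, no floating-point fuzz

for n in range(3, 9):
    interior = Fraction(180 * (n - 2), n)       # interior angle, in degrees
    around_vertex = Fraction(360) / interior    # copies meeting at one vertex
    tiles = around_vertex.denominator == 1      # whole number => no gaps
    print(f"{n}-gon: {float(interior):g} deg -> "
          f"{'tiles' if tiles else 'gaps appear'}")
```

Only the triangle (60°), square (90°), and hexagon (120°) pass; the pentagon's 108° goes into 360 three and a third times, and those are the gaps that cannot be closed.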
The same problems appear in three-dimensional crystals, and five-fold symmetry, in any of its forms, is just not allowed (and had never been seen). But in the early 1980s there came a report of just that. Daniel Shechtman, working at the National Bureau of Standards, had found a metallic crystalline substance that seemed to show clear evidence of an impossible form. I was in grad school when the result came out, and I well remember the stir it caused. Just publishing the result took a lot of nerve, since every single crystallographer in the world would tell you that if they knew one thing about their field, it was that you couldn't have something like this.
As it turned out, these issues had already been well explored by two very different groups: medieval Islamic artists and modern mathematicians. What looks like unallowable symmetry in two (or three) dimensions works out just fine in higher-dimensional spaces, and these theoretical underpinnings were actually a lot of help in the debates that followed.
Here's a good history of what happened afterwards. One thing that I recalled was that Linus Pauling wasn't buying it for a minute. He was, of course, quite old by that time, but he was still a force to be reckoned with in his own areas of expertise, despite the damage he'd done to his reputation with all the Vitamin C business. He kept up the barrage for the remainder of his life, publishing one of his last scientific papers (in 1992) on the subject and arguing yet again that the quasicrystal idea was mistaken. As that above-linked paper from Shechtman's co-worker John Cahn put it:
Quasicrystals provided win-win opportunities for crystallographers: If we were mistaken about them, expert crystallographers could debunk us; if we were right, here was an opportunity to be a trail blazer. While many crystallographers worldwide availed themselves of the opportunity, U.S. crystallographers avoided it, to a large extent because of Pauling’s influence.
But time has shown that the quasicrystal hypothesis is correct. You can have local symmetries of this kind, and many other "impossible" examples have been discovered since. The resolution of the X-ray structures has gotten better and better, ruling out all the other explanations - Pauling would have found it painful to watch. The resulting solids have rather odd properties, although if someone asked me to name any effect that they've had on anyone's daily life, I'd have to answer "none at all". But I'm sympathetic to anyone who proves something in science that no one thought could be proved, so Nobel Prize it is, and congratulations.
A side note: anyone want to take bets on whether some ayatollah or other Iranian politician will pop up, claiming that the whole subject of the prize was anticipated by the 15th-century Darb-e Imam shrine in Isfahan? Let's set the odds. . .
Now here's an odd reaction, done in an odd way. Organic chemists will all be familiar with the azide/acetylene cycloaddition to form triazoles. In its copper-catalyzed variant, it's become a sensation, and is used as a convenient linker to do all kinds of interesting things. The reverse reaction, taking a triazole back to the starting materials, just isn't feasible. If you heat up one of the triazoles enough to get it to do anything, which takes some pretty serious heat, it just gives you a handful of decomposition products.
But what if you grabbed each side of the ring and just pulled on it? A paper in Science does just that, through having polymeric chains attached. If you subject that to ultrasound, the cavitation bubbles that form are violent enough to pull and jerk the molecular chains around - and when they try that on a triazole-linked molecule, they can see reversion to the acetylene and the azide. This only happens with long-chain polymers - the effect increases with polymer molecular weight, and small-molecule analogs aren't cleaved at all. It also appears that the effect works best when the triazole is near the midpoint of the polymer, not out towards one end. These are just what you would expect for this sort of "mechanosynthesis", and strong evidence for the proposed effect.
This could lead to some rather unusual reactions being discovered. Some sort of cleavable tether that stands up under sonication might allow you to put on "mechanosynthetic handles" that you could then take off again, as if they were protecting groups. Silyl ethers, maybe? Which functional groups can take the stress, and which will pull apart to give something new?
I've been meaning to mention this paper from John Hartwig (and co-worker Daniel Robbins), because it's just the sort of let's-find-something-new idea that I like. Hartwig has made a name in the field of organometallic catalysis, and is looking for new reactions. So how do you find new reactions?
Most published methods for the high-throughput discovery of catalysts evaluate one of the two catalyst-reactant dimensions. In other words, these methods have been used to examine either many catalysts for a single class of reaction or a single catalyst for many reactions. A two-dimensional approach in which many catalysts for many possible catalytic reactions are tested simultaneously would create a more efficient discovery platform if the reactants and products from such a system could be identified.
Well, this paper details a brutally straightforward technique for doing that. They take a list of seventeen reactants, all around the same rough molecular weight range, each of them with a single functional group. They put a mixture of all seventeen into every well of a 96-well plate. Then they take twelve ligands, dispensed one per column of the plate, and then they take eight different metal catalyst precursors and dispense those across the eight rows. And then they take the plate and heat it up.
Can't get much more straightforward than that, can you? But analyzing the wells by mass spec tells you some interesting things, and you can cover a lot of ground. Seventeen substrates, fifteen metal starting points, and 23 ligand (or lack-of-ligand) combinations gives you a look into tens of thousands of possible reactions. They simplified the mass spec analysis by combining samples for each row, then combining another set for each column, so you only have to run 20 samples per plate to give you the X-Y coordinates of a well that did something. A test plate containing some combinations of known catalytic reactions showed the expected products in the right wells - and it showed some other reactions, too.
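That pooling trick is classic group testing, and a toy version shows why 20 analyses can locate hits across 96 wells. Everything here - the grid, the "hit" wells - is made up for illustration; the real experiment, of course, looks for new masses in the GC-MS traces rather than in a Python set.

```python
ROWS, COLS = 8, 12                # a standard 96-well plate: 8 rows x 12 columns
hits = {(2, 5), (6, 10)}          # hypothetical wells where a new product formed

# Pool the wells by row and by column: 8 + 12 = 20 mass spec samples
# instead of 96. A pooled sample shows the new mass if any well in it does.
row_pools = {r for r, c in hits}
col_pools = {c for r, c in hits}

# Intersecting the positive rows with the positive columns gives the
# candidate wells to go back and examine individually.
candidates = {(r, c) for r in row_pools for c in col_pools}
print(sorted(candidates))  # [(2, 5), (2, 10), (6, 5), (6, 10)]
```

Note the catch: one hit is pinpointed exactly, but two hits in different rows and columns light up four candidate wells, two of them spurious. That's why this kind of pooled readout still needs single-well follow-up (deconvolution) before you believe a new reaction.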
Among those were several wells that indicated an alkyne/aniline addition reaction catalyzed by copper. This turned out to be a hydroamination reaction that no one had observed before. There was also a new product in several Ni-catalyzed wells - a set of deconvolution experiments narrowed that one down, and it turned out to be reaction of arylboronic acids with diphenylacetylene to give a triarylalkene - a reaction not previously catalyzed by such a cheap metal as nickel. And while most of the known reactions are syn, this one gives anti addition, with E/Z ratios that vary depending on the ligand used for the metal.
Not bad - two new reactions in what was, in the end, a pretty simple experiment. And any good chemist should be able to see the ways this protocol could be extended. For example:
This approach to reaction discovery holds considerable potential for purposes beyond those revealed in the current work. For example, this system could be used to explore reactions with additives, such as oxidants, reductants, acids, and bases, and to explore reactions of two substrates with a third component, such as carbon monoxide or carbon dioxide. It could also be used to examine the reactivity of a single class of ligand with various organic substrates and transition metal–catalyst precursors. Thus, we anticipate that this approach to reaction discovery will provide a general and adaptable platform suitable for use by a wide range of laboratories for the discovery of a variety of catalytic reactions.
There's going to be some criticism, though, that this is (a) obvious and (b) not elegant. I regard those as features, not bugs. Never be afraid of the obvious. And organometallic catalysis is so complicated that trying to elegantly reason your way right to the good parts is not always a productive use of your time. Do you want to look like a genius, or do you want to discover new chemistry?
Back last year I did a brief post about how much not-so-exotic druglike chemical matter has never been explored. My example was substituting heteroatoms into the steroid nucleus - hard to get much more medicinally active than those, but most of the possible variations have never been made. Structurally they're right next door to things that have been known for decades, but they're largely unexplored (which in many cases is because they're not all that easy to make).
The RSC/SCI symposium called my attention to something in this exact class, abiraterone, a CYP17 inhibitor. This was discovered at the Institute for Cancer Research in London, and after several steps through the development world has ended up with J&J. It was approved by the FDA earlier this year for some varieties of prostate cancer.
So there's an example of a sorta-steroid making it all the way through. If intelligent (and oddly motivated) aliens landed tomorrow and forced me to use their advanced organic synthesis techniques to generate a library of unique structures with high hit rates in drug screens, I think I might ask them if they knew how to scatter basic amines, ethers, sulfonamides and so on in and around the steroid nucleus. I offer that advice free of charge to any readers who might find themselves in a similar situation.
Update: as per the comments, compare Cortistatin A for another, more highly modified steroid nucleus with an aromatic heterocycle hanging off it.
I wanted to send people to this 50-year retrospective in J. Med. Chem. It's one of those looks through the literature, trying to see what kinds of compounds have actually been produced by medicinal chemists. The proxy for that set is all the compounds that have appeared in J. Med. Chem. during that time, all 415,284 of them.
The idea is to survey the field from a longer perspective than some of the other papers in this vein, and from a wider perspective than the papers that have looked at marketed drugs or structures reported as being in the clinic. I'm reproducing the plot for the molecular weights of the compounds, since it's an important measure and representative of one of the trends that shows up. The prominent line is the plot of mean values, and a blue square shows that the mean for that period was statistically different from the 5-year period before it (it's red if it wasn't). The lower dashed line is the median. The dotted line, however, is the mean for actual launched drugs in each period, with a grey band for the 95% confidence interval around it.
As a whole, the mean molecular weight of a J. Med. Chem. compound has gone up by 25% over the 50-year period, with the steepest increase coming in 1990-1994. "Why, that was the golden age of combichem", some of you might be saying, and so it was. Since that period, though, molecular weights have just increased a small amount, and may now be leveling off. Several other measures show similar trends.
Some interesting variations show up: calculated logP, for example, was just sort of bouncing around until 1985 or so. Then from 1990 on, it started a steep increase, and it's hard to tell if that's leveling off or not even now. At any rate, the clogP of the literature compounds has been higher than that of the launched drugs since the mid-1980s. Another point of interest is the fraction of the molecules with tetrahedral carbons. What you find is that "flatness" in the literature compounds held steady until the early 1990s (by which point it was already disconnected from the launched drugs), but since then it's gotten even worse (and further away from the set of actual drugs). This, as the authors speculate, is surely due to metal-catalyzed couplings taking over the world - you can see the effect right in front of you, and so far, the end is not in sight.
Those two measures are the ones moving the most outside the range of marketed drugs. And despite my shot at early combichem molecules, it's also clear that publication delays mean that some of these things were already happening even before that technique became fashionable (although it certainly revved up the trends). Actually, if you want to know When It Changed in medicinal chemistry, you have to go earlier:
It is worth noting that these trends seemed to accelerate in the mid-1980s, indicating that some change took place in the early 1980s. The most likely explanations for an upward change in the early 1980s (before the age of combinatorial chemistry or high-throughput screening) seem to be advances in molecular biology, i.e., understanding of receptor subtypes leading to concerns about specificity; target-focused drug design and its corresponding one-property-at-a-time optimization paradigm (possibly exacerbated by structural biology); and improvements in technologies which enabled the synthesis and characterization of more complex molecules.
Target-based drug design, again. I'm really starting to wonder about this whole era. And if you'd told me back in, say, 1991 about these doubts that I'd be having, I'd have been completely dumbfounded. But boy, do I ever have them now. . .
We've talked here before about the structural class known as rhodanines - the phrase "polluting the scientific literature" has been used to describe them, since they rather promiscuously light up a lot of drug target assays, and almost never to any useful effect.
Well, guess what? Now there's an even easier way to make them! And says this new paper in the Journal of Organic Chemistry:
5-(Z)-Alkylidene-2-thioxo-1,3-thiazolidin-4-ones (rhodanine derivatives) were prepared by reaction of in situ generated dithiocarbamates with recently reported racemic α-chloro-β,γ-alkenoate esters. This multicomponent sequential transformation performed in one reaction flask represents a general route to this medicinally valuable class of sulfur/nitrogen heterocycles. Using this convergent procedure, we prepared an analogue of the drug epalrestat, an aldose reductase inhibitory rhodanine.
Sequentially linking several different components in one reaction vessel has been studied intensively as a rapid way to increase molecular complexity while avoiding costly and environmentally unfriendly isolation and purification of intermediates.(1-4) Such efficient multicomponent reactions, such as the Ugi reaction, often produce privileged scaffolds of considerable medicinal value. Rhodanines (2-thioxo-1,3-thiazolidin-4-ones) are five-membered ring sulfur/nitrogen heterocycles some of which have antimalarial, antibacterial, antifungal, antiviral, antitumor, anti-inflammatory, or herbicidal activities. . .In conclusion, convergent syntheses of N-alkyl 5-(Z)-alkylidene rhodanine derivatives have been achieved using recently reported racemic α-chloro-β,γ-alkenoate ester building blocks. The formation of these rhodanine derivatives involves a three-step, one-flask protocol that provides quick access to biologically valuable sulfur–nitrogen heterocycles.
Just what we needed. Now it's only going to be a matter of time before someone makes and sells a library of these things, and we can all get to see them again as screening hits in the literature.
You don't hear much about bullvalene, outside of physical organic chemistry textbooks. It's a funny-looking symmetric tricyclic compound, which just seems to be another weirdo hydrocarbon until you consider what it can do with all those alkenes. Everything is lined up just right to rearrange - and then the product you get is lined up just right to rearrange, which gives you a product that rearranges, and so on and so on. The molecule has no permanent structure at reasonable temperatures; this process never stops.
We owe William von E. Doering and Wolfgang Roth for this one (the background story is here). I hadn't realized that the "bull" in the name was put in there by Doering's grad students - it was his nickname! (Believe me, there are a lot of research groups out there where that trick wouldn't provide anything printable). The molecule was synthesized by Gerhard Schröder of Karlsruhe, who continued to work in the bullvalene field (and on related cycloalkene oddities) for many years.
There are 10!/3 distinct bullvalene structures, or 1,209,600 of the things. And while you can see the fluxional character in the NMR (one peak at high temperature in the carbon NMR, four sharp singlets at -60 C, and a mess at room temp), no one's really worked out what happens with substituted derivatives. They're going to wander around, too, but how much of that space do they explore? Schröder's group prepared a number of derivatives over the years and showed that they have dynamic structures, but figuring out just how dynamic is a complicated problem. Here's a picture of what happens with a tetrasubstituted compound, for example.
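That 10!/3 figure comes from the standard permutation argument: the ten CH units can be arranged in 10! ways, but each valence tautomer's threefold symmetry makes three of those arrangements identical. A one-liner confirms the arithmetic:

```python
from math import factorial

# 10 CH units give 10! arrangements; the threefold symmetry axis of each
# valence structure makes 3 of them equivalent, leaving 10!/3 distinct
# interconverting bullvalene structures.
distinct_structures = factorial(10) // 3
print(distinct_structures)  # → 1209600
```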
Now Jeffrey Bode (and coworker Maggie He) at the ETH in Zürich may have started to answer this question. They prepared a chiral trisubstituted bullvalone, no picnic in itself. That structure doesn't rearrange, but then they prepared an enolate and trapped it as an enol carbamate. That completes the three alkenes, and off things go. Of course, the alkenes are rather different from each other now, so not every pathway is going to be energetically similar, but there are still enough of them to make for quite a scatter.
When they analyzed the product(s) of that enolate trapping reaction, they found that there was still some chirality present. That must have been an exciting moment, but checking the HPLC carefully showed that there was a chiral impurity present that was left over from the starting material. Once that was cleaned out, it was clear that the situation was still pretty complex: they pulled out four fractions from the HPLC, all of which were mixtures of rearranging substituted bullvalenes. Two of the fractions had no optical activity at all, and showed (and kept) the same HPLC trace as each other over time. One of the other original HPLC cuts, though, had some residual optical activity, which disappeared over another 24 hours. During that time, too, its HPLC trace gradually evened out to be the same as the other two racemic cuts. The fourth cut of the original HPLC trace had even more optical activity in it, and normalized out even more slowly.
Their best explanation for all this is that the molecule starts off on its crazy course of interconverting rearrangements, but occasionally gets to a structure that, energetically speaking, is somewhat painted into a corner. Its pathways to get back out into the rapidly-rearranging manifold are higher-energy, so that part of the population retains chirality longer than the ones that took a different path. Eventually, though, everything does even out: the metastable structures back out of their respective dead ends and start flipping back around through the lower-energy rearrangement pathways.
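A toy first-order model captures that argument: if a trapped (still-chiral) pool escapes back into the fast-racemizing manifold with rate constant k, its enantiomeric excess decays as exp(-k·t), so a pool behind a higher barrier (smaller k) stays optically active longer, just as the slower-normalizing HPLC cuts did. The rate constants below are invented purely for illustration:

```python
import math

# Hypothetical escape rate constants (per hour) for two trapped pools;
# smaller k means a higher barrier back into the rearranging manifold.
k_fast_escape = 0.5
k_slow_escape = 0.05

def ee(k, t_hours, ee0=1.0):
    """Enantiomeric excess after first-order escape into the racemic manifold."""
    return ee0 * math.exp(-k * t_hours)

for t in (0, 6, 24, 48):
    print(f"t = {t:2d} h   fast-escape ee = {ee(k_fast_escape, t):.3f}   "
          f"slow-escape ee = {ee(k_slow_escape, t):.3f}")
```

By 24 hours the fast-escaping pool is fully racemized while the slow one still shows substantial optical activity, which is the qualitative behavior the HPLC fractions displayed.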
As they get more of a handle on these molecules, they hope to start to control some of the rearrangement population, messing with the various rate constants so that the isomers sort themselves out (possibly) into discrete populations. There could be some very unusual applications for such shape-shifting molecules, although I have to say that training them away from their bucket-of-marbles-on-the-floor tendencies will not be easy. Still, this is the kind of physical organic chemistry I've always been happy to read about (and glad that I'm not having to do myself!)
There have been some neat ways to make fluorinated molecules reported recently, which I wanted to mention. We med-chemists just love our fluorines - as long as we don't have to use, like, fluorine itself to make them - because they armor-plate parts of our molecules against being metabolized and can change the binding profiles of the parent structures like nothing else can.
Over at New Reactions, there's a nice writeup on a new way to generate difluorocarbene, which (as it should) immediately adds to alkenes to give you difluorocyclopropanes. (It'll add to alkynes to give you the somewhat more exotic difluorocyclopropenes, too). This is from G. K. Surya Prakash and George Olah, and from the looks of it, it's simplicity itself: take your alkene and some TMS-CF3 in THF, and either run it hot with sodium iodide or in the cold with the anhydrous TBAF substitute TBAT. So there's what looks like a perfectly useful med-chem structural motif, suddenly made widely available.
The second paper is from the Baran and Blackmond labs at Scripps, and is a completely new way to introduce trifluoromethyl groups onto heterocyclic rings. This one generates trifluoromethyl radicals under very mild conditions, using the hitherto-obscure (but stable and relatively cheap) Langlois reagent as a source. You don't need any special group on the substrate to make this work - it charges right in and attacks the more active C-H bonds of the parent heterocycle. A wide variety of useful ring systems are shown to work, and it looks like you can change the regiochemistry by varying the solvent. I'm sure that people will think of other uses for the CF3 radical, now that it's much easier to get ahold of, but this one just by itself is going to be adopted very quickly.
These, I have to say, are just the kinds of new reactions that working chemists like to see: they make useful compounds that have been hard to access, they use commercial reagents, the conditions are not hideous and require no special equipment, and the authors have taken the time to demonstrate them on a very wide range of structures. The more things like this that get discovered, the better off we are.
You hear often about how many marketed drugs target G-protein coupled receptors (GPCRs). And it's true, but not all GPCRs are created equal. There's a family of them (the Class B receptors) that has a number of important drug targets in it, but getting small-molecule drugs to hit them has been a real chore. There's Glucagon, CRF, GHRH, GLP-1, PACAP and plenty more, but they all recognize good-sized peptides as ligands, not friendly little small molecules. Drug-sized things have been found that affect a few of these receptors, but it has not been easy, and pretty much all of them have been antagonists. (That makes sense, because it's almost always easier to block some binding event rather than hitting the switch just the right way to turn a receptor on).
That peptide-to-receptor binding also means that we don't know nearly as much about what's going on in the receptor as we do for the small-molecule GPCRs, either (and there are still plenty of mysteries around even those). The generally accepted model is a two-step process: there's an extra section of the receptor protein that sticks out and recognizes the C-terminal end of the peptide ligand first. Once that's bound, the N-terminal part of the peptide ligand binds into the seven-transmembrane-domain part of the receptor. The first part of that process is a lot more well-worked-out than the second.
Now a German team has reported an interesting approach that might help to clear some things up. They synthesized a C-terminal peptide that was expected to bind to the extracellular domain of the CRF receptor, and made it with an azide coming off its N-terminal end. (Many of you will now have guessed where this is going!) Then they took a weak peptide agonist piece and decorated its end with an acetylene. Doing the triazole-forming "click" reaction between the two gave a nanomolar agonist for the receptor, revving up the activity of the second peptide by at least 10,000x.
This confirms the general feeling that the middle parts of the peptide ligands in this class are just spacers to hold the two business ends together in the right places. But it's a lot easier to run the "click" reaction than it is to make long peptides, so you can mix and match pieces more quickly. That's what this group did next, settling on a 12-amino-acid sequence as their starting point for the agonist peptide and running variations on it.
Out of 89 successful couplings to the carrier protein, 70 of the new combinations lowered the activity (or got rid of it completely). 15 were about the same as the original sequence, but 11 of them were actually more potent. Combining those single-point changes into "greatest-hit" sequences led to some really potent compounds, down to picomolar levels. And by that time, they found that they could get rid of the tethered carrier protein part, ending up with a nanomolar agonist peptide that only does the GPCR-binding part and bypasses the extracellular domain completely. (Interestingly, this one had five non-natural amino acid substitutions).
Now that's a surprise. Part of the generally accepted model for binding had the receptor changing shape during that first extracellular binding event, but in the case of these new peptides, that's clearly not happening. These things are acting more like the small-molecule GPCR agonists and just going directly into the receptor to do their thing. The authors suggest that this "carrier-conjugate" approach should speed up screening of new ligands for the other receptors in this category, and should be adaptable to molecules that aren't peptides at all. That would be quite interesting indeed: leave the carrier on until you have enough potency to get rid of it.
What do you want to bet that Huw Davies and co-workers were partly interested in making dihydrofurans here, and mostly interested in having a synthetic sequence that used rhodium, silver, and then gold? Not that I blame them - personally, I'd have gone ahead and done a palladium coupling, a copper-catalyzed Ullmann of some sort, and then found something to reduce with platinum oxide. Go for the record! What is the record, I wonder?
I wanted to call attention to another blog roundtable, on several subjects related to how nonchemists see us and our business. The first post (at ScienceGeist) is on chemical safety (industrial chemicals = bad?). Day 2, at ChemJobber, is on whether the general public has any good idea of not only what chemists do (we work with chemicals, right?) but why and how we do it. Day 3, at ChemBark, takes things to a practical level, showing how lack of understanding can confuse people about energy policy (does growing corn to make ethanol make any sense?) And Day 4, at The Bunsen Boerner, is on a topic I've been known to go off on myself, the use (and mostly the misuse) of the word "organic".
Here's an article from Xconomy on Ensemble Therapeutics, a company that spun off from work in David Liu's lab at Harvard. Their focus these days is on a huge library of macrocyclic compounds (prepared by using DNA tags to bring the reactants together, which is a topic for a whole different post). They're screening against several targets, and with several partners. Why macrocycles?
Well, there's been a persistent belief, with some evidence behind it, that medium- and large-ring compounds are somehow different. Cyclic peptides certainly can be distinguished from their linear counterparts - some of that can be explained by their being unnatural (and poor) substrates for some of the proteases that would normally clear them out, but there can be differences in distribution and cell penetration as well. The great majority of non-peptidic macrocycles that have been studied in biological systems are natural products - plenty of classic antibiotics and the like are large rings. I worked on one for my PhD, although I never quite closed the ring on the sucker.
You can look at that natural product distribution in two ways: one view might be that we have an exaggerated idea of the hit rate of macrocycles, because we've been looking at a bunch of evolutionarily optimized compounds. But the other argument is that macrocycles aren't all that easy to make, therefore evolutionary pressures must have led to so many of them for some good reasons, and we should try to take advantage of the evidence that's in front of us.
What's for sure is that macrocyclic compounds are under-represented in drug industry screening collections, so there's an argument to be made just on that basis. (You do see them once in a while). And the chemical space that they cover is probably not something that other compounds can easily pick up. Large rings are a bit peculiar - they have some conformational flexibility, in most cases, but only within a limited range. So if you're broadly in the right space for hitting a drug target, you probably won't pay as big an entropic penalty when a macrocycle binds. It already had its wings clipped to start with. And as mentioned above, there's evidence that these compounds can do a better job of crossing membranes than you'd guess from their size and functionality. One hope is that these properties will allow molecular weight ranges to be safely pushed up a bit, allowing a better chance for hitting nontraditional targets such as protein-protein interactions.
All this has led to a revival of med-chem interest in the field, so Ensemble is selling their wares at just the right time. One reason that there haven't been so many macrocycles in the screening decks is that they haven't been all that easy to make. But besides Liu's DNA templating, some other interesting synthetic methods have been coming along - the Nobel-worthy olefin metathesis reaction has been recognized for some time as a good entry into the area, and Keith James out at Scripps has been publishing on macrocyclic triazoles via the copper-catalyzed click reaction. Here's a recent review in J. Med. Chem., and here's another. It's going to be interesting to see how this all works out - and it's also a safe bet that this won't be the only neglected and tricky area that we're going to find ourselves paying more attention to. . .
A reader sends along this example of a "stereodestructive" synthesis. I have nothing in particular against N-alkylpyrroles, but do we need another route to them so badly that we have to tear up not-so-cheap hydroxyproline to get there, burning up two chiral centers in the process?
Readers are invited to submit other examples from the "wad it up and throw it away" school of chiral synthesis in the comments. . .
Now, this is a strange little paper in Chem. Comm. The authors are studying small reverse micelles (RMs, basically, for those of you not in the field, bits of water enclosed by a layer of soap-like organic molecules).
Nothing wrong with that - micelles and reverse micelles have been objects of study for many years now. But they're saying that when they look at positively charged molecules and the way they associate with positively charged RMs, once the size of the reverse micelles gets small enough, like charges attract instead of repel:
Comparing the results in the RMs and in the conventional micelles, it is quite evident that the violation in the principle of electrostatic interaction is not a general phenomenon and is quite specific for the nano-confined environment, like in RMs. Thus, the charged surface formed under the nano-confinement shows quite extraordinary electrostatic behaviour as compared to other normal charged surfaces.
They have some possible explanations, such as the large number of counterions in the small micellar pool of water providing electrostatic screening. They go on to suggest that if this effect is robust, that it could have real implications for behavior in biological systems (and for various drug-carrier ideas). Any thoughts from the more physical-chemistry oriented members of the crowd?
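One way to make that screening explanation quantitative is the Debye length, λ_D = sqrt(ε_r·ε_0·k_B·T / (2·N_A·e²·I)): at the high effective counterion concentrations inside a few-nanometre water pool, the screening length shrinks to well under the pool radius, so surface charges barely "see" each other. Here's a back-of-the-envelope calculation (assuming bulk-water permittivity, which is itself questionable in nano-confinement):

```python
import math

def debye_length_nm(ionic_strength_M, T=298.0, eps_r=78.5):
    """Debye screening length in water, in nm, for ionic strength in mol/L."""
    eps0 = 8.854e-12   # vacuum permittivity, F/m
    kB = 1.381e-23     # Boltzmann constant, J/K
    NA = 6.022e23      # Avogadro's number, 1/mol
    e = 1.602e-19      # elementary charge, C
    I = ionic_strength_M * 1000.0  # convert mol/L -> mol/m^3
    lam = math.sqrt(eps_r * eps0 * kB * T / (2 * NA * e**2 * I))
    return lam * 1e9

for I in (0.01, 0.1, 1.0):
    print(f"I = {I:4} M  ->  Debye length ~ {debye_length_nm(I):.2f} nm")
```

At 0.1 M the screening length is about 1 nm, and at 1 M it's about 0.3 nm - smaller than the water pool of a small reverse micelle, which is the regime where ordinary electrostatic intuition starts to fail.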
Yesterday's look into the Google Ngram data set brought up a discussion in the comments on how good the numbers are in it (and in other large datasets). "Garbage in, garbage out" is as true a statement as ever, so it's a real worry. (Even if the data were perfect, the numbers could still be misused and misinterpreted, of course).
An e-mail from a reader pointed me to another example of this sort of thing. The NIH Chemical Genomics Center (NCGC) has a collection of known pharmaceutically active compounds for use in screening and target ID. This is a good idea, and the same sort of thing is done internally in the drug industry. But the ChemConnector blog has some questions about how robust the dataset is. The rough estimate is that between 5 and 10% of the 7600+ structures are messed up in some way (stereochemistry, salt form, the dreaded pentavalent carbon, and so on).
Read the comments there for some interesting back-and-forthing with the NIH people. The NCGC folks realize that they have some problems, and are willing to put in the work to help clean things up. The problem is, they'd already published on this list, calling it "definitive, complete, and nonredundant", which now seems to be a bit premature. . .
Not a common occurrence, that. But this Wall Street Journal article goes into details on some efforts to improve the synthetic route to Viread (tenofovir) (or, to be more specific, TDF, the prodrug form of it, which is how it's dosed). This is being funded by former president Bill Clinton's health care foundation:
The chasm between the need for the drugs and the available funding has spurred wide-ranging efforts to bring down the cost of antiretrovirals, from persuading drug makers to share patents of antiretrovirals to conducting trials using lower doses of existing drugs.
Beginning in 2005, the Clinton team saw a possible path in the laboratory to lowering the price of the drugs. Mr. Clinton's foundation had brokered discounts on first-line AIDS drugs, many of which were older and used relatively simple chemistry. Newer drugs, with advantages such as fewer side effects, were more complex and costly to make. . .A particularly difficult step in the manufacture of the antiretroviral drug tenofovir comes near the end. The mixture at that point is "like oatmeal, making it very difficult to stir," explained Prof. Fortunak. That slows the next reaction, a problem because the substance that will become the drug is highly unstable and decomposing, sharply lowering the yield.
Fortunak himself is a former Abbott researcher, now at Howard University. One of his students does seem to have improved that step, thinning out the reaction mixture (which was gunking up with triethylammonium salts) and improving the stability of the compound in it. (Here's the publication on this work, which highlights that step, formation of a phosphate ester, which is greatly enhanced with addition of tetrabutylammonium bromide). This review has more on production of TDF and other antiretrovirals.
This is a pure, 100% real-world process chemistry problem, as the readers here who do it for a living will confirm, and it's very nice to see this kind of work get the publicity that it deserves. People who've never synthesized or (especially) manufactured a drug generally don't realize what a tricky business it can be. The chemistry has to work on large scale (above all!), and do so reproducibly, hitting the mark every time using the least hazardous reagents possible, which have to be reliably sourced at reasonable prices. And physically, the route has to avoid extremes of temperature or pressure, with mixtures that can be stirred, pumped from reactor to reactor, filtered, and purified without recourse to the expensive techniques that those of us in the discovery labs use routinely. Oh, and the whole process has to produce the least objectionable waste stream that you can come up with, too, in case you've got all those other factors worked out already. Not an easy problem, in most cases, and I wish that some of those people who think that drug companies don't do any research of their own would come down and see how it's done.
To give you an example of these problems, the paper on this tenofovir work mentions that the phosphate alkylation seems to work best with magnesium t-butoxide, but that the yield varies from batch to batch, depending on the supplier. And in the workup to that reaction, you can lose product in the cake of magnesium salts that have to be filtered out, a problem that needs attention on scale.
According to the article, an Indian generic company is using the Howard route for tenofovir that's being sold in South Africa. (Tenofovir is not under patent protection in India). Interestingly, two of the big generic outfits (Mylan and Cipla) say that they'd already made their own improvements to the process, but the question of why that didn't bring down the price already is not explored. Did the Clinton foundation improve a published Gilead route that someone else had already fixed? Cipla apparently does the same phosphate alkylation (PDF), but the only patent filing of theirs that I can find that addresses tenofovir production is this one, on its crystalline form. Trade secret?
Word reached me yesterday that Corwin Hansch, long of Pomona College, had died. Anyone who's ever done (or thought about) trying to apply mathematical techniques to compound structure-activity relationships has internalized some of his work. (Here's an intro, for those who haven't encountered classical QSAR).
I was quite excited about using such techniques (and their successors) early in my career, but ran into difficulty applying them in the real world. There were several complications - our compounds were (very likely) in several different SAR series, so combining them wasn't doing the analysis any favors; we had gaps in the compound space that would have helped refine the calculations (but were difficult to prepare and not felt to be worth the trouble to make), and, perhaps most importantly, the underlying assay data might not have been as tight as it needed to be to give sensible answers. These problems are not unique.
But that said, Hansch deserves a lot of credit for going after the whole idea of applying linear free-energy relationships to med-chem activity, and for having the fortitude to do so in the computationally deficient early 1960s. It's because of his work (and the many people who followed his lead) that we've come to realize how tricky these problems are. He was indeed a pioneer.
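Classical Hansch QSAR fits a linear free-energy model, something like log(1/C) = a·logP + b·σ + c, by least squares. Here's a minimal sketch on invented data (the descriptors, activities, and coefficients are all made up so the fit has a known answer; real applications run into exactly the series-mixing and assay-noise problems mentioned above):

```python
import numpy as np

# Invented descriptors for 8 hypothetical analogues: logP and Hammett sigma.
logP  = np.array([1.2, 1.8, 2.1, 2.5, 3.0, 3.3, 3.9, 4.2])
sigma = np.array([0.0, 0.23, -0.17, 0.37, 0.06, 0.23, -0.27, 0.54])

# Synthetic activities from a known model plus noise, so the fit has a target.
rng = np.random.default_rng(1)
log_inv_C = 0.8 * logP + 1.5 * sigma + 2.0 + rng.normal(0, 0.1, logP.size)

# Least-squares fit of log(1/C) = a*logP + b*sigma + c
X = np.column_stack([logP, sigma, np.ones_like(logP)])
(a, b, c), *_ = np.linalg.lstsq(X, log_inv_C, rcond=None)
print(f"a = {a:.2f}  b = {b:.2f}  c = {c:.2f}")  # should land near 0.8, 1.5, 2.0
```

Trivial on a laptop now; in the early 1960s, doing regressions like this across real compound series was serious computational work.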
I've also been remiss in not mentioning the unexpected death of David Gin of Illinois and then Sloan-Kettering. Gin was an excellent synthetic chemist who tackled some very difficult problems in carbohydrate chemistry, among other areas - here's just one example, and there are many more. He surely had many more discoveries left to make, and his loss is a loss to the field.
Chemists who don't (or don't yet) work in drug discovery often wonder just what sort of chemistry we do over here. There are a lot of jokes about methyl-ethyl-butyl-futile, which have a bit of an edge to them for people just coming out of a big-deal total synthesis group in academia. They wonder if they're really setting themselves up for a yawn-inducing lab career of Suzuki couplings and amide formation, gradually becoming leery of anything that takes more than three steps to make.
Well, now there's some hard data on that topic. The authors took the combined publication output from their company, Pfizer, and GSK, as published in the Journal of Medicinal Chemistry, Bioorganic Med Chem Letters and Bioorganic and Medicinal Chemistry, starting in 2008. And they analyzed this set for what kinds of reactions were used, how long the synthetic routes were, and what kinds of compounds were produced. Their motivation?
. . .discussions with other chemists have revealed that many of our drug discovery colleagues outside the synthetic community perceive our syntheses to consist of typically six steps, predominantly composed of amine deprotections to facilitate amide formation reactions and Suzuki couplings to produce biaryl derivatives. These “typical” syntheses invariably result in large, flat, achiral derivatives destined for screening cascades. We believed these statements to be misconceptions, or at the very least exaggerations, but noted there was little if any hard evidence in the literature to support our case.
Six steps? You must really want those compounds, eh? At any rate, their data set ended up with about 7300 reactions and about 3600 compounds. And some clear trends showed up. For example, nearly half the reactions involved forming carbon-heteroatom bonds, with half of those (22% of the total) being acylations, mostly amide formation. But only about one tenth of the reactions were C-C bond-forming steps (40% of those were Suzuki-style couplings and 18% were Sonogashira reactions). One-fifth were protecting group manipulations (almost entirely on COOH and amine groups), eight per cent were heterocycle formation, and everything else was well down into the single digits.
There are some interesting trends in those other reactions, though. Reduction reactions are much more common than oxidations - the frequency of nitro-to-amine reductions is one factor behind that, followed by other groups down to amines (few of these are typically run in the other direction). Among those oxidations, alcohol-to-aldehyde is the favorite. Outside of changes in reduction state, alcohol-to-halide is the single most favorite functional group transformation, followed by acid to acid chloride, both of which make sense from their reactivity in later steps.
Overall, the single biggest reaction is. . .N-acylation to an amide. So that part of the stereotype is true. At the bottom of the list, with only one reaction apiece, were N-alkylation of an aniline, benzylic/allylic oxidation, and alkene oxidation. Sulfonation, nitration, and the Heck reaction were just barely represented as well.
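The tally behind those percentages is straightforward to reproduce once each reaction in the dataset is assigned a type label. Here's a sketch with a small invented set of labels (the counts are made up; the paper's actual dataset and category scheme are of course much larger):

```python
from collections import Counter

# Hypothetical reaction-type labels for a toy dataset (not the paper's data).
reactions = (
    ["N-acylation"] * 160 + ["Boc deprotection"] * 80 +
    ["reductive amination"] * 55 + ["Suzuki coupling"] * 45 +
    ["nitro reduction"] * 30 + ["Sonogashira"] * 12 + ["Heck reaction"] * 1
)

counts = Counter(reactions)
total = sum(counts.values())
for name, n in counts.most_common():
    print(f"{name:20s} {n:4d}  ({100 * n / total:.1f}%)")
```

Even in a toy set like this, N-acylation dominating the list and the Heck reaction barely registering mirrors the shape of the real distribution described above.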
Analyzing the compounds instead of the reactions, they found that 99% of the compounds contained at least one aromatic ring (with almost 40% showing an aryl-aryl linkage) and over half have an amide, totals that aren't going to do much to dispel the stereotypes, either. The most popular heteroaromatic ring is pyridine, followed by pyrimidine and then the most popular of the five-membered ones, pyrazole. 43% have an aliphatic amine, which I can well believe (in fact, I'm surprised that it's not even higher). Most of those are tertiary amines, and the most-represented of those are pyrrolidines, followed closely by piperazines.
In other functionality, about a third of the compounds have at least one fluorine atom in them, and 30% have an aryl chloride. In contrast to the amides, only about 10% of the compounds have sulfonamides. 35% have an aryl ether (mostly methoxy), and 10% have an aliphatic alcohol (versus only 5% with a phenol). The least-represented functional groups (of the ones that show up at all!) are carbonate, sulfoxide, alkyl chloride, and aryl nitro, followed by amidines and thiols. There's not a single alkyl bromide or aliphatic nitro in the bunch.
The last part of the paper looks at synthetic complexity. About 3000 of the compounds were part of traceable synthetic schemes, and most of these were 3 and 4 steps long. (The distribution has a pretty long tail, though, going out past 10 steps). Molecular weights tend to peak at between 350 and 550, and clogP peaks at around 3.5 to 5. These all sound pretty plausible to me.
Now that we've got a reasonable med-chem snapshot, though, what does it tell us? I'm going to use a whole different post to go into that, but I think that my take-away was that, for the most part, we have a pretty accurate mental picture of the sorts of compounds we make. But is that a good picture, or not?
PNAS recently came out with a special collection of chemistry papers, and they're worth a look. The theme is the synthesis of chemical probes, which makes me think that maybe Stuart Schreiber can guest-edit an issue of Vogue next. Today I'm going to highlight one from the Broad Institute on diversity-oriented synthesis (DOS), and next week I'll get to some more.
OK, that was something of a come-on for regular readers of this site, who now will be listening for the sound of grinding wheels coming up to speed, the better to sharpen the Sword of Justice. I've said unfriendly things in the past about DOS and some of the claims made for it. The point of much of this work has been lost on me, and I'm a pretty broad-minded guy. (That word, in this case, rhymes with "sawed", not with "load"). The first flush (no aspersions meant) of papers in the field might just as well have been titled "Check It Out: A Bunch of Huge Compounds No One's Ever Made Before", and were followed up, in my mind, by landmark publications such as "A Raving Heapload of Structures You Didn't Want in the First Place" and "Dang, There Are Even More Compounds With Molecular Weight 850 Than We Thought". But does it have to be this way?
Maybe not. As I mentioned earlier this year, people are starting to compare DOS and fragment-based approaches. (I think that Nature dialog could have been more useful than it was, but it was a start). And this latest paper continues that process. It's using DOS approaches to generate smaller molecular weight compounds - fragments, actually. They're not tiny ones, more medium-to-large size by fragment-based standards, but they're under 300 MW.
And, importantly, they're deliberately designed to be three-dimensional - lots of pyrrolidines and fused-ring compounds thereof, homopiperidines, spiro-lactams, and so on. Many of the early fragment libraries (and many of the commercial ones that you can still buy) are too invested in small, flat heterocycles. It's not that you can't get good leads from those things, but there's a lot more to life (and to molecular property space) than that. This paper's collection is still a bit heavy on the alkenes for my taste (all those ring-closing metathesis reactions), but they've also reduced those for part of the library, which means that a screen of this collection will tell you whether the olefin is a key structural feature or not. The alkenes themselves could serve as useful handles to build out from as well; a fragment hit with no way to elaborate its structure isn't too useful.
As I said back in February, "I'd prefer that DOS collections not get quite so carried away, and explore new structural motifs more in the range of druglike space." That's exactly what this paper does, and I think its direction should be encouraged. This plays to the strengths of both approaches, rather than pushing either of them to the point where they break down.
A few years ago, I wrote here about Luca Turin and his theory that our sense of smell is at least partly responsive to vibrational spectra. (Turin himself was the subject of this book, author of this one (which is quite interesting and entertaining for organic chemists), and co-author of Perfumes: The A-Z Guide, perhaps the first attempt to comprehensively review and categorize perfumes).
Turin's theory is not meant to overturn the usual theories of smell (which depend on shape and polarity as the molecules bind into olfactory receptors), but to extend them. He believes that there are anomalies in scent that can't be explained by the current model, and has been proposing experiments to test them. Now he and his collaborators have a new paper in PNAS with some very interesting data.
They're checking to see if Drosophila (fruit flies) can tell the difference between deuterated and non-deuterated compounds. The idea here is that the size and shape of the two forms are identical; there should be no way to smell the difference. But it appears that the flies can: they discriminate, in varying ways, between deuterated forms of acetophenone, octanol, and benzaldehyde. Deuterated acetophenone, for example, turns out to be aversive to fruit flies (whereas the normal form is attractive), and the aversive quality goes up as you move from d-3 to d-5 and d-8 forms of the isotopically labeled compound.
The flies could also be trained, by a conditioned avoidance protocol, to discriminate between all of the isotopic pairs. Most interestingly, if trained to avoid a particular normal or deutero form of one compound, they responded similarly when presented with a novel pair, which seems to indicate that they pick up a "deuterated" scent effect that overlays several chemical classes.
There's more to the paper; definitely read it if you're interested in this sort of thing. Reactions to it have been all over the place, from people who sound convinced to people who aren't buying any of it. If Turin is right, though, it may indeed be true that we're smelling the differences between C-H stretching vibrations, possibly through an electron tunneling mechanism, which is a rather weird thought. But then, it's a weird world.
Looking over the chemical literature with an RSS reader can really give you a sense of what the hot topics are, and what's cooling off. Remember when it seemed as if every third paper was about ionic liquids? You still see work in the area, but it's nowhere near as crazy as it was. (I had a colleague come by my office the other day and ask "Did anyone ever find out what to do with those things?") Similarly, gold catalysts have been all over the place in recent years, but seem, to my eye, to be calming down.
Some of these things are research areas that look promising, but die off when their limits become apparent. Some of them are almost sheer fads, with papers coming out from all sorts of odd places because the authors want to get in on the hot, publishable topics while they can. Others keep going because the topics themselves are important but very hard to exhaust (metal-catalyzed couplings come to mind).
And there are areas that keep going in the literature because they look like they should be important and useful, and eventually will be - but no one can quite get them to work generally enough, or get people to recognize when they do. The metal-catalyzed coupling literature was in this shape back in the 1970s and into the 1980s - there were a lot of disparate reactions that you could do with palladium, but none of them had exactly taken over the world. My vote for a current field in this protostar state is engineered solid-phase catalysis.
That may sound odd, since work on solid-phase catalysts has been going on for decades, and is of huge industrial importance. But many of the important catalysts have been arrived at either by luck or by an awful lot of hard slogging. The field is complicated enough - fiendishly so - that it's hard to draw general conclusions. If you have a good solution-phase catalyst, how do you make a solid-supported variety that works just as efficiently? Well. . .if you really want one, you make about a zillion variants and hope for the best, as far as I can see.
Part of the problem (as with the metal-catalyzed coupling world) is that there are just so many variables. The solid supports alone are enough to keep a person occupied for life, what with all the various aluminas, silicas, zeolites, polymers, mesoporous engineered thingies, and so on. Then you have the uncountable schemes for linking these surfaces to active catalysts - what functional groups to use, what density things should be on the surface, what distance you need between the surface and the catalyst, etc. And just linking up to the known catalysts is no light work, either, since most of these things were not made with convenient handles hanging off them.
As we get better at making (and characterizing) new kinds of surfaces and new kinds of macromolecular assemblies, we might start to get our hands around this subject. For now, though, it seems to be mostly in the descriptive stage: papers are of the "Hey, we made this thing and here's what it does" variety, with further work in the series being "Hey, remember that stuff we made? Turns out you can do this with it, too - who knew?" What you don't see, or not too darn often, is a paper describing the general principles of these processes. For the most part, we don't know them yet.
But if I had to pick an area that will eventually blossom into a host of applications, this would be high on the list. It's a mixture of surface chemistry, materials science, nanotechnology, and organic synthesis, and it's got a lot of promise. But then again, it's had a lot of promise for a long time now. . .
Several people have called this guy to my attention: the Escondido wild man who seems to have had a good-sized explosives factory going in his house. He had kilo quantities of (highly explosive) PETN, HMTD, and all kinds of other things you Do Not Want in your basement (see that Chemistry Blog link for a list).
In fact, he and his home chemistry operation seem to have been too much for local law enforcement, who (at least at last report) bailed out of the house and haven't finished searching it yet. That sounds like an excellent decision - you couldn't pay me to go in the place and poke around. On the one hand, perhaps his lab technique wasn't so bad: he was able to work in those quantities without blowing himself up. But on the other hand, and by golly this hand wins, anyone who makes kilos of such things at home has very skewed ideas about risk, to the point that you don't really know what they're capable of. The owner's day job appears to have been robbing banks, which fits right in.
The latest news is a decision that the only way to deal with the house is to burn it. A sixteen-foot fire-resistant wall is being built around the place, and they're just going to let it rip. Beats going around in there opening drawers and looking under the sink, for sure.
Fluorinated compounds are always of interest to a medicinal chemist, and difluorodioxolanes are perfectly reasonable things to put into a drug's structure. But any method that first uses thiophosgene (you can buy it easily, but here's a good old prep that gets across its fine qualities) and follows that up with bromine trifluoride (which shares many of the wonderful properties of its sibling). . .well, let me know how it goes, and do it far downwind of me.
The FDA has approved Eisai's Halaven (eribulin) for late-stage breast cancer. As far as I can tell, this is now the most synthetically complex non-peptide drug ever marketed. Some news stories on it are saying that it's from a marine sponge, but that was just the beginning. This structure has to be made from the ground up; there's no way you're going to get enough material from marine sponges to market a drug.
If anyone has another candidate, please note it in the comments - but I'll be surprised if there's anything that can surpass this one. There have been long syntheses in the industry before, of course, although we do everything we can to avoid them. Back when hydrocortisone was first marketed by Merck, it had a brutal synthetic path for its time. (That's where a famous story about Max Tishler came from - one of the intermediates was a brightly colored dinitrophenylhydrazone. Tishler, it's said, came into the labs one day, saw some of the red solution spilled on the floor, and growled "That better be blood.") And Roche's Fuzeon is a very complicated synthesis indeed, but much of that is repetitive (and automated) peptide coupling. It took a lot of work to get right, but I'd still give the nod to eribulin. Can anyone beat it?
A reader sent this paper along the other day. Is it just me, or does it seem a bit odd to talk about how aryl coupling in these systems is traditionally done by (list of metal-catalyzed reactions), which unfortunately involve (list of toxic and/or expensive metals) under (list of rigorous conditions involving oxygen exclusion and protecting groups). . .and then propose as a shiny new alternative: three equivalents of aluminum chloride?
Not that there's anything particularly wrong with aluminum chloride. The workup is much nastier than with the metal-catalyzed couplings, though, and I'd think that the waste stream is heftier as well. And I'm willing to bet that a lot more structures can survive Suzuki coupling conditions than can survive scoops of aluminum chloride, too. But it certainly is a lot cheaper and simpler to set up.
Still, isn't this just more or less the aryl-Friedel-Crafts (Scholl) reaction? And haven't very similar couplings been reported before, many times? This new paper cites a few of these (but not that last one). Maybe it's just the whole "Now we can finally get rid of all that palladium" tone. . .
A reader forwards an e-mail from Harris Interactive, a marketing research firm that says that it's running a survey on membership in the American Chemical Society. The reason he sent it along, though, is that it looks rather odd. The subject line of the message is three lines of gibberish, and it offers $150 for participation, which seems rather high for a survey company sending out random emails.
If this is something the ACS has commissioned, well, they're (a) probably spending too much money on it, and (b) should realize that the message is triggering the mental spam filters of its recipients. And if it's not the ACS, then who the heck is it? Any ideas?
Since graphene was worth a Nobel prize this year, it's only fitting that I mention a recent application of it in chemical synthesis. A paper in Angewandte Chemie shows how graphene oxide can be used as an oxidizing reagent for organic compounds. It performs primary alcohol-to-aldehyde, secondary alcohol-to-ketone, and alkyne-to-methylene ketone reactions quite well. This doesn't seem to be due to residual metals, but is a reaction of the graphene oxide (GO) itself, which is probably a complex mixture of epoxides and who-knows-what on the carbon surface.
Interestingly, it appears that the GO can be regenerated by atmospheric oxygen as the reaction goes on (and then re-used), so in the end, these processes are being performed by the oxygen itself. This could be an appealing method for scaleup, since it drastically reduces some possible waste streams. The turnover isn't as high as with some more traditional oxidants, but the cost might be hard to beat.
The first thing I thought of was using this material in a flow reactor, perhaps with occasional bubbling of oxygen into the solvent stream. It seems likely that as we learn to manipulate the surfaces of such materials, we'll find some very useful catalysts. . .
So I believe that they're moving into the new chemistry building at Princeton, which is a mighty glass whopper. In light of some of the past discussions we've had around here about lab design, I'd be interested in hearing from anyone with personal experience of the building. I can't really get a good sense of the layout from the pictures I've seen, just that there sure seem to be a lot of glass walls. And those aren't necessarily bad; it's the way the labs are put together and their relationship to the desks and offices.
Interestingly, much of the money for its construction seems to have come from the university's royalties on Alimta (pemetrexed), a folate anticancer drug discovered by Ted Taylor's group there in the early 1990s and developed by Lilly. (Taylor, a heterocyclic chemistry legend, worked on antifolates for many, many years, and contributed a huge amount to the field).
Here's more on the building, and here are some photos, and here are some architectural renderings, for what those are worth. Any comments from folks on the ground?
You know, on reflection, one of the things that probably has me feeling strange about being in Philadelphia for this conference is that it was here that I attended my first ACS national meeting. That was August of 1984, when I was just about to start my second year in graduate school. For all I know, I attended a session in this same Sheraton. All these hotel ballrooms look pretty much the same.
Twenty-six years ago! If I sit here and try to figure out how that happened, I won't have time to take any notes here in 2010. There were slide projectors pointed at the screens back then, not LCDs, and there sure weren't any laptops to be seen. But the rows of chairs under the gaudy chandeliers, those you could superimpose on 1984 with no change at all.
I'm out of the lab for the next few days. It's Conference Time once again, and I'm in Philadelphia for the Fragment-Based Lead Discovery meeting. Last year this one was in England, but did I go? Nooo, I waited until it was in Philly. No offense to the city's residents who read the blog, but even its partisans would have to admit that it's not an exotic destination, particularly for someone who's lived for eight years in New Jersey like I have. Anyway, any readers of the blog who are also attending, please feel free to track me down. Bernard Munos told me last week that I look just like my picture on the site, which can't quite be true, since that's getting to be an old shot, but it's apparently a reasonable guide.
I won't be live-blogging any sessions here, although I may well mention particularly interesting things as they come up. Not everyone's into fragments, for one thing, and a three-day diet of them might be a bit much. And I'm going to be busy taking notes of my own, which will necessarily be skewed by my own proprietary perspective. To be honest, seeing a blow-by-blow account of what I find interesting and what I find old hat would give away too much about what my company's up to.
But I will be blogging on other topics during the meeting, thanks to the wireless in the conference room. I take notes on the laptop, anyway, since I type much more quickly (and legibly) than I write. I've got a pen handy if I have to scrawl down a structure, but otherwise, the notes are just going into a text window. Now, that does mean that I'm going to need to find an electrical outlet somewhere in this room this afternoon. . .
So, a chemistry Nobel that's just pure chemistry from top to bottom. I'll be darned! This is one that most chemists had on the list of "Worth a prize, but who knows if they'll ever get around to it". (If you check my archives, and those of the other chem-bloggers, you'll see palladium couplings mentioned every time).
One of the sticking points has been who to put on the prize, what with the three-name limit and all. Were Stille alive, he might well be on there instead of Negishi, but that just highlights the trickiness of this area. There are plenty of other people, starting, most likely, with Sonogashira, who have made major contributions in this area. I notice that some people are wondering about Buchwald and Hartwig et al., but that (to me) is a separate issue. This is a prize for carbon-carbon bond formation; carbon-nitrogen can wait its turn.
But as a chemistry prize, I think everyone can agree that palladium-catalyzed C-C bond formation is worthy. Such reactions are the single biggest change to the practice of synthesis since my grad school days. In the mid-1980s, palladium reactions were looked on as being a bit weird, and I hardly knew anyone who'd run one. I didn't have occasion to, myself, until something like 1992. By that time these reactions were well on their way to conquering the world. It's gotten to the point now where some industrial drug discovery organizations have jokingly considered banning the things for a period. They're so useful that the sorts of structures that are easy to make through them tend to get over-represented in drug screening files.
For non-chemists, the reason these things are so well used is that carbon-carbon bonds are both the backbone of organic molecules, and a pain in the rear to make and break. They're pretty solid, but not so solid that they can't be worked with under special conditions, which is why they're so useful for both living systems and for synthetic chemists. A carbon framework is like solid steel construction: very durable and hard to destroy, but if you know how to weld or rivet you can make one yourself. These palladium reactions are the equivalent of riveting; using them, we can stick whole carbon units together as if we were using power tools.
So in honor of today's prize, folks, go run yourself a Heck, Suzuki, or Negishi coupling. They'll probably work; they generally do.
I mentioned directed evolution of enzymes the other day as an example of chemical biology that’s really having an industrial impact. A recent paper in Science from groups at Merck and Codexis really highlights this. The story they tell had been presented at conferences, and had impressed plenty of listeners, so it’s good to have it all in print.
It centers on a reaction that’s used to produce the diabetes therapy Januvia (sitagliptin). There’s a key chiral amine in the molecule, which had been produced by asymmetric hydrogenation of an enamine. On scale, though, that’s not such a great reaction. Hydrogenation itself isn’t the biggest problem, although if you could ditch a pressurized hydrogen step for something that can’t explode, that would be a plus. No, the real problem was that the selectivity wasn’t quite what it should be, and the downstream material was contaminated with traces of rhodium from the catalyst.
So they looked at using a transaminase enzyme instead. That’s a good idea, because transaminases are one of those enzyme classes that do something we organic chemists generally can’t do very well – in this case, change a ketone to a chiral amino group in one step. (It takes another amine and oxidizes that on the other side of the reaction). We’ve got chiral reductions of imines and enamines, true, but those almost always need a lot of fiddling around for catalysts and conditions (and, as in this case, can cause their own problems even when they work). And going straight to a primary amine can be, in any case, one of the more difficult transformations. Ammonia itself isn’t too reactive, and you don’t have much of a steric handle to work with.
But transaminases have their idiosyncrasies (all enzymes do). They’ll generally only accept methyl ketones as substrates, and that’s what these folks found when they screened all the commercially available enzymes. Looking over the structure (well, a homology model of the structure) of one of these (ATA-117), which would be expected to give the right stereochemistry if it could be made to give anything whatsoever, gave some clues. There’s a large binding pocket on one side of the ketone, which still wasn’t quite large enough for the sitagliptin intermediate, and a small site on the other side, which definitely wasn’t going to take much more than a methyl group.
They went after the large binding pocket first. A less bulky version of the desired substrate (which had been turned, for now, into a methyl ketone) showed only 4% conversion with the starting enzymes. Mutating the various amino acids that looked important for large-pocket binding gave some hope. Changing a serine to proline, for example, cranked up the activity 11-fold. The other four positions were, as the paper said, “subjected to saturation mutagenesis”, and they also produced a combinatorial library of 216 multi-mutant variations.
Therein lies a tale. Think about the numbers here: according to the supplementary material for the paper, they varied twelve residues in the large binding pocket, with (say) twenty amino acid possibilities per position. So you’ve got 240 enzyme variants to make and test. Not fun, but it’s doable if you really want to. But if you’re going to cover all the multi-mutant space, that’s twenty to the 12th, or over four quadrillion enzyme candidates. That’s not going to happen with any technology that I can easily picture right now. And you’re going to want to sample this space, because enzyme amino acid residues most certainly do affect each other. Note, too, that we haven’t even discussed the small pocket, which is going to have to be mutated, too.
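For a feel for those numbers, here's a quick back-of-the-envelope sketch in Python. The twelve positions and twenty amino acids are the figures quoted above; everything else is just arithmetic:

```python
# Scale of the mutagenesis problem: 12 large-pocket residues,
# with (say) 20 possible amino acids at each position.
N_POSITIONS = 12
N_AMINO_ACIDS = 20

# One mutation at a time: tedious, but doable.
single_point_variants = N_POSITIONS * N_AMINO_ACIDS
print(single_point_variants)                 # 240

# Every combination of mutations: not doable by any current screen.
full_combinatorial_space = N_AMINO_ACIDS ** N_POSITIONS
print(f"{full_combinatorial_space:.1e}")     # ~4.1e15 - over four quadrillion
```

That gap between 240 and four quadrillion is the whole reason a smarter sampling strategy is needed in the first place.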
So there’s got to be some way to cut this problem down to size, and that (to my mind) is one of the things that Codexis is selling. They didn’t, for example, get a darn thing out of the single-point-mutation experiments. But one member of a library of 216 multi-mutant enzymes showed the first activity toward the real sitagliptin ketone precursor. This one had three changes in the small pocket and that one P-for-S in the large, and identifying where to start looking for these is truly the hard part. It appears to have been done through first ruling out the things that were least likely to work at any given residue, followed by an awful lot of computational docking.
It’s not like they had the Wonder Enzyme just yet, although just getting anything to happen at all must have been quite a reason to celebrate. If you loaded two grams/liter of ketone, and put in enzyme at 10 grams/liter (yep, ten grams per liter, holy cow), you got a whopping 0.7% conversion in 24 hours. But as tiny as that is, it’s a huge step up from flat zero.
Next up was a program of several rounds of directed evolution. All the variants that had shown something useful were taken through a round of changes at other residues, and the best of these combinations were taken on further. That statement, while true, gives you no feel at all for what this stuff is like, though. There are passages like this in the experimental details:
At this point in evolution, numerous library strategies were employed and as beneficial mutations were identified they were added into combinatorial libraries. The entire binding pocket was subjected to saturation mutagenesis in round 3. At position 69, mutations TAS and C were improved over G. This is interesting in two aspects. First, V69A was an option in the small pocket combinatorial library, but was less beneficial than V69G. Second, G69T was improved (and found to be the most beneficial in the next round) suggesting that something other than sterics is involved at this position as it was a Val in the starting enzyme. At position 137, Thr was found to be preferred over Ile. Random mutagenesis generated two of the mutations in the round 3 variant: S8P and G215C. S8P was shown to increase expression and G215C is a surface exposed mutation which may be important for stability. Mutations identified from homologous enzymes identified M94I in the dimer interface as a beneficial mutation. In subsequent rounds of evolution the same library strategies were repeated and expanded. Saturation mutagenesis of the secondary sphere identified L61Y, also at the dimer interface, as being beneficial. The repeated saturation mutagenesis of 136 and 137 identified Y136F and T137E as being improved.
There, that wasn’t so easy, was it? This should give you some idea of what it’s like to engineer an enzyme, and what it’s like to go up against a billion years of random mutation. And that’s just the beginning – they ended up doing ten rounds of mutations, and had to backtrack some along the way when some things that looked good turned out to dead-end later on. Changes were taken on to further rounds not only on the basis of increased turnover, but for improved temperature and pH stability, tolerance to DMSO co-solvent, and so on. They ended up, over the entire process, screening a total of 36,480 variations, which is a hell of a lot, but is absolutely infinitesimal compared to the total number of possibilities. Narrowing that down to something feasible is, as I say, what Codexis is selling here.
And what came out the other end? Well, recall that the known enzymes all had zero activity, so it’s kind of hard to calculate improvement from that. Comparing to the first mutant that showed anything at all, they ended up with something that was about 27,000 times better. This has 27 mutations from the original known enzyme, so it’s a rather different beast. The final enzyme runs in DMSO/water, at loadings of up to 250 g/liter of starting material at 3 weight per cent enzyme loading, and turns isopropylamine into acetone while it’s converting the prositagliptin ketone to product. It is completely stereoselective (they’ve never seen the other amine), and needless to say involves no hydrogen tanks and furnishes material that is not laced with rhodium metal.
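Just to see whether those headline numbers hang together, here's a rough product-per-gram-of-enzyme comparison in Python. The near-complete conversion and the comparable timescale for the final process are my assumptions, not the paper's:

```python
# First detectable hit: 2 g/L ketone, 10 g/L enzyme, 0.7% conversion in 24 h.
initial_product_per_enzyme = (2.0 * 0.007) / 10.0

# Final evolved enzyme: 250 g/L substrate at 3 wt% enzyme loading,
# assuming (roughly) complete conversion on a similar timescale.
final_product_per_enzyme = 250.0 / (250.0 * 0.03)

improvement = final_product_per_enzyme / initial_product_per_enzyme
print(f"{improvement:,.0f}-fold")   # lands in the same ballpark as 27,000
```

Crude as it is, that estimate comes out within a factor of two of the quoted 27,000-fold figure, which is reassuring.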
This is impressive stuff. You'll note, though, the rather large amount of grunt work that had to go into it - although keep in mind, the potential amount of grunt work would be more than the output of the entire human race. To date. Just for laughs, an exhaustive mutational analysis of twenty-seven positions would give you 1.3 times ten to the thirty-fifth possibilities to screen, and that's if you already know which twenty-seven positions you're going to want to look at. One microgram of each of them would give you the mass of a couple of dozen Earths, not counting the vials. Not happening.
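For the curious, that order of magnitude checks out in a few lines of Python. The Earth mass here is the standard ~5.97e24 kg value; the rest comes from the numbers above:

```python
# Exhaustive mutagenesis of 27 positions, 20 amino acids each.
variants = 20 ** 27
print(f"{variants:.2e}")          # ~1.34e35 candidate enzymes

# One microgram of each variant, expressed in Earth masses.
MICROGRAM_IN_KG = 1e-9
EARTH_MASS_KG = 5.97e24           # standard reference value
earth_masses = variants * MICROGRAM_IN_KG / EARTH_MASS_KG
print(round(earth_masses))        # a couple of dozen Earths, vials not included
```

Either way you slice it, "not happening" is the right conclusion.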
Also note that this is the sort of thing that would only be done industrially, in an applied research project. Think about it: why else would anyone go to this amount of trouble? The principle would have been proven a lot earlier in the process, and the improvements even part of the way through still would have been startling enough to get your work published in any journal in the world and all your grants renewed. Academically, you'd have to be out of your mind to carry things to this extreme. But Merck needs to make sitagliptin, and needs a better way to do that, and is willing to pay a lot of money to accomplish that goal. This is the kind of research that can get done in this industry. More of this, please!
Here's an interesting example of a way that synthetic chemistry is creeping into the provinces of molecular biology. There have been a lot of interesting ideas over the years around the idea of polymers made to recognize other molecules. These appear in the literature as "molecularly imprinted polymers", among other names, and have found some uses, although it's still something of a black art. A group at Cal-Irvine has produced something that might move the field forward significantly, though.
In 2008, they reported that they'd made polymer particles that recognized the bee-sting protein melittin. Several combinations of monomers were looked at, and the best seemed to be a crosslinked copolymer with both acrylic acid and an N-alkylacrylamide (giving you both polar and hydrophobic possibilities). But despite some good binding behavior, there are limits to what these polymers can do. They seem to be selective for melittin, but they can't pull it out of straight water, which is a pretty stringent test. (If you can compete with the hydrogen-bonding network of bulk water that's holding the hydrophilic parts of your target, as opposed to relying on just the hydrophobic interactions with the other parts, you've got something impressive).
Another problem, which is shared by all polymer-recognition ideas, is that the materials you produce aren't very well defined. You're polymerizing a load of monomers in the presence of your target molecule, and they can (and will) link up in all sorts of ways. So there are plenty of different binding sites on the particles that get produced, with all sorts of affinities. How do you sort things out?
Now the Irvine group has extended their idea, and found some clever ways around these problems. The first is to use good old affinity chromatography to clean up the mixed pile of polymer nanoparticles that you get at first. Immobilizing melittin onto agarose beads and running the nanoparticles over them washes out the ones with lousy affinity - they don't hold up on the column. (Still, they had to do this under fairly high-salt conditions, since trying this in plain water didn't allow much of anything to stick at all). Washing the column at this point with plain water releases a load of particles that do a noticeably better job of recognizing melittin in buffer solutions.
The key part is coming up, though. The polymer particles they've made show a temperature-dependent change in structure. At RT, they're collapsed polymer bundles, but in the cold, they tend to open up and swell with solvent. As it happens, that process makes them lose their melittin-recognizing abilities. Incubating the bound nanoparticles in ice-cold water seems to only release the ones that were using their specific melittin-binding sites (as opposed to more nonspecific interactions with the agarose and the like). The particles eluted in the cold turned out to be the best of all: they show single-digit nanomolar affinity even in water! They're only a few per cent of the total, but they're the elite.
Now several questions arise: how general is this technique? That is, is melittin an outlier as a peptide, with structural features that make it easy to recognize? If it's general, then how small can a recognition target be? After all, enzymes and receptors can do well with ridiculously small molecules: can we approach that? It could be that it can't be done with such a simple polymer system - but if more complex ones can also be run through such temperature-transition purification cycles, then all sorts of things might be realized. More questions: What if you do the initial polymerization in weird solvents or mixtures? Can you make receptor-blocking "caps" out of these things if you use overexpressed membranes as the templates? If you can get the particles to the right size, what would happen to them in vivo? There are a lot of possibilities. . .
You don't see an awful lot of chemistry publications from Vietnam. So in a way, I'm reluctant to call attention to this one, in the way that I'm about to. But it's in the preprint section of Bioorganic and Medicinal Chemistry Letters, and some of my far-flung correspondents have already picked up on it. So it's a bit too late to let it pass, I suppose.
The authors isolate a number of natural products from Wisteria (yep, the flowering woody vine one), and most of them are perfectly fine, if unremarkable. But their compound 1 (wisterone) is something else again.
Man, is that thing strained. Nothing with that carbon skeleton has ever been reported before (I just checked), outside of things that you can draw as part of the walls of fullerenes. I have a lot of trouble believing that this compound exists as shown - and if it does, then it deserves a lot more publicity than being tossed into a list inside a BOMCL paper - even though that journal is now getting a reputation for. . .interesting structural assignments.
This thing could get you into Angewandte Chemie or JACS, no problem. But the authors don't make much of it, just calling it a new compound, and presenting mass spec and NMR evidence for it. The 13C spectrum is perfectly reasonable for some sort of para-substituted aryl ring, but this compound would not give a perfectly reasonable spectrum, I would think. Surely all that strain would show up in some funny chemical shifts? Another oddity must be a misprint - they have the carbon shift of the carbonyl as 190.8, which is OK, I suppose, but they assign the methylenes as 190.8, which can't be right. (The protons come at 4.48).
No, I really think something is wrong here. I don't have a structure to propose, off the top of my head (not without resolving that weirdo methylene carbon shift), but I don't think it's this. Anyone?
Update: just noticed that this is said to be a crystalline compound, melting point of 226-228. I find it hard to imagine any structure like this taking that much heat, but. . .it's a crystal! Get an X-ray structure. No one's going to believe it without one, and BOMCL should never have let this paper through without someone asking for at least that. . .
As we head towards October, the thoughts of a very select group of scientists may be turning to their chances of winning a Nobel Prize - and the thoughts of the rest of us turn to laying odds on the winners. I've handicapped the race here before (here's the 2009 version), and that's one place to start a list. Another excellent roundup can be found over at Chembark, and another well-annotated one at the Curious Wavefunction. Meanwhile, Thomson/Reuters sent me their citation-voodoo list the other day, but to my eyes, they're always a bit off the mark.
So who are the favorites? Last year I mentioned Zare, Bard, and Moerner for single-molecule spectroscopy, and I think that after a run of biology-laced prizes a swing back over to nearly-physics is pretty plausible. If the committee is going to stick with nearly-biology, then perhaps humanized antibodies, microarrays, or chaperone proteins will make it in, but I really don't think that this is the year (in the Chemistry prize, anyway). On the chemistry/medicine interface, there's always the chance that the committee could turn around and honor Carl Djerassi after all these years, but that's the only med-chem themed prize I can see. I think the chances of a pure organic synthesis prize are very low indeed - and that includes palladium-catalyzed couplings, too, unfortunately. There are too many people deserving of credit there, "too many" meaning "more than three" for Nobel purposes, and not all of them are still alive.
The more I think about it, the more skeptical I am of a Nobel for dye-based solar cells (Grätzel et al.) or any form of asymmetric catalysis this year. If anything, the committee waits too long before recognizing things, and it's just too early for these (and some other ideas floating around out there). The Thomson/Reuters list seems to be very big on metal-organic framework materials, for example, and I just don't see it. Waiting too long is a problem, but giving trendy things out too soon can be an even bigger one.
On the other end of the scale, I used to confidently predict a Nobel for RNA interference (in one field or another), and they finally took care of that one. The only Nobel I feel similarly sure of is in Physics, for the "dark energy" finding that the expansion of the universe is accelerating. At some point that one's going to win - maybe when there's more of an explanation for it, although that could be a bit of a wait. This is an area where I and the Thomson/Reuters people agree (and a lot of physicists seem to go along, too).
Want to make your own odds? This Chembark post is a fine overview of the factors involved. Suggestions welcome in the comments from anyone who feels as if their psychic powers are tuned up. . .
I agree with many of the commenters around here that one of the most interesting and productive research frontiers in organic chemistry is where it runs into molecular biology. There are so many extraordinary tools that have been left lying around for us by billions of years of evolution; not picking them up and using them would be crazy.
Naturally enough, the first uses have been direct biological applications - mutating genes and their associated proteins (and then splicing them into living systems), techniques for purification, detection, and amplification of biomolecules. That's what these tools do, anyway, so applying them like this isn't much of a shift (which is one reason why so many of these have been able to work so well). But there's no reason not to push things further and find our own uses for the machinery.
Chemists have been working on that for quite a while. We look at enzymes and realize that these are the catalysts that we really want: fast, efficient, selective, working at room temperature under benign conditions. If you want molecular-level nanotechnology (not quite down to atomic!), then enzymes are it. The ways that they manipulate their substrates are the stuff of synthetic organic daydreams: hold down the damn molecule so it stays in one spot, activate that one functional group because you know right where it is and make it do what you want.
All sorts of synthetic enzyme attempts have been made over the years, with varying degrees of success. None of them have really approached the biological ideals, though. And in the "if you can't beat 'em, join 'em" category, a lot of work has gone into modifying existing enzymes to change their substrate preferences, product distributions, robustness, and turnover. This isn't easy. We know the broad features that make enzymes so powerful - or we think we do - but the real details of how they work, the whole story, often aren't easy to grasp. Right, that oxyanion hole is important: but just exactly how does it change the energy profile of the reaction? How much of the rate enhancement is due to entropic factors, and how much to enthalpic ones? Is lowering the energy of the transition state the key, or is it also a subtle raising of the energy of the starting material? What energetic prices are paid (and earned back) by the conformational changes the protein goes through during the catalytic cycle? There's a lot going on in there, and each enzyme avails itself of these effects differently. If it weren't such a versatile toolbox, the tools themselves wouldn't come out being so darn versatile.
There's a very interesting paper that's recently come out on this sort of thing, to which I'll devote a post by itself. But there are other biological frontiers besides enzymes. The machinery to manipulate DNA is exquisite stuff, for example. For quite a while, it wasn't clear how we organic chemists could hijack it for our own uses - after all, we don't spend a heck of a lot of time making DNA. But over the years, the technique of adding DNA segments onto small molecules and thus getting access to tools like PCR has been refined. There are a number of applications here, and I'd like to highlight some of those as well.
Then you have things like aptamers and other recognition technologies. These are, at heart, ways to try to recapitulate the selective binding that antibodies are capable of. All sorts of synthetic-antibody schemes have been proposed - from manipulating the native immune processes themselves, to making huge random libraries of biomolecules and zeroing in on the potent ones (aptamers) to completely synthetic polymer creations. There's a lot happening in this field, too, and the applications to analytical chemistry and purification technology are clear. This stuff starts to merge with the synthetic enzyme field after a point, too, and as we understand more about enzyme mechanisms that process looks to continue.
So those are three big areas where molecular biology and synthetic chemistry are starting to merge. There are others - I haven't even touched here on in vivo reactions and activity-based proteomics, for example, which is great stuff. I want to highlight these things in some upcoming posts, both because the research itself is fascinating, and because it helps to show that our field is nowhere near played out. There's a lot to know; there's a lot to do.
In the wake of yesterday's revelation about the latest breakthrough in amide formation, one point that's come up is whether we're getting into the era of diminishing returns in finding new synthetic methods.
My opinion? We may well be - but we shouldn't have to be. It is true that we know how to do an awful lot of transformations. And I'd also subscribe to the view that we can, given no constraints of time, money, or heartbreak, synthesize basically any stable organic molecule that anyone can think up. In what we're pleased to call the real world, though, constraints of money and time (related by a similar equation to Einstein's mass-energy one) are always with us. (Heartbreak, well, that seems to be in constant supply).
So even though we can do so many things, everyone realizes that we need to be able to do them better. That applies even to amide formation. There are eleventy-dozen ways to form amides in the literature. But as some of the comments to yesterday's post show, sometimes you have to go pretty far down the list to get one that meets your needs. There is no set of conditions that is simultaneously easy, fast, cheap, nonracemizing, nontoxic, tolerant of all other functional groups, and generates a benign waste stream. Finding such a universal reaction is a fearsome goal, especially considering the number of alternatives that have already been tried.
This is why stoichiometric samarium metal is such a ridiculous idea. There are a lot of good ways to form amides. And there are a lot of lesser-known ways that might save you in tough situations. And there are lots of stupid, crappy ways. The world does not need another one of the latter. So what does it need?
Well, if you're going to stick with amide formation, you're going to have to find something closer to that ideal reaction, which won't be easy. Several other transformations are in that same category - lots of alternatives available, so something new had better be good. There are, though, plenty of other reactions that don't work so well, where improvements don't require you to approach so near perfection. A person's time might be better spent there than on trying to find the Perfect Amide Reaction, although the impact of finding the latter would probably be greater. Neither possibility excuses time spent on finding Another Lousy Amide Reaction.
And there are a lot of transformations that we can't do very well. Turn a phenol into an aromatic aldehyde in one step. Selectively epoxidize aromatic double bonds. Staple a secondary amine in where an aliphatic C-H used to be. Fluorinate at will. You can go beyond that to reactions for which you can't even think up a mechanism: go around a benzene ring, switching out carbon for nitrogen. Pyridine, pyrimidine, pyrazine. . .I have no clue how to do that, or if it's even possible. Change a given oxazole into its corresponding thiazole. Turn a methoxy back into a methyl group. And so on - we sure can't do those, and the list goes on.
Hard stuff! But there are plenty of non-science-fictional possibilities out there, too. An eye to applications beyond pure synthetic chemistry helps. Look, for example, at Barry Sharpless and the copper-catalyzed triazole formation (click chemistry). That's a nice little transformation, and there are people who probably would have just made a nice little Org Lett paper out of it if they'd discovered it themselves. But it's such a versatile way to stitch things together that it's finding uses all over the place, and the end is not in sight. The world could most definitely use more chemistry that can take off in such fashion, and surely it's out there to be found.
I realize that we had this discussion just back in August, and earlier in the summer. But it keeps coming up. Seeing someone form amides with a pile of elemental samarium brings it right back to mind.
Y'know, this is what I call an incremental improvement in the synthetic repertoire. I noticed this new paper in Tetrahedron Letters by its title, and read the whole thing just to make sure that I wasn't missing something.
Yep, that's right: someone has come up with a new way to form amides by reacting acid chlorides and amines. "But hold on," you say, "I thought that acid chlorides and amines form amides like an unstoppable juggernaut, which grinds to a halt only when enough HCl is given off to take the remaining amine out of contention". Well, you'd be right about that: but that's because you didn't think of using samarium metal as an acid scavenger.
Because that's what it seems to be here. The authors report that you have to pretty much use a full equivalent of samarium to get the high yields - control experiments with only 1/3 equivalent didn't work so well. What I wish they'd done is run the freaking control experiments with triethylamine. Or Hünig's base. Or pyridine. Or potassium carbonate, or aqueous 0.1N NaOH, or resin-bound nanocrystalline cesium complexes prepared in ionic liquids through renewable green chemistry whatchamacallits - in fact, with damn near anything else except stoichiometric metallic samarium, of all things. Well, OK: zinc and indium didn't work. I stand corrected. Give these folks another four or five Tet Lett papers, and they'll work their way back to baking soda. Only it'll be samarium bicarbonate, with any luck.
Perhaps I'm being unfair here. But really, amide formation is not a problem that is crying out for a new solution. It's really very, very well worked out, and the number of options available to the experimentalist is nearly beyond counting. But now there's samarium metal. So if you're looking for the most expensive way you can think of to react an acid chloride with an amine, one that will make your labmates question your sanity and a reaction that will probably be a separate item all on its own come your next annual performance review, then go to it.
Oh, and one more thing: if you bother to read the experimental section, which apparently no one did, the procedure is titled: "General procedure for the homocoupling of terminal alkynes". Wrong samarium reaction, guys.
Many readers will remember the "sodium hydride as an oxidizing reagent" story from last year. (For the non-chemists in the audience, the problem here is that sodium hydride is most certainly not what you'd think of as an oxidizing reagent, quite the opposite, in fact. Seeing the paper's title was, for an organic chemist, a bit like reading about a new way to sweeten drinks with vinegar).
This was famously live-blogged over at Totally Synthetic and picked up on around the chemical blog world. The current thinking, though, is that adventitious oxygen is really doing the work here. If you run the reaction under strict inert atmosphere conditions, you get no more oxidation. (And it still doesn't appear that any note has been added to the original JACS paper). Update - make that no note added to the abstract page. The paper itself is still accessible, although it does have notes that it was withdrawn.
Well, now we have another one. This paper in press in Tetrahedron Letters claims oxidation of benzoins to benzils with good ol' sodium hydride. In this case, anyway, the authors (from Korea) did try running the reaction under inert atmosphere, and saw their yield go down. Their proposed mechanism involves molecular oxygen, in fact, and is quite plausible. (I've seen anion-oxygen chemistry myself - if you deprotonate Strecker amines of benzaldehydes, you'll convert them into amides via oxygen in your solvent, that is, if you don't saturate things with inert gas first). Still, I'd rather that they titled this paper differently, since it's not sodium hydride that's doing the oxidation here. You could probably get this to happen with NaHMDS, t-butoxide, or the base of your choice.
And, weirdly, the authors (as far as I can see by going over the PDF) manage not to cite the original JACS NaH oxidation paper at all. You'd never think that anyone had tried this before, especially not in one of the most high-profile chemical journals in the world, just last year, with plenty of added press coverage. What does it take to get a paper cited? Update: given the withdrawn-but-still-available status of the original, this becomes a trickier question. The earlier paper seems to have clearly gone through the same sort of chemistry, but the mechanism - and thus the point of the whole paper - was misassigned. Do you cite it, or not?
The ACS journals page has a "20 Most Accessed" list, which can be an interesting thing to examine. The current one has some articles I've read and enjoyed, such as the guide to molecular interactions that was in J. Med. Chem. earlier this year. And there are synthetic methods in there, and a review of molecular gastronomy, some total syntheses, surface chemistry, and something on wastewater treatment. All fine.
I had an interesting email in response to my post on returning from the SciFoo meeting. I have to say, there weren't too many chemists at that one - not that it's a representative slice of science, to be sure. (Theoretical physicists and computer science people were definitely over-represented, although they were fun to talk to).
But perhaps there's another reason? I'll let my correspondent take it from here:
I worry a lot about organic chemistry, about the state of the discipline. I worry about the relative lack of grand challenges, and that most academic work is highly incremental and, worse, almost entirely the result of screening rather than design. There is still so little predictive power (at least in academia) in drug or catalyst discovery. I have a theory that the reason we're so brutal with each other in paper and grant refereeing is because we're essentially dogs under the table fighting for scraps.
There are big exceptions, which make me excited to be a scientist. There's usually something in Nature Chemistry that has the wow factor, for example. They're just so rare. . .
He went on to point out that other fields have results that can wow a general audience more easily, which can make it harder for even excellent work in chemistry to get as high a profile. As for that point, there may be something to it. High-energy physics and cosmology would, you'd think, be abstract enough to drive away the crowds, but they touch on such near-theological questions that interest remains high. (Why do you think that the press persists in calling the Higgs boson the "God particle"?) And biology, for its part, always can call on the familiarity of everyone with living creatures, possible relevance to medical advances, and the sheer fame of DNA. All these fields have lower-profile areas, or ones that are harder to explain, but they always have the big marquee topics to bring in the crowds.
Chemistry's big period for that sort of thing was. . .well, quite a while ago. We're at one remove from both the Big Overarching Questions at the physics end and the Medical Breakthroughs at the biology end, so our big results tend to get noticed according to how they relate to something else. If (for example) chemists achieved some breakthrough in artificial photosynthesis, it would probably be seen by the public as either physics or biology, depending on the inorganic/organic proportions involved.
But what about the first point: are we really running out of big questions to answer in this field? It's easy to think so (and sometimes I do myself), but I'm not so sure. Off the top of my head, I can think of several gigantic advances that chemistry could help to deliver (and hasn't yet). Room-temperature organic superconductors. That artificial photosynthesis I mentioned, to turn the world's excess carbon dioxide into organic feedstocks. Industrial spider-silk production. Small molecules to slow the aging process. A cheap way to lay down diamond layers on surfaces. And I haven't even mentioned the whole nanotechnology field, which is going to have to depend on plenty of chemistry if it's ever to work at all.
Now, it's true that looking through a typical chemistry journal, you will not necessarily find much on any of these topics, or much to make your pulse race at all. But that's true in the journals in even the most exciting fields. Most stuff is incremental, even when it's worthwhile, and not all of it is even that. And it's also true that of the big chemistry challenges out there, not all of them are going to need organic synthesis to solve them. But many will, and we should be encouraging the people who feel up to taking them on to do so. Not all of them do. . .
I've written several times about how important metal-catalyzed coupling reactions are to organic synthesis - they're the single biggest change since my grad school days in the 1980s, when they were considered sort of squirrely and exotic. Now they're everywhere, and the literature on them is almost beyond counting.
A lot of work gets done trying to extend these reactions to starting materials that are more easily available but don't tend to work as well, to make the catalysts cheaper and more robust, and to find replacements for the palladium that's so often at the center of things. But people have been scorched in the attempt - several "palladium-free" couplings using other metals have turned out to be actually catalyzed by ridiculous trace amounts of palladium contamination instead.
Now there's a paper in JACS that's getting a lot of attention, and a lot of raised eyebrows. The authors claim that they can couple aryl iodides with plain unfunctionalized aromatic compounds with either amines or alcohol as catalysts - and no transition metals at all - just potassium t-butoxide as base. Organic chemists will recognize that this is a very unusual reaction indeed, since carbon-carbon bonds between aryl groups are not supposed to be so easy to form. This reaction, in fact, would suggest that a lot of the palladium-catalyzed work is some sort of odd detour to get to a process that happens fairly easily anyway.
But that doesn't seem right, somehow. The mechanism for the metal-catalyzed reactions is pretty well worked out (in its broad strokes, anyway), and the metal really is crucial. How can these things be going? The authors suggest that since they're using iodides that a free radical mechanism is operating. Addition of radical scavengers, they say, shuts the reaction down. And while it's true that iodides are great radical precursors, these couplings seem too clean for that mechanism - unless you take care to give them limited opportunities, free radicals tend to react with most everything in sight. (The fact that they don't tend to get regioisomers rules out another possible mechanism through benzyne intermediates).
The other problem I have with that is that potassium t-butoxide is not the sort of thing one generally needs in a radical reaction, although they are proposing a radical anion. Lithium and sodium t-butoxide don't work, interestingly, and I'm not sure what to make of that, either - these sorts of counterion effects can certainly be real (I've seen some myself), but they do call for an explanation.
And what's more, just this morning I've heard from a reader, an experienced chemist in a good lab, who's tried to reproduce this work and (so far) failed. I'd be very interested in hearing from others who've taken a crack at it, too. Real or not? Let's find out.
Readers will remember the extraordinary pictures of individual pentacene molecules last fall. Well, the same IBM team, working with a group at Aberdeen, has struck again.
This time they've imaged a much more complex organic molecule, cephalandole A. As that link details, the structure of this natural product has recently been revised - it's one of those structural-isomer problems that NMR won't easily solve for you. Here's a single molecule of it, imaged by the same sort of carbon-monoxide-tipped atomic force microscope probe used in the earlier work:
Now, it's not like you can just look at that and draw the structure, although it is vaguely alarming to see the bonding framework begin to emerge. If you calculate the electron densities around the structure, though, it turns out that the recently revised one is an excellent fit to what the AFM tip picks up, while the other structural possibilities lead to different expected contours.
It's quite possible that, as this technique develops, it could become a real structure-determination tool. These are early days, and it's already being applied to a perfectly reasonable organic molecule. Of course, the people applying it are the world's experts in the technique, using the best machine available (and probably spending a pretty considerable amount of time on the problem), but that's how NMR was at the start, and mass spec too. Both of those are still evolving after decades, and I fully expect this technology to do the same.
One of the people I met this past weekend was Matt Todd, chemistry professor at the University of Sydney. We talked about a project his lab is working on, and I wanted to help call attention to it.
They're working on praziquantel, also known as PZQ or Biltricide, which is used to cure schistosomiasis in the tropics. It's on the WHO's list of essential medicines for this reason. But PZQ is used now as a racemate, and this is one of those cases where everyone would be better off with a single enantiomer - not least, because the active enantiomer is significantly easier for patients to stand than the racemic mixture. Problem is, there's no cheap enantioselective synthesis or resolution.
So what Todd's group has done is crowdsourced the problem. Here's the page to start with, where they lay out the current synthetic difficulties - right now, those include enantioselective Pictet-Spengler catalysts and help with the resolution of a key intermediate. They were in need of chiral HPLC conditions, but that problem has recently been solved. I'd like to ask the chemists in the crowd here to take a look, because it wouldn't surprise me if one of us had some ideas that could help. Don't leave your suggestions here, though; do it over at their pages so it's all in one place.
This sort of thing is an excellent fit with open-source models for doing science: it's all pro bono, and the more eyes that take a look at the situation, the better the chance that a solution will emerge. I don't think it's getting the publicity it deserves. And no, in case anyone's wondering, I don't think that this is how we're all going to end up discovering drugs. Figuring out how to do this for large commercial projects tends to bring on frantic hand-waving. But in cases like this - specific problems where there's no chance for profit to push things along - I think it can work well. It makes a lot more sense than that stuff I was linking to last week!
As mentioned here before, there have been several episodes where people thought they had discovered a new metal-catalyzed coupling reaction that uses some metal not known for such things. But closer examination often reveals that ridiculous trace amounts of palladium, copper, or other more reactive metals are still in the system and responsible for all the results.
The most recent candidate has been a series of gold-catalyzed reactions. Gold complexes have been quite fashionable in recent years, after a long period where they were considered next to useless. But perhaps things have gone a bit too far. A new paper in Organic Letters examines some gold-catalyzed couplings and finds, well. . .
Experimental reports claim that Au(I) is selective and very active, for instance, toward cross coupling of aryl halides with acetylenes (“Pd-free Sonogashira” for example), in the presence of mild bases. Surprisingly, this intriguing process has not been investigated mechanistically. We decided to set out experiments that would explain mechanistically the Pd-free cross-coupling catalysis with gold, but in fact what we are reporting is our failure to find a plausible mechanism. Furthermore, our experiments suggest that the presence of adventitious Pd might explain the positive “Pd-free Sonogashira” catalysis reported. . .
It's the oxidative addition step (the first one in the cycle) that makes things go off the rails. Gold complexes (at least the ones reported) just don't seem to be able to do it. On the other hand, as the authors mention, even high-quality gold often has a bit of palladium in it, and that bit is all it takes.
Phil Baran of Scripps has a paper out on the "ideal synthesis" of complex molecules. It's mostly a review of a number of his group's own syntheses, but it's done in light of his definition of "ideal": all bond-forming steps, with no protecting group manipulations or oxidation-state maneuvering.
That's a tough standard, but many biosynthetic routes reach 100% against it. I think that the highest figure from one of the Baran group's own syntheses is 84%, but he emphasizes that comparing these figures across the synthesis of different molecules isn't too meaningful, since they each carry their own issues. Comparing different routes to the same molecule is what he has in mind; it's a pity that no one else is ever going to make maitotoxin.
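The arithmetic behind those percentages is simple enough to sketch. Here's a minimal toy version, assuming the definition as described above (bond-forming "construction" steps as a fraction of all steps, with protecting-group and redox shuffling counted as concessions); the step counts in the example are invented for illustration, not taken from any actual Baran synthesis.

```python
def ideality(construction_steps: int, concession_steps: int) -> float:
    """Percent ideality: bond-forming (construction) steps as a fraction of
    all steps. 'Concession' steps are protecting-group manipulations and
    oxidation-state adjustments that build no new skeletal bonds.
    (Hypothetical helper based on the definition in the post above.)"""
    total = construction_steps + concession_steps
    return 100.0 * construction_steps / total

# A biosynthetic route with zero concession steps scores 100%:
print(ideality(12, 0))                  # 100.0
# An invented 19-step route carrying 3 concession steps:
print(round(ideality(16, 3), 1))        # 84.2
```

The useful feature of a ratio like this is exactly what the post notes: it only means something when you compare different routes to the same target, since every molecule imposes its own baseline of unavoidable steps.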
He also emphasizes that "ideality" isn't the only consideration in a synthesis. It gets at some key issues, but others (availability of reagents, ease of experimental procedures or purifications) can trump ideality out in the real world. You certainly see that in process chemistry in the drug industry. A reliable procedure that always gives the same (but lower) purity will win out over a temperamental one that sometimes gives wonderful material but sometimes craps out. And an elegant-looking route that gives a small amount of an intractable impurity isn't so elegant, compared to a slightly longer one that delivers material that's easily cleaned up.
The same goes for reagents. Ideally, you'd want to be able to buy all of them, and cheaply, too. But that's where the comparison with those 100% ideal biosynthetic routes breaks down. The enzymes that accomplish them are nothing if not bespoke reagents, doing one thing only but nearly perfectly. And there's that matter of a billion years of evolutionary overhead to factor in to the development costs. Of course, the other great thing about enzymes is that they're catalytic, and can just keep turning over reactions constantly. If they were one-time-use, like many of our reagents from the catalogs, it wouldn't matter how incredibly high-yielding and specific they were; the horrendous waste of time and material required to produce them for just one transformation would rule them out. Average those expenses out over the turnover numbers of a typical enzyme, though, and things look very good indeed.
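The amortization argument above can be put in back-of-the-envelope form. This is a toy sketch with entirely invented numbers, just to make the point concrete: a one-time-use reagent pays its full production cost on every reaction, while a catalyst spreads that cost over its turnover number.

```python
# Toy illustration (every number invented) of amortizing a reagent's
# production cost over the number of transformations it performs.

def cost_per_transformation(production_cost, turnovers=1):
    """Effective cost of one transformation for a reagent or catalyst."""
    return production_cost / turnovers

# A cheap one-time-use stoichiometric reagent: full cost, every time.
reagent = cost_per_transformation(10.0)               # 10.0 per reaction
# A fantastically expensive enzyme with a turnover number of a million:
enzyme = cost_per_transformation(1_000_000.0, 10**6)  # 1.0 per reaction
print(reagent, enzyme)
```

Even with those lopsided production costs, the catalytic reagent comes out cheaper per transformation, which is the whole case for enzymes (and for catalysis generally).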
I think that Baran's criteria are well worth keeping in mind, although I also think that most synthetic chemists already think this way, to one degree or another. I always gritted my teeth when I put on a protecting group during my total synthesis days, because I knew that I was adding another step (and more potential trouble) down the line when it had to come off again. Mind you, I was putting the thing on to avoid what I saw as even more immediate trouble, but I guess that's one of the things that Baran is saying, that it's time to try to stop making such deals if we can.
Earlier this year we had a paper from the Nicolaou lab on the synthesis of the ABCDEFG ring system of maitotoxin. Now I see that a synthesis of the QRSTU domain has arrived. That's what, twelve rings down? Only twenty more to go, guys. This piece is ". . . appropriately functionalized . . . for further elaboration and coupling with suitably activated neighboring ring systems of maitotoxin for the purposes of constructing larger domains of the natural product." My deepest sympathies to all concerned.
One of the folks over at Chemistry Blog has run into a shortage: he and his labmates have tried to order (-)-sparteine from every supplier in the book, and there's none to be had. So if anyone has a big dusty bottle of it sitting around, you might drop these desperate chemists a line. But that got me thinking about the way things suddenly dry up like this.
The situation is different than for an industrial chemical shortage, like the acetonitrile crunch that we went through a while back (and which has long since eased up). It's quite unusual for a bulk chemical like that to go down; several factors hit all at once in that case, and it affected an awful lot of people who needed the solvent. But fine chemicals are much weirder. When you trace some of them back to their real sources, you sometimes find that there are really only a couple of people in the world at any given time making some of these things. Or, in many cases, you find that there's no one making it at all - someone made a bunch a few years ago for some reason, sold the excess to a supplier, and everyone else has been buying it from that same bottle ever since.
So when one of these small-scale items evaporates, the reason can be supply: no one makes it any more. Or it can be demand-driven: a single drug company's scale-up group can deplete the world's commercial supply of some strange little molecule when they suddenly switch to a 500-gram run. Everyone working in such a group knows to call all the suppliers when they have a prep calling for some weirdo starting material, and they'll often take the precaution of ordering whatever's out there to be had. (That serves as a cushion while they contract someone else to crank out a batch or figure out how to make it themselves). Naturally, you'd rather have your drug candidates depend only on things that can be ordered in tank car lots, but that's just not always possible.
So it could be that someone needed a lot of (-)-sparteine for an asymmetric synthesis recently, and bought up the existing world stocks. But this one sounds like more of a supply problem. There would appear to be customers out there, who have been draining the existing stocks, but no one's been able to replenish them. TCI apparently stated that it's the starting material for (-)-sparteine that has become unavailable, but that sounds a bit funny, since it would surprise me if the material on the market is synthetic. Sparteine is a naturally occurring alkaloid, found in several species of plant, and it's very hard to compete with isolation of the natural product in those cases.
Perhaps TCI means that the usual plant source is unavailable - that's happened before, too. A spike in Tamiflu demand a few years ago suddenly sent the price of star anise up to record levels, since the chiral starting material (shikimic acid) in the usual synthesis was most conveniently isolated from that source. But for sparteine, it looks as if the isolation comes from plants in the broom family, which are not exactly rare shrubs, so I'm not sure what's going on. Any ideas?
For once, I'm going to farm out a "Things I Won't Work With" post to someone else. For those who missed it in the comments, here's the link to the PDF of Max Gergel's extraordinary memoir "Excuse Me Sir, Would You Like to Buy a Kilo of Isopropyl Bromide?" Gergel founded Columbia Organic Chemicals, and if you want to see how it was done in the Old Days, this is the place to go. A sample:
". . .As we chatted, as if the thought had struck him for the first time, the old rogue said, "You know Gergel, I have a prep you could run for us which would make you a lot of money." Now this was the con working on the con. When my mother told me that a gentleman had called from town asking to visit Dr. Gergel there was no one at the plant except the two of us; when Parry, whom I already knew by reputation, sauntered in disguised as a simple country bumpkin I knew he was the director of research for Naval Research Labs, and his mission was to find someone foolhardy enough to make pentaborane. News travels. I met him at the door and told him that I was simply a lab flunky but would fetch Mr. Gergel, that my boss was extremely smart but had been prevented by the war effort (in which he had served valiantly and with distinction) from getting a PhD; that right now Mr. Gergel was extremely busy with a priority reaction but would be able to see him in ten minutes—which gave me time to change my clothes and wash my face. He never realized that we were the same person. Parry chatted with me in the breezy, confidential voice that has been used by every con man since Judas Iscariot and told me that the only reason that the Navy was willing to farm out this fascinating project was simply lack of qualified personnel. That my splendid contribution to Manhattan District was well known by the military, that people spoke of me as a true Southern prodigy. (The old devil was so good that I listened with gradually increasing preparedness to make pentaborane, although I had been forewarned that it was a dog with a capital "D". . .
I came across the book in Duke's chemistry library in 1984, a few years after its publication, and read it straight through with my hair gradually rising upwards. Book 2 is especially full of alarming chemical stories. I suspect that some of the anecdotes have been polished up a bit over the years, but as Samuel Johnson once said, a man is not under oath in such matters. But when Gergel says that he made methyl iodide in an un-air-conditioned building in the summertime in South Carolina, and describes in vivid detail the symptoms of being poisoned by it, I believe every word. He must have added a pound to his weight in sheer methyl groups.
By modern standards, another shocking feature of the book is the treatment of chemical waste. Readers will not be surprised to learn that several former Columbia Organic sites feature prominently in the EPA's Superfund cleanup list, but they certainly aren't alone from that era.
There are probably some other reactions of the same order as this one - but does anyone know a higher one? I'm talking about this four-component condensation reaction, reported from a lab in Iran, which actually makes semi-useful looking oxadiazoles. Anyone know of a five-component condensation? A real one, I mean, that makes a real product, as opposed to dark gooey stuff. Those, I can imagine.
Here's a quick warning for the bench chemists in the crowd: look out if you're making pyridines and using dichloromethane as solvent. This paper reports that the two can react, forming bis-pyridinium compounds - which isn't too surprising, in theory. What's alarming is that this happens at an appreciable rate at room temperature, which is something that I don't think a lot of people knew. I didn't.
As you'd imagine, electron-rich pyridines are the worst offenders. So keep an eye on these guys. . .
When I wrote here about unknown compounds, using aza-steroids as examples, I apparently wasn't thinking far enough afield. I noticed this new paper on a new class of tellura-steroids. I've no doubt that they're new; probably no one has ever thought to make anything that looks quite like this before (there's one other report of a tellura-steroid from 1990). Tellurium remains an element I've never used, but after that barrage of reports from fans of hafnium the other day, I'm sort of afraid to ask what people have used this one for. . .
Last year I wrote about the hideous structure of maitotoxin, with a note about how various groups were kicking around synthetic approaches to it. Now K. C. Nicolaou has a paper out in JACS on the synthesis of a portion of the molecule, which includes the line: ". . .as a prelude to a possible synthesis of large domains of this molecule for biological investigations. . .". Yeah, sure. Betting will now commence on whether or not he'll be able to resist going for the whole thing. As to whether or not that's a good idea, well. . .my views on the subject have already been aired pretty thoroughly.
Well, the first thing I can tell everyone is that I think the entire editorial staff at Chemical and Engineering News read every comment to this post. And that includes the nasty ones, for sure. The readership around here is a self-selected lot, and the commentors even more so, but the sheer volume of responses got a lot of attention.
I noticed a lot of discussion around the "Do we really need more chemists?" theme. Readers will be interested to know that many people at the magazine share their uneasiness with some of the never-ending "scientist shortage" talk. The ACS's own figures (which many here seem to feel are too low) nevertheless show the highest unemployment rates among chemists that the society has ever recorded.
Outside of the issues that came up here on the site, one of the things I suggested was more focus on smaller companies - both in terms of plain science/business news, but also with reference to where they come from. My point was that chemists reading C&E News see all sorts of items about various companies, but it's as if they've condensed out of the air. If there really is any sort of economic recovery coming on, I think that one of the best chances to lower our profession's jobless rate is through startup formation, and I told the people at the magazine that they should keep this in mind.
I wasn't in the discussion groups that touched on another theme that came up here in the comments, the long-running "Women in Chemistry" articles. And it's probably a good thing - I tend to be pretty much an eye-roller when it comes to corporate diversity programs, but I get the feeling that no one at the ACS (or its publications) feels safe doing so much as that, even if they were so inclined. For the record, I have no problem at all, of course, with women in chemistry, or anyone else in chemistry - it's just the let's-all-join-hands march-of-progress stuff that can get tedious. The people whose march through the ranks I most want to promote are the people who are good at it, whoever that might turn out to be.
One thing I found interesting is that the writers, although almost all of them have chemistry training, seem to feel apart from the actual business of chemistry. That's understandable, I suppose, because their profession is really journalism. I told them that not being a journalist made writing a blog a lot easier. . .
The Daily Telegraph in the UK has a story today claiming that a 1951 outbreak of hallucinations and dementia in the French village of Pont-Saint-Esprit was not (as everyone thought) an example of ergot poisoning. No, according to some guy who's writing a book, it was. . .a secret LSD experiment.
Now, there most certainly were secret LSD experiments during the 1950s and 1960s. (The book Storming Heaven has a good account of them, as well as of the history of LSD in general). But it's rather hard to see why the CIA should decide to dose some village in the Auvergne, especially when the symptoms (burning sensations in the extremities as well as hallucinations) seem to match ergotism quite well.
But no matter. I think we can dispose of this new book and its author pretty quickly. Just take a look at some of his scoop:
However, H P Albarelli Jr., an investigative journalist, claims the outbreak resulted from a covert experiment directed by the CIA and the US Army's top-secret Special Operations Division (SOD) at Fort Detrick, Maryland.
The scientists who produced both alternative explanations, he writes, worked for the Swiss-based Sandoz Pharmaceutical Company, which was then secretly supplying both the Army and CIA with LSD.
Mr Albarelli came across CIA documents while investigating the suspicious suicide of Frank Olson, a biochemist working for the SOD who fell from a 13th floor window two years after the Cursed Bread incident. One note transcribes a conversation between a CIA agent and a Sandoz official who mentions the "secret of Pont-Saint-Esprit" and explains that it was not "at all" caused by mould but by diethylamide, the D in LSD.
Laughter may now commence. For the non-chemists in the audience, diethylamide isn't a separate compound; it's the name of a chemical group. And LSD isn't some sort of three-component mixture, it's the diethylamide derivative of the parent compound, lysergic acid. (I'd like to hear this guy explain to me what the "S" stands for). Diethylamides have no particular hallucinogenic properties; they're too small and common a chemical group for anything like that. DEET, the insect repellent, is a common one, and there are plenty of others.
In short, neither the author of this new book, nor the people at the Telegraph, nor the supposed scientific "source" of this quote, know anything about chemistry. This is like saying that the secret of TNT is a compound called "Tri". Nonsense.
Update: see the comments section. Not everyone's buying my line of thought here. . .
I've written both here and elsewhere about flow chemistry, the technique where you pump your reactions through a reaction tube of some sort rather than mixing them up in a flask. And I freely admit that I have a fondness for the idea, but it's definitely not the answer to every problem.
For one thing, I tend to like the idea of sending reactants over a bed of catalyst or solid-supported reagent (what I call Type II or Type III flow reactions in that 2008 link above). Type I reactions, in my scheme, are the ones where you just use a plain tube or channel, and all the reactants are present in solution. A big advantage of those, as far as I can tell, is the ability to handle tricky intermediates that you wouldn't want large amounts of around, or to control potentially runaway exothermic reactions. There's also the possibility of running the reaction stream through some solid-phase purifications and scavengers, the way Steve Ley and his group like to work, which is convenient since you're already pumping the stuff along anyway.
But the sorts of reactions that you often see in the flow-chemistry equipment brochures. . .well, that's something else again. More than one outfit has earnestly tried to sell me a machine based on how well it did a Fischer esterification. My problem wasn't that the reaction was discovered almost in Neanderthal times - it was that Thag run reaction in round bottom flask, work fine, not need flow reactor. I mean, really, this is a nonexistent problem and needs no solution.
So I read this new paper in Angewandte Chemie with interest. The authors are looking at some standard catalytic organic transformations and comparing them carefully between batch mode and a flow setup. They stipulate at the beginning that flow chemistry has the advantages mentioned above, but they're wondering about what it can do for more ordinary chemistry:
"In addition to these developments, general and rather sweeping claims have been made that microreactor systems accelerate organic reactions and that lower catalyst loadings and higher yields can routinely be achieved in these systems compared to those of reactions carried out in flasks. Despite these potential advantages, examples of successful implementation of microflow reaction technologies in either academic organic synthesis or industrial process research and manufacturing remain more isolated than these reports would suggest. However, the implication is that it is only a matter of time before microflow reactors will dominate laboratory studies aimed at both fundamental research and practical applications of complex organic reactions, with our current mode of operation in reaction flasks ultimately becoming a relic of the past. It seems therefore worthwhile to examine the assumptions behind this viewpoint to provide a critical analysis of “flask versus flow” as a means for effecting reactions."
What they find is that there's very little difference. A catalyzed aldol reaction that was studied under flow conditions by the Seeberger lab is shown to perform identically to a batch reaction, if you make sure to run them at the same temperature and with the same catalyst loading. The paper then looks at asymmetric addition of diethyl zinc to benzaldehyde, a model reaction that I often wish would disappear from human consciousness so it would afflict us no more. But here, too, under more challenging heat-transfer conditions, flow showed no differences from batch. The authors point out that this reaction is, in fact, run under industrial conditions, but not in a flow apparatus. Rather, it's done in batch mode, but through good old slow addition of reagent, which also gives you control over exotherms.
The authors specifically exempt all supported-reagent chemistry from their analysis, so that preserves what I like about flow systems. But for homogeneous reactions, the only time they can see an advantage for the flow reactors is when there's a potential for a dangerous rise in temperature. So now we'll see what some of the more flow-oriented people have to say in reply. . .
Well, I have no particular need to make azo-linked compounds (see this morning's post for one reason!). And I have to say, although it's mechanistically interesting, I definitely feel no desire to make them by combining a hydroperoxide and a diazonium salt in one pot. This is not a moment destined to take its place alongside the legendary invention of the chocolate/peanut butter cup.
There was a natural products paper (abstract) that I missed last fall which has finally come out in Bioorganic and Medicinal Chemistry Letters. Let's have a show of hands: how many chemists out there think that this structure is the correct one?
Right. Going back through SciFinder, I don't find any anti-Bredt cyclobutene structures of this sort in the modern era - only speculations about whether or not they could even exist. I hope, for their sake, that the authors have assigned this one correctly, and it certainly would be neat and interesting if they have. But doubts afflict me.
Note - the most recent entry on the (inactive?) med-chem blog "One in Ten Thousand" was a raised eyebrow about this exact paper. Fear not, there's no curse - I'll continue posting. . .
Now here's an oddity: medicinal chemists are used to seeing the two enantiomers (mirror image compounds, for those outside the field) showing different activity. After all, proteins are chiral, and can recognize such things - in fact, it's a bit worrisome when the enantiomers don't show different profiles against a protein target.
There are a few cases known where the two enantiomers both show some kind of activity, but via different binding modes. But I've never seen a case like this, where this happens at the same time in the same binding pocket. The authors were studying inhibitors of a biosynthetic enzyme from Burkholderia, and seeing the usual sorts of things in their crystal structures - that is, only one enantiomer of a racemic mixture showing up in the enzyme. But suddenly one of their analogs showed both enantiomers simultaneously, each binding to different parts of the active site.
Interestingly, when they obtained crystal structures of the two pure enantiomers, the R compound looks pretty much exactly as it does in the two-at-once structure, but the S compound flips around to another orientation, one that it couldn't have adopted in the presence of the R enantiomer. The S compound is tighter-binding in general, and calorimetry experiments showed a complicated profile as the concentration of the two compounds was changed. So this does appear to be a real effect, and not just some weirdo artifact of the crystallization conditions.
The authors point out that many other proteins have binding sites that are large enough to permit this sort of craziness (P450 enzymes are a likely candidate, and I'd add PPAR binding sites to the list, too). We still do an awful lot of in vitro testing using racemic mixtures, and this makes a person wonder how many times this behavior has been seen before and not understood. . .
While I'm putting up odd chemical structures today, I thought I'd add this one, Alasmontamine A, from the latest Organic Letters preprint stream. Natural products scare me:
Anyone who wants to take a crack at this one synthetically, you just go right ahead without me. It is pretty much a dimer, though, so it's only about half as awful as it looks. Which is still enough. It doesn't seem to have much biological activity, but if you can sell it as something to do with green chemistry, nanotech, or alternative energy, you should be able to round up some money, right?
Hmmm. As a colleague just pointed out to me, I've spent some time here defending "me-too" drugs. And just this morning (see the previous post) I take off after what can only be described as "me-too reactions", saying that I don't see the use for so many of them.
Well! The only defense I can offer (until I think of a better one) is that there is no drug category so populated as the aldoxime-to-nitrile conversion is in synthetic chemistry (or acetal formation/deprotection, desilylation, or the other categories I spoke of in that other post). I suppose I might have a tougher time standing up for me-too drugs if there were (say) twenty-nine statins on the market. But still. . ."I'd better put up a post on that", I said. "Better you than someone with a funny pseudonym in your comments section", came the reply.
Here's a question you don't hear discussed very often: are there some synthetic organic chemistry reactions that don't need any more work? I'm moved to ask this because I just came across yet another way that someone has reported to dehydrate an oxime to a nitrile. (No, I won't link to it. You don't need it. No one needs it).
If asked to count the number of times I have seen new reagents that dehydrate oximes to nitriles, I would be at a total loss to even try to guess. But I've seen it over and over and over. Is it possible that we now have enough ways to do this? And that anyone who is contemplating adding another one to the list should instead go do something else?
I'll vote for that. And there are several other transformations that could go on the same list. That doesn't mean that I think that our existing methods for these are all perfect, or that they couldn't be improved. I mean, even for forming amides, I would like an inexpensive reagent that never fails, even with crappy unreactive hindered coupling partners, works at room temperature in about five minutes, and has a ridiculously simple workup. We don't quite have that, do we? But no one's publishing on coupling reagents like that, because they're rather hard to realize. What we get are a bunch of things that are about as useful as what we have already.
And I agree that it's worth having multiple methods to accomplish the same reaction. I've been saved several times by being able to move down the list and find something that works. But how long should the list be? Eight reagents? Ten? Twenty? At what point should something like this cease to become an acceptable field for human effort?
My first nomination, then, for the Retirement Home for Organic Transformations is aldoxime to nitrile. I am willing to face the rest of my chemistry career with only the monstrously long list of reagent systems we have today for that reaction. Further nominations can be made in the comments - I'll assemble a list for another post.
Many synthetic chemists these days use microwave reactors to speed up their reactions, especially metal-catalyzed couplings. But there's been a debate ever since the technique became popular about why it works so well. Some people think that microwave irradiation is just a very efficient and fast way to heat up a reaction, while others have hypothesized some sort of microwave-specific effect, outside of the heating behavior. Metal catalysts have been particular favorites for this possibility.
The former view has been gaining ground, though, and I think we can now say that it's won. A new paper from the lab of microwave chemistry pioneer Oliver Kappe has an ingenious way to settle the argument. They've fabricated a microwave reactor vial out of silicon carbide. It's chemically inert and has very high thermal conductivity, but SiC is completely opaque to microwave frequencies. Reactions run in this vessel heat up just as quickly as those run in the same-sized glass tube, and reach the same internal pressures and working temperatures. But the contents experience no microwave irradiation at all.
Kappe and his co-workers ran a wide variety of reactions head-to-head in the two kinds of vial, including a range of metal catalysts. No differences were observed in the yields, purities, or side products for any of eighteen different types of reaction. That's good enough for me: unless someone can come up with a weirdo outlier catalyst, there is no nonthermal microwave effect on organic chemistry.
Fall is in the air, which (for a very small group of people) brings thoughts of a call from Stockholm. The Nobel Prizes will be announced next week, starting with Physiology or Medicine on Monday. And as in years past, people are lining up with predictions.
Predicting the Chemistry prize is tricky, since it's so often used as a surrogate for the nonexistent Biology prize (and, once in a while, as an overflow Physics one as well). But let's take a look at the field and see if the Scandinavians surprise us or not.
The two best roundups I've seen so far are from the Wall Street Journal and Thomson/Reuters. For Chemistry, the Journal has a pair of biology prize possibilities going to (1) Hartl and Horwich for chaperone proteins, or (2) Winter and Lerner for antibodies (humanized, monoclonal, catalytic). They also have a material-science one for Matyjaszewski (atom-transfer radical polymerization). Note that that last Wikipedia entry seems to show (at least as of this morning) the hand of an interested editor.
And over at the Chem Blog, the current favorites are Grätzel and also Richard Zare, Allan Bard, and William Moerner for single-molecule spectroscopy. Those last two have already picked up the Wolf Prize in Chemistry for that work in 2008, and Zare won one in 2005. It's worth noting that Richard Lerner, from the Thomson list, won back in 1994-1995, along with Peter Schultz, who also is often mentioned when Nobel time comes around.
I think that Grätzel is a good bet, considering that the work seems solid and that solar power is such a hot topic these days. I would like to see Bernd Giese get in on a prize, since I did my post-doc with him, but I consider the electron-transfer work to be more of a long shot, at least for now. List is probably the best shot at a "pure organic chemistry" prize, although I also doubt that this is the way it'll go this year. As always, it wouldn't surprise me a bit if things bleed over from biology - the committee might go as far as to consider telomeres to be chemicals and give it to Blackburn, Greider, and Szostak. And that's certainly worth an award, just not in Chemistry.
We'll know soon. Feel free to put your favorites into the comments, and I'll update this post with the list of suggestions. One of us has to get it right, you'd think.
File this one under "Department of Odd Ideas". There's a paper coming out in JACS that has a neat variation on an idea that's been kicking around for some years now: molecularly-imprinted polymers (MIPs). A MIP is a sort of molded form around some molecular template - you make your polymer in the presence of the desired target molecule, with the idea that you'll then form target-shaped cavities in the resulting gel.
These things have been worked on for years in the analytical chemistry field, since they have the potential to form very robust sensors for a wide variety of substances. The thought has also been that they might serve as pseudo-enzymatic catalysts for some reactions as well, although I get the impression that that's been harder to realize. From the outside, the whole area seems to be one of those that goes on for years as something that's still developing and hasn't quite taken off.
This latest idea may or may not change that, but it's ingenious. What this group (from two French labs) has done is anchor the initiation point of the polymer to an enzyme inhibitor molecule - in this case, to an amidine inhibitor of trypsin. The resulting polymer turns out to have strong inhibitory activity for the enzyme, about a thousandfold higher than the starting amidine - as well it might, if it's muffling the active site like a huge beach towel. They tried a number of potential polymeric systems, settling on some neutral methacrylates, since charged species didn't seem to give binding (or specificity) at all.
The control experiments support their interpretation of what's going on. The resulting polymers don't seem to recognize (or inhibit) a variety of otherwise similar proteins. If control polymers are formed without the anchoring group, they have no inhibitory effect. Similarly, if the experiment is done with an excess of non-polymerizable inhibitor, the effect goes away as well (since the active site is already occupied).
I'm not sure that these things will find much use as enzyme inhibitors in living systems, unless you're looking to shut down some sort of enzyme in the gut. (In that case, you might be able to give someone a glass full of soluble polymeric stuff, with the expectation that it wouldn't be absorbed and would emerge more or less unchanged.) But perhaps there are applications under blood filtration or dialysis conditions, or topical ones. At any rate, it's a neat idea which is now looking for a home. . .
Friday's article on the T2 explosion has had a lot of readers, thanks to links from various outside sources. One line from it has attracted a disproportionate amount of comment - the one where I mentioned that the two owners of the company had only undergraduate degrees. This needs some clearing up; I should have explained myself more clearly in the original post.
First off, there are two things I most definitely didn't mean. I do not, of course, mean to imply that anyone without a graduate degree is incapable of running a complex or hazardous chemical process. Nor am I assuming that there's some sort of magic in a graduate degree program that turns a person into someone who actually can run such things. I've seen enough smart people who didn't go to grad school (and enough fools with PhDs) not to believe either of those.
The key thing here (besides intelligence, which is necessary, but not sufficient) is experience. And what experience gives you, among other things, is a sense of knowing what needs to be worried about. That's what the T2 people seem to have lacked. It's no exaggeration that every time I've described this accident to an experienced scale-up or process chemist, their response has been outrage and incredulity. De mortuis nil nisi bonum, and my apologies in advance to any relatives or colleagues of the deceased, but these people were conducting a very hazardous chemical process, and the lack of care they showed while doing so is stunning. No calorimetry to look for exothermic reactions, a totally inadequate rupture disk for venting that large a reactor, no attempt to set up the process as a flow or feed (which also would have given you built-in temperature control), and no backup for the absolutely crucial cooling system.
Now, it's quite possible that if the people who set up the T2 reactor had been through a graduate program that they would have gone on to do the exact same thing. But it might have helped a bit, which might have been enough to keep four people from being killed. Graduate work is supposed to involve research, experiments that haven't been run before. If you get a degree that's worth anything, you've had the experience of having to figure experimental setups out on your own, and that means that you should have had some chances to think about what might go wrong with them. And the larger the scale of your chemistry, the more you should think about that last point.
Having a couple of reactions take off and spray the inside of your fume hood brings home the problems of heat transfer and pressure relief in a way that no textbook can quite match, and that's not something that you'll experience as an undergraduate in most colleges. Now, it's true that you can experience that at work, too, where the lessons will be even more vivid. That's why in an industrial setting an experienced chemist without a doctorate is almost always much more worth listening to than a freshly arrived PhD - if they're any good, they've seen a lot and they've learned from it.
The people running T2 not only did not take proper precautions, they had been told that they needed to bring in a consultant to look over their process. In other words, "get someone in here who can see things that you're overlooking". But they didn't do that. It's also possible that they might have brought someone in and ignored their recommendations, too, and there's no degree program that can keep you from acting like that, either. They'd run this thing over and over just the way it was, and they probably thought that everything was under control. But it wasn't. And they had no idea.
I noted this item over at C&E News today, a report on a terrible chemical accident at T2 Laboratories in Florida back in 2007. I missed even hearing about this incident at the time, but it appears to have been one of the more violent explosions investigated by the federal Chemical Safety and Hazard Investigation Board (CSB). Debris ended up over a mile from the site, and the blast killed four employees, including one of the co-owners, who was fifty feet away from the reactor at the time. (The other co-owner made it through the blast behind a shipping container and suffered a heart attack immediately afterwards, but survived). Here's the full report as a PDF.
The company was preparing a gasoline additive, methylcyclopentadienyl manganese tricarbonyl (MCMT). To readers outside the field, that sounds like an awful mouthful of a name, but organic chemists will look it over and say "OK, halfway like ferrocene, manganese instead of iron, methyl group on the ring, three CO groups on the other side of the metal. Hmmm. What went wrong with that one?"
Well, the same sort of thing that can go wrong with a lot of reactions, large and small: a thermal runaway. That's always a possibility when a reaction gives off waste heat while it's running (that's called an exothermic reaction - some are, some aren't, depending on the energy balance of the bonds being broken versus the bonds being made, among other things). Heating chemical reactions almost invariably speeds them up, naturally, so the heat given off by such a reaction can make it go faster, which makes it give off even more heat, which makes it. . .well, now you know why it's called a runaway reaction.
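For the curious, here's a back-of-the-envelope illustration of that feedback loop - a toy Arrhenius model with completely made-up parameters, nothing to do with the actual T2 kinetics:

```python
import math

A, EA, R = 1e9, 50e3, 8.314   # arbitrary illustrative Arrhenius parameters

def k(T):
    """Arrhenius rate constant at temperature T (in kelvin)."""
    return A * math.exp(-EA / (R * T))

# Rule of thumb: for a ~50 kJ/mol barrier near room temperature,
# a 10 K rise nearly doubles the reaction rate.
print(f"k(310)/k(300) = {k(310.0) / k(300.0):.2f}")

def simulate(cooling, steps=600, dt=0.1):
    """Euler integration of a toy heat balance; returns the final temperature."""
    T = 300.0
    for _ in range(steps):
        q_rxn = 0.5 * k(T)                # heat released by the reaction
        q_cool = cooling * (T - 300.0)    # heat removed by the cooling jacket
        T += (q_rxn - q_cool) * dt
        if T > 600.0:                     # runaway: no point continuing
            break
    return T

print(f"with cooling:   {simulate(cooling=5.0):.0f} K")   # settles near 300 K
print(f"cooling failed: {simulate(cooling=0.0):.0f} K")   # takes off
```

With the cooling term in place, the heat balance finds a steady state just above ambient; zero it out and the exponential term wins in short order, which is the whole story of that morning in Florida in two lines of arithmetic.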
On the small scales where I've spent my career, the usual consequence of this is that whatever's fitted on the top of the flask blows off, and the contents geyser out all over the fume hood. One generally doesn't tightly seal the top of a reaction flask, not unless one knows exactly what one is doing, so there's usually a stopper or rubber seal that gives way. I've walked back into my lab, looked at the floor in front of my hood, and wondered "Who on earth left a glass condenser on my floor?", until I walked over to have a look and realized where it came from (and, um, who left it there).
But on a large scale, well, things are always different. For one thing, it's just plain larger. There's more energy involved. And heat transfer is a major concern on scale, because while it's easy to cool off a 25-milliliter flask, where none of the contents are more than a centimeter from the outside wall, cooling off a 2500-gallon reactor is something else again. Needless to say, you're not going to be able to pick it up quickly and stick it into 25,000 gallons of ice water, and even that wouldn't do nearly as much good as you might think. The center of that reactor is a long way from the walls, and cooling those walls down can only do so much - stirring is a major concern on these scales, too.
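To put some rough numbers on that (treating both vessels as spheres, which if anything flatters the big one):

```python
import math

def surface_to_volume(volume_m3):
    """Surface-area-to-volume ratio (per meter) for a sphere of the given volume."""
    r = (3.0 * volume_m3 / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 3.0 / r

flask = surface_to_volume(25e-6)               # a 25 mL lab flask
reactor = surface_to_volume(2500 * 3.785e-3)   # a 2500 US-gallon reactor

print(f"flask:   {flask:.0f} m^2 of wall per m^3 of contents")
print(f"reactor: {reactor:.1f} m^2 of wall per m^3 of contents")
print(f"the flask has roughly {flask / reactor:.0f}x more cooling surface per unit volume")
```

Seventy-odd times less wall per liter of hot reaction mixture, before you even get to stirring: that's why heat that wanders off harmlessly on the bench can pile up fatally on scale.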
What's worth emphasizing is that this explosion occurred on the one hundred seventy-fifth time that T2 had run this reaction. No doubt they thought they had everything well under control - have any of you ever run the same reaction a hundred and seventy-five times in a row? But what they didn't know was crucial: the operators had only undergraduate degrees (Update: here's another post on that issue), and the CSB report concludes that they didn't realize that they were walking on the edge of disaster the whole time. As it turns out, the MCMT chemistry was mildly exothermic. But if the reaction got above the normal production temperature (177C), a very exothermic side reaction kicked in. Have I mentioned that the chemistry involved was a stirred molten-sodium reaction? Yep, methylcyclopentadiene dimer, cracking to monomer, metallating with the sodium and releasing hydrogen gas. This was run in diglyme, and if the temperature went up above 199C, the sodium would start reacting energetically with the solvent. Update: corrected these temperature values
Experienced chemists and engineers will recognize that setup for what it is: a black-bordered invitation to disaster. Apparently the T2 chemists had experienced a few close calls in the past, without fully realizing the extent of the problem. On the morning of the explosion, the water cooling line experienced some sort of blockage, and there was (fatally) no backup cooling system in place. Ten minutes later, everything went up. In retrospect, the only thing to do when the cooling went out would have been to run for it and cover as much ground as possible in the ten minutes left, but that's not a decision that anyone usually makes.
Here you see part of the company's reactor vessel, which ended up on some train tracks 400 feet away. The 4-inch-wide shaft of the agitator traveled nearly as far, imbedding itself into the sidewalk like a javelin. My condolences go out to the families of those killed and injured in this terribly preventable accident. The laws of thermodynamics, unfortunately, have no regard for human life at all. They cannot be brushed off or bargained with, and if you do not pay attention to them they can cut you down.
Here's an odd idea that might turn into something useful. A group at Berkeley (spanning both the chemistry and physics departments of Cal-Berkeley and the Lawrence labs) has reported a method for encapsulating organic molecules and releasing them inside a reaction when needed.
What they do is form microcapsules, small polymer spheres, from branched acid chlorides and amines. That technology is already known, but in this case they're also incorporating carbon nanotubes inside the capsules, as shown in the photo. If you do this from a solution of some reagent of interest, you now have it, the solvent, and the carbon nanotubes wrapped up in small polymer beads.
And if you irradiate these things, the carbon nanotubes heat up rapidly, causing the microcapsules to break open. There's the control mechanism. They've demonstrated this for reactions such as the "click" triazole formation and for olefin metathesis. You can follow the reaction progress, and it goes stepwise, moving further every time you hit the solution with a near-IR laser and stopping until you do it again and release more coupling partner.
The limits of this system, so far, are (1) that the microcapsules aren't compatible with the full range of organic solvents, (2) that heat-sensitive reagents probably won't do very well in a system that requires local heating to burst the capsules, and (3) that you eventually have to clean out (presumably by some sort of filtration) the capsule and nanotube residue after things have burst. But some of these can be addressed in further rounds of improvements.
For example, there must be different sorts of polymers that can form these beads, for one thing. And if it's possible to encapsulate things on the surface of a larger sheet of solid material, you could just dip that in and pull it back out when you're through, which should cut down on the capsule residue. (That would also allow you to quantitate how much reagent you've released, by following the surface area of the sheet that you've irradiated with the laser). What would really make this something to see would be if a way could be found to release different sorts of capsules at different wavelengths, selectively. . .
Beware of iron! That's the lesson that's being hammered home these days in synthetic chemistry. I wrote recently about the discovery that a series of iron-catalyzed couplings were actually being caused by trace amounts of copper compounds. Now there's another re-examination of some similar iron couplings that were reported last year.
If you click on that last link, you'll see that there was already trouble with the original work. The authors themselves appear to have had a hard time repeating it, and earlier this year they retracted the paper. This latest publication (from other workers) details their own attempts to reproduce the original iron-catalyzed work. In most cases, they got nothing at all, but once (and only once) they had a wonderful spot-to-spot reaction take place with para-bromoacetophenone, which must have been just the sort of thing that excited the original researchers.
But it could never be reproduced. The best guess is that this one reaction may have been catalyzed by trace amounts of palladium. That's plausible, because, as it turns out, the coupling can be run at high conversion with one ten-thousandth of a per cent of palladium acetate. Yes, a substrate-to-catalyst ratio of one million to one is sufficient, and that's the kind of activity that makes it very, very hard to assume that trace amounts of palladium salts aren't doing the work.
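For the record, here's the arithmetic on that loading (the 1 mol run is a hypothetical scale I've picked for illustration):

```python
# One ten-thousandth of a percent, expressed as a mole fraction:
loading_percent = 1e-4
catalyst_fraction = loading_percent / 100.0        # = 1e-6, i.e. 1 ppm molar
substrate_to_catalyst = 1.0 / catalyst_fraction
print(f"substrate : catalyst = {substrate_to_catalyst:,.0f} : 1")

# On a hypothetical 1 mol run, that's a nearly unweighable speck:
mw_pd_oac2 = 224.5                                 # g/mol, palladium(II) acetate
mg_catalyst = 1.0 * catalyst_fraction * mw_pd_oac2 * 1000.0
print(f"Pd(OAc)2 for a 1 mol run: {mg_catalyst:.2f} mg")
```

A quarter of a milligram of catalyst turning over a mole of substrate - which is exactly why trace contamination is so hard to rule out.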
It also makes you wonder why anyone would use anything else, at least for activated systems like para-Br acetophenone. In the future, anyone trying to come up with a non-palladium coupling protocol had better stick with the tough reactions that don't work well anyway. That will keep this sort of thing from happening again - and those are the kinds of reactions we need help with, anyway. A new catalyst for coupling red-hot electron-poor aryl bromides, on the other hand, will be greeted with yawns, and with suspicion as well.
Here's an interesting paper that some of you may have seen in J. Med. Chem.: "Heteroaromatic Rings of the Future". That's an odd title, but an appropriate one.
For the non-chemists in the crowd who made it to this paragraph, heteroaromatic rings are a very wide class of organic compounds. They're flat cyclic structures with one or more nitrogen, oxygen, or sulfur atoms in the ring - I'll leave out explaining the concept of "aromaticity" for now, but suffice it to say that it makes them flat and gives them some other distinct properties. These structures are especially important in medicinal chemistry. If you stripped out all the drugs that contain something from this class, you'd lose a bit under half of the current pharmacopoeia, and that share has lately been increasing.
The authors have sat down and attempted to work out computationally all the possible heteroaromatic systems. If you include a carbonyl group as a component of the ring, you get 23,895 different scaffolds (and only 2986 if you leave the carbonyl out of it). Their methods to define and predict that adjective "possible" are extensive and worth reading if you're curious; they did put a lot of effort into that question, and their assumptions seem realistic to me. (For example, right off, they only considered mono- and bicyclic systems, 5- and 6-membered only, C, H, N, O and S).
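Just to give a flavor of the combinatorics (this is nowhere near the paper's method, which applies real valence and aromaticity filters), here's a toy count of the distinct C/N colorings of a single six-membered ring, treating rotations and reflections as the same ring - the benzene-through-hexazine series:

```python
from itertools import product

def canonical(ring):
    """Smallest string among all rotations and reflections of the ring."""
    n = len(ring)
    forms = [ring[i:] + ring[:i] for i in range(n)]
    rev = ring[::-1]
    forms += [rev[i:] + rev[:i] for i in range(n)]
    return min(forms)

# Every way to place C or N at the six ring positions, deduplicated
# by ring symmetry (the ring doesn't care where you start drawing it):
skeletons = {canonical("".join(p)) for p in product("CN", repeat=6)}
print(len(skeletons))  # 13: benzene, pyridine, the three diazines, and so on
```

Thirteen skeletons from one ring size and two atom types; let the ring sizes, heteroatoms, fusions, and carbonyls multiply out and you can see how the full enumeration lands in the tens of thousands.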
At any rate, only 1701 of those 23,895 have ever been reported in the literature. And it looks as if reports of new ring systems reached a peak in the late 1970s, and have either dropped off or (at the very least) never exceeded those heights since then. The authors estimate that perhaps 3,000 of their list are synthetically feasible, with a few hundred of them being notably more likely than the rest. Their paper, in fact, seems to be a brief to alter that publication trend by explicitly pointing out unexplored synthetic territory. It wouldn't surprise me if they go back in a few years to see if they were able to cause an inflection point.
I hope they do. I'm a great believer in the idea that we medicinal chemists need all the help we can get, and if there are reasonable ring systems out there that we're not exploiting, then we should get to them. Adventurous chemists should have a look.
Things are pretty quiet around the industry these days, so my blogging thoughts have been turning to Big General Problems. And here's one that I know that people are working on, but which I think we as chemists are going to have to understand much better: localization.
"Say what?" is the usual response to that, but hear me out. What I mean is the trick that living cells use for their feats of multistep synthesis. Enzymes aren't generally just floating around hoping to bump into things - well, some of them are, but a lot of them are tied to specific regions. They're either membrane-bound, or they're expressed in structures where they don't get a lot of chances to diffuse out into the mix. The interior of a cell, on the whole, is a pretty intensely structured place (as it would have to be).
And that allows specific reactions to take place away from other things that might interfere, which is something that we have a hard time doing in the lab. If you have a five-step synthesis, it's a pretty safe bet that you don't dump the reagents for all five steps into the pot at the same time and hope for the best. No, we generally have to fish out the product and take it on separately. It's often a real achievement (especially on larger scale) to be able to "telescope" two steps into one flask and skip any sort of product isolation between them. Doing it with more than one step is even more rare (and more useful when you can bring it off).
There's been a lot of work on one-pot cascade or domino reaction systems, and that's a step toward what we need. But most of these cases are reaction-driven: people find chemistries that can be run in this fashion, and then try to exploit them to make whatever can be made. Nothing wrong with that, but it would be nice to have product-driven approaches, where you'd look at a particular structure and figure out which multicomponent reaction scheme would work best for it. Generally speaking, we just don't have enough worked-out systems to be able to do that.
And that's where I think that some new technologies could help, specifically flow chemistry and/or microfluidics. Instead of figuring out reactions that can exist while all stirring around together in one pot, this approach takes it as a given that many transformations probably just can't be done that way. And if you can't have one big reactor with multiple things in it, then why not make multiple reactors, each with a different thing in it? Flow systems can, in theory, send compounds through a series of isolated reactions, moving the material physically through various zones and reagents. Not every reaction is perfect of course, but you can often use scavenger reagents along the way to strip out potential interfering impurities before the next step.
I like the idea, but there are a lot of things to be done to make it work. Probably the most advanced organic synthesis that's being done in this style is in Steve Ley's lab at Cambridge. I always enjoy reading their flow papers, which make clear that there's some significant optimization that needs to be done before you can throw the switch and stand back. Some other multistep flow work can be found here and here, and the same comment applies: there's a lot of preparation involved.
My hope is that these kinds of things will eventually move toward more of a plug-and-play system, where you put in the various cartridges and choose a protocol from the list of best-general-fits for your planned reactions. We're quite a ways from that, but I don't see why it wouldn't be possible.
I had a printout of the structure of maitotoxin on my desk the other day, mostly as a joke to alarm anyone who came into my office. "Yep, here's the best hit from the latest screen. . .I hear that you're on the list to run the chemistry end. . .what's that you say?"
This is, needless to say, one of the largest and scariest marine natural product structures ever determined (and that determination has been no stroll past the dessert table, either).
But that hasn't stopped people from messing around with it. And there's much speculation that other people are strongly considering messing around with it, too - you synthetic chemists can guess the sorts of people that this might be, and their names, and what it might be like to sit through the seminars that result, and so on.
I fear that a total synthesis of maitotoxin would be largely a waste of time, but I'm willing to hear arguments against that position. Just looking at it, though, inspires thought. This eldritch beastie has 98 chiral centers. So let's do some math. If you're interested in the SAR of such molecules, you have your choice of (two to the 98th) possible isomers, which comes out to a bit over (3 times ten to the 29th) compounds. This is. . .a pretty large number. If you're looking for 10mg of each isomer to add to your screening collection (no sense in going back and making them again), then you're looking at a bit over half the mass of the entire Earth. And that's just the sheer compounds; we're not counting the weight of vials, which will, I'd say, safely move you up toward the planetary weight of a low-end gas giant. We will ignore shelving considerations in the interest of time.
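If you want to check my math (the Earth's mass is a round number, and the 4 g vial is a made-up but plausible figure):

```python
EARTH_KG = 5.97e24          # mass of the Earth, roughly

isomers = 2 ** 98           # one stereoisomer per arrangement of 98 centers
print(f"{isomers:.2e} stereoisomers")

# 10 mg of each isomer, totaled up in kilograms:
collection_kg = isomers * 10e-6
print(f"screening collection: {collection_kg / EARTH_KG:.2f} Earth masses")

# Now add a ~4 g glass vial for each one:
vials_kg = isomers * 4e-3
print(f"vials alone: {vials_kg:.1e} kg - gas-giant territory")
```

The vials come out somewhere between Saturn and Jupiter, depending on your glassware supplier.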
Recall that yesterday's post gave a number of about 27 million compounds below 11 heavy atoms. You could toss 27 million compounds into a collection of ten to the 29th and never see them again, of course. But that brings up two points: one, that the small-compound estimate ignores stereochemistry, and we've been getting those insane maitotoxin numbers by considering nothing but. The thing is, with only 11 non-hydrogen atoms, there aren't quite as many chances for things to get out of control. The GDB compound set goes up only to 110 million or so if you consider stereoisomers, which actually isn't nearly as much as I'd thought.
But the second point is that this shows you why the Berne group stopped at 11 heavy atoms, because the problem becomes intractable really fast as you go higher. It's worth remembering that the GDB people actually threw out over 98% of their scaffolds because they represented potential ring structures that are too strained to be very stable. And they only considered C, N, O and F as heavy atoms (even adding sulfur was considered too much to deal with, computationally). Then they tossed out another 98 or 99% of the structures that emerged from that enumeration as reactive and/or unstable. Relax your standards a bit, allow another atom or two, bump up the molecular weight, do any of those and you're going to exceed anyone's computational capacity. Update: the Berne group has just taken a crack at it, and managed a reasonable set up to 13 heavy atoms, with various simplifying assumptions to ease the burden. (If you want to mess around with it, it's here, free of charge.)
No, there are a lot of compounds out there. And if you look at the really big ones - and maitotoxin is nothing if not a really big one - there are whole universes contained just in each of them. (Bonus points for guessing the source of the name of the post, by the way).
Organic synthesis as we know it can't go on without metal-catalyzed bond-forming reactions. There are too many of them, and they're just too useful. Palladium's the workhorse, followed by copper, then you've got rhodium, nickel, and a host of others (gold's been popular the last few years). We have a. . .fairly good idea of what's going on in these reactions, but not quite good enough. If we really understood all the factors involved, there wouldn't be six garbonzillion different sets of conditions for these things, would there?
A short paper's just come out in Angewandte Chemie that illustrates some of the trickiness involved. Carsten Bolm's group at Aachen has published several interesting iron-catalyzed coupling reactions using good old ferric chloride. These are aryl-amine, aryl-ether, aryl-amide and aryl-sulfide-forming procedures, which cover a lot of ground. (Interestingly, it was one of those sulfide papers that was recently plagiarized by another set of authors). But there were always a few kinks, such as variable yield depending on which bottle of ferric chloride was used.
Well, organometallic chemists are used to that sort of thing. But Bolm has gone back to look at things closely, in collaboration with Stephen Buchwald of MIT (whose group has published many similar couplings with other metal systems), and found a surprise. The iron chloride isn't doing a thing. In fact, as you go to more and more pure sources of the reagent, the yield drops off. But it never goes away, even with the 99.9% pure stuff. That's because it seems to be copper (I) contaminants doing the coupling, even at the parts-per-million level.
There are some startling tables in the paper. For coupling pyrazole onto an aryl iodide, for example, Bolm's group had found in 2007 that they could get 87% yield using >98% ferric chloride from E. Merck, along with dimethylethylene diamine as a cosolvent. If you use the >98% from Aldrich under the same conditions, though, you get 26% yield. And the Aldrich >99.99 stuff gives you only 9%. But if you add five ppm copper (I) oxide to that last reaction, the yield goes up to 78%. And if you leave the ferric chloride out completely, and just go with a trace of copper, the yield is exactly the same (it goes down if you run the same trace-of-copper without the diamine, which seems to be complexing it up into solution).
The other couplings that were reported seem to follow the same pattern. This must really be a disappointment to Bolm and his group, because their work was, among other things, an attempt to get away from copper and palladium. Still, this appears to be a much cleaner and more efficient copper reaction than a lot of the procedures out there.
This sort of thing has happened before in organometallic chemistry. There's a well-known example of nickel contamination in a chromium-mediated reaction from the mid-1980s, and more recently, a report of supposed "metal-free" couplings which appear to have been catalyzed by parts-per-billion levels of palladium found in the sodium carbonate being used as a base, of all things. Tricky things, these metals.
I've been contacted by several people over the last few weeks about the TMS diazomethane-linked fatality in Nova Scotia (first written about here). Many more details are emerging about the case, chief among them that the fume hoods in the lab were apparently down for maintenance during this time.
Here's a newspaper article that's just appeared. I'm quoted in it as saying that I would have refused to work under such conditions, and I stand by that. But that's not surprising: in every industrial lab I've ever worked in, when the fume hoods go down, people roll their eyes and walk out the door. I most especially cannot recommend working with something like TMS diazomethane in such a situation.
I’ve written before about the copper-catalyzed triazole formation (often referred to as “click chemistry”). It’s turned into a very useful way to stick all sorts of molecules and structures together, and is showing up in materials science, biochemistry, organic synthesis and other fields.
Now Fraser Stoddart’s lab has a new variation on the technique, using atomic force microscopy (AFM) equipment. If you’re not familiar with that machinery (invented in the 1980s), it’s rather startling. An AFM rig uses a very fine metal tip (fine, as in “down to one atom or so at the end” fine), which is brought down very close to a solid surface. And that’s close as in “within the size of a molecule or so” close. Once you’re ranged in, you can run these tips around in any direction you choose, and a lot of ingenious measurements can be obtained. Both modern surface and solid-state chemistry live off this family of instruments, with good reason.
One thing you can imagine doing is lowering some sort of active catalyst down near the surface and doing chemical reactions. If you want to tear up the surface below in some controlled fashion, that’s a bit easier, through straight oxidation or reduction. Forming bonds is a bit trickier, but that’s been achieved with some palladium reactions. Now Stoddart’s group has gotten it to work with the triazole chemistry, and in very straightforward fashion.
If you take an azide-functionalized silicon wafer (and these are well known), you can then dissolve some acetylene compound up in ethanol and put a drop of it on the surface. And lowering an AFM tip which has simply been coated with copper metal down to the surface is enough to initiate the reaction. As the tip moves, it “writes” a path of triazoles. The conditions are very mild, the resolution of the lines is very high (down to about 50 nanometers wide), and it turns out that the reaction is so fast that the tip can be moved at relatively high speed.
This opens up a potential way to stick all sorts of molecules to solid surfaces. There are a lot of ways known to do that, of course, but this one could have some real advantages. The selectivity and high resolution seen here could allow for very dense and complicated arrays of complex molecules to be laid down. Since the triazole reaction is compatible with all sorts of biomolecules, this could provide a way to produce functionalized chips that would currently be rather hard (or nearly impossible) to make. And now that we can make them, we can start thinking up unusual things to do with them.
So, people like me spend their time trying to make small molecules that will bind to some target protein. So what happens, anyway, when a small molecule binds to a target protein? Right, right, it interacts with some site on the thing, hydrogen bonds, hydrophobic interactions, all that – but what really happens?
That’s surprisingly hard to work out. The tools we have to look at such things are powerful, but they have limitations. X-ray crystal structures are great, but can lead you astray if you’re not careful. The biggest problem with them, though (in my opinion) is that you see this beautiful frozen picture of your drug candidate in the protein, and you start to think of the binding as. . .well, as this beautiful frozen picture. Which is the last thing it really is.
Proteins are dynamic, to a degree that many medicinal chemists have trouble keeping in mind. Looking at binding events in solution is more realistic than looking at them in the crystal, but it’s harder to do. There are various NMR methods (here's a recent review), some of which require specially labeled protein to work well, but they have to be interpreted in the context of NMR’s time scale limitations. “Normal” NMR experiments give you time-averaged spectra – if you want to see things happening quickly, or if you want to catch snapshots of the intermediate states along the way, you have a lot more work to do.
Here’s a recent paper that’s done some of that work. They’re looking at a well-known enzyme, dihydrofolate reductase (DHFR). It’s the target of methotrexate, a classic chemotherapy drug, and of the antibiotic trimethoprim. (As a side note, that points out the connections that sometimes exist between oncology and anti-infectives. DHFR produces tetrahydrofolate, which is necessary for a host of key biosynthetic pathways. Inhibiting it is especially hard on cells that are spending a lot of their metabolic energy on dividing – such as tumor cells and invasive bacteria).
What they found was that both inhibitors do something similar, and it affects the whole conformational ensemble of the protein:
". . .residues lining the drugs retain their μs-ms switching, whereas distal loops stop switching altogether. Thus, as a whole, the inhibited protein is dynamically dysfunctional. Drug-bound DHFR appears to be on the brink of a global transition, but its restricted loops prevent the transition from occurring, leaving a “half-switching” enzyme. Changes in pico- to nanosecond (ps-ns) backbone amide and side-chain methyl dynamics indicate drug binding is “felt” throughout the protein."
There are implications, though, for apparently similar compounds having rather different effects out in the other loops:
". . .motion across a wide range of timescales can be regulated by the specific nature of ligands bound. Occupation of the active site by small ligands of different shapes and physical characteristics places differential stresses on the enzyme, resulting in differential thermal fluctuations that propagate through the structure. In this view, enzymes, through evolution, develop sensitivities to ligand properties from which mechanisms for organizing and building such fluctuations into useful work can arise. . .Because the affected loop structures are primarily not in contact with drug, it is reasonable to envision inhibitory small-molecule drugs that act by allosterically modulating dynamic motions."
There are plenty of references in the paper to other investigations of this kind, so if this is your sort of thing, you'll find a lot to follow up on. One thing to take home, though, is that not only are proteins mobile beasts (with and without a ligand bound to them), but that this mobility is quite different in each state. And keep in mind that the ligand-bound state can be quite odd compared to anything else the protein experiences otherwise. . .
I’ve written here before about the "click" triazole chemistry that Barry Sharpless’s group has pioneered out at Scripps. This reaction has been finding a lot of uses over the last few years (try this category for a few, and look for the word "click"). One of the facets I find most interesting is the way that they’ve been able to use this Huisgen acetylene/azide cycloaddition reaction to form inhibitors of several enzymes in situ, just by combining suitable coupling partners in the presence of the protein. Normally you have to heat that reaction up quite a bit to get it to go, but when the two reactants are forced into proximity inside the protein, the rate speeds up enough to detect a product.
Note that I said “inside the protein”. My mental picture of these things has involved binding-site cavities where the compounds are pretty well tied down. But a new paper from Jim Heath’s group at Caltech, collaborating with Sharpless and his team, demonstrates something new. They’re now getting this reaction to work out on protein surfaces, and in the process making what are basically artificial antibody-type binding agents.
To start with, they prepared a large library of hexapeptides out of the unnatural D-amino acids, in a one-bead-one-compound format. (Heath’s group has been working in this area for a while, and has experience dealing with these - see this PDF presentation for an overview of their research). Each peptide had an acetylene-containing amino acid at one end, for later use. They exposed these to a protein target: carbonic anhydrase II, the friend of every chemist who’s trying to make proteins do unusual things. The oligopeptide that showed the best binding to the protein’s surface was then incubated with the target CA II protein and another library of diverse hexapeptides. These had azide-containing amino acids at both ends, and the hope was that some of these would come close enough, in the presence of the protein, to react with the anchor acetylene peptide.
Startlingly, this actually worked. A few of the azide oligopeptides did do the click triazole-forming reaction. And the ones that worked all had related sequences, strongly suggesting that this was no fluke. What impresses me here is that (1) these things were lying on top of the protein, picking up what interactions they could, not buried inside a more restrictive binding site, and (2) the click reaction worked even though the binding constants of the two partners must not have been all that impressive. The original acetylene hexapeptide, in fact, bound at only 500 micromolar, and the azide-containing hexapeptides that reacted with it were surely in the same ballpark.
The combined beast, though, (hexapeptide-triazole-hexapeptide) was a 3 micromolar compound. And then they took the thing through another round of the same process, decorating the end with a reactive acetylene and exposing it to the same azide oligopeptide library in the presence of the carbonic anhydrase target. The process worked again, generating a new three-oligopeptide structure which now showed 50 nanomolar binding. This increase in affinity over the whole process is impressive, but it’s just what you’d expect as you start combining pieces that have some affinity on their own. Importantly, when they made a library on beads by coupling the whole list of azide-containing hexapeptides with the biligand (through the now-standard copper-catalyzed reaction), the target CA II protein picked out the same sequences that were generated by the in situ experiment.
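As a rough check on how those affinities stack up, here's a back-of-the-envelope sketch (my own arithmetic, not an analysis from the paper) converting each quoted Kd into a binding free energy via ΔG = RT ln Kd. Each round of in situ assembly buys roughly 2–3 kcal/mol of binding energy, which is about what you'd hope for from adding a weakly binding fragment:

```python
import math

# Gas constant in kcal/(mol*K) and an assumed room temperature of 298 K.
R = 1.987e-3
T = 298.0

def delta_g(kd_molar):
    """Binding free energy (kcal/mol) from a dissociation constant in M:
    dG = RT ln(Kd), relative to the 1 M standard state."""
    return R * T * math.log(kd_molar)

# Affinities quoted in the post for each round of click assembly
for name, kd in [("anchor hexapeptide", 500e-6),
                 ("biligand",           3e-6),
                 ("triligand",          50e-9)]:
    print(f"{name:18s}  Kd = {kd:.0e} M   dG = {delta_g(kd):+.1f} kcal/mol")
```

That puts the anchor peptide around -4.5 kcal/mol and the triligand near -10 kcal/mol, so the stepwise gains are real but each new piece contributes rather less than it would binding on its own.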
So what you have, in the end, is a short protein-like thing (actually three small peptides held together by triazole linkers) that has been specifically raised to bind a protein target – thus the comparison to antibodies above. What we don't know yet, of course, is just how this beast is binding to the carbonic anhydrase protein. It would appear to be stretched across some non-functional surface, though, because the triligand didn't seem to interfere with the enzyme's activity once it was bound. I'd be very interested in seeing if an X-ray structure could be generated for the triligand complex or any of the others. Heath's group is now apparently trying to generate such agents for other proteins and to develop assays based on them. I look forward to seeing how general the technique is.
This result makes a person wonder if the whole in situ triazole reaction could be used to generate inhibitors of protein-protein interactions. Doing that with small molecules is quite a bit different than doing it with hexapeptide chains, of course, but there may well be some hope. And there's another paper I need to talk about that bears on the topic; I'll bring that one up shortly. . .
You don’t often get to see the sort of fistfight that’s detailed in the latest issue of Organic Process Research and Development. Patents whose procedures are hard to reproduce are familiar to every industrial chemist, unfortunately, but coming across one that seems completely mistaken in its most important details is rare. And this is the first time I’ve seen one of these dragged out into the open literature for a give-and-take with the original authors about whether they’re delusional or not. (The editors of the journal seem to be in new territory themselves on this one).
I should add here that the great majority of patent preps I’ve followed have worked pretty much as described, and I don’t think that my success rate in reproducing them is any worse than procedures from the chemical journals. Some journals more than others, of course (another topic!), but OPRD is known to be very, very reproducible indeed. As it should be: it’s a journal for process chemists, whose livelihood is refining chemical routes until they’re scalable, economical, and (very importantly) until they work exactly the same way every time they’re run.
So here’s the situation. In 2007, the journal published a paper by a group from Dr. Reddy’s Laboratories, a large Indian company with both a generics business and its own drug discovery operation. (There are, I should note, some academic co-authors who seem to have completely disappeared during this current food fight). The paper covered a synthesis of S-citalopram, and it caught the attention of the process chemists at Lundbeck, in Denmark. And well it might – citalopram (Celexa and other brand names), an antidepressant, was discovered there in the late 1980s, and has been generic since 2003.
The original paper (Eliati et al.) described a new alkylation reaction route to produce a key intermediate and a resolution of it (and of citalopram) into pure enantiomers by forming chiral salts. So far, so good – these sorts of things are the heart of process chemistry, and entirely appropriate for a paper in OPRD. But only if they work.
The Lundbeck group (Dancer and de Diego) had tried that exact resolution of citalopram many times themselves, though, without success, so they were rather taken aback to see it published as working just fine. They detail their attempts to reproduce the Eliati procedure, and demonstrate in great detail that it indeed does not work as written. I won’t go into their experimental work, which is very extensive and painstaking, but nothing the Lundbeck team could do resulted in anything better than a 55:45 mixture, which is a rather poor substitute for a pure compound. Midway through their paper, they start putting the word “resolution” in quotation marks when discussing the Eliati procedure, and the arm’s-length-and-holding-the-nose attitude is very successfully conveyed. The phrases “enormous disparity”, “effectively impossible”, “extremely unlikely”, and “not feasible in any meaningful, practical sense” all make appearances.
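For scale: a 55:45 mixture works out to an enantiomeric excess of only 10%, which is why it's such a poor substitute for a resolution. Here's the standard calculation, as a quick sketch (my own illustration, not from either paper):

```python
def enantiomeric_excess(major, minor):
    """ee (%) from the relative amounts of the two enantiomers:
    (major - minor) / (major + minor) * 100."""
    return 100.0 * (major - minor) / (major + minor)

print(enantiomeric_excess(55, 45))   # the Lundbeck group's best result: 10% ee
print(enantiomeric_excess(99, 1))    # what a usable resolution looks like: 98% ee
```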
They also were surprised at the alkylation reaction reported in the Eliati paper, which is the only one of its kind reported in the literature – well, other than a patent by the same team from Dr. Reddy’s, that is. The weird thing about it is that it uses 3-chloropropylamine, apparently as the isolated free base. My chemistry audience will now be raising their eyebrows, because this is not a compound that you’d expect to be very happy as anything but a salt. It should, in fact, start reacting with itself quite vigorously, with plenty of HCl being given off in the process. But the Eliati procedure doesn’t have enough base to allow for anything else, and they use (supposedly) 12 grams of the stuff in 2.5 mL of DMSO. Since no paper or patent has ever reported isolation of this free base, it’s a rather odd compound to drop into your manuscript without explanation.
Another example of the same reaction in the Eliati paper is even weirder. Not only do they use this never-before-seen chloropropylamine, but this time they do the reaction in acetone, at 60 to 65 °C, by first adding 7.5 grams of potassium t-butoxide to 40 mL of the acetone. Now that prep should get the attention of the organic chemists in the audience, because that sounds like an excellent way to make a bunch of hot polymerized gunk. For one thing, acetone boils at 56 °C, so how you get it to 65 is a real stumper. And adding a strong base to it is a surefire way to deprotonate it and start the famous aldol condensation (and every other base-catalyzed ketone reaction you can think of, for that matter). The Lundbeck group tried it, out of sheer curiosity, and got:
“. . . a vigorous/violent reaction. . .with the formation of a quantity of a white solid. (It had) an odor of higher ketones/alkenes, and analysis by NMR indicated that it was a complex mixture of products, with peaks consistent with condensation products of acetone.”
A solid majority of the chemists who read that passage, you can bet, added a “No shit” to the end. This is the sort of thing a sophomore undergraduate should be able to spot, and my guess is that whoever reviewed the Eliati paper for OPRD has had some interesting correspondence with the journal. The resolution is one thing – that’s impossible to spot if you haven’t worked with that exact reaction. But this alkylation step is ridiculous.
The journal gave Eliati and co-workers a chance to respond to all this, and followed that with a last word from Dancer and de Diego at Lundbeck. These things are all published back to back; it's like watching a boxing match. The Dr. Reddy’s group runs up the white flag immediately on the chiral salt resolution, actually, agreeing that their published procedure doesn’t work. But they claim that a modified version of the procedure does work, and that they “inadvertently missed incorporating a few words in the text” of the article which would have made this clear. The Lundbeck group isn’t buying this for a minute. They point out that the manuscript would have had to be substantially reworked to make it into this different procedure, for one thing. And even worse, the details of it as reported by Eliati are internally inconsistent, with the masses and ratios not even adding up. And finally, they report their own attempts to reproduce the new procedure, and find that it, too, is basically impossible.
And as for the alkylation, Eliati et al. claim that if you work quickly, you can use the chloropropylamine free base as they described. They also present a table showing how long it lasts under different conditions and in different solvents, and claim to have done the best variation of the reaction on a six-kilo scale. The acetone reaction, they admit, wasn’t as clean, but they didn’t spend much time talking about that because their “aim was to isolate the desired product instead of the aldol product.” Dancer and de Diego aren’t very happy with that either, continuing to insist that the acetone procedure is “completely unworkable”. As for the chloropropylamine, they welcome the clarifications in the second Eliati paper, but point out that said details contradict themselves at one point, and at any rate, none of them are to be found in the corresponding Dr. Reddy’s patent application, which continues to talk about using only the free base, and (on top of everything else) in a way that makes no sense.
The final Lundbeck reply has a telling line in the acknowledgements, which is, in its way, even more pointed than anything else in their paper: “One of us (R.J.D.) thanks Sir John Cornforth for inspiration derived from a series of his articles in a similar case some years ago.” That’s the famous “Some Comments on a Paper by Samir Chatterjee” affair, Tetrahedron Letters 1980, 709 and 1982, 2213. Cornforth completely demolished some heterocyclic chemistry work by the unfortunate Chatterjee, pointing out by several lines of evidence that the whole thing had to have been faked. Name-dropping this example is about as direct a statement of your opinion as the scientific literature will allow. . .
I'm taking the day off from cranking out the medicines of tomorrow (OK, the day after tomorrow), so there will be no post today.
I did want to add something about yesterday's post on the La Clair/hexacyclinol controversy. I'd like to ask that people not fill up the comments with ad hominem remarks or potentially libelous statements about La Clair himself. I don't mind saying that the evidence so far makes it very hard for me to believe his original paper, and I also have to say that I haven't seen any convincing explanations for all the discrepancies that have turned up. And I think that those opinions are shared by many people who've followed the story.
But let's keep it on a scientific plane, if possible. Opinions on NMR spectra and the like are one thing, but personal insults are another, and those we don't need. I try not to have to go in and hose out the comments sections around here.
Remember hexacyclinol? Some readers are probably groaning and thinking “Oh, yes, indeed”, which may make up for the ones who are saying “Remember what?”
Hexacyclinol is a complex natural product, but after that statement the arguing begins. James La Clair published a synthesis of it in 2006 in Angewandte Chemie, one of the most prestigious chemistry journals, but the reception of the paper did nothing to help the prestige of either La Clair or the journal. Readers immediately seized on odd spectral data and experimental details to ask whether the molecule had been made at all and just how well the manuscript had been refereed.
The story got even messier later in the year when synthetic organic chemist Scott Rychnovsky weighed in with a paper suggesting that the structure of the natural product had been misassigned to start with. This was followed by a synthesis, by John Porco and his group, of Rychnovsky's proposed structure, which turned out to match the NMR spectra of the original natural product. Since they also had an X-ray crystal structure, you would think that this would have ended the argument, at least at the level of what hexacyclinol looks like. The argument about what La Clair actually made, though, continued. And La Clair himself suggested that he and Rychnovsky had made two different molecules that just happened to have very similar NMR spectra.