Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
This is a neat little structure, and it's truly a pain to synthesize (12 steps from myo-inositol). But it now seems to hold the record as the most polar aliphatic compound ever measured, and well it might. David O'Hagan of St. Andrews reported it at the ACS meeting in Denver - one side of that molecule is full of electronegative fluorines, and the other side has what's left. In bulk, this would be a very unusual solvent to play around with in chromatography and the like, but I don't think we're going to see any four-liter jugs of it around the lab any time soon.
Update: annoyingly for my speculations, the compound appears to be a solid. Maybe the smaller-ring analogs?
I've complained in the past (and I'm not the only one) about total synthesis work that doesn't (or maybe can't) deliver relevant analogs of the final product. That's been one of the traditional rationales for the work, but it's not always followed up on. But here's one that does: Dale Boger's group at Scripps has published another paper on modifying vancomycin, work that has grown out of their total synthesis efforts in the area.
This is clearly an area with important applications - in fact, there are three synthetically modified antibiotics of this kind on the market (oritavancin, dalbavancin, and telavancin). These have modifications, notably the addition of hydrophobic side chains, that change their activity by helping them bind to the cell membrane, and (at least for telavancin) also seem to give them new mechanisms of membrane disruption as well.
Vancomycin resistance is known, but it's been very slow to develop, compared to many other antibiotics. That's probably because it's not binding to a protein target (which is directly coded for by the bacterial DNA, providing a way out through mutation). Instead, vancomycin binds to D-Ala-D-Ala, a key component in the construction of the bacterial cell wall. That's a much harder mechanism for the bacteria to catch on to, as it were, but when they do, it's very bad news indeed, since vancomycin itself is often a last line of defense in the clinic against infections like MRSA. In this paper, the Boger group is adding one of the commonly used hydrophobic groups (a para-chlorobiphenyl) and simultaneously changing a key amide carbonyl, as found in their earlier binding-pocket work, in the hopes that the double modification would completely evade the defenses of the resistant bacterial strains.
Does it? They report a variety of changes to that amide, and the amidine and methylene variations turn out to have excellent potency against both the wild-type and resistant strains. This is a very nice result indeed, showing that the two modifications can work together, and this could point the way to a new generation of vancomycins that (with any luck) can continue confusing bacteria for many years to come. Congratulations to Boger and his group - this is very difficult chemistry indeed, and it's being done for excellent reasons. This, in fact, is just the sort of thing that it's hard to imagine any sort of automated synthesis machine ever being able to perform, and is the kind of high-level work that the advent of such machines should be freeing us up to do. There is no replacement for talented, hard-working organic chemists on projects like this.
Full disclosure: I was a summer undergrad in Boger's group over thirty years ago - and no, that time frame doesn't seem very plausible to me, either, but there it is. I did not enjoy myself that much, but neither did the grad student I worked for, I'm pretty sure. I was not exactly an ornament of the lab, and I think that Boger himself was able to deal with my departure at the end of the summer without too much strain.
Well, the comments are certainly rolling in on yesterday's post about the potential "end of organic synthesis". So far, I'd say that a majority of them are of the opinion that I've lost my mind. And either I've had practice at people telling me that, or I've had practice at losing my mind, but I had a feeling that this would be the reaction. So I'll do one more post on the topic, and then we can agree to disagree and check back on the subject after a while to see what's happening. I should note up front that the (few) popular press stories about this work have mostly been terrible, with people talking about a "3D printer for molecules", which is so wrong that one hardly even knows where to start. So I definitely want to divorce myself from that stuff - I'm crazy in a different direction entirely.
Here, then, are some of the objections that have been raised, in the comments, in e-mail, and in some one-on-one conversations:
End of synthesis? You must be joking. This is not even close. As I tried (ineffectively) to make clear yesterday, I don't think that this particular paper is The End. But it's the first thing I've seen that makes me think that there is an end to a lot of traditional organic chemistry. For many of the things that we use organic synthesis for, the later iterations of this approach may serve very well. And that's another point: organic synthesis is there to be used for other things, which means that anything that makes it easier to use it as a tool should find a home. In the end, though, that's what organic synthesis is to me: a tool. A really interesting, complicated, fun, ever-changing tool, a means to other ends. There's more on this at the end of this post; this attitude of mine may be the root of some of the disagreements I've had talking about this latest work.
But this machine of Burke's is nothing more than another way to do Suzuki couplings. And we already know how to do those. Yeah, it's true that this whole thing is driven by boronic acid couplings to form C-C bonds, the widely used (or even "beaten to death") Suzuki reaction. But we have not yet exhausted the possibilities of boronic acids, not by a long shot, as a look at the literature will show. Alkyl couplings are doable in many cases, even alkyl-alkyl, and (as I mentioned yesterday), that's an active area of research indeed. The idea of using Burke's protocol to crank out sausage-strings of polyaryl compounds does not excite me, although that's surely the easiest thing to do with it right now. It doesn't have to stay at that level, and I'm guessing that it won't.
This solid-phase purification isn't much more than people do already (cartridges and so on). It is similar. But existing solid-phase purifications are often case-by-case, depending on the structure of the molecule (often whether or not it has any acid or base character to exploit). What stood out to me about the MIDA-boronate solvent switch method, though, is that it may be a general method. A wide variety of MIDA-containing structures are shown doing it in the paper. And not only does it look pretty general, it's also tied into the key reaction, carbon-carbon bond formation, so you're always going to have that handle on the molecule in every step. Tying these two together is one of the things that intrigues me.
It's worth distinguishing this sort of thing from solid-phase synthesis, where the molecule of interest is attached to a support. That's how peptide synthesis works, of course, and it can be really good stuff. But a lot of combichem work was set up that way, using linkers to beads, and there are some organic synthesis reactions that just don't work very well that way. (You also have to have a molecule that has a suitable linking group in its structure). The Burke approach sticks with solution chemistry, in which we have more elbow room, and uses the solid phase just in the purification step. But since I bring that up. . .
This is nothing more than combichem. And we tried going berserk with combichem in the 1990s, and look what happened. Good point. Back when the combichem craze was just starting, people were talking just like I was in my post yesterday, and time has shown them to be fools. I saw all that happening at first hand, so it's definitely on my mind here. But combichem, especially in its 1990s iteration, was mostly about forming amides and sulfonamides. Its practitioners (well, some of them) tried for that not to be the case, but the great majority of compounds produced went through those sorts of reactions. And not every useful molecule, sad to say, has an amide holding it together. But that gets back to what I just mentioned above: the currency in this new scheme is not amide formation, or reductive amination, or what have you: it's C-C bond formation, and that really is the baseline of organic chemistry. If boronic acid couplings continue to make inroads into alkyl carbon territory (and, for that matter, into C-X heteroatom bond forming territory), and they will, then this protocol just gets more powerful.
The yields aren't good/the reaction isn't atom-efficient/the whole thing isn't scalable. This is definitely not a replacement for process chemistry, that's for sure. It's for cranking out lots of early-stage variations, or (alternatively) for cranking out small but useful amounts of a specific dial-it-up combination of known building blocks. In fact, even if my entire lunatic vision from yesterday comes true, process chemistry will still be with us, because there are surely better routes for any given molecule than the generic bang-it-together that this system provides. But that's what you do when you've discovered something valuable and want to optimize it - this chemistry is there to help you discover something valuable in the first place.
What's gradually dawned on me about total synthesis of natural products is that it's something a bit like process chemistry, but applied to molecules that will never be scaled up. You're optimizing yields at every step and looking for the fewest steps possible, although you're not exactly minimizing costs or waste streams (the way real process chemists do). It's come to seem more and more bizarre to me, actually. But that leads us to. . .
How, exactly, then, is an ugly bang-it-together approach going to kill off total synthesis, which is all about making difficult molecules via the best possible routes? Actually, I think that total synthesis is the most vulnerable part of organic chemistry to this whole way of working. Even its proponents know that it's been losing ground over the years, as the reasons for doing it narrow. Its best practitioners, to my mind, have been concentrating on making it as general and free from case-by-case detail-chopping as possible. At the same time, the reasons for making many of its targets are eroding (structure determination being the most notable of these). There's always a case to be made that pushing into hard natural products structures will cause new reactions to be discovered, but those new reactions (in some cases) could also get discovered by deliberately searching for them, without bothering to get up to step twenty-three before starting to look.
One of the standard rationales for synthesis of active natural products, though, has been that it can generate analogs of these structures to learn more about their mechanism of action. That's absolutely right, but it's all too often been just lip service. The list of analogs produced ends up being short, in the drive to make the natural product itself, and no one bothers to make too many "unnatural" variations. This sort of building-block approach, though, is a natural fit for making lists of analogs in a combinatorial fashion. Burke's group has, in fact, done just that with Amphotericin B, and it's interesting stuff. Many years back, I watched a few people sweating away at making that exact natural product by the traditional approach, and the contrast between that and the Burke group's work is probably one of the things that got me thinking about this whole business from the angle I'm taking.
I still don't see how anyone can get excited about this stuff - or at least not to this level. Fair enough - as mentioned above, we can agree to disagree, and time will sort us out, as it sorts out most things. But I think it's useful, even (or especially) for people who think this is crazy talk to be exposed to some of that crazy talk, and to realize that it's possible for people to think this way about it. I think that Wavefunction is right: I'm taking a more Whitesidesian view of organic chemistry. When I see someone making a molecule, I feel like asking "What's it for?", and anything that helps to answer that question is worth investigating. Back when I was working on my own PhD (total synthesis of a natural product!), I gradually realized that the only legitimate answer I had for that question was "To get me out of grad school". (Another way I put it was "The world does not need another chiral-pool synthesis of a macrolide antibiotic. But I do"). That affected me even more than I knew at the time, and I already thought it was affecting me a lot. Ever since then, I've been trying to never end up in that spot.
The other thing to keep in mind about yesterday's post is that (as I mentioned) I've been thinking about this whole thing on and off for months. So what that represented was a pretty good head of steam, all at once. I'm not saying that if you disagree, that you'll come to agree with me after thinking about it longer - it's probably the opposite, you'll think I'm even more off my head. But that's one reason why that post was so long, among the others cited already, and why it took the tone it did.
Update: depending on the reserve levels in your sense-of-humor tank, you may like this ad from the future. . .
"It's not the end of the earth. But you can see it from there." That was Lou Holtz, talking about coaching football in Fayetteville, Arkansas. But today I'm talking about a new paper from Marty Burke and his group at Illinois, and although it isn't the end of organic synthesis, you can see it from there.
Now, that sounds a bit frightening, or a bit idiotic, or maybe a bit of both. But have a look at the paper. I had a chance to see him talk about this work a few months ago - I found it fascinating and startling, and I've been thinking about the implications ever since. This paper is the perfect opportunity to talk about it all (here's a commentary at Science). It's a summary of a lot of work that the Burke lab has been publishing over the last few years, and when you put it all together, there are some far-reaching consequences. On one level, it's about assembling sets of molecules from modular building blocks, each containing MIDA boronates and bromides. That's been a worthwhile reaction to study, since these boronates are very easy to handle and shelf-stable. What Burke's group has found, though, is that the MIDA complexes have an unusual property: they stick to silica, even when eluted with MeOH/ether. But THF moves them right off.
This trick allows something very useful indeed. It's a universal catch-and-release for organic intermediates. And that, as the paper shows, opens the door to a lot of automated synthesis. You take a MIDA boronate intermediate, and deprotect it to the free boronic acid. You then couple it to another intermediate, which has a reactive bromide (or what have you) at one end, and another MIDA boronate at the other. The solvent switch lets you purify the crude reaction by loading it onto silica, washing everything else off with MeOH/ether, and then eluting the MIDA-containing product with THF. Then you do it again. And again.
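The cycle is simple enough to sketch as code. Here's a toy Python rendering of the control flow, with string stand-ins for the intermediates - the function names and the textual "chemistry" are purely illustrative, not anything from the Burke paper:

```python
def deprotect(block):
    # Hydrolyze the MIDA boronate end to the free boronic acid.
    assert block.endswith("-B(MIDA)")
    return block[:-len("-B(MIDA)")] + "-B(OH)2"

def suzuki_couple(acid, next_block):
    # Form the C-C bond between the boronic acid and the next
    # building block's halide end.
    assert acid.endswith("-B(OH)2") and next_block.startswith("Br-")
    return acid[:-len("-B(OH)2")] + "-" + next_block[len("Br-"):]

def catch_and_release(crude):
    # Purification stand-in: load on silica, wash the byproducts off
    # with MeOH/ether, elute the MIDA-containing product with THF.
    return crude

def iterative_synthesis(blocks):
    # Each building block is bifunctional: a bromide at one end,
    # a MIDA boronate at the other. Deprotect, couple, purify, repeat.
    chain = blocks[0]
    for block in blocks[1:]:
        chain = catch_and_release(suzuki_couple(deprotect(chain), block))
    return chain

print(iterative_synthesis(["Ar1-B(MIDA)", "Br-Ar2-B(MIDA)", "Br-Ar3-B(MIDA)"]))
# prints "Ar1-Ar2-Ar3-B(MIDA)"
```

The point of the sketch is how mechanical the loop is: every iteration ends with the same functional handle on the molecule, which is what makes the whole thing automatable.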
The paper shows a wide range of products produced in just this fashion. Yields are decent, although varied, but there's always product coming out the other end at some level. The number of possible compounds that can be made in this way is limited, at the first level, by the number of MIDA-boronate containing intermediates that you can synthesize, and you can certainly make a heap. At the second level, it's limited by the sorts of couplings that boronic acids can do, and we still don't have general methods to make them do bond formation between saturated carbons very well. But that's an area of intensive research, and it looks like a solvable problem, eventually. I would go so far as to suggest that this paper makes a good case for trying to get this to work with boronic acids (as opposed to alkylboranes, etc.), because of the immediate application of the catch-and-release purification, but we'll see what happens.
What gets me about this current paper, though, is the concept behind it. This has the potential to take a large part of organic synthesis into the realm now occupied by peptide and nucleotide synthesis. Those two are certainly easier problems - you have one kind of bond between every subunit, and a limited number of subunits themselves. But the advent of solid-phase iterative methods to synthesize these sorts of molecules was still a huge advance. It took making such things out of the realm of every-one-a-new-individual-challenge, and into the world of "Sure, we should be able to make that. Fire up the machine."
That first category, we should note, is where total synthesis of natural products has traditionally been. And proudly so. I've had a lot to say about that over the years around here, going back to 2002, but I'll summarize: I think that total synthesis was, at one time, one of the most vital and important parts of organic chemistry. But that day is past. Modern analytical methods have largely (although not quite totally) eroded the structure determination reasons for doing it, and modern synthetic techniques have put a vast number of molecules within theoretical reach. "Theoretical", in this case, meaning "Given enough postdocs, enough grant money, and enough time". That certainly wasn't always the case. When Woodward, Stork, or (fill in your favorite here) started out to synthesize some complex molecule back fifty years ago, it was often not very clear at all how one might go about it. Just coming up with a semi-plausible synthetic route was a real intellectual accomplishment, and dealing with what happened when these ideas met the real world was another. Total synthesis took all the brainpower and all the skill that could be brought to bear on it.
It's still not easy. But it's sure not the same. It's much harder to draw a molecule that's truly a stumper these days. We have so many reactions and approaches that you can generally come up with at least a paper synthesis - mind you, it may not be a very nice paper synthesis, but in the old days you probably couldn't even come up with that much. So if fewer and fewer molecules really are an adventure - or really promise to advance human knowledge in the course of making them - what's left?
What's left, I'd say, is for organic synthesis to get braced to take the next step. That is, it needs to stop being an end in itself, and start becoming a means to other ends. That's already what we use it for in drug research - the only reason we do organic chemistry is that we don't know any other ways to make small-molecule drug candidates. In the earlier stages of a project, we don't much care about the way we make things, just so long as they get made. As I'm fond of saying, in discovery med-chem, there are only two yields: enough and not enough. Did you make a sample of the compound that can be tested in the assay? That's enough. And that's the primary concern - how you made it is secondary. This is sometimes a bit of a surprise for people coming from high-powered academic synthesis groups, because you can do an awful lot of good med-chem using just reactions from the first semester of sophomore organic chemistry, and you can do an awful lot of good med-chem while putting up with reaction yields that no academic group would stand for. But one adjusts.
We may all need to adjust. What if this MIDA boronate protocol, or some later variant of it, starts turning big swaths of organic synthesis into a process of stick-the-pieces-together? Like peptide synthesis? These routes may not be the most elegant and highest-yielding things ever seen, especially not at first. But that leads to the question of why you're making these molecules in the first place. Are you making them so that you can do something with them - test them as drugs, use them as nanotech building blocks, make a new battery or solar cell, investigate a new kind of material? Then fine - you probably have enough now to get started on the next phase of that idea, thanks to this Synth-O-Matic over here. Or are you trying to make the best possible synthesis of your molecules (fewest steps, highest yield, etc.)? In that case, you need to be careful. That's a very worthy goal if you already know that this is a valuable molecule, which is what the process chemists do in industry. But if it's just another new molecule, then why are you optimizing its synthesis? If along the way you're discovering new and better synthetic reactions and protocols, then good for you - but I would define "better" as "better able to be used to crank out new molecules for other purposes", not "done in five fewer steps than the last group had to use to make the same molecule". Not that alone. Not any more.
If organic synthesis becomes modular, then the new chemistry and new reactions are going to go more into making new modules. All our problems are still there - tricky functionality, multiple chiral centers, quaternary carbons. But if we end up making large molecules mostly by looking for boronate disconnections and stitching the pieces together, then we're on a hunt to make the pieces, not to make the whole molecules.
But what about the art? What about the elegance? Well, we're going to have to say goodbye to some of it. The printing press drove fine hand copy from the world - you don't see so many gold-leaf illuminated letters any more. More recently, and in our own field, the advent of modern analytical chemistry drove out the classic methods of structure determination. Now there was a puzzle worthy of the finest thinking that could be thrown at it. Old-fashioned degradation and derivatization was a fiendishly difficult challenge, like playing chess with the lights off and the moves called out in a language you don't know. But that kind of chemistry is gone, totally gone, and it'll never come back. No one does it like that any more. There were chemists who just couldn't face that, when it happened back in the 1960s and into the 1970s, when they found that what they were really good at was no longer of value. It was hard. But organic synthesis may have to face up to the same sort of realization, that time has overtaken it and that ars gratia artis is no longer a fit slogan to work by. This paper today is the first one that's really made me think that this transition is in sight. For me, organic synthesis is never quite going to be the same.
But in science, when something dies it's because something else is being born. The idea, the hope, is that if the field does become modular and mechanized, that it frees us up to do things that we couldn't do before. Think about biomolecules: if peptides and oligonucleotides still had to be synthesized as if they were huge natural products, by human-wave-attack teams of day-and-night grad students, how far do you think biology would have gotten by now? Synthesizing such things was Nobel-worthy at first, then worth a PhD all by themselves, but now it's a routine part of everyday work. Organic synthesis is heading down the exact same road - more slowly, because it's a much harder problem, but (I think) inexorably. Get ready for it. We're going to need to stop being so focused on just making molecules, and start to think more about what we do with them.
Note: for previous (and partly superseded) thoughts here on automated organic synthesis, see this post.
I remember when I was in my grad school research group, looking up at the shelves of lab notebooks from past students and postdocs. Some of these were taken down and used from time to time - "Oh yeah, so-and-so made that intermediate one time, must be in one of his notebooks" - but many of them rested undisturbed. The same goes for many a PhD dissertation (I know mine hasn't troubled very many people, that's for sure).
At the time (and we're talking mid-1980s, dang it all) I kept wondering if there were some way to get at all this information and make it searchable, especially by structure. Since my group worked in a fairly specialized area of organic synthesis, many of the former reactions had at least a chance of being relevant to some future grad student. With 1985 hardware and software that was a bit of a tall order, but when I use my current electronic notebook I'm seeing those thoughts made real, since I can search company-wide by structure (or whatever other criterion I can dream up).
But there are certainly masses of unpublished data sitting out there, and a group in the UK is trying to bring it out into the light. In a pilot project, they've gone over 750 PhD dissertations from 15 universities, and extracted 45,000 structures (75,000 if you count all the stereoisomers). This effort is funded by the Royal Society of Chemistry, so the structures are going into ChemSpider (and most of them were new to it, as well they might be).
As the article details, the original hope was for a physical collection of compound samples, but a variety of issues - not least, cost - has made that hard to realize. So this is a virtual set so far, which would be available for virtual screening. And here's where opinions really start to diverge. 75,000 physical compounds makes for a pretty good storage and dispensing effort, but that many virtual compounds is a tiny drop in a very large bucket. And if you'd like to double, triple, or quadruple that number of compounds, it would be short work, computationally. Of making virtual screening sets there is no practical end. (One wonders how many of the RSC dissertation compounds are already in the GDB sets). It's the chemical space problem - if you're just filling it out randomly, there's an awful lot to fill.
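To see how quickly virtual libraries outrun physical ones, here's a back-of-envelope enumeration - my own made-up numbers, nothing to do with the RSC project itself:

```python
# A deliberately modest virtual enumeration: take 3 hypothetical
# scaffolds, each with 3 substitution points, and a small catalog
# of 50 R-groups available at each point.
scaffolds = 3
substitution_points = 3
r_groups = 50

# Every combination of R-groups at every point on every scaffold:
virtual_compounds = scaffolds * r_groups ** substitution_points
print(virtual_compounds)  # 375000 - five times the thesis-derived set
```

And that's with trivially small inputs; real enumerated spaces like the GDB sets run into the billions, which is why a virtual set of 75,000 is a drop in a very large bucket.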
Now, that same argument can be applied to a physical compound collection, and such a collection will inevitably be even smaller than a virtual set. But it has a big advantage in utility, because the fact is that we really can't do virtual screening with a great deal of confidence yet. Real compounds against real proteins, that's the way to go if you can do it. The expense will be far greater, though. Just collecting the compounds themselves will be a major effort, and you need a building to store them and all the equipment that big screening collections call for. The curation of the set will be an even bigger pain: it's absolutely certain that a reasonable percentage of such compounds will be clean, but not what they say on the bottle, and what may be an even larger percentage will not be very clean at all. How to deal with those?
From what I can see, the RSC is working on those questions, and I wish them luck (and funding). What if everyone who did work in a publicly funded lab in the UK sent samples in to the National Compound Collection? What if we did that in the US? We have a lot of chemistry going on in academia, and untold numbers of compounds that get squirreled away in vials and stuck in desk drawers. How much money would it take to get them brought together, and would such an effort pay off?
This paper is not going to make a lot of computational chemists very happy at all. It's from Dan Singleton and Erik Plata at Texas A&M, and it's on the Morita-Baylis-Hillman reaction. More specifically, though, it's on the many computational attempts to decide on the mechanism of the MBH reaction, and taken together, they're not a pretty sight. The authors do some good old physical organic chemistry to help establish the real mechanism (which had already been proposed some years ago), and let's just say that things don't always match up very well.
Computational methods are simply scientific models. Any model makes some inaccurate predictions but models can retain utility despite significant propensities for inaccuracy. Inaccurate predictions aid the choice of models for future predictions. Because of this, the central scientific problem in the computational study of the MBH mechanism is not the inaccuracy of the predictions. Rather, it is the absence of any particular prediction at all. Fully-defined computational methods (including the choice of basis set, entropy calculation, and solvent model) of course make quite specific predictions. However, there is neither a consensus best-choice method nor a common view on the right way to choose a method. When evaluable, the most accurate choice varies with the system at hand. In the MBH reaction, defensible and expectantly publishable choices of computational approaches lead to predictions of the facility of the proton-shuttle process that vary by 35 orders of magnitude in the stability of 19, while also diverging in the geometry and preferred stereochemistry of transition state 13. This variance is in practical terms indistinguishable from making no prediction. In addition, studies of the MBH mechanism have not been considered falsified by extreme inaccuracies in predictions. In the terminology of Pauli, computational mechanistic chemistry is “not even wrong” about the MBH mechanism.
Here's a C&E News article if you don't have access to JACS. It's true that predicting reaction mechanisms is a challenge for computational methods, because you are, out of necessity, looking at high-energy molecular states and trying to distinguish between them. It's especially tough with a polar reaction mechanism, because solvation effects (which we still don't have as good a handle on as we need) become very important in stabilizing transition states, assisting proton transfers, and so on. But at the same time, this sort of problem is just the sort of thing that many such groups work on: the MBH mechanism has been the subject of 11 separate computational papers.
The authors here try to figure out what has gone wrong. The errors mostly seem to be in the enthalpy term, which would suggest trouble with those polar interactions. A good number of the earlier studies predicted a proton-shuttle mechanism, which turns out not to be operating at all. The problem is that current programs have a much easier time handling proton-shuttle mechanisms, while full-scale proton transfer to and from a solvent molecule is much harder to model. So there's a constant danger of arriving at a mechanism because it's computationally tractable, not because it's real. Digging into the individual equilibria, it appears that some approaches did very well on particular reaction steps, but blew up completely on others: 14, 20, or 35 orders of magnitude off for the equilibrium constants, I would say, is enough to warrant that description. And it's very hard to see what factors led to the failures or successes - in fact, it's quite possible that some of the best individual predictions were themselves fortuitous. Overall, though, no computational approach got things anywhere near correct.
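To put those orders of magnitude into energy terms, here's a quick conversion via the standard relation ΔG = -RT ln K - my own back-of-envelope arithmetic, not a calculation from the paper:

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0      # room temperature, K

def free_energy_error(orders_of_magnitude):
    """Free-energy error corresponding to an equilibrium constant
    that is off by the given number of orders of magnitude. Since
    dG = -RT ln K, each factor of 10 in K costs RT*ln(10) in energy."""
    return R * T * math.log(10) * orders_of_magnitude

for n in (14, 20, 35):
    print(f"{n} orders of magnitude ~ {free_energy_error(n):.1f} kcal/mol")
# 14 -> ~19.1, 20 -> ~27.3, 35 -> ~47.7 kcal/mol
```

Nearly 48 kcal/mol of free energy at the high end - an error far beyond any chemically meaningful bar, which is what makes "indistinguishable from making no prediction" a fair summary.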
The problems in the computational study of mechanisms encountered in the MBH reaction certainly cannot be used to paint all computational mechanistic studies. Many, either by simplicity or carefully designed use of the computations, would not be susceptible to the difficulties encountered here. At least, however, it would seem that studies of complex multimolecular polar reactions in solution should be undertaken and interpreted only with extreme care.
That's for sure. And while this is a harder problem, in many ways, than docking a ligand into a protein, we should keep in mind that polar interactions and the treatment of solvation are very important parts of those calculations, too, and looking under this particular hood tells us that we have a long way to go on those.
There was some sort of incident at the University of Manchester yesterday, one that led to an evacuation of the chemistry building and all sorts of haz-mat people being called in. Press reports had it that this was a peroxide-acetone mixture that "crystallized", but you never want to take these things at face value. One can, if one is utterly luckless or foolhardy, produce crystals of triacetone triperoxide with such a mixture, and that would certainly be a great reason to vacate the area. You'd think, though, that no one in a chemistry department would do such a thing. Does anyone from Manchester have any more solid information?
Update: here's more. It was not the chemistry department, but someone in an engineering building (the Pariser building), apparently working on some sort of sustainable plastics research. He did indeed combine acetone and peroxide, and left it sitting for some time, at which point he noticed that it had produced a crystalline precipitate. A quick look at the literature for that mixture would have saved a good deal of trouble, or so it seems.
Here's a review on a topic that I'll bet not too many medicinal chemists have thought about in detail: noncovalent interactions with sulfur atoms. Sulfur's a weird element - small enough to fit unobtrusively into organic structures, but just big enough to show some orbital effects that you don't get one row above in the periodic table.
This paper, from a well-known group of authors at BMS and Amgen, highlights the way that a bivalent sulfur's sigma* orbitals can interact with the lone pairs on oxygen and nitrogen atoms, giving you a conformational effect similar to a hydrogen bond. This sort of thing is probably encountered more in a negative sense than a positive one - people try to get away from a sulfur and find that nothing else quite does the trick. (As an aside, that's always been one of my big problems with the dynamic combinatorial chemistry schemes that use disulfide formation and exchange as their engine. If you want to turn your hit into a more druglike entity, you're faced with replacing a disulfide, and there's really nothing else quite like a disulfide, either). Definitely worth a look.
For fluorination fans, here's a new way to get trifluoromethyl groups in. Trifluoromethyl iodide is a useful reagent, or it would be if it weren't a gas. That makes it annoying to measure out and work with, especially on a small scale. But Tobias Ritter's lab has a new way to deal with the stuff: turns out that the reagent forms a 1:1 complex with tetramethylguanidine or DMSO, and the resulting liquids are shelf-stable. The paper shows that one or the other of these can substitute in many of the reactions of the neat reagent, which should make them a lot more convenient. And you can now get both the trifluoromethyl and pentafluoroethyl variations commercially, so I'll be ordering some today. I think that Quintus is a customer as well. Happy fluorinating!
One of my "Things I Won't Work With" compounds may have moved into a zone where I'd actually use it. This new paper in JOC describes in situ preparation of small amounts of chlorine azide, which can then react with alkenes to give useful beta-chloro azide products. This way, you get it in dilute solution, with slow release - the only possible conditions I'd consider if working with the stuff.
The paper points out, most accurately, that the halogen azides "have been regarded as challenging compounds to work with". And even with this latest variation, there were challenges, and things that you should definitely not do:
The ClN3 thus produced is a gas and can be isolated by transfer out of the reaction flask with a gentle stream of nitrogen, bubbled into an organic solvent placed in a receiving flask. While we performed this procedure a number of times, it proved to be occasionally explosive, and we strongly discourage its practice. Instead, we developed a safe procedure to generate ClN3 in small quantities in situ and in the presence of an alkene with which it can immediately react. . .Even though these conditions diminish the hazards of chlorine azide substantially, it is still necessary to use common sense and care.
I'll report if I try this out. I'm still going to have to have a good reason, of course, but this at least takes things into the realm of possibility!
The mainstreaming of the PAINs concept continues, with editorials from Jonathan Baell in ACS Med. Chem. Letters and Dan Erlanson in J. Med. Chem. Both are definitely worth a read.
Baell emphasizes that real hits tend to have real SAR around them. You can take pieces off the structures, and activity falls away; you can move things around and it changes. Eventually, you can make more potent compounds. I will say that I've seen some oddball programs where things resolutely refused to make sense, and the projects were advanced by a sort of human-wave attack. But even in those cases, the compounds were, in fact, advanced, even if it wasn't always sensible how they got there. PAINs compounds, on the other hand, just seem to sort of sit there. You try this, you try that, you try the other damn thing, and you never seem to get much better than the original hit, even though the activity might wander around some with structural variations.
Their hallmarks typically comprise the following: little or no medicinal chemistry optimization, unconvincing structure–activity relationships (SAR), relative lack of improvement in biological activity to meaningful levels that often hover around the micromolar mark, and molecular modeling described as though it is an experimental observation of relevant binding sites. Also, the literature is frequently ignored as an important SAR source of evidence that similar compounds appear to be hitting different targets and could be promiscuous.
Baell also makes a good point about the size of many screening efforts, particularly academic ones. It's a worthy cause to screen against a tough target, but you should be prepared to go big or go home:
Screening too few compounds, perhaps even as small a number as several thousand, is a contributory problem. An understandable attraction to academic researcher with a tight budget is the relative affordability and that such an exercise may not even require access to robotics but just the services of an on-site research assistant. However, screening too small a library of unbiased commercially available compounds may not return a progressable hit. In contrast, it will certainly produce artifacts that are then more likely to attract unwarranted further attention. . .
. . .I believe that even screens of 30,000 compounds are suboptimal, especially for difficult targets. For highly druggable targets, some progressable hits will plausibly be unearthed, but why not screen 150,000–250,000 compounds to find better hits that could potentially save years of medicinal chemistry optimization?
Well, money, for one thing, but he's right that most of the expense of doing a 250K screen will have already been encountered while doing a 30K one. And if your assay has false-positive problems, running it across a large collection will merely bury you in more junk, which is all the more reason to take care before you start screening at all. The point stands, though: if you want to go after a protein-protein interaction or a transcription factor (or some such), a 10,000 compound screen is completely inadequate. Update: as has been pointed out in the comments, sheer numbers alone are not necessarily enough. It is possible to have large piles of horrible compounds, and making them larger is not the way to go. And there is no way to run a "focused library" screen to improve your chances (no matter what the vendors might tell you), because no one has any good idea of what to focus on for such targets. Going on to cell-based and phenotypic screens can actually exacerbate the problem, because it opens up new ways to get fooled (membrane-perturbing amphipathic compounds, etc.).
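To see why a leaky assay scales so badly, here's a toy calculation with entirely made-up rates (neither number comes from Baell's editorial): the real hits do grow with library size, but so does the pile of artifacts you have to triage.

```python
def screen_summary(n_compounds, true_hit_rate, false_positive_rate):
    """Expected real hits vs. artifacts from a screen, given an
    assumed true hit rate and assay false-positive rate."""
    real_hits = n_compounds * true_hit_rate
    artifacts = n_compounds * false_positive_rate
    return real_hits, artifacts

# Illustrative numbers only: a 0.05% real hit rate against a tough
# target, and a 1% assay false-positive rate.
for n in (10_000, 30_000, 250_000):
    real, junk = screen_summary(n, 0.0005, 0.01)
    print(f"{n:>7} compounds: ~{real:.0f} real hits buried in ~{junk:.0f} artifacts")
```

With those assumptions, a 250K screen hands you over a hundred real hits where a 10K screen gave you a handful, but only if your triage can dig them out from twenty times as much junk. Hence the advice to clean up the assay before scaling it.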
Erlanson's piece is a commentary on the recent paper discussing thiol-scavenging compounds as promiscuous screening hits. He makes the point that what looks like SAR may just be SIR (a structure-interference relationship), since there are no classes of PAINs where every member hits the assay. You have to run counterscreens, control experiments, and mechanistic tests, particularly if you're looking at a compound that might be reactive (like a thiol scavenger). And even experienced chemists might not realize that they're looking at one, so run those experiments anyway, for peace of mind and to avoid wasting even more time down the line. Reactive compounds can indeed be great drugs (look at ibrutinib, or heck, look at aspirin), but they can also spit out confusingly convincing false positive data in a screen and scuttle your chances of finding anything real.
Another point that Erlanson's article emphasizes is that computational screens of structure, while useful, are always going to be inadequate. It's hard to write such filters so that they catch everything that they're supposed to catch, for one thing, but the bigger problem is that there are many more problematic structures than are contained in any set of filters at all. There is, alas, no substitute for doing more work and taking more care, and this annoying truth has been with us for a long, long time.
Baell makes a similar point, that structural filters are necessary but nowhere near sufficient. And I particularly liked this part:
We need to recognize the astonishingly limiting bottleneck created by the pace with which hit discovery through HTS so overwhelms the subsequent and necessary medicinal chemistry optimization. Impatient biologists may find it hard to understand that HTS heralds just the start of a multiyear journey of medicinal chemistry optimization. In this context, the medicinal chemists among us could do a better job explaining to biologists that drugs are not discovered through screening or design, but principally through medicinal chemistry.
Preach it! And let this message not only be heard in the biology labs, but yea, even unto the upper reaches of management. None of our spiffy equipment, none of our advanced techniques will do the job by themselves. Someone has to make compounds, and then make some more, and there's no way around it. A significant part of the history of the last 25 years of the drug industry consists of attempts to evade this part of the process, but they haven't worked, and they're not going to.
How many of the molecular pieces that we use in medicinal chemistry are historical accidents? I've wondered this from time to time. There's no doubt that drug structures are partly driven by ease of synthesis/commercial availability (these two go hand in hand), and these in turn are influenced by which reactions and feedstocks were exploited earlier. The Grignard reaction came well before palladium coupling methods, but there's no reason that it had to, just to pick one example.
This new paper in J. Med. Chem. is what has me thinking about this again. The authors, from AstraZeneca, show that their in-house chemists tend to think of para-aromatic substituents more often than the other regioisomers, and that this preference is mirrored in the commercially available reagents (and indeed, in marketed drugs). The paper looks into sources of this bias - cost, the 1972 Topliss tree paper, and so on, but no single factor appears to be at work. What does seem to be going on is a self-reinforcing bias - there are more p-aromatics in the screening deck, so more of them hit. And there are more commercially available compounds with the structure, so more of them get made in turn.
We believe that ultimately the present day bias is now likely due to unjustified personal preferences and overused at the expense of meta and ortho regioisomers as well as other potentially diverse bioisosteres. This last point is an important conclusion. The bias for p-ClPh has propagated throughout the years and influenced design and synthesis plans. A simple extension into disubstituted aromatics revealed that chemists favor the similarly substituted compounds (e.g., diCl, diF, diMeO), with many of these having at least one element in the para position. This analysis also illustrated that many disubstituted compounds are underrepresented in the public domain, highlighting an opportunity for screening collection differentiation.
I suspect that there are many more such biases, based on availability of different heterocycles, lack of stereoselective methods in some areas, etc. Our screening collections (and our building block catalogs) are the work of human beings, making conscious and unconscious choices, not some random slice of chemical space.
I don't spend too much time on physical organic chemistry here on the blog, which in a way is a shame. The readership would dwindle, although probably not as much as when I talk about patent law and intellectual property. But physical organic is an area I've always enjoyed, intellectually, even though it was sometimes hard to infer that during my graduate school classes. (I doubt if I have the patience to be much good at it in a lab setting).
But there's a new paper out in Science from a team at Stanford's SLAC, home of some of the brightest and hardest X-ray beams that ever fried a target sample. (Here's the press release from Stanford). Working with the University of Stockholm, they claim to have actually detected X-ray spectral data (K-edge absorption) from the transition state in the catalytic oxidation of CO to carbon dioxide. This was done on the surface of a ruthenium catalyst, with extremely fast and precise heating from an optical laser to get things going.
For any non-chemistry types reading down this far, try imagining a chemical reaction as a journey from one valley to another, through a high mountain pass. "Elevation" in this landscape is how much energy the system has, and an irreversible reaction features things going, overall, into a lower valley/energy state. The absolute peak of the mountain transit, though, is the transition state for the reaction. It's a real thing, but it only lasts for one molecular vibration before it heads off down one slope or another. It's the highest-energy species in the whole path because it features all sorts of half-formed and half-broken bonds, the sort of state that molecules generally avoid ever getting themselves wrenched into. But since getting up to and over that particular hump is such a big part of any reaction, anything that stabilizes the TS will speed a reaction up, sometimes immensely, which is just the sort of thing we'd like to learn more about how to do on demand.
As that press release says, quite honestly, this was "long thought impossible", and my prediction is that there will be quite a few people who won't accept that it's been done. My x-ray fu is not strong enough, personally, to be able to offer an informed opinion. Even if this report is accurate, it's surely right on the edge of what's possible with some of the best equipment in the world, so you really have to know this area at a high level to critique it thoroughly. But what's reported is both plausible and interesting.
What they saw was that the oxygen molecules began to change first. Then the electron distribution began to change in the CO molecules, followed by a productive collision (some of the time) to form the transition state itself. And one of the interesting things about that was how many times it apparently collapsed back to the original molecules, rather than going on to product. This is going to be subtly different for every reaction, or so theory tells us, but if we are finally able to physically investigate such things we may find ourselves revising a few theories a bit. This particular reaction, taking place between two small molecules, has been modeled extensively at all levels of calculation, and the results seen fit very well, so it's not like we're going to be packing big swaths of human knowledge into the dumpster. But anything we can learn about transition states (and how to make them selectively happier and unhappier) is the key to chemistry as a whole.
Here's a YouTube video from the Stanford team on what they've been up to. We'll see how the rest of the analytical and theoretical chemistry community reacts to this work.
While catching up on the literature today, I find that even now, thirty years later, I can't look at a paper that uses 1,6-anhydroglucose (levoglucosan to its friends) without a quick, simultaneous flicker of interest and shiver of dread. This is why.
So fellow chemist, what's yours? What compound will you never forget, because it did something good for you or something bad to you, because it got you out of grad school, ruined six months of your life, was the most fun to recrystallize, or made you wish that you were standing out somewhere in a drive-through enclosure asking "Will that be all today?" instead? Nominees in the comments.
Most medicinal chemists like fluorinated compounds, since they tend to give compounds very different (and often more desirable) properties, and we're interested in new ways of preparing them. The last few years have seen a real upsurge in the synthetic methods available in this area, and particularly in reagents and techniques that can be applied to complex molecules. These "late-stage" fluorinations are particularly appealing - imagine, as a thought experiment, taking a library of natural products and running them through a protocol like this, to produce a completely new library with completely new properties.
Here's a new review of these reactions, from Tobias Ritter and coauthor Constanze Neumann of Harvard. It includes a handy cheat sheet of recent advances by reaction class, and looks forward optimistically to still more: asymmetric fluorination, new electrophilic reagents, greater functional group tolerance, and new ways of accomplishing direct C-H fluorination.
I like this reaction, highlighted by See Arr Oh here. It comes in and adds an azide group where there used to be just an alkyl CH bond. That's an instant amine group, naturally, via reduction, as well as an instant triazole should you be into that sort of thing. This goes along with a number of recent reports of oxidations, fluorinations, and the like that can be done on elaborate scaffolds, more at the end of a synthesis than the beginning. (There's some photochemistry in this category, too, such as MacMillan's photo-Minisci carbon-carbon bond formation).
I'm a big fan of these sorts of reactions, which shows my med-chem bias. Starting from one active scaffold and being able to make interesting variants of it quickly is the essence of classic medicinal chemistry, and these reactions often do just the kinds of things that we like to do: fluorinate or hydroxylate where the compound might be oxidized by the liver anyway, staple in an amine to totally change the polarity and lead into new SAR territory, attach a THF ring to a heterocycle for instant solubility. Some of these changes are fairly subtle, like the fluorinations, but this azide reaction takes you into structures that might otherwise be very hard to access indeed. I wrote here about how such analogs, even of very common bioactive scaffolds, often have never been described in the literature. Changing that is a worthwhile thing to do.
If you do early-stage medicinal chemistry, you'll probably be interested in this overview of spirocyclic scaffolds. It has examples from the recent literature, and an update on synthetic methods to get into this chemical space.
I've made several compounds like this over the years, without much success in the assays so far. But as the paper shows, there are plenty of active compounds out there, and the spiro ring fusion gives you access to fixed conformations that you're probably not going to get to any other way. Like any other tied-back series, it's sort of a death-or-glory move, as far as your SAR goes, but when it works, it really works.
I'd like to recommend this review by Steve Ley and co-workers on the "march of the machines" in synthetic organic chemistry. Prof. Ley is well-known as an advocate of cutting-edge flow chemistry, but this article is about more than that. There's a lot of flow in it (and it's an excellent summary of current technology in the area), but it's also trying to convey the opportunities that modern instrumentation gives organic chemistry in general. We can try a lot of things out now, on smaller and faster scales than ever, and with far better characterization on-the-fly than ever before, and we should be taking advantage of that.
In general, ". . .while people are always more important than machines, increasingly we think that it is foolish to do something machines can do better than ourselves". And what machines can do better, in many cases, is the vast amount of grunt work that goes into a lot of chemistry research. (And we shouldn't be afraid to keep redefining what's "grunt work" and what isn't). That should free us up to do more interesting work, and it's up to us to see that it does.
I can agree with this one: "Promiscuous 2-aminothiazoles: A Frequent Hitting Scaffold". The authors (the med-chem group from Monash University in Australia, with a collaborator from AstraZeneca) make the case that the title compounds are usually more trouble than they're worth. They noticed that the 4-phenyl derivative went fourteen for fourteen in a set of diverse fragment screens, which certainly makes a person wonder.
I've seen some similar behavior in that series, and I think that a lot of people have the impression that aminothiazoles hit pretty often in HTS campaigns. Indeed, there are several varieties of them in the canonical PAINs list. This paper investigates a range of substitution patterns, and finds that, far too often, they tend to hit a wide variety of protein targets. There's no clear SAR to get you out of that problem, either, which mirrors the general experience with PAIN-type compounds of flat SAR with no obvious trends. The binding is not due to impurities, nor is it covalent - it seems to be legitimate interaction with the protein, but there's just too damn much of it. In fact, several of these compounds seem to be binding at multiple sites per protein, which will certainly make things hard to interpret.
But at the same time, you can find aminothiazoles in several marketed drugs, so you can't just rule the entire class out by saying that it never leads anywhere. As with many of the PAIN scaffolds, the best way to characterize the problem might be to say that although such motifs may appear in marketed drugs, your chances of success are likely to diminish if you try to follow their lead. The binding modes can make assay and SAR data hard to interpret during the early stage of optimization, and the over-friendly nature of the binding groups will always be a worry as you go on to cell assays or into animal models. You can press on, but your life has become more complicated. If you have something else equally interesting to work on that doesn't have these issues around it, you might be better off heading in that direction.
And to reiterate something that's come up a lot around here recently, and no doubt will again, the key thing about any new papers that report such structures as hits is to see whether the authors recognize any of this. Too often, such compounds get reported as if they were just happy HTS hits with nothing to hide - perfectly legitimate tool compounds, or even candidates for preclinical optimization, why not? But if the people doing such work have missed out on the known issues around their chemical matter, what else have they missed?
So here's a friendly competition: let's see who can spot the first 2-aminothiazole paper of 2015 reporting these as hits for some assay, and let's see if it mentions anything about their promiscuity. The real acid test will be in about a year, when everyone who's even now writing a manuscript has had time to see this one. Then again, people have had time to see the other PAINs paper, and what good has that done them?
Bad things get said around here about rhodanines. That chemical class tends to hit in a lot of assays, especially assays against hard targets, leading to thoughts like the ones I was expressing the other day and in this post as well. The problems with these compounds are many (and are mentioned in those blog posts linked above). They include the meta-problem of the sheer number of targets that these compounds can hit (making them problematic as leads or tools). That promiscuity is due to a number of mechanisms - direct binding, photoreactivity, and now we can add hydrolysis to the list.
Here's a new paper in Nature that shows that some rhodanine-based activity against penicillin-binding proteins and beta-lactamases comes, not from the parent compound, but from a thioenolate hydrolysis product.
This was the unexpected result of X-ray crystallographic work on the lead series, and it makes things rather hard to interpret:
". . .there are concerns regarding the promiscuity of rhodanines. Some consider rhodanines to be promiscuous inhibitors due to their frequent appearance in high-throughput screening. However, rhodanines have also been described as 'privileged scaffolds' for inhibitor development. In cellular/biological contexts, promiscuity can make it difficult to correlate biochemical and biological outcomes with rhodanines. An important outcome of our work is the addition of another layer of complexity to the interpretation of rhodanine-based inhibition. . ."
So if you see a screening paper cheerfully reporting a rhodanine-based hit, and they haven't checked this issue, or the other known ones, then you know that you are reading inadequate work. And you can wonder what else has been missed.
Back to work! I wanted to start out the year of blogging by mentioning a paper that came out in December, before the break. The Baran group at Scripps has extended their very useful olefin coupling method, originally blogged about here. Now it works with a wide variety of heteroatom-substituted double bonds. This, as they note, is an unusual-looking transformation - normally, you tend to think of enamines or enol ethers reacting at the beta-position, but here's a carbon-carbon bond-forming reaction at the alpha. And not just those olefins, but vinylsilanes, boranes, halides and others can all participate. It's an iron-catalyzed process, and the optimal recipes vary a bit (as the paper makes clear), but it looks to be the same radical-based mechanism all the way across.
There's a table in the Supplementary Material that shows the reaction being run in what are termed "unconventional solvents" - aqueous ethanol mixtures of the sort you'd order at a bar. My undergraduate organic professor, Tom Goodwin of Hendrix, reported a series of Sonogashira couplings in vodka back in the 1990s, so I'm glad to see the field continuing to expand. Back when Goodwin got some press about those, Stuart McCombie and I at Schering-Plough sent him a fake letter (from the law firm of Solitary, Poor, Nasty, Brutish, and Short) threatening to sue him for what had allegedly happened after an attempt to scale up a dimethyl zinc reaction using an entire bottle of creme de menthe. Goodwin replied that he was not liable, because the early work on boozotrophic metal couplings (he referred us to joint publications by Fidel Castro and Connie Stephens) had specifically warned against that combination, and it wasn't his fault if we didn't know the literature.
More details on the reaction can be found at the group's Open Flask blog. Cocktails aside, a good C-C bond reaction is always welcome, and this one leads to some pretty interesting products in a single step. This will probably be the first new reaction I try this year (I have some others to get out of the way first), and I'll report on my experiences when I do.
Here's a good article on the illegal recreational drug trade - the boutique end of it, anyway. I've written about this sort of thing before, and this piece is squarely in the same territory (even to interviewing David Nichols).
It all comes down to this: there are quite a few people out there modifying known CNS drug structures to see what happens when people take them. There always have been such folks, some of them the pharmacological heirs of Alex Shulgin. If someone wants to fry their own synapses in the privacy of their own home, I suppose it's not much of my business. My problem, though, is that many of the people in this field would rather have someone else do the first-in-man, perhaps next Saturday night. New structures with new PK, new binding profiles, new tox, and no studies of any kind backing them up, and people just cheerfully eat the damn things hoping for a good time. I suppose it really does take all kinds, like they say, but I'm very far removed from being that kind myself. Anyone who knows enough to synthesize something like this, though, has to know that the new agent could do most anything, with "most anything" ranging all the way to seizures and death. Taking it yourself is one thing - selling it to someone else seems to me to be a criminal act.
This field has changed since that 2010 article I referenced before, which was about people making these things themselves in their own labs. Why do that, though, when you can outsource them? The author of this new article tried that process out himself, and it worked just fine:
I made an approach to the lab during Chinese business hours, and I heard back within an hour. “First, can I know the application of this compound your client use?” asked the person on the other end. “I just want to make sure it is legal application. We can do custom synthesis of this simple chemical surely. But if you can give synthesis route, it will be very good for us and we can save some time for this project.”
I replied, “We are doing basic animal research into the compound’s putative analgesic properties. Based upon its expected effect on monoamines, we believe it will have fairly potent analgesic effects, whilst causing minimal cardiovascular strain. Our intention is to use it as a proof of concept for a new type of analgesic for dogs.”
My online identity for this character and for his company are bare bones: nothing but a webmail address. My cover explanation is that I am designing a painkiller—yet phenmetrazine, the clear progenitor of this recipe, is not known to have any analgesic qualities. To anyone who cares to look, my story is blatantly false. But the lab does not seem to care.
The (unnamed) supplier later made the offer to ship the compound hidden in a book for customs, so they knew the score. But they kept up their end of the bargain - the material received, which I presume was some sort of aryl-substituted derivative of phenmetrazine, turned out to be exactly as requested - NMR and LC/MS included. (A cursory bit of Googling would suggest, though, that the simple aryl variants of that compound have already been unofficially explored). To my relief, the author (Mike Power) did not go as far as taking any himself.
What, if anything, can be done about this? As Power puts it, "We can ban drugs. But we can’t ban chemistry, and we can’t ban medical research." It is truly impossible to say what a given new compound might do and what uses it might have, legitimate or less so. I have to confess, I'm at a loss, too.
For fans of saturated nitrogen heterocycles (and there are many in med-chem), this paper is well worth a look. It's from Jeffrey Bode's group at the ETH in Zürich (authors of a recent review in this area), and it's another version of their "SnAP" chemistry, tin-mediated conditions for ring formation. Bode and coauthor Woon-Yew Siau report a variety of interesting compounds, some of which would be rather painful to make by other routes.
No one's crazy about using tin, but the transformation is too useful to pass up. It seems to work well with a variety of cyclic ketones, for one thing, and can be extended to acyclic trifluoromethyl ketones as well. On the heterocycle side, there are a few limitations. Trying it on N-protected 3-keto pyrrolidines or piperidines, for example, gives low yields (too much diversion into an enamine). I would guess that overcoming this problem is a current research focus in the group. Larger rings than the 7-membered one also don't form well - not to anyone's surprise, since those are a pain under most conditions.
But given the number of useful morpholines, piperazines, and piperidines out there, I'm very glad to have another route to crank them out, particularly with the less-studied quaternary carbons. There's probably some tin in my future, darn it all.
For you flow chemistry fans, here's a paper on the flow equivalent of speed chess: a three-minute synthesis and purification of ibuprofen. Three bond-forming steps, one work-up, and one purification all take place in that time, producing about 8g/hour of final product. Notable features include the use of aluminum chloride, which is the sort of reagent that most flow syntheses try to avoid, aluminum workups being widely known for their gunky, sticky, frozen-yogurt-like properties.
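For a sense of scale, 8 g/hour adds up quickly. Here's a back-of-the-envelope calculation (the 200 mg tablet size is my assumption for illustration, not a figure from the paper):

```python
rate_g_per_hour = 8.0  # reported output of the flow rig
dose_g = 0.2           # a standard 200 mg ibuprofen tablet (my assumption)

daily_output_g = rate_g_per_hour * 24      # grams of ibuprofen per day
doses_per_day = daily_output_g / dose_g    # tablets' worth per day

print(f"{daily_output_g:.0f} g/day, about {doses_per_day:.0f} doses")  # 192 g/day, about 960 doses
```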
This work, from the Jamison lab at MIT, was funded by DARPA, and it's far from the only "on-demand drug synthesis" project I've seen them fund. A lot of interesting flow chemistry and in-line processing has come out of this stuff, but I have to admit, I'm still a bit puzzled about DARPA's rationale for the whole thing. I gather that the eventual futuristic goal would be some sort of gizmo that can be taken into the field and which will crank out fresh pharmaceutical supplies on demand. But that means carrying along a lot of extra mass for reagents whose components have to be thrown away (like, say, the oxide gunk left from an aluminum chloride workup). And it also means carrying along a lot of ingredients that are more storage-sensitive than the final products as well, like aluminum chloride, or (also in the current paper) iodine monochloride.
So all in all, it would seem easier to ship or carry some ibuprofen rather than an ibuprofen-making machine. A machine of the same size that could synthesize a wide range of useful drugs would be easier to make the case for, naturally, but organic synthesis may not allow you to do that, or not yet. It starts to get into the territory of the old Star Trek method of feeding the crew: tell the wall what you want, and then reach in and take it out of the slot. I have no problem with DARPA trying to push into unknown territory - that's their job - but there's a lot of territory here to cover.
Here's a really nice example of high-throughput reaction discovery/condition scouting from a team at Merck. They certainly state the problem correctly:
Modern organic synthesis and especially transition metal catalysis is redefining the rules with which new bonds can be forged, however, it is important to recognize that many “solved” synthetic transformations are far from universal, performing well on simple model substrates yet often failing when applied to complex substrates in real-world synthesis. A recent analysis of 2149 metal catalyzed C-N couplings run in the final steps of the synthesis of highly functionalized drug leads reveals that 55% of reactions failed to deliver any product at all. The missed opportunity represented by these unsuccessful syntheses haunts contemporary drug discovery, and there is a growing recognition that the tendency of polar, highly functionalized compounds to fail in catalysis may actually enrich compound sets in greasy molecules that are less likely to become successful drug candidates. . .
That "recent analysis" they mention turns out to be an internal study of Merck's own electronic lab notebooks, and it sounds very believable. That's a problem of organic chemistry: we can do a lot, but rarely can we do it in a general fashion. The paper details an effort to look for Pd-catalyzed coupling reactions in DMSO or NMP, which are (as the authors point out) not the usual solvents that people choose. But they have many advantages for high-throughput experimentation, not least the solubility of more complicated substrates. They started off by screening bases and catalysts in glass microvials in a 96-well array, but then tried those conditions (and more) in a 1536-well plastic plate.
On that level of miniaturization, you can really start clearing some brush. And they uncovered a range of reaction conditions that have not been reported before, using a very real-world set of coupling partners (shown). Applying one of the more general-looking protocols to the whole set, though, still showed about a 50% failure rate, so they turned around and took 32 of the failures and ran new arrays with them of 48 reaction conditions each. (That's what I mean by clearing things out quickly!) Those 48 reactions consume less than 1 mg of substrate in total. By careful mass-based encoding of the array, they could analyze the 1536-well plate in under three hours by LC/MS.
That led to optimized conditions for 21 of the 32, but they took 6 of the remaining recalcitrant combinations and tried another array on them, this time varying catalyst loading, amount of nucleophile, and amount of base. 5 of the 6 yielded to that optimization, which confirms the usual belief that just about any metal-catalyzed coupling will work, if you're just willing to devote enough of your life to optimizing it. And this automated system significantly changes the value of "enough of your life".
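The combinatorics here work out neatly: 32 substrates times 48 conditions exactly fills one 1536-well plate. A screening grid like that is easy to sketch in code - this is just an illustrative mock-up (the catalyst, base, and substrate names are placeholders, not the actual Merck reagent set, and I'm assuming a 8 × 6 split of the 48 conditions for the sake of the example):

```python
from itertools import product

# Hypothetical screening dimensions (placeholders, not the real reagents):
# 48 conditions per substrate, here as 8 catalysts x 6 bases
catalysts = [f"Pd-cat-{i}" for i in range(1, 9)]
bases = [f"base-{i}" for i in range(1, 7)]
substrates = [f"substrate-{i}" for i in range(1, 33)]  # the 32 failed substrates

conditions = list(product(catalysts, bases))  # 48 conditions

# Assign each (substrate, condition) pair to a well of a 1536-well plate
plate = {}
for well, (sub, (cat, base)) in enumerate(product(substrates, conditions)):
    plate[well] = (sub, cat, base)

print(len(plate))  # 32 substrates x 48 conditions = 1536 wells
```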
This is different from Design-of-Experiments setups, in that those are modeled in a way to minimize the number of experimental runs by identifying (or trying to identify) the key variables. But with very small, highly automated experiments, that's not really as big a concern. You can just let it rip; try a bunch of stuff and look for granularity in the reaction condition space that you'd miss by trying to get more efficient. The Merck team winds up by saying "In biomedical research, chemical synthesis should not limit access to any molecule that is designed to answer a biological question", and that really is the ideal we should be working towards.
I've written several times here about reaction discovery techniques, going back to 2011. And I've been meaning to link to this recent review in Nature Chemistry, because it's an excellent summary of the field and the relevant literature. If you're at all interested in the topic of combinatorial reaction-searching and new methods development, you really should have a look - as far as I can see, it's comprehensive.
So how's the new chemistry building at Princeton working out? I last asked for comments four years ago, but I'm prompted to do so again by this post by Luysii, who has been hearing that the building isn't necessarily working out as planned. Can anyone comment? To a first approximation, a lot of buildings don't work out quite as planned, but I'm always interested in hearing about how the glassy high-interaction high-innovation designs are performing in the real world.
To remind everyone, that's the structure of maitotoxin. Readers can decide for themselves which parts of the molecule they find most useful, and which ones they most look forward to reading about the synthesis of. My own opinion was set some time ago, quite a bit of it after I finished up my own total-synthesis-based PhD work.
A reader sends along an organic chemistry mechanisms game that he's released on Google Play. I don't have an Android device, so I haven't tried it out myself, but those of you who teach organic (or are looking for ways to teach it), have a look and send along some feedback. Now if someone can set it up so that you have to draw the Mitsunobu or the Cannizzaro right in order to move a bird, storm a castle, or make a row of candy disappear, then you could really cash in. . .
Other suggestions for chemistry apps on Android welcome in the comments - I'll move them up here to the body of the post as they come in.
Here's something new: working up a reaction. The authors say that they have a porous polymer that adsorbs organic compounds from aqueous reaction mixtures, allowing you to just stir and filter rather than doing a liquid/liquid extraction. The adsorbed material can then be taken right to chromatography, as if you'd adsorbed your compound onto any other solid support, or just washed with solvent to liberate the crude product.
I have colleagues who will be trying this out soon, and I'll report on their experience with the stuff. If it really is widely applicable, it could be a nice addition to the parallel synthesis and flow chemistry worlds (pumping a crude reaction through a cartridge of adsorbing polymer could be a fast way to do workups and solvent switches).
No matter how long you've been doing chemistry, there are still things that you come across that surprise you. Did you know that plain old L-phenylalanine has been one of the most difficult subjects ever for small-molecule crystallography? I sure didn't. But people have tried for decades to grow good enough crystals of it to decide what space group it's in. One big problem has been the presence of several polymorphs (see blog posts here and here), but it looks like the paper linked above has finally straightened things out.
What sorts of heterocycles show up the most in approved drugs? This question has been asked several times before in the literature, but it's always nice to see an update. This one is from the Njardarson group at Arizona, producers of the "Top 200 Drugs" posters.
84% of all unique small-molecule drugs approved by the FDA have at least one nitrogen atom in them, and 59% have some sort of nitrogen heterocycle. Leaving out the cephems and penems, which are sort of a special case and not really general-purpose structures, the most popular ones are piperidine, pyridine, pyrrolidine, thiazole, imidazole, indole, and tetrazole, in that order. Some other interesting bits:
All the four-membered nitrogen heterocycles are beta-lactams; no azetidine-containing structure has yet made it to approval.
The thiazoles rank so highly because so many of them are in the beta-lactam antibiotics as well. Every single approved thiazole is substituted in the 2 position, and no monosubstituted thiazole has ever made it into the pharmacopeia, either.
Almost all the indole-containing drugs are substituted at C3 and/or C5 - pindolol is an outlier.
The tetrazoles are all either antibiotics or cardiovascular drugs (the sartans).
92% of all pyrrolidine-substructure compounds have a substituent on the nitrogen.
Morpholine looks more appealing as a heterocycle than it really is - piperidine and piperazine both are found far more frequently. And I'll bet that many of those morpholines are just there for solubility, and that otherwise a piperidine would have served for SAR purposes. Ethers don't always seem to do that much for you.
Piperidines rule. There's a huge variety of them out there, the great majority substituted on the nitrogen. Azepanes, though, one methylene larger, have only three representatives.
83% of piperazine-containing drugs are substituted at both nitrogens.
There are a lot of other interesting bits in the paper, which goes on to examine fused and bicyclic heterocycles. But I think this afternoon I'll go make some piperidines and increase my chances.
Update: Betzig himself has shown up in the comments to this post, which just makes my day.
Yesterday's Nobel in chemistry set off the traditional "But it's not chemistry!" arguments, which I largely try to stay out of. For one thing, I don't think that the borders between the sciences are too clear - you can certainly distinguish the home territories of each, but not the stuff out on the edge. And I'm also not that worked up about it, partly because it's nowhere near a new phenomenon. Ernest Rutherford got his Nobel in chemistry, and he was an experimental physicist's experimental physicist. I'm just glad that a lot of cutting-edge work in a lot of important fields (nanotechnology, energy, medicine, materials science) has to have a lot of chemistry in it.
With this in mind, I thought this telephone interview with Eric Betzig, one of the three laureates in yesterday's award, was quite interesting:
This is a chemistry prize, do you consider yourself a chemist, a physicist, what?
[EB] Ha! I already said to my son, you know, chemistry, I know no chemistry. [Laughs] Chemistry was always my weakest subject in high school and college. I mean, you know, it's ironic in a way because, you know, trained as a physicist, when I was a young man I would look down on chemists. And then as I started to get into super-resolution and, which is really all about the probes, I came to realise that it was my karma because instead I was on my knees begging the chemists to come up with better probes for me all the time. So, it's just poetic justice but I'm happy to get it wherever it is. But I would be embarrassed to call myself a chemist.
Some people are going to be upset by that, but you know, if you do good enough work to be recognized with a Nobel, it doesn't really matter much what it says on the top of the page. "OK, that's fine for the recipients", comes one answer, "but what about the committee? Shouldn't the chemistry prize recognize people who call themselves chemists?" One way to think about that is that it's not the Nobel Chemist prize, earmarked for whatever chemists have done the best work that can be recognized. (The baseball Hall of Fame, similarly, has no requirement that one-ninth of its members be shortstops). It's for chemistry, the subject, and chemistry can be pretty broadly defined. "But not that broadly!" is the usual cry.
That always worries me. It seems dangerous, in a way - "Oh no, we're not such a broad science as that. We're much smaller - none of those big discoveries have anything to do with us. Won't the Nobel committee come over to our little slice of science and recognize someone who's right in the middle of it, for once?" The usual reply to that is that there are, too, worthy discoveries that are pure chemistry, and they're getting crowded out by all this biology and physics. But the pattern of awards suggests that a crowd of intelligent, knowledgeable, careful observers can disagree with that. I think that the science Nobels should be taken as a whole, and that there's almost always going to be some blending and crossover. It's true that this year's physics and chemistry awards could have been reversed, and no one would have complained (or at least, not any more than people are complaining now). But that's a feature, not a bug.
This year's Nobel prize in Chemistry goes to Eric Betzig, Stefan Hell, and William Moerner for super-resolution fluorescence microscopy. This was on the list of possible prizes, and has been for several years now (see this comment, which got 2 out of the 3 winners, to my 2009 Nobel predictions post). And it's a worthy prize, since it provides a technique that (1) is useful across a wide variety of fields, from cell biology on through chemistry and into physics, and (2) does so by what many people would, at one time, have said was impossible.
The impossible part is beating the diffraction limit. That was first worked out by Abbe in 1873, and it set what looked like a physically impassable limit to the resolution of optical microscopy. Half the wavelength of the light you're using is as far as you can go, and (unfortunately) that means that you can't optically resolve viruses, many structures inside the cell, and especially nothing as small as a protein molecule. (As an amateur astronomer, I can tell you that the same limits naturally apply to telescope optics, too: even under perfect conditions, there's a limit to how much you can resolve at a given wavelength, which is why even the Hubble telescope can't show you Neil Armstrong's footprint on the moon). In any optical system, you're doing very well if the diffraction limit is the last thing holding you back, but hold you back it will.
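The Abbe limit itself is a one-line formula: d = λ / (2·NA), where NA is the numerical aperture of the objective. A quick calculation shows why proteins are out of reach (the NA of 1.4 is a representative value for a high-end oil-immersion objective, my number for illustration):

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Minimum resolvable distance: d = lambda / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light through a high-end oil-immersion objective (representative values)
d = abbe_limit_nm(550, 1.4)
print(f"{d:.0f} nm")  # ~196 nm - far larger than a ~5-10 nm protein molecule
```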
There are several ways to try to sneak around this problem, but the techniques that won this morning are particularly good ones. Stefan Hell worked out an ingenious method called stimulated emission depletion (STED) microscopy. If you have some sort of fluorescent label on a small region of a sample, you get it to glow, as usual, by shining a particular wavelength of light on it. The key for STED is that if another particular wavelength of light is used at the same time, you can cause the resulting fluorescence to shift. Physically, fluorescence results when electrons get excited by light, and then relax back to where they were by emitting a different (longer) wavelength. If you stimulate those electrons by catching them once they're already excited by the first light, they fall back into a higher vibrational state than they would otherwise, which means less of an energy gap, which means less energetic light is emitted - it's red-shifted compared to the usual fluorescence. Pour enough of that second stimulating light into the system after the first excitation, and you can totally wipe out the normal fluorescence.
And that's what STED does. It uses the narrowest possible dot of "normal" excitation in the middle, and surrounds that with a doughnut shape of the second suppressing light. Scanning this bulls-eye across the sample gives you better-than-diffraction-limit imaging for your fluorescent label. Hell's initial work took several years just to realize the first images, but the microscopists have jumped on the idea over the last fifteen years or so, and it's widely used, with many variations (multiple wavelength systems at the same time, high frames-per-second rigs for recording video, and so on). Here's a STED image of a labeled neurofilament compared to the previous state of the art. You'd think that this would be an obvious and stunning breakthrough that would speak for itself, but Hell himself is glad to point out that his original paper was rejected by both Nature and Science.
You can, in principle, make the excitation spot as small as you wish (more on this in the Nobel Foundation's scientific background on the prize here). In practice, the intensity of the light needed as you push to higher and higher resolution tends to lead to photobleaching of the fluorescent tags and to damage in the sample itself, but getting around these limits is also an active field of research. As it stands, STED already provides excellent and extremely useful images of all sorts of samples - many of those impressive fluorescence microscopy shots of glowing cells are produced this way.
The other two winners of the prize worked on a different, but related technique: single-molecule microscopy. Back in 1989, Moerner's lab was the first to be able to spectroscopically distinguish single molecules outside the gas phase - pentacene, embedded in crystals of another aromatic hydrocarbon (terphenyl), down around liquid helium temperatures. Over the next few years, a variety of other groups reported single-molecule studies in all sorts of media, which meant that something that would have been thought crazy or impossible when someone like me was in college was now popping up all over the literature.
But as the Nobel background material rightly states, there are some real difficulties with doing single-molecule spectroscopy and trying to get imaging resolution out of it. The data you get from a single fluorescent molecule is smeared out in a Gaussian (or pretty much Gaussian) blob, but you can (in theory) work back from that to where the single point must have been to give you that data. But to do that, the fluorescent molecules have to be scattered apart further than that diffraction limit. Fine, you can do that - but that's too far apart to reconstruct a useful image (Shannon and Nyquist's sampling theorem in information theory sets that limit).
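That working-back step can be demonstrated numerically: each detected photon lands somewhere in the diffraction-blurred Gaussian blob, but the centroid of N photons pins down the emitter to roughly σ/√N, well below the diffraction limit. A toy simulation, with all the numbers chosen purely for illustration:

```python
import random
import statistics

random.seed(42)

true_position = 0.0   # nm: where the single molecule actually sits
sigma = 100.0         # nm: diffraction-limited width of its blob
n_photons = 10_000    # photons collected from this one emitter

# Each photon is detected somewhere within the Gaussian point-spread function
photon_hits = [random.gauss(true_position, sigma) for _ in range(n_photons)]

# The centroid localizes the emitter to about sigma / sqrt(N)
estimate = statistics.mean(photon_hits)
expected_precision = sigma / n_photons ** 0.5  # = 1 nm here

print(f"error: {abs(estimate - true_position):.2f} nm "
      f"(theory ~{expected_precision:.1f} nm, blob width {sigma:.0f} nm)")
```

The catch the paragraph above describes is exactly this: the trick only works if the blobs don't overlap, which by itself forces the emitters too far apart to build a picture.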
Betzig himself took a pretty unusual route to his discovery that gets around this problem. He'd been a pioneer in another high-resolution imaging technique, near-field microscopy, but that one was such an impractical beast to realize that it drove him out of the field for a while. (Plenty of work continues in that area, though, and perhaps it'll eventually spin out a Nobel of its own). As this C&E News article from 2006 mentions, he. . .took some time off:
After a several-year stint in Michigan working for his father's machine tool business, Betzig started getting itchy again a few years ago to make a mark in super-resolution microscopy. The trick, he says, was to find a way to get only those molecules of interest within a minuscule field of view to send out enough photons in such a way that would enable an observer to precisely locate the molecules. He also hoped to figure out how to watch those molecules behave and interact with other proteins. After all, says Betzig, "protein interactions are what make life."
Betzig, who at the time was a scientist without a research home, knew also that interactions with other researchers almost always are what it takes these days to make significant scientific or technological contributions. Yet he was a scientist-at-large spending lots of time on a lakefront property in Michigan, often in a bathing suit. Through a series of both deliberate and accidental interactions in the past two years with scientists at Columbia University, Florida State University, and the National Institutes of Health, Betzig was able to assemble a collaborative team and identify the technological pieces that he and Hess needed to realize what would become known as PALM.
He and Hess actually built the first instrument in Hess's living room, according to the article. The key was to have a relatively dense field of fluorescent molecules, but to only have a sparse array of them emitting at any one time. That way you can build up enough information for a detailed picture through multiple rounds of detection, and satisfy both limits at the same time. Even someone totally outside the field can realize that this was a really, really good plan. Betzig describes very accurately the feeling that a scientist gets when an idea like this hits: it seems so simple, and so obvious, that you're sure that everyone else in the field must have been hit by it at the same time, or will be in the next five minutes or so. In this case, he wasn't far off: several other groups were working on similar schemes while he and Hess were commandeering space in that living room. (Here's a video of Hess and Betzig talking about their collaboration).
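The sparse-activation scheme can be sketched as a loop: in each imaging round, photoactivate only a tiny random subset of the fluorophores, localize each isolated blob by its centroid, let those bleach, and repeat until everything has been imaged, accumulating the localizations into one super-resolved picture. A bare-bones toy version (all parameters invented for illustration, and it ignores the possibility that two active emitters land close together):

```python
import random

random.seed(7)

# A dense field of emitters: one unresolvable smear if all glowed at once
emitters = [random.uniform(0, 1000) for _ in range(500)]  # positions in nm
sigma = 100.0              # nm: diffraction-limited blob width
photons_per_emitter = 400  # photons collected before each emitter bleaches

localizations = []
remaining = list(emitters)
while remaining:
    # Photoactivate a sparse random subset so active emitters rarely overlap
    active = random.sample(remaining, min(5, len(remaining)))
    for pos in active:
        # Localize each isolated blob by the centroid of its detected photons
        hits = [random.gauss(pos, sigma) for _ in range(photons_per_emitter)]
        localizations.append(sum(hits) / len(hits))
        remaining.remove(pos)  # this emitter bleaches after being imaged

# 500 localizations, each good to ~ sigma/sqrt(photons) = 5 nm
print(len(localizations))
```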
Shown here is what the technique can accomplish - this is from the 2006 paper in Science that introduced it to the world. Panel A is a section of a lysosome, with a labeled lysosomal protein. You can say that yep, the protein is in the outer walls of that structure (and not so many years ago, that was a lot to be able to say right there). But panel B is the same image done through Betzig's technique, and holy cow. Take a look at that small box near the bottom of the panel - that's shown at higher magnification in panel D, and the classic diffraction limit isn't much smaller than that scale bar. As I said earlier, if you'd tried to sell people on an image like this back in the early 1990s, they'd probably have called you a fraud. It wasn't thought possible.
The Betzig technique is called PALM, and the others that came along at nearly the same time are STORM, fPALM, and PAINT. These are still being modified all over the place, and other techniques like total internal reflection fluorescence (TIRF) are providing high resolution as well. As was widely mentioned when green fluorescent protein was the subject of the 2008 Nobel, we are currently in a golden (and green, and red, and blue) age of cellular and molecular imaging. (Here's some of Betzig's recent work, for illustration). It's wildly useful, and today's prize was well deserved.
It's been announced today that Jerry Meinwald (emeritus at Cornell) has won the National Medal of Science in chemistry. That's well deserved - his work on natural product pheromone and signaling systems has had an impact all through chemistry, biology, agricultural science, ecology, and beyond. He and Thomas Eisner totally changed the way that we look at insect behavior, among other things. Their work is the foundation of the whole field of chemical ecology.
So congratulations to Prof. Meinwald for a well-deserved honor, one of many he's received in his long career. But there's one that's escaped him: I note that (the late) Prof. Eisner has a Wikipedia entry, but Meinwald doesn't, which seems to be a bizarre oversight. If I had the time, I'd write one myself - won't someone?
A colleague brought this new JACS paper to my attention the other day. It's a complementary method to the classic reductive amination reaction. Instead of an aldehyde and amine (giving you a new alkylated amine), in this case, you use a carboxylic acid and an amine to give you the same product, knocking things down another oxidation state along the way.
This reaction, from Matthias Beller and co-workers at Rostock, uses Karstedt's catalyst (a platinum species) with phenylsilane as reducing agent. Double bonds don't get reduced, Cbz and Boc groups survive, as do aliphatic esters. Most of the examples in the paper are on substituted anilines, but there are several aliphatic amines as well. A wide variety of carboxylic acids seem to work, including fluorinated ones. I like it - as a medicinal chemist, I'm always looking for another way to make amines, and there sure are a lot of carboxylic acids out there in the world.
Well, we're getting close to the Nobel season, so it's time for the yearly "Who's going to win?" post. According to Thomson Reuters, some favorites are Tang/Van Slyke for organic light-emitting diodes, Moad/Rizzardo/Thang for RAFT polymerization, and Kresge/Ryoo/Stucky for mesoporous materials. You can see a real materials-science drift to those picks, which would indicate that the Thomson-Reuters folks think that we're not going to get another that's-not-chemistry-that's-biology award this year (nor one in analytical chemistry).
But if they're wrong about that, there are several things that shade over into molecular biology that are queued up. Some sort of prize for nuclear receptors would be plausible, and the CRISPR gene editing technology is surely in line for one. Another surely-that-will-win technique is optogenetics, the photoswitchable gene regulation method that's being used all over biology. They could always give it to Venter (et al.) for gene sequencing, or to Bruce Ames for the Ames test. As usual, these could end up in chemistry, or over in the physiology/medicine prize. In the zone where analytical chemistry blends into physics, there's single-molecule spectroscopy and SPR. I don't see a flat-out organic chemistry prize in the works, but Sharpless is still plausible as part of a click-chemistry/chemical biology sort of award.
Other predictions can be found at Wavefunction's blog (he has a different top pick) and Everyday Scientist. Add your own guesses to the comments section, and we'll see how wrong we all can be!
I've been enjoying the FBLD fragment conference in Basel. There have been many good talks, and it's been instructive to talk shop with people as well. Some things that various participants (and I) have noted:
(1) There are a lot of industry people here, from all over. Fragment-based methods have clearly made a big impression across drug discovery - academia finds it a low-barrier way to get into compound screening, and the industrial groups clearly find it useful as well. This has already lasted longer than the combichem boom of the 1990s (a point I'll be mentioning in my talk here tomorrow), and at this point, it would appear to be not a fad at all. Fragment-derived compounds are marching along through the clinic as we speak.
(2) The number of instrumental and biophysical techniques to do fragment work is still growing. There are always NMR screens, SPR, and X-ray crystallography (and many variations on each of these), calorimetry and thermal shift experiments, and so on. But there are mass spec methods, chromatographic ones, thermophoresis and electrophoresis, and more where those came from. Which is good - as anyone who'd done this can tell you, you need orthogonal ways of looking at the compounds.
(3) Several of these techniques are, though, still a bit "operator-dependent". SPR, just to pick one, needs someone experienced at the controls for the trickier experiments, because there are a lot of things that can give you funny-looking data (aggregation, compounds that bind to the chip surface, super-stoichiometric binding, and more). It's not a good part-time occupation. You can say the same thing about a lot of other screening techniques that are used for traditional high-throughput screening, though, as anyone who's troubleshot FRET, FP, or AlphaScreen assays will tell you at length.
(4) There are still a lot of important and interesting things we don't quite know about fragment collections and fragment binding. Some of these are just beginning to get sorted out - what are the physical characteristics of a good fragment screening set (beyond the obvious ones of size and solubility)? Are there different sorts of collections that could give you better hit rates against targets like protein-protein interactions (there have been many examples of these at the meeting)? What are the relationships between selectivity at the fragment level and selectivity in the eventual optimized compounds? The answers to questions like these are still being written.
I have more thoughts, naturally, but some of those my employer has first call on. I have a talk to give tomorrow (one of those 30,000-foot-overview things), right at the end of the conference, and I'll probably unburden myself of a few opinions then.
I've been enjoying the FBLD fragment conference in Basel. There have been many good talks, and it's been instructive to talk shop with people as well. Some things that various participants (and I) have noted:
(1) There are a lot of industry people here, from all over. Fragment-based methods have clearly made a big impression across drug discovery - academia finds it a low-barrier way to get into compound screening, and the industrial groups clearly find it useful as well. This has already lasted longer than the combichem boom of the 1990s (a point I'll be mentioning in my talk here tomorrow), and at this point, it would appear to be not a fad at all. Fragment-derived compounds are marching along through the clinic as we speak.
(2) The number of instrumental and biophysical techniques to do fragment work is still growing. There are always NMR screens, SPR, and X-ray crystallography (and many variations on each of these), calorimetry and thermal shift experiments, and so on. But there are mass spec methods, chromatographic ones, thermophoresis and electrophoresis, and more where those came from. Which is good - as anyone who'd done this can tell you, you need orthogonal ways of looking at the compounds.
(3) Several of these technique are, though, still a bit "operator-dependent". SPR, just to pick one, needs someone experienced at the controls for the trickier experiments, because there are a lot of things that can give you funny-looking data (aggregation, compounds that bind to the chip surface, super-stoichiometric binding, and more). It's not a good part-time occupation. You can say the same thing about a lot of other screening techniques that are used for traditional high-throughput screening, though, as anyone who's troubleshooted FRET, FP, or AlphaScreen assays will tell you at length.
(4) There are still a lot of important and interesting things we don't quite know about fragment collections and fragment binding. Some of these are just beginning to get sorted out - what are the physical characteristics of a good fragment screening set (beyond the obvious ones of size and solubility)? Are there different sorts of collections that could give you better hit rates against targets like protein-protein interactions (there have been many examples of these at the meeting)? What are the relationships between selectivity at the fragment level and selectivity in the eventual optimized compounds? The answers to questions like these are still being written.
I have more thoughts, naturally, but some of those my employer has first call on. I have a talk to give tomorrow (one of those 30,000-foot-overview things), right at the end of the conference, and I'll probably unburden myself of a few opinions then.
If you want to really push the frontiers of analytical chemistry, try making compounds of the superheavy elements. Science is reporting the characterization of seaborgium hexacarbonyl, which gives us all a chance to use Sg in an empirical formula. We're not going to be using it too often, though, because this work was conducted on eighteen atoms of Sg, and that's at least as hard as it sounds. You have several seconds in which to do all your work, and then it's back to the gigantic particle accelerator to see if you can make another atom or two. Separating these from the various decay products and other stuff is one of the hardest parts of that process, and was a key step in getting this experiment to work at all.
The reason for going to all this trouble was the predicted behavior of the valence electrons. Elements of this size are rather strange in that regard, in that the inner-shell electrons are relativistic - those of Sg have velocities of about 0.8c, which leads to some unusual effects. The element itself doesn't differ as much from its other periodic relatives (as opposed to 104 Rutherfordium and 105 Dubnium), but compounds leaving some outer-shell electrons free were still calculated to show some changes. In this case, the hexacarbonyl had similar behavior to the molybdenum and tungsten complexes, but its properties only come out right if you take the relativistic effects into account. So both Mendeleev and Einstein come out well in this one.
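Just as a back-of-the-envelope illustration (my own arithmetic, not a calculation from the paper), the size of the relativistic correction at that quoted 0.8c is easy to work out from the Lorentz factor:

```python
import math

def lorentz_gamma(beta: float) -> float:
    """Lorentz factor 1/sqrt(1 - (v/c)^2) for a particle at beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# At ~0.8c, the quoted speed for Sg's innermost electrons, the effective
# electron mass is up by about two-thirds - enough to contract the inner
# s-orbitals and, indirectly, shift the valence electron energies.
gamma = lorentz_gamma(0.8)
print(round(gamma, 3))  # → 1.667
```

A roughly 67% mass increase is why "take the relativistic effects into account" is not optional at this end of the periodic table.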
Irrationally, this makes seaborgium more of a "real" element to me (there have been a couple of other compounds reported before as well). Single atoms seem to me to be the province of physics, but once you start describing compounds, it's chemistry.
Last year I mentioned a paper that described the well-known drug tramadol as a natural product, isolated from a species of tree in Cameroon. Rather high concentrations were found in the root bark, and the evidence looked solid that the compound was indeed being made biochemically.
Well, thanks to chem-blogger Quintus (and a mention on Twitter by See Arr Oh), I've learned that this story has taken a very surprising turn. This new paper in Ang. Chem. investigates the situation more closely. And you can indeed extract tramadol from the stated species - there's no doubt about it. You can extract three of its major metabolites, too - its three major mammalian metabolites. That's because, as it turns out, tramadol is given extensively to cattle (!) in the region, so much of it that the parent drug and its metabolites have soaked into the soil enough for the African peach/pincushion tree to have taken it up into its roots. I didn't see that one coming.
The farmers apparently take the drug themselves, at pretty high dosages, saying that it allows them to work without getting tired. Who decided it would be a good thing to feed to the cows, no one knows, but the farmers feel that it benefits them, too. So in that specific region in the north of Cameroon, tramadol contamination in the farming areas has built up to the point that you can extract the stuff from tree roots. Good grief. In southern Cameroon, the concentrations are orders of magnitude lower, and neither the farmers nor the cattle have adopted the tramadol-soaked lifestyle. Natural products chemistry is getting trickier all the time.
Just how reactive are chemical functional groups in vivo? That question has been approached by several groups in chemical biology, notably the Cravatt group at Scripps. One particular paper from them that I've always come back to is this one, where they profiled several small electrophiles across living cells to see what they might pick up. (I blogged about more recent effort in this vein here as well).
Now there's a paper out in J. Med. Chem. that takes a similar approach. The authors, from the Klein group at Heidelberg, took six different electrophiles, attached them to a selection of nonreactive aliphatic and aromatic head groups, and profiled the resulting 72 compounds across a range of different proteins. There are some that are similar to what's been profiled in the Cravatt papers and others (alpha-chloroketones), but others I haven't seen run through this sort of experiment at all.
And what they found confirms the earlier work: these things, even fairly hot-looking ones, are not all that reactive against proteins. Acrylamides of all sorts were found to be quite clean, with no inhibition across the enzymes tested, and no particular reaction with GSH in a separate assay. Dimethylsulfonium salts didn't do much, either (although a couple of them were unstable to the assay conditions). Chloroacetamides showed the most reactivity against GSH, but still looked clean across the enzyme panel. 2-bromodihydroisoxazoles showed a bit of reactivity, especially against MurE (a member of the panel), but no covalent binding could be confirmed by MS (must be reversible). Cyanoacetamides showed no reactivity at all, and neither did acyl imidazoles.
Now, there are electrophiles that are hot enough to cause trouble. You shouldn't expect clean behavior from an acid chloride or something, but the limits are well above where most of us think they are. If some of these compounds (like the imidazolides) had been profiled across an entire proteome, then perhaps something would have turned up at a low level (as Cravatt and Weerapana saw in that link in the first paragraph). But these things will vary compound by compound - some of them will find a place where they can sit long enough for a reaction to happen, and some of them won't. Here's what the authors conclude:
An unexpected but significant consequence of the present study is the relatively low inhibitory potential of the reactive compounds against the analyzed enzymes. Even in cytotoxicity assays and when we looked for inhibitor enzyme adduct formation we did not find any elevated cytotoxicity or unspecific modification of proteins. Particularly in the case of chloroacetylamides/-anilides and dimethylsulfonium salts, which we consider to be among the most reactive in this series, this is a promising result. From these results the following consequences for moderately reactive groups in medicinal chemistry can be drawn. Promiscuous reactivity and off-target effects of electrophiles with moderate reactivity may often be overestimated. It also does not appear justified to generally exclude “reactive” moieties from compound libraries for screening purposes, since the nonspecific reactivity may turn out to be much inferior than anticipated.
There are a lot of potentially useful compounds that most of us have never thought of looking at, because of our own fears. We should go there.
Here is the updated version of the "smallest drugs" collection that I did the other day. Here are the criteria I used: the molecular weight cutoff was set, arbitrarily, at aspirin's 180. I excluded the inhaled anaesthetics, only allowing things that are oils or solids in their form of use. As a small-molecule organic chemist, I only allowed organic compounds - lithium and so on are for another category. And the hardest one was "Must be in current use across several countries". That's another arbitrary cutoff, but it excludes pemoline (176), for example, which has basically been removed from the market. It also gets rid of a lot of historical things like aminorex. That's not to say that there aren't some old drugs on the remaining list, but they're still in there pitching (even sulfanilamide, interestingly). I'm sure I've still missed a few.
What can be learned from this exercise? Well, take a look at those structures. There sure are a lot of carboxylic acids and phenols, and a lot more sulfur than we're used to seeing. And pretty much everything is polar, very polar, which makes sense: if you're down in this fragment-sized space, you've got to be making some strong interactions with biological targets. These are fragments that are also drugs, so fragment-based drug discovery people may find this interesting as the bedrock layer of the whole field.
Some of these are pretty specialized and obscure - you're only going to see pralidoxime if you have the misfortune to be exposed to nerve gas, for example. But there are some huge, huge compounds on the list, too, gigantic sellers that have changed their whole therapeutic areas and are still in constant use. Metformin alone is a constant rebuke to a lot of our med-chem prejudices: who among us, had we never heard of it, would not have crossed it off our lists of screening hits? So give these small things a chance, and keep an open mind. They're real, and they can really be drugs.
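For anyone who wants to rerun this exercise on their own compound list, the selection criteria above boil down to a simple filter. Here's a minimal sketch - the entries and flags below are just a few illustrative examples pulled from the post, with approximate molecular weights, not the full collection:

```python
# Hypothetical mini-dataset: (name, molecular weight, still in broad use?).
# Aspirin's MW (~180.2) sets the arbitrary cutoff described in the post;
# pemoline is excluded because it has largely been withdrawn.
candidates = [
    ("aspirin",     180.2, True),
    ("metformin",   129.2, True),
    ("pralidoxime", 137.2, True),
    ("pemoline",    176.2, False),
]

def smallest_drugs(compounds, mw_cutoff=180.2):
    """Keep organic drugs at or below aspirin's weight that are still marketed."""
    return [name for name, mw, in_use in compounds
            if mw <= mw_cutoff and in_use]

print(smallest_drugs(candidates))  # → ['aspirin', 'metformin', 'pralidoxime']
```

A real version would also need the "organic compounds only" and "oil or solid in form of use" checks, which take actual structure data (and judgment) rather than a one-line predicate.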
Well, the Discovery Channel has Shark Week, and apparently I have Sulfur Week going on around here. Or maybe it's Scripps Week (or Angewandte Chemie week), because here's the other paper from the Sharpless and Fokin labs on their new sulfonyl fluoride/sulfate ester chemistry. This is the extension to polymers: if you take a scaffold that has two fluorosulfate esters on it, and react that with another OTMS-bearing monomer, you get a rapid and clean polymerization to a polysulfate.
This is a structural class that has been only lightly investigated, because of synthetic difficulties, but it's now wide open. You don't often get to see a whole new area appear like this. It'll be interesting to see what properties these have as bulk materials, when spun into fibers, and so on. And since it's polymer chemistry, the only way to find these things out is to make them and see what you get. Right off, they look like more chemically resistant forms of polycarbonates, which would surely find some use, but there are probably many other uses waiting out there as well.
The broader point made by Sharpless in this paper and the previous one is that sulfates are under-used and under-explored as functional groups. I suspect that most organic chemists will have encountered only dimethyl sulfate in their careers, and that one is hardly representative of a whole universe of compounds. Biological molecules get sulfated around their edges in vivo, but I don't think that the disulfate esters are used biologically at all (anyone know of any examples?) There must be a good reason for that, but I certainly don't know what it is. (Frank Westheimer's classic "Why Nature Chose Phosphate" is, as always, a good read in this area, but I don't think he considers sulfates in that paper). A more recent overview of the phosphorylation landscape doesn't mention sulfates as an alternative, either. Phosphates clearly won out in the early days of biochemistry, but I don't know if that was all due to better thermodynamics, or availability (or both).
But as for organic synthesis, we can deal with all sorts of energy barriers, so there's no reason for us not to get some use out of sulfates. We just have to learn what they can do.
The Baran group has a good method out in Angewandte on preparing sulfinates. That's a class of compound that, until recent years, not too many people have cared about - partly because they're generally not a lot of fun to make. The sulfinic acid oxidation state does not appear to be a happy one. Reductive routes exist down from the sulfonic end, but they're not reliable, and oxidations often don't stop at this state on the way up, either. Back in my sulfone/sulfoxide days in the early 1990s, I used to prepare sulfinate salts from organolithiums or Grignard reagents by bubbling in sulfur dioxide. And while that works (although not all the time), it's not that wonderful a route, either. Just thinking about it, I can taste the swampy, choking vapors coming out of the saturated solution at the end of the reaction.
I was making sulfinate esters to prepare chiral sulfoxides, which was one of the main reasons anyone would visit that functional group in synthesis. But since then it's been appreciated that sulfinates are excellent radical precursors, allowing for some very useful carbon-carbon bond-forming reactions. And that's what this new paper provides: a way to produce sulfinates straight from a wide variety of carboxylic acids by using modified Barton chemistry. The paper shows several otherwise-nearly-impossible reagents (like N-Boc azetidine-3-sulfinate), and also shows how these things will step in and staple onto heterocyclic rings directly through the radical reaction.
Their prime example is a recently reported isostere for a t-butyl group, the trifluoromethylcyclopropyl. That's a hindered beast, so many routes that you might think of to use it are going to be troublesome, but the radical reaction works quite well. I think that medicinal chemists are going to be particularly interested in this reaction, since we're so often functionalizing heterocyclic rings, and we'd like to be able to quickly expand an SAR series by sticking one group after another onto the same substrate. This sort of C-H functionalization is not something you'd think of if you're just running over the classic reactions in your head, but everyone should start adding it to their mental lists.
I see from See Arr Oh's Twitter feed (from the ACS meeting in San Francisco) that he's attending a talk on macrocycles (and how some of them have bizarrely good PK and other properties). The speaker suggested a "Rule of 10" for orally available macrocycles: ring size greater than 10, molecular weight under 1000, and clogP around 10. That's quite a set.
The problem is, I'll bet that the percentage of reasonable-looking compounds that fit those criteria but are actually orally active is quite small. Probably significantly smaller than the percentage of reasonable-looking compounds that fit the Lipinski Ro5 criteria and behave decently. Orally-available macrocycles seem to have some sort of internal-hydrogen-bond thing going on that we don't really grasp, and you'd have to figure (hydrogen bonds being what they are) that there are far more ways to get it wrong than to get it right.
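As reported, the "Rule of 10" is simple enough to write down as a screen, though (as argued above) passing it is no guarantee of anything. A sketch with the three cutoffs as relayed from the talk - note that the tolerance window on "clogP around 10" is my own guess, not the speaker's number:

```python
def passes_rule_of_10(ring_size: int, mol_weight: float, clogp: float) -> bool:
    """Suggested screen for orally available macrocycles:
    ring size > 10, MW < 1000, clogP in the neighborhood of 10.
    The +/- 3 window on clogP is an assumption for "around 10"."""
    return ring_size > 10 and mol_weight < 1000 and abs(clogp - 10) <= 3

# A hypothetical macrocycle well inside all three cutoffs:
print(passes_rule_of_10(ring_size=12, mol_weight=850.0, clogp=9.2))  # → True
# Same ring and clogP, but over the weight limit:
print(passes_rule_of_10(ring_size=12, mol_weight=1200.0, clogp=9.2))  # → False
```

Which is exactly the worry: a three-parameter box like this is trivial to satisfy on paper, while the internal-hydrogen-bonding behavior that actually governs permeability doesn't appear in it at all.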
Barry Sharpless and his group have another addition to the "click" list: the reactions of sulfonyl fluorides. They're not compounds that too many people have encountered, and you might think that they react roughly like their familiar sulfonyl chloride counterparts. But they go off in a completely different direction in many cases.
I ran into this reactivity some years ago when I was making a series of sulfone derivatives. If you try to make sulfones by adding an organolithium to a sulfonyl chloride, you're going to be disappointed. You might get a little of the chloro derivative of the lithium reagent, but you won't get any sulfones. But sulfonyl fluorides do the straight nucleophilic substitution on the sulfur and give you the desired product. Stuart McCombie, who was down the hall from me at the time, put me on to that reaction, which he'd come across some years before himself. The fluorides themselves could be prepared conveniently from KF and the sulfonyl chloride (Sharpless has a better route, I should add, one that won't displace all the other electrophiles in the substrate).
This new paper shows that sulfonyl fluorides are capable of a number of reactions that are all their own, and that this reactivity extends all the way to sulfuryl fluoride itself. Inconveniently for chemists, that one's a gas, as opposed to the chloride (but that does make it widely used as a fumigant for termites). It does nothing when you expose it to water, for example, and under many conditions it's really quite unreactive. But it turns out to react with phenols very smoothly to give fluorosulfate esters whose further reactions are explored as well. The paper also mentions the uses of vinylsulfonyl fluoride, which was reported in 1979 to be a tremendous Michael acceptor (which one can believe). It may, in fact, be the Michael acceptor, reacting exhaustively with basic amines of all types within minutes. It's also shown that aryl silyl ethers react with sulfonyl fluorides very cleanly to produce unsymmetrical diarylsulfates, another class of compound that has not been well explored, probably because they're not easily prepared from sulfonyl chlorides.
So there's a whole set of new reactions here, along with some nearly forgotten ones that deserved to be hauled back out into the light. The diarylsulfate click reaction alone should prove useful on tyrosines and other biological phenols, as well as providing a whole class of materials that have really never been explored, and there's a lot more here as well. I remember digging around in the older German literature on this stuff back in the early 1990s when I was doing the sulfone chemistry, but the fact that I was looking at a useful (and almost totally neglected) area of synthesis didn't register on me. At times like this, a person feels that a lot of stuff must have not registered on them over the years!
So I go away for a few days, and people are already planning to replace me (and my colleagues) with robots? Can't turn your back on anyone in this business. What that article is talking about is the long-term dream of a "synthesis machine", a device that would take whatever structure you fed into it and start in trying to make it. No such device exists - nothing even remotely close to it exists - but there's nothing impossible about it.
A British project called Dial-a-Molecule is laying the groundwork. Led by Whitby, the £700,000 (US$1.2-million) project began in 2010 and currently runs until May 2015. So far, it has mostly focused on working out what components the machine would need, and building a collaboration of more than 450 researchers and 60 companies to help work on the idea. The hope, says Whitby, is that this launchpad will help team members to attract the long-term support they need to achieve the vision. . .
. . .Some reckon it would take decades to develop an automated chemist as adept as a human — but a less capable, although still useful, device could be a lot closer. “With adequate funding, five years and we're done,” says Bartosz Grzybowski, a chemist at Northwestern University in Evanston, Illinois, who has ambitious plans for a synthesis machine of his own.
There's a lot of room in between "as adept as a human" and "still useful", let me tell you, and that word "useful" covers a lot of ground all by itself. But I agree that there's a lot of potential here, and that we're at least getting close to the point of being able to realize some of it. But then the arguing starts. This topic came up here just recently, and has before as well. But those examples were far simpler than the Turing-machine-like ideal synthesis device.
That one really is decades away, I'd say. There are a lot of problems to be dealt with. For one, you have the physical handling of reagents and reaction purification. Steve Ley's group and Tim Jamison's group are two of the best at this, and I recently had a chance to hear Jamison talk about some DARPA-funded work he's been doing on an automated platform to make simple pharmaceuticals on demand. It has taken them no small amount of work just to get the dispensing, purification, and transfer parts to mesh correctly, and that's for compounds for which defined synthetic routes have been well worked out. Throw in the problem of figuring out a good synthetic plan (among the vast number of reaction and reagent possibilities) and the problem gets wildly, exponentially more hairy.
That last link discusses some very interesting work from the Grzybowski group, who have been developing a system called Chematica. Here's the latest on that:
These demonstrations have impressed synthetic chemists, although few have had a chance to test Chematica. That is because Grzybowski is hoping to commercialize the system: he is negotiating with Elsevier to incorporate the program into Reaxys, and is working with the pharmaceutical industry to test Chematica's synthesis suggestions for biologically active, naturally occurring molecules. Grzybowski is also bidding for a grant from the Polish government, worth up to 7 million złoty (US$2.3 million), to use Chematica as the brain of a synthesis machine that can prove itself by automatically planning and executing syntheses of at least three important drug molecules.
I'll watch this with great interest, but it's worth noting that no one has yet tried to hook Grzybowski-type software up to Jamison/Ley-type hardware. And that'll be a real joy to execute. This looks to an outsider (me) like one of those cases where the software folks figure that the hardware is pretty much ready to go, and the hardware folks might be figuring that the software is more or less at the late-debugging stage. I suspect that anyone thinking down those lines (and there are some quotes in that Nature article that suggest it) is in for a rude shock. The kinds of reactions that useful software would suggest will be things that no one has ever tried to automate, and the range of reagents that can be accommodated by the existing hardware may well cripple the software algorithms before they even get off the floor. There's a lot of work to be done.
My guess is that we're going to see many years of machines that can do some things in very well-defined areas, but which will prove useless (or worse) if you try to push them into unknown territory. And unknown territory is what it's all about. The article mentions the most difficult level such a machine could work at: synthetic routes where new reactions have to be invented. Don't hold your breath for that one: a machine that could work its way through the first semester of an undergraduate organic lab course would, by current standards, be a tremendous accomplishment.
But at the same time, I want to re-emphasize that there's nothing intrinsically impossible about any of this. It's just crazily hard, and will require years of machete-hacking through thickets of engineering difficulties. I think that this really is the direction the field is heading, but it's going to be a long, long road.
There's a new report in the literature on the mechanism of thalidomide, so I thought I'd spend some time talking about the compound. Just mentioning the name to anyone familiar with its history is enough to bring on a shiver. The compound, administered as a sedative/morning sickness remedy to pregnant women in the 1950s and early 1960s, famously brought on a wave of severe birth defects. There's a lot of confusion about this event in the popular literature, though - some people don't even realize that the drug was never approved in the US, although this was a famous save by the (then much smaller) FDA and especially by Frances Oldham Kelsey. And even those who know a good amount about the case can be confused by the toxicology, because it's confusing: no phenotype in rats, but big reproductive tox trouble in mice and rabbits (and humans, of course). And as I mentioned here, the compound is often used as an example of the far different effects of different enantiomers. But practically speaking, that's not the case: thalidomide has a very easily racemized chiral center, which gets scrambled in vivo. It doesn't matter if you take the racemate or a pure enantiomer; you're going to get both of the isomers once it's in circulation.
The compound's horrific effects led to a great deal of research on its mechanism. Along the way, thalidomide itself was found to be useful in the treatment of leprosy, and in recent years it's been approved for use in multiple myeloma and other cancers. (This led to an unusual lawsuit claiming credit for the idea). It's a potent anti-angiogenic compound, among other things, although the precise mechanism is still a matter for debate - in vivo, the compound has effects on a number of wide-ranging growth factors (and these were long thought to be the mechanism underlying its effects on embryos). Those embryonic effects complicate the drug's use immensely - Celgene, who got it through trials and approval for myeloma, have to keep a very tight patient registry, among other things, and control its distribution carefully. Experience has shown that turning thalidomide loose will always end up with someone (i.e. a pregnant woman) getting exposed to it who shouldn't be - it's gotten to the point that the WHO no longer recommends it for use in leprosy treatment, despite its clear evidence of benefit, and it's down to just those problems of distribution and control.
But in 2010, it was reported that the drug binds to a protein called cereblon (CRBN), and this mechanism implicated the ubiquitin ligase system in the embryonic effects. That's an interesting and important pathway - ubiquitin is, as the name implies, ubiquitous, and addition of a string of ubiquitins to a protein is a universal disposal tag in cells: off to the proteosome, to be torn to bits. It gets stuck onto exposed lysine residues by the aforementioned ligase enzyme.
But less-thorough ubiquitination is part of other pathways. Other proteins can have ubiquitin recognition domains, so there are signaling events going on. Even poly-ubiquitin chains can be part of non-disposal processes - the usual oligomers are built up using a particular lysine residue on each ubiquitin in the chain, but there are other lysine possibilities, and these branch off into different functions. It's a mess, frankly, but it's an important mess, and it's been the subject of a lot of work over the years in both academia and industry.
The new paper has the crystal structure of thalidomide (and two of its analogs) bound to the ubiquitin ligase complex. It looks like they keep one set of protein-protein interactions from occurring while the ligase end of things is going after other transcription factors to tag them for degradation. Ubiquitination of various proteins could be either up- or downregulated by this route. Interestingly, the binding is indeed enantioselective, which suggests that the teratogenic effects may well be down to the (S) enantiomer, not that there's any way to test this in vivo (as mentioned above). But the effects of these compounds in myeloma appear to go through the cereblon pathway as well, so there's never going to be a thalidomide-like drug without reproductive tox. If you could take it a notch down the pathway and go for the relevant transcription factors instead, post-cereblon, you might have something, but selective targeting of transcription factors is a hard row to hoe.
If you ever find yourself needing to make large cyclic peptides, you now have a new option. This paper in Organic Letters describes a particularly clean way to do it: let glutathione-S-transferase (GST) do the work for you. Bradley Pentelute's group at MIT reports that if your protein has a glutathione attached at one end, and a pentafluoroaryl Cys at the other, that GST will step in and promote the nucleophilic aromatic substitution reaction to close the two ends together.
This is an application of their earlier work on the uncatalyzed reaction and on the use of GST for ligation. Remarkably, the GST method seems to produce very high yields of cyclic peptides up to at least 40 residues, and at reasonable concentration (10 mM) of the starting material, under aqueous conditions. Cyclic peptides themselves are interesting beasts, often showing unusual properties compared to the regular variety, and this method looks as if it will provide plenty more of them for study.
Catalysts are absolutely vital to almost every field of chemistry. And catalysis, way too often, is voodoo or a close approximation thereof. A lot of progress has been made over the years, and in some systems we have a fairly good idea of what the important factors are. But even in the comparatively well-worked-out areas one finds surprises and hard-to-explain patterns of reactivity, and when it comes to optimizing turnover, stability, side reactions, and substrate scope, there's really no substitute for good old empirical experimentation most of the time.
The heterogeneous catalysts are especially sorcerous, because the reactions are usually taking place on a poorly characterized particle surface. Nanoscale effects (and even downright quantum mechanical effects) can be important, but these things are not at all easy to get a handle on. Think of the differences between a lump of, say, iron and small particles of the same. The surface area involved (and the surface/volume ratio) is extremely different, just for starters. And when you get down to very small particles (or bits of a rough surface), you find very different behaviors because these things are no longer a bulk material. Each atom becomes important, and can perhaps behave differently.
Now imagine dealing with a heterogeneous catalyst that's not a single pure substance, but is perhaps an alloy of two or more metals, or is some metal complex that itself is adsorbed onto the surface of another finely divided solid, or needs small amounts of some other additive to perform well, etc. It's no mystery why so much time and effort goes into finding good catalysts, because there's plenty of mystery built into them already.
Here's a new short review article in Angewandte Chemie on some of the current attempts to lift some of the veils. A paper earlier this year in Science illustrated a new way of characterizing surfaces with X-ray diffraction, and at short time scales (seconds) for such a technique. Another recent report in Nature Communications describes a new X-ray tomography system to try to characterize catalyst particles.
None of these are easy techniques, and at the moment they require substantial computing power, very close attention to sample preparation, and (in many cases) the brightest X-ray synchrotron sources you can round up. But they're providing information that no one has ever had before about (in these examples) palladium surfaces and nanoparticle characteristics, with more on the way.
Here's a question for those of you who've used Selectfluor (Air Products trademark), the well-known fluorinating reagent. I've had an email from someone at Sigma-Aldrich, wondering if people have noticed corrosion problems with either glass or stainless steel when using or storing the reagent. I've hardly used it myself, so I don't have much to offer, but I figured that there was a lot of chemical experience out in the blog's readership, and someone may have something to add.
A look through some of the medicinal chemistry literature this morning got me to thinking: does anyone have any idea of which drug target has the most different/diverse chemical matter that's been reported against it? I realize that different scaffolds are in the eye of the beholder, so it's going to be impossible to come up with any exact counts. But I think that all the sulfonamides that hit carbonic anhydrase, for example, should for this purpose be lumped together: that interaction with the zinc is crucial, and everything else follows after. Non-sulfonamide CA inhibitors would each form a new class for each new zinc-interacting motif, and any compounds that don't hit the zinc at all (are there any?) would add to the list, too. Then you have allosteric compounds, which are necessarily going to look different than active-site inhibitors.
My guess is that some of the nuclear receptors would turn out to win this competition. They can have large, flexible binding pockets that seem to recognize a variety of chemotypes. So maybe this question should be divided up a bit more:
1. What enzyme is known to have the widest chemical variety of active-site inhibitors?
2. Which GPCR has the widest chemical variety of agonists? Antagonists? (The antagonists are going to win this one, surely).
3. And the open-field question asked above: what drug target of any kind has had the widest variety of molecules reported to act on it, in any fashion?
I don't imagine that we'll come to any definitive answer to any of these, but some people may have interesting nominations.
Update: in response to a query in the comments, maybe we should exempt the drug-metabolizing enzymes from the competition, since their whole reason for living is to take on a wide variety of unknown chemical structures.
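For what it's worth, the lumping exercise itself is mechanical once you've decided what counts as the key motif. Here's a deliberately crude sketch - substring checks on SMILES strings, which no cheminformatician would accept (you'd want real substructure matching, à la RDKit), and every structure and motif string below is invented:

```python
from collections import Counter

# Crude "lump by key motif" sketch: group reported hits by the first
# recognized binding-motif string found in their SMILES. Real work would
# use substructure matching (e.g. RDKit); substring tests on raw SMILES
# miss equivalent ways of writing the same group. All structures invented.
MOTIFS = {
    "sulfonamide (zinc binder)": "S(=O)(=O)N",
    "thiol (zinc binder)": "[SH]",
}

def classify(smiles):
    for name, pattern in MOTIFS.items():
        if pattern in smiles:
            return name
    return "other / allosteric?"

hits = [
    "c1ccc(cc1)S(=O)(=O)N",      # benzenesulfonamide-like
    "Cc1ccc(cc1)S(=O)(=O)N",     # toluenesulfonamide-like
    "OC(=O)c1ccccc1[SH]",        # invented thiol-bearing acid
    "c1ccncc1CCN",               # nothing recognized
]

counts = Counter(classify(s) for s in hits)
for cls, n in counts.items():
    print(cls, n)
```

Under this rule, both sulfonamides land in one class no matter what the rest of the molecule looks like - which is exactly the carbonic anhydrase situation described above.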
Stephanie Kwolek has died at 90, and many might recognize her name as the discoverer of Kevlar. I have that story as an entry in my manuscript for "The Chemistry Book" (one of several polymer chemistry entries). What the organic chemists in the crowd might not realize is that one of Kwolek's key steps in the discovery was the use of a solvent that would actually dissolve the aromatic amide polymer for it to be spun. As you'd imagine, it's not the most soluble stuff in the world, but HMPA did the trick.
The response from today's lab bench would be "HMPA?", with a couple of raised eyebrows. That solvent, though powerful indeed, is now recognized to be a carcinogen, and you don't see it around the lab as much as you used to. DuPont's own toxicology department helped to recognize the problem, and the switch to an alternate solvent (N-methylpyrrolidone, I believe) was eventually made, at no small expense. Part of that expense was patent-related: Akzo Nobel in particular had arrived at processes for producing aramid fibers with that less toxic solvent, so there were legal fireworks as well.
Kwolek's HMPA solution didn't look promising even after she'd made it, though. It was too mobile and watery-looking for the folks in the next department to think that it was going to spin into fibers, but she finally persuaded someone to try it. To everyone's surprise, it made terrific fibers whose strength was off the scale of some of the testing equipment. Kevlar and other aramid fibers are famously used in bulletproof vests, and Kwolek herself had many interactions over the years with police officers whose lives had been saved. The aramid fibers as a class show up in all sorts of test-of-strength applications, such as tires, brake linings, lightweight composite materials, flame-resistant clothing, boat hulls and so on.
I wanted to say a bit more about the Cyclofluidics automated med-chem system I wrote about the other day. Here's another PDF from the company about the software used in the device. And here's how the software is making its decisions:
These design methods utilise Random Forest activity prediction employing one or both of two design strategies – “Chase Objective” and “Best Objective Under-sampled”. Regression models are built using the Random Forest method implemented in R accessed via the Pipeline Pilot ‘Learn R Forest Model’ component. The default settings of this component are used except for the molecular descriptors as follows:
2. Molecular Weight
3. Number of H Donors,
4. Number of H Acceptors,
5. Number of Rotatable Bonds,
6. Molecular Surface Area,
7. Molecular Polar Surface Area
These are used together with ECFP_6 fingerprints. The fingerprints are not folded and are used as an array of counts. The Random Forest algorithm was chosen because it can perform non-linear regression, is resistant to overfitting and has very few tuning parameters. In all cases the dependent variable is pIC50.
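For the curious, here's roughly what that kind of model looks like under the hood. This is a toy random-forest regressor written from scratch (bootstrapped single-split trees over a random feature subset, predictions averaged) - it is not the Cyclofluidics/Pipeline Pilot implementation, and all the descriptor values and pIC50s are invented:

```python
import random

# Toy random-forest regressor, built from scratch: bootstrap the training
# set, fit a one-split tree on a random feature subset, average the trees.
# Descriptors mimic the list above; every number here is invented.

def fit_stump(X, y, features):
    """Best single (feature, threshold) split by total squared error."""
    best = None
    for f in features:
        for t in sorted(set(row[f] for row in X)):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, f, t, ml, mr)
    if best is None:                  # degenerate bootstrap: fall back to the mean
        m = sum(y) / len(y)
        return (0, float("inf"), m, m)
    return best[1:]                   # (feature, threshold, left_mean, right_mean)

def fit_forest(X, y, n_trees=50):
    forest = []
    for _ in range(n_trees):
        idx = [random.randrange(len(X)) for _ in X]        # bootstrap sample
        feats = random.sample(range(len(X[0])), 2)         # random feature subset
        forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx], feats))
    return forest

def predict(forest, x):
    return sum((lm if x[f] <= t else rm) for f, t, lm, rm in forest) / len(forest)

random.seed(0)
# "Compounds" as [MW/100, H-bond donors, H-bond acceptors, rotatable bonds]
X = [[3.2, 1, 3, 4], [4.5, 2, 5, 7], [2.8, 0, 2, 3], [5.1, 3, 6, 9], [3.9, 1, 4, 5]]
y = [6.1, 7.3, 5.2, 7.9, 6.8]         # invented pIC50 values
forest = fit_forest(X, y)
print(round(predict(forest, [4.0, 2, 4, 6]), 2))
```

In the real system, the unfolded ECFP_6 count arrays would just extend each descriptor vector, and R's Random Forest does the heavy lifting.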
That's about what I'd figured. I wrote about what turns out to be an early application of this device last year, in an effort to find Abl kinase inhibitors. At the time, it struck me more as "automated patent busting", because the paper had quite a leg up based on SAR from competitor compounds. That seems to me to be the best use for this sort of system - where you know something about the core (or cores) that you're interested in, and you want to vary the side chains in a systematic fashion. The appealing thing is the possibility of very fast cycle time with an integrated assay. I'm not sure if the machine is robust enough to leave it going over the weekend, but it would be nice to come in on Monday and find the most potent derivatives already rank-ordered for you.
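That Monday-morning scenario is, at bottom, a design-make-test loop around a surrogate model: score the untested candidates, "synthesize and assay" the best one, refit, repeat. A minimal sketch, with an invented assay function standing in for the real flow IC50 measurement (nothing here comes from the actual instrument):

```python
import random

# Closed-loop design-make-test sketch: score untested candidates with a
# surrogate model, "synthesize and assay" the best one, refit, repeat.
# The assay is an invented function of two descriptors; the real system
# measures IC50 in flow.

random.seed(1)
candidates = [(round(random.uniform(0, 5), 2), round(random.uniform(0, 5), 2))
              for _ in range(40)]               # (descriptor1, descriptor2) pairs

def assay(c):
    """Hidden ground truth, standing in for the on-line IC50 measurement."""
    return 9.0 - (c[0] - 3.0) ** 2 - 0.5 * (c[1] - 1.0) ** 2   # a fake pIC50

def surrogate(tested, c):
    """Distance-weighted average of measured activities (a crude model)."""
    w = [(1.0 / (0.01 + (c[0] - t[0]) ** 2 + (c[1] - t[1]) ** 2), p)
         for t, p in tested]
    return sum(wi * p for wi, p in w) / sum(wi for wi, _ in w)

pool = list(candidates)
first = pool.pop(0)
tested = [(first, assay(first))]                # seed measurement
for _ in range(10):                             # ten design-make-test cycles
    best = max(pool, key=lambda c: surrogate(tested, c))
    pool.remove(best)
    tested.append((best, assay(best)))

ranked = sorted(tested, key=lambda tp: -tp[1])
print("best found:", ranked[0])
```

A real selector would also weigh synthesizability and novelty, and pure greed like this is exactly what makes such loops thrash when the SAR isn't smooth.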
Potency, of course, is not everything. But it's a reasonable place to start, because you'd rather have activity to burn in the pursuit of selectivity, ADMET, etc., because burn it you often must. And it's also true that not every SAR follows such an algorithmically reducible path, in which case I imagine that the software (and hardware) must end up thrashing around a bit. (Well, so do we med chemists, so why should the machines miss out on the experience?)
And as mentioned before, the chemistry has to be amenable to reasonable-yielding wide-ranging techniques. I would imagine a lot of amides, a lot of metal couplings, some of the easier condensations - to be honest, the run of compounds I'm making in the lab right now would work fine. My grandmother could have prepared these, were she still with us, because it's dump-and-heat-overnight stuff. The problem is that the assay that these compounds are going into is most definitely not ready to be easily automated yet; it requires the ministrations of a good enzymologist. So this machine lives in the Venn-diagram intersection of "Robust chemistry/robust assay/comprehensible SAR", but in that space it would seem to be pretty useful. Oh, there's another circle in that diagram: "Chemistry that doesn't crash out in the flow channels". I'm a big fan of flow chemistry, but the wider you cast your net in an array of flow experiments, the better your chances of finding the Monolithic Mega-clog. I, too, have stop-started straight DMSO through the tubing while holding a heat gun.
A reader, "Brian", has sent along some background on the Cyclofluidics instrument:
I can answer your query of whether an expert system containing “med-chem lore” could exist.
The “Cyclofluidic” Business and Technical Plans of 2008 were developed and presented for UK Government SME funding by SMEs who in 2006 had combined their know-how and technology to provide a successful demonstration of an iterative ‘closed loop’ system using a flexible reagent palette and field-based software able to jump to unforeseeable chemotypes with improved activity. The system had two learning loops and relied on large knowledge databases. One loop used iterative ‘rational design, synthesis, test and re-design’ cycles set to define potential lead chemotypes. The other loop cycled much more frequently, leveraging the extreme frugality and speed of sub-microgram, nano-flow chemistry to investigate and optimize the reactions needed to reach novel hypothesized targets. As the approach was essentially based on 3-dimensional whole-molecule bioisosterism (rather than 2-D atom-centered charged structures), much of your concern regarding the need to avoid a “core and substituents” approach and have a rapid route-scoping process was addressed. More details of the method and how the next targets are chosen can be found here.
For reasons that they have never fully shared, the UK Government funding body (TSB) were persuaded that all of the SME fund allocated on the basis of the 2006 proof of concept would be better spent on a zero-based start by ex-Pfizer and UCB staff via a jointly fully owned SME, Cyclofluidic Ltd, created in August 2008 for the purpose. None of the original system developers was funded or employed. Although many aspects of the original hardware configuration are clearly carried through to Cyclofluidic, as far as I know their “design algorithm” has never been described. Therefore while I can tell you how the next round of analogues was chosen automatically in the predecessor system, I cannot answer your specific question of how Cyclofluidic does it other than it is not the same.
Interesting. And I also wanted to highlight this comment from the previous post, in case the server problems have made it too hard for people to see it:
. . .Another big pharma company spent well over $200 million building 2 automated chemistry buildings and filling them with automation that was going to automate all of synthesis, purification, handling and screening of pharmaceuticals.
The systems clogged constantly and broke often, the software was too complex for anyone to use, and the chemistry was very limited and made amides and triazines. The company also bought two other companies that automated chemistry and made vast libraries of amides and triazines, both of which eventually also went away. The real issue is not finding leads, it is discovering drugs.
Both buildings are now empty, and neither one ever produced as many compounds as a small team of chemists did at each of two other small groups, which were not given the budget to fully automate chemistry but merely allowed to buy some useful tools for simple automation, some of which worked really well, like automated vial weighers, prep HPLC systems, and Gilson 215 liquid handlers. And several other groups of chemists made simple compounds the old-fashioned ways, which led to more clinical drugs than either.
So of course all of the groups of chemists were laid off, and the few groups that made zero marketed drugs were kept. I guess that, statistically speaking, they are due.
Food for thought. The situation described in Arthur C. Clarke's classic story "Superiority" is always waiting for us.
See what you think about this PDF: Cyclofluidics is advertising the "Robot Medicinal Chemist". It's an integrated microfluidic synthesis platform and assay/screening module, with software to decide what the next round of analogs should be:
Potential lead molecules are synthesised, purified and screened in fast serial mode, incorporating activity data from each compound as it is generated before selecting the next compound to make.
To ensure data quality, each compound is purified by integrated high pressure liquid chromatography (HPLC), its identity confirmed by mass spectrometry and the concentration entering the assay determined in real time by evaporative light scattering detection (ELSD). The compound's IC50 is then measured in an on-line biochemical assay and this result fed into the design software before the algorithm selects the next compound to make – thus generating structure-activity relationship data. The system is designed to use interchangeable design algorithms, assay formats and chemistries and at any stage a medicinal chemist can intervene in order to adjust the design strategy.
I can see where this might work, but only in special cases. The chemistry part would seem to require a "core with substituents" approach, where a common intermediate gets various things hung off of it. (That's how a lot of medicinal chemistry gets done anyway). Flow chemistry has improved to where many reactions would be possible, but each new reaction type would have to be optimized a bit before you turned the machine loose, I'd think.
The assay part is more problematic. There are assays suitable for in-line evaluation like this, but there are plenty of others that aren't. (I would think that SPR would be particularly well-suited, since it operates in flow, anyway). But that prior optimization that the chemistry needs is needed even more here, to make sure that things are robust enough that the machine doesn't generate crappy numbers (and more swiftly than you could do by hand!)
The software is the part I'm really wondering about. How is this thing picking the next round of analogs? Physiochemical descriptions like logD? Some sort of expert system with med-chem lore in it? Does it do any modeling or conformational analysis? Inquiring minds want to know. And I'd also like to know if they've sold any of these systems so far, and to hear some comments from their users.
I noticed some links to this post showing up on my Twitter feed over the weekend, and I wanted to be sure to mention it. There's a recipe for "all-natural" herbicide that goes around Facebook, etc., where you mix salt, vinegar, and a bit of soap, so Andrew Kniss sits down and does some basic toxicology versus glyphosate. The salt-and-vinegar mix will work, it seems, especially on small weeds, but it's more persistent in the soil and its ingredients have higher mammalian toxicity (which I'm pretty sure is the opposite of what people expect).
I hope this one makes a few people think, but I always wonder. The sorts of people who need this most are the ones least likely to read it, and the ones most likely to immediately discount it as "Monsanto shill propaganda" or the like. I had email like that last time I wrote about glyphosate (the second link above) - people asking me how much Monsanto was paying me and so on. And these people are also not interested in hearing about any LD50 data (which they probably assume is all faked, anyway). They're ready to tell you about long-term cancer and everything else (not that there's any evidence for that, either).
Going after this sort of thing is a duty, but an endless chore. I was also sent a link to an interview with some actress where she talks about her all-natural beauty regimen - so pure and green and holistic, and so very expensive, from what I could see. One of the things she advocated was clay. No, not for your skin. To eat it. It has, she explained, "negative charge" so it picks up "negative isotopes". Yeah boy. You'll have heard of those. And of course, it also picks up all those heavy metal toxins your body is swimming in, which is why a friend of hers told her that she tried the clay, and like, when she went to the bathroom it like, smelled like metal. I am not making any of this up. A few comments on that site, gratifyingly, wondered if there was any actual evidence for that clay stuff, but most of them were just having spasms of delight over the whole thing (and trading obscure, expensive sources for the all-natural lifestyle). So there's a lot of catching up to do.
There's a follow-up paper to that one on the robustness of new synthetic methods that I blogged about here. This one's in Nature Protocols, so it's a detailed look at just how you'd go about their procedure for shaking down a new reaction.
The reaction they've chosen is a rhodium-catalyzed indole formation (phenylhydrazine plus substituted alkyne), which is a good test bed for the real world (heterocycles, metal-catalyzed mechanism). The authors suggest a matrix of additives and reaction conditions, analyzed by GC, as in their original paper, to profile what can be tolerated and what can't. It's good to have the detailed reference out there, and I hope it gives referees and journal editors something to point at.
But will they? I can imagine a world where new reactions all have a "standard additives" grid somewhere in the paper, showing how the yields change. You could even color-code them (the old stoplight slide scheme, red/yellow/green, would be fine), and then we'd all have a way to compare synthetic methods immediately. Aldrich and others could sell the pre-assembled kit of the standard compounds to use. This would also point out reactions where more useful work could be done, since it would be immediately obvious that the new Whatsit Cyclization fails in the presence of tertiary amines, etc. Too often now you have to work that out for yourself, usually by seeing what the authors left out.
So why don't we all do this? It's more work, that's for sure, but not an incredible amount more work. If the major synthetic chemistry journals started asking for it, that would be that. It would also make the publication landscape even more clear, because the titles that don't ask for an extra few days to be spent on the reaction conditions would be hard-pressed to make a case that they weren't just venues for hackwork (or for people with something to hide). I'd rather read about reactions with a clear statement of what they'll work on and what they won't - wouldn't you?
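The stoplight coding itself is trivial once the additive yields are in hand - something like this, with invented yields and cutoffs just to show the idea:

```python
# Stoplight summary of a hypothetical additive-robustness grid: map each
# additive's measured yield (relative to the additive-free control) onto
# red/yellow/green. Additives and yields are invented, not from the paper.
CONTROL_YIELD = 82              # percent, additive-free run (invented)

grid = {                        # additive -> yield (%) with additive present
    "triethylamine": 11,
    "benzaldehyde": 64,
    "phenol": 78,
    "water (1 equiv)": 75,
    "thiophene": 40,
}

def stoplight(yld):
    """Green: within ~20% of control; yellow: partial; red: mostly killed."""
    if yld >= 0.8 * CONTROL_YIELD:
        return "green"
    if yld >= 0.4 * CONTROL_YIELD:
        return "yellow"
    return "red"

for additive, yld in grid.items():
    print(f"{additive:18s} {yld:3d}%  {stoplight(yld)}")
```

A reader could then see at a glance that (in this made-up example) tertiary amines kill the reaction while a little water is tolerated.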
Some readers may have seen the yogurt containers that advertise their low calorie count with the slogan "Nature got us to 100 calories. Not scientists. #howmatters." I was thinking of ripping into it for that, but I see that this piece at Popular Science has done an excellent job of that (and there are others).
Yeah, I'm sure that a nationally distributed yogurt company ignores food science. They don't hire anyone who knows about shelf stability, reproducibility of ingredient sources, industrial fermentation, mixing effects, scale-up, sterilization of equipment, or any of that boring sciencey stuff. They just trust to nature, because what's more natural than fleets of 18-wheel trucks hauling loads of yogurt around in plastic cups? And the little aluminum lid that you peel off to find that idiotic slogan on it - they find those out in the meadow, just down from the herd of cows who give pasteurized milk.
The slogan itself is annoying, but what gets me more are the attitudes behind it. First, of course, is the "Science = icky" assumption, which seems to be just taken for granted by whoever wrote this thing. Its author also trusted everyone who read it to make the same connection, which isn't too comforting, either. But at another level, there's a really cynical outlook here, one that is somewhat at odds with the company's gosh-we're-so-natural image. Someone higher-up OKed this slogan, in full knowledge that the company's yogurt is produced - has to be produced - on an industrial scale, in a factory with lots of stainless steel equipment. But what the hell - sounds good, and people will buy it. The sorts of people who are favorably impressed by a slogan like that can be sold all sorts of stuff. They're valuable customers, and you want to tell them things that make them happy, whether they're true or not.
Alex (Sasha) Shulgin has died at the age of 88. Among some groups of people, he was the most famous chemist in the world - I refer specifically to people with a strong interest in psychedelic drugs. Shulgin was, of course, the author of PIHKAL and TIHKAL, books whose titles resolve to, respectively, Phenethylamines/Tryptamines I Have Known And Loved, which should tell you where he was coming from.
But his days were different from these days. When Shulgin was doing his earlier work, these compounds were (for the most part) not illegal. Even after their legal status changed, Shulgin had cordial relations with the DEA (up until the early 1990s, that is, when things went downhill). He was certainly not interested in becoming a drug lord, or coming up with the most efficient backyard synthesis of some profitable amphetamine. Shulgin was interested in the human brain and what happened to it when you messed around with its balance of neurotransmitters, and he was his own test subject (along with a circle of friends). The papers he published on this work read now like documents from another planet - there in the Journal of Organic Chemistry would be a paper on the SAR of some series of compounds, with an experimental section that looked normal until you got to the in vivo part. It would read something like "Six subjects with experience in psychoactive substances ingested doses ranging from. . .", and it would go on to detail their responses on the Shulgin Rating Scale. (A complete publication list can be found at Shulgin's Wikipedia entry).
He actually inspired a number of people to become organic chemists. I wasn't one of them (I didn't hear about him until I was already in grad school), but I do know of others. And even though I'm about as far from him as possible in my willingness to experiment with psychoactive substances (never touched any, never plan to), I always had a lot of sympathy for him. He wanted to find out what such things did, and he was willing to do what it took to find out. We disagree in philosophy as well - Shulgin felt (as have many people who've experienced such compounds) that they provided a window into a more complicated reality. I don't put much stock in that myself - it seems to me like hearing a snarl of static after pouring a cup of coffee into the back of a radio and then deciding that it was a new kind of radio station. I think that exposing oneself to these agents risks brain damage, and since I discount the experiences they provide, it's never seemed worthwhile to me. But never having taken any such substances myself, I realize that my authority to speak about them may be limited. Many people seem to have benefited from exposure to psychedelics, while others appear to have been permanently damaged. An inability to tell which group I might fall into does not increase my desire to try anything in this line, either.
Shulgin was a very unusual person, but he was also a pioneer and a real scientist. If he has imitators, psychedelic self-experimenters who are not interested in making money, they're keeping quiet. Instead, we have plenty of folks who don't mind experimenting on others, as long as the money comes in. Many of these people probably see themselves as Shulgin's heirs, and I wonder if he thought of them as such or not. Risking your own neurons in your isolated farmhouse can be plausibly thought of as your own business - selling piles of untested compounds to partygoers is (at least to me) a different matter.
Many drug discovery researchers now have an idea of what to expect when a fragment library is screened against a new target. And some have had the experience of screening covalent, irreversible inhibitor structures against targets (a hot topic in recent years). But can you screen with a library of irreversibly-binding fragments?
This intersection has occurred to more than one group, but this paper marks the first published example that I know of. The authors, Alexander Statsyuk and co-workers at Northwestern, took what seems like a very sound approach. They were looking for compounds that would modify the active-site residues of cysteine proteases, which are the most likely targets in the proteome. But balancing the properties of a fragment collection with those of a covalent collection is tricky. Red-hot functional groups will certainly label your proteins, but they'll label the first things they see, which isn't too useful. If you go all the way in the other direction, epoxides are probably the least reactive covalent modifiers, but they're so tame that unless they fit into a binding site perfectly, they might not do anything at all - and what are the chances that a fragment-sized molecule will bind that well? How much room is there in the middle?
That's what this paper is trying to find out. The team first surveyed a range of reactive functional groups against a test thiol, N-acetylcysteine. They attached an assortment of structures to each reactive end, and they were looking for two things: absolute reactivity of each covalent modifier, and how much it mattered as their structures varied. Acrylamides dropped out as a class because their more reactive examples were just too hot - their reactivity varied up to 2000x across a short range of examples. Vinylsulfonamides varied 8-fold, but acrylates and vinylsulfones were much less sensitive to structural variation. They picked acrylates as the less reactive of the two.
A small library of 100 diverse acrylates was then prepared (whose members still only varied about twofold in reactivity), and these were screened (at 100 micromolar) against papain as a prototype cysteine protease. They'd picked their fragments so that everything had a distinct molecular weight, so whole-protein mass spec could be used as a readout. Screening ten mixtures of ten compounds each showed that the enzyme picked out three distinct fragments from the entire set, a very encouraging result. Pretreatment of the enzyme with a known active-site labeling inhibitor shut down any reaction with the three hits, as it should have.
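Since each fragment has a unique mass, deconvoluting a hit from a mixture is just arithmetic on the intact-protein mass shift (a thiol Michael addition to an acrylate adds the fragment's entire mass, with no leaving group). A sketch, with all masses invented rather than taken from the paper:

```python
# Whole-protein MS deconvolution sketch: a covalent fragment adds its full
# mass to the protein, so the observed mass shift identifies the hit as
# long as every fragment in the mixture has a distinct mass. All masses
# below are invented, not taken from the paper.
PROTEIN_MASS = 23426.0            # hypothetical protease mass, Da
TOLERANCE = 0.5                   # matching window, Da

mixture = {                       # fragment name -> mass in Da (invented)
    "frag01": 185.08, "frag02": 199.10, "frag03": 213.11, "frag04": 227.13,
    "frag05": 241.14, "frag06": 255.16, "frag07": 269.17, "frag08": 283.19,
    "frag09": 297.21, "frag10": 311.22,
}

def identify_hits(observed_masses):
    """Match each labeled-protein peak to the fragment explaining its shift."""
    hits = []
    for m in observed_masses:
        shift = m - PROTEIN_MASS
        for name, fm in mixture.items():
            if abs(shift - fm) <= TOLERANCE:
                hits.append(name)
    return hits

# Suppose two new peaks show up above the unmodified protein mass:
peaks = [23426.0 + 213.11, 23426.0 + 283.19]
print(identify_hits(peaks))
```

The active-site blocking control then has a clean readout too: pretreat the protein, and the shifted peaks simply fail to appear.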
Keep in mind that this also means that 97 reasonably-sized acrylates were unable to label the very reactive Cys in the active site of papain, and that they did not label any surface residues. This suggests that the compounds that did make it in did so because of some structure-driven binding selectivity, which is just the territory that you want to be in. Adding an excess of glutathione to the labeling experiments did not shut things down, which also suggests that these are not-very-reactive acrylates whose structures are giving them an edge. Screen another enzyme, and you should pick up a different set of hits.
And that's exactly what they did next. Screening a rhinovirus cysteine protease (HRV3C) gave three totally new hits - not as powerful against that target as the other three were against papain, but real hits. Two other screens, against USP08 and UbcH7, did not yield any hits at all (except a couple of very weak ones against the former when the concentration was pushed hard). A larger reactive fragment library would seem to be the answer here; 100 compounds really isn't very much, even for fragment space, when you get down to it.
So this paper demonstrates that you can, in fact, find an overlap between fragment space and covalent inhibition, if you proceed carefully. Now here's a question that I'm not sure has ever been answered: if you find such a covalent fragment, and optimize it to be a much more potent binder, can you then pull the bait-and-switch by removing the covalent warhead, and still retain enough potency? Or is that too much to ask?
The Science paper on chemogenomic signatures that I went on about at great length has been revised. Figure 2, which drove me and every other chemist who saw it up the wall, has been completely reworked:
To improve clarity, the authors revised Fig. 2 by (i) illustrating the substitution sites of fragments; (ii) labeling fragments numerically for reference to supplementary materials containing details about their derivation; and (iii) representing the dominant tautomers of signature compounds. The authors also discovered an error in their fragment generation software that, when corrected, resulted in slightly fewer enriched fragments being identified. In the revised Fig. 2, they removed redundant substructures and, where applicable, illustrated larger substructures containing the enriched fragment common among signature compounds.
Looking it over in the revised version, it is indeed much improved. The chemical structures now look like chemical structures, and some of the more offensive "pharmacophores" (like tetrahydrofuran) have now disappeared. Several figures and tables have been added to the supplementary material to highlight where these fragments are in the active compounds (Figure S25, an especially large addition), and to cross-index things more thoroughly.
So the most teeth-gritting parts of the paper have been reworked, and that's a good thing. I definitely appreciate the work that the authors have put into making the work more accurate and interpretable, although these things really should have been caught earlier in the process.
Looking over the new Figure S25, though, you can still see what I think are the underlying problems with the entire study. That's the one where "Fragments that are significantly enriched in specific sets of signature compounds (FDR ≤ 0.1 and signature compounds fraction ≥ 0.2) are highlighted in blue within the relevant signature compounds. . .". It's a good idea to put something like that in there, but the annotations are a bit odd. For example, the compounds flagged as "6_cell wall" have their common pyridines highlighted, even though there's a common heterocyclic core that all but one of those pyridines are attached to (it only varies by alkyl substituents). That single outlier compound seems to be the reason that the whole heterocycle isn't colored in - but there are plenty of other monosubstituted pyridines on the list that have completely different signatures, so it's not like "monosubstituted pyridine" carries much weight. Meanwhile, the next set ("7_cell wall") has more of the exact same series of heterocycles, but in this case, it's just the core heterocycle that's shaded in. That seems to be because one of them is a 2-substituted isomer, while the others are all 3-substituted, so the software just ignores them in favor of coloring in the central ring.
The same thing happens with "8_ubiquinone biosynthesis and proteosome". What gets shaded in is an adamantane ring, even though every single one of the compounds is also a Schiff base imine (which is a lot more likely to be doing something than the adamantane). But that functional group gets no recognition from the software, because some of the aryl substitution patterns are different. One could just as easily have colored in the imine, though, which is what happens with the next category ("9_ubiquinone biosynthesis and proteosome"), where many of the same compounds show up again.
I won't go into more detail; the whole thing is like this. Just one more example: "12_iron homeostasis" features more monosubstituted pyridines being highlighted as the active fragment. But look at the list: there are 3-aminopyridine pieces, 4-aminomethylpyridines, 3-carboxylpyridines, all of them substituted with all kinds of stuff. The only common thread, according to the annotation software, is "pyridine", but those are, believe me, all sorts of different pyridines. (And as the above example shows, it's not like pyridines form some sort of unique category in this data set, anyway).
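The fraction filter itself is simple to state in code - the hard, contentious part is deciding what "contains the fragment" means, which is exactly where these annotations go wrong. Here's the counting step, with membership sets simply asserted up front (all compound and fragment names invented):

```python
# Fragment-enrichment sketch: flag a fragment for a signature when the
# fraction of that signature's compounds containing it passes a cutoff.
# Membership is precomputed here; deciding it is where the annotations
# get slippery (is a 2-substituted pyridine "the same" fragment as a
# 3-substituted one?). All data invented. The FDR filter would need a
# null model on top of this and is omitted.
FRACTION_CUTOFF = 0.2

signatures = {
    "6_cell_wall": ["c1", "c2", "c3", "c4", "c5"],
    "12_iron_homeostasis": ["c6", "c7", "c8"],
}
contains = {                    # fragment -> set of compounds matching it
    "pyridine": {"c1", "c2", "c3", "c6", "c7", "c8"},
    "core_heterocycle": {"c1", "c2", "c3", "c4"},
    "imine": {"c5"},
}

def enriched(signature_compounds):
    out = {}
    for frag, members in contains.items():
        frac = len(members & set(signature_compounds)) / len(signature_compounds)
        if frac >= FRACTION_CUTOFF:
            out[frag] = round(frac, 2)
    return out

for sig, comps in signatures.items():
    print(sig, enriched(comps))
```

Note how a single outlier can drop the "core heterocycle" fraction below the pyridine fraction, which is just the kind of thing that produced the odd highlighting above.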
So although the most eye-rolling features of this work have been cleaned up, the underlying medicinal chemistry is still pretty bizarre, at least to anyone who knows any medicinal chemistry. I hate to be this way, but I still don't see anyone getting an awful lot of use out of this.
A lot of people in med-chem and chemical biology would like to have more natural-product-like features in their libraries of organic compounds. But realizing that idea is not so easy. Natural product structures, er, naturally tend to be more complex, with a lot of functionality and stereochemistry compared to the sorts of things you usually find in compound libraries. These features are usually chemically intensive, giving you a choice of a lot of structural variety or a lot of compounds, but not both at the same time.
There have been many efforts to get around this problem, among them diversity-oriented synthesis. DOS does tend to make complex compounds, but they're often out in their own area of chemical space, "unnatural products" that probably don't partake of whatever evolutionary advantages natural products have accumulated. Now there's another solution, courtesy of boronate chemistry. Martin Burke at Illinois has been working on adapting MIDA boronates to produce libraries of natural product modules and side chains, ready to be stapled on to whatever other structures you might have.
This Nature Chemistry paper has details of this approach as applied to polyene structures. There's a power-law distribution to these things that makes it all feasible:
. . .we decided to ask the question, how many bifunctional MIDA boronate building blocks would be required to make most of the polyene motifs found in nature? To find the answer, we devised a general retrosynthetic algorithm for systematically deconstructing these motifs into the minimum total number of building blocks. This analysis generated the intriguing hypothesis that the polyene motifs found in >75% of all polyene natural products can be prepared using just 12 MIDA boronate building blocks and one coupling reaction.
They've had to do a good amount of synthetic optimization to get this to work - the more straightforward conditions had compatibility problems. But it seems as if they've got some protocols now that will allow the various MIDA and pinacol boronates to work together orthogonally, and they've synthesized several natural product test cases to demonstrate. The polyenes make up between 1 and 2% of all the known natural products (>2800 of >238,000), but the idea is to extend this plan into other structural classes:
It is stimulating to consider how many building blocks would be required to access most of the remaining 99%. Although the answer to this is not yet known, the strategy demonstrated herein, that is, systematically identifying common motifs and transforming them into bifunctional building blocks compatible with iterative coupling, provides a roadmap for pursuing this problem. For example, we have identified 12 additional common structural motifs that are collectively present in more than 100,000 natural products. Half of these motifs have already been transformed into bifunctional halo MIDA boronates that are now commercially available. To achieve the same goal with some of the others will require solutions to frontier methodological problems, which include new chemistry to make and stereospecifically cross-couple Csp3 boronates.
I look forward to seeing these realized. This chemistry looks to provide a lot of interesting new compounds, and it could also answer some questions about natural products themselves. What, for example, would the screening hit rates and activity profiles be like for a large library of almost-natural products, things with the same biophysical properties and functional motifs as the rest of natural product space, but without the evolutionary fine-tuning?
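The building-block selection problem described in the excerpt above - finding the smallest set of blocks that can assemble most of the known motifs - is at heart the classic set-cover problem, which is usually attacked greedily. Here's a minimal toy sketch; the motif and block names are invented for illustration (the real analysis, of course, ran a retrosynthetic algorithm over the actual polyene natural product database):

```python
# Toy set-cover sketch. Motif and block names are invented for
# illustration; the real analysis worked over >2800 polyene motifs.
MOTIFS = {
    "triene_a":   {"B2"},
    "tetraene_b": {"B1", "B2"},
    "triene_c":   {"B2"},
    "diene_d":    {"B1"},
    "pentaene_e": {"B4"},
}

def greedy_blocks(motifs, target_fraction=0.75):
    """Greedily add building blocks until target_fraction of the
    motifs can be assembled entirely from the chosen set."""
    chosen = set()
    all_blocks = set().union(*motifs.values())

    def covered():
        # A motif is covered once every block it needs has been chosen
        return {m for m, req in motifs.items() if req <= chosen}

    while len(covered()) < target_fraction * len(motifs):
        # Pick the block that would complete the most motifs right now
        best = max(all_blocks - chosen,
                   key=lambda b: sum(req <= chosen | {b}
                                     for req in motifs.values()
                                     if b in req))
        chosen.add(best)
    return chosen, covered()

blocks, done = greedy_blocks(MOTIFS)
# With this toy data, two blocks suffice to cover four of the five motifs.
```

The greedy heuristic doesn't guarantee the true minimum set, but it's the standard practical approach, and it captures why a handful of well-chosen blocks can cover the bulk of a motif distribution with a long tail.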
Now doesn't this sound delicious? Get your heavy-metal chicken wings, because there's no substitute for that tangy mercury sauce. The people at Buffalo Wild Wings obviously know nothing whatsoever about the periodic table, but they're providing amusement for those of us who do. This one goes near the top of the pile of Stupid Chemistry Ads, for sure. (From Reddit's r/chemistry).
Update: as pointed out in the comments, the company's web site now has a disclaimer that you just don't see as much these days: "Product does not contain Mercury". Five minutes of reading beforehand and you wouldn't have to tell people that, guys.
If you want to see some folks who are really blasting away with a photochemistry setup, check out this paper from Boston University (Aaron Beeler and John Porco's groups). They have a 1000-watt high-pressure mercury lamp going, with appropriate filters (with that sort of light flux, you have a lot of spectral windows to choose from).
They're using this setup in both batch and flow mode to do reaction discovery - picking out starting materials with appropriate chromophores and seeing what they get when they irradiate them under different conditions (solvent, sensitizers, residence time and so on). From my own photochemical experiences, I can assume that the results will provide plenty of material to work on - these sorts of reactions can take drastically different courses all of a sudden. The time-intensive step is no doubt the part where you figure out what the heck happened.
The scheme shown is representative - the top product is a photo-Fries rearrangement, while the bottom one is clearly a 2+2 and some sort of sulfonyl migration. There are vast unexplored swaths of territory (both in reaction space and chemical space) to be found in photochemistry, and I look forward to seeing more weird things come out of this setup.
Google has recently marked the birthdays of Percy Julian and Dorothy Hodgkin, which is good of them, since both were outstanding chemists. And neither, it's safe to say, is known to the general public (no chemists are known to the general public). Not even all chemists today could tell you a lot about either of them, which is a shame as well. (Both of them show up in my "Chemistry Book" manuscript).
Percy Julian was a very accomplished natural products chemist, who made a large mark on the steroid field in its heroic days of the 1940s and 1950s. And he was black, born in 1899 in Montgomery, Alabama, which meant that the deck was very much stacked against his making a large mark of any kind, anywhere. His grandfather had been a slave, but his parents, greatly to their credit, strongly encouraged all their children to become well educated.
In Percy Julian's case, he did just that, but with many difficulties. He was Phi Beta Kappa at DePauw, but the university (like its town) was largely segregated. He went on to Harvard, but had no hope of obtaining a PhD there, as the university withdrew his teaching assistantship - he later completed his doctorate in Vienna, becoming perhaps only the third black American to earn a PhD in chemistry. Later on, when he had moved from academia to industry, DuPont turned him down for a position, saying that they had been "unaware that he was a Negro". In later years, his house outside Chicago was fire-bombed when he moved his family in. So he had a hard way up, by anyone's standards. His biographical details also show that he was someone with a very strong personality, which didn't always smooth the way, either, but no one without a strong personality would have been able to make any way at all against such headwinds.
There must be a book describing the early days of steroid chemistry, but I haven't seen it. Perhaps it would have to be a bit too technical for a general readership, but the days in which Julian worked in steroids were like nothing seen before or since. Russell Marker getting his giant Mexican yams stolen off the top of a bus, the founding of Syntex, the rush to find new routes to some of the most exciting (and lucrative) molecules in the world, chemical intermediates being shipped under armed guard: it's a wild story featuring some pretty wild people, and Percy Julian was right in the middle of it, at Glidden and later at his own company, Julian Laboratories. His work shifted the entire steroid industry towards soybeans as a feedstock, among other things.
Dorothy Hodgkin was for some time the outstanding X-ray crystallographer in the world. Considering the treatment of ambitious female scientists in the 1930s, this was an even more extraordinary feat. There were plenty of scientific difficulties as well. Given the X-ray sources available (feeble, by any modern standard) and the computational tools of the time (nonexistent, by any modern standard), the crystallography of any complex molecule was a major undertaking. But Hodgkin worked out the first X-ray structure of a steroid, the structure of penicillin (in 1945), and the structure of vitamin B12, among many others. She also took on the problem of protein structure, well before the tools of the science were ready for it, and she was one of the main reasons that the field advanced enough so that by the 1960s protein structures were actually beginning to be solved. Her first crack at the insulin structure was in 1934, which shows an extraordinary degree of scientific bravery when you consider what sort of problem this was at the time. This level of work won her a well-deserved Nobel.
Hodgkin's personality was also complex. Like her mentor and colleague, J. D. Bernal, she had what I can only characterize as a lifelong weakness for communism, and an apparent blind spot about it and its history which now looks like something that must have covered half the sky. Both she and Bernal are on record with some extraordinary statements about Stalin and other communist leaders, which might barely have been excused in the early 1930s, but continued long afterwards. See, for example, Hodgkin's own cringe-inducing foreword to a volume of chemical research allegedly by Elena Ceausescu, the horrible and fraudulent wife of the Romanian dictator. Perhaps, as Montesquieu says of Pompey, she went on saying these things mainly because she'd always said them, but her case (and Bernal's, and Haldane's, and others') makes for an interesting and somewhat depressing psychological study.
Great works in both art and science, though, are often done by people whose other opinions can be all over the landscape, and Hodgkin did indisputably great work in her field. She also inspired great personal affection and respect - Margaret Thatcher was a student of hers at Oxford, and could not possibly have disagreed with anyone more about the Soviet Union and many other topics, but she had Hodgkin's portrait hung at 10 Downing Street while she was Prime Minister.
Now for a sensitive topic. The natural suspicion, when something like these Google tributes come up, is to assume that Hodgkin is getting recognition because she was a woman and Julian because he was black. This is one of the more insidious effects of discrimination (and of attempts to remediate it), this conscious or unconscious devaluation of achievement. Both Hodgkin and Julian would have made the history books no matter what their gender or color, but their ability to accomplish what they did in spite of those factors makes them even more worthy of recognition. The goal (I would say) is for no one to give a damn about any of these things, and to look with wonder on the days when they were so bizarrely important. But they were important then, and how. Growing up black in the Deep South was no joke, and neither was trying to make headway in Oxford and Cambridge as a woman. Julian, Hodgkin, and others deserve recognition both for what they did and the conditions under which they were able to do it.
Chemjobber is right - you don't see many people scaling up a fluorination with fluorine gas. But this paper in Organic Process R&D manages to get away with it in a preparation of a fluoronaphthyridine. It's a good illustration of what process chemistry is all about - not ruling anything out in the quest for a reliable, scalable route. Sure glad it wasn't me, though - even diluted in nitrogen, I have no desire to work with elemental fluorine. I'm fine with all those effete, over-refined, decadent fluorination reagents - but then, I don't have to clean their by-products out of a multikilo reaction, either!
The inevitable reply to the Shultz paper on the validity of ligand efficiency measurements has appeared in ACS Medicinal Chemistry Letters. Sent in by a number of prominent fragment chemists at Astex, Carmot, Dundee, GSK, the Hungarian Academy of Sciences, and Gfree, it takes the earlier paper to task in a number of ways:
(The earlier article) states that LE and related metrics “violate the quotient rule of logarithms” and “appear plausible but are mathematical impossibilities”.
The primary purpose of our viewpoint article is to correct these mathematical statements and prevent them from propagating through the literature. We also examine the behavior of LE and lipophilic ligand efficiency (LLE) for two matched chemical pairs and compare this with a simple example of fuel efficiency. Finally we briefly consider genuine deficiencies of LE metrics so as to put valid criticism into perspective.
They start off by coming to the same conclusion that I did here - that there's nothing wrong with dividing a log by an integer, and that it's rather odd to think that there is. They also respond to one of Shultz's other objections, that LE needs to remain constant for each heavy atom that increases potency, by saying, essentially, that no it doesn't. Changes are larger on smaller molecules, and this has to be realized by anyone working with these metrics. I might note that the same thing happens with logP, and indeed with plain old molecular weight, if you think of it on a percentage basis: adding a trifluoromethyl (for example) is a bigger change on a smaller molecule in both those departments.
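That size-dependence is easy to see numerically. A quick sketch (all the compound numbers here are hypothetical), using the usual definition of ligand efficiency, LE ≈ 1.37 × pIC50 / heavy-atom count, in kcal/mol per heavy atom:

```python
# Ligand efficiency: LE = 1.37 * pIC50 / heavy_atom_count
# (1.37 kcal/mol is 2.303*RT at ~298 K). All compounds hypothetical.
def ligand_efficiency(pic50, heavy_atoms):
    return 1.37 * pic50 / heavy_atoms

# The same modification - five heavy atoms buying one extra log unit
# of potency - moves LE far more on a small molecule:
small     = ligand_efficiency(6.0, 10)  # 0.822
small_mod = ligand_efficiency(7.0, 15)  # ~0.64, a substantial drop
large     = ligand_efficiency(6.0, 30)  # 0.274
large_mod = ligand_efficiency(7.0, 35)  # 0.274, essentially unchanged
```

So a perfectly reasonable modification can send LE in opposite directions depending on the starting size - which is exactly why no one should expect it to stay constant per heavy atom added.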
Another part of the Shultz paper was his admiration for LLE as a measure of ligand efficiency. The current authors agree that it's useful, but add this:
Lipophilicity is an extremely important quantity to control during optimization, and Shultz rightly extols the virtues of LLE. LLE explicitly considers the balance of lipophilicity with potency and can be very useful in comparing HTS hits or during lead optimization. However, despite its strengths, it can be difficult to use LLE in comparing molecules of very different sizes and potencies. Also LLE will be less useful where the target requires very polar molecules. . .
They finish up by saying that Shultz makes some valid points (along with some invalid ones), but that his viewpoint article is rendered less useful by a tone that is "sometimes unhelpful to effective debate". I've been known to take some unhelpful tones myself in the past, so I have no particular standing in discussing that part of things, but for what it's worth, I agree with Shultz that far too many compound metrics have been proposed. I disagree with him that most of them are somehow mathematically invalid, but I agree that some of them are bound to be more useful than others. And I agree with the current paper that there's no such thing as a single one-size-fits-all compound metric, and that medicinal chemists (and their managers!) are just going to have to learn to deal with that.
Update: here's the post on this at Practical Fragments, where they're also taking a poll on what metrics people actually use.
A reader (and former colleague) sends along this picture from the Connecticut Science Center. When my family and I lived in the area, we used to take the kids there, in the old facility. As of 2009, they have a new building in downtown Hartford, which I haven't seen personally, but if this is the sort of science they've filled it with, I haven't missed much. Update: turns out that this new building is a new organization, different from the older site in West Hartford. Readers also tell me that this is far from the only nonsensical chemical decoration to be found inside the place.
What gets me about this sort of thing is (1) how idiotic it is and (2) how avoidable it is. That structure is probably the worst fake chemical drawing I've ever seen - it makes no sense, in several different directions at once, and anyone who knows chemistry will feel like spitting on the floor after seeing it. And it would have taken the decorator about five minutes to come up with something real instead.
But that, I guess, is what really gets me about this sort of thing. It's the attitude: close enough, right? Who's going to notice, a bunch of geeks? 'Sall the same, anyway, buncha lines and chemically-stuff. If someone had a display up about (say) Thailand, but the people in the video display were just gabbling funny noises instead of speaking Thai, there would be complaints. If a wall featured a background illustration of a map of the world, with the countries mislabeled and some of the letters backwards, there would be complaints. And there should be complaints about this, and by golly, this is one right here. The Connecticut Science Center should demonstrate that it actually gives a flying *&!# about science, and fix this. And perhaps they should check the other exhibits while they're at it, to make sure that the same decorative flair hasn't been applied to them as well.
Update: Good news! The Science Center has now said that they apologize for the bad chemistry, and that the graphics are being removed.
I found this paper from the MacMillan group on photochemical decarboxylation pretty interesting - but then, I always find new ways to make carbon-carbon bonds worth reading about. It uses the more-and-more-popular photoredox catalysts (in this case, the iridium series), and it builds on this earlier publication from the group. That showed that aryl nitriles were good coupling partners in these reactions, and those feature in every example in the new paper.
One thing I did notice, though, was in the experimental details (which are all in the Supporting Information). When you see, for example, N-Boc proline being coupled with dicyanobenzene, which one do you think the yield is based on? Maybe it's just my weltanschauung as a medicinal chemist, but I see the proline as the core and the dicyanobenzene as a partner, probably used in some excess, that's being coupled to it. But the experiments are run with the nitrile as the limiting reagent, with three equivalents of Boc-proline (or most any other amino acid).
I'm going to give the reaction a try, though - it looks simple enough to run, and it gets you into some interesting structures very quickly. I definitely appreciate the way this works with Boc and Cbz amino acids, as opposed to N-tosyl derivatives (seeing those is always a letdown - I know that they can be removed, but no one uses them as nitrogen protecting groups if anything else works instead). And the paper shows an impressive range of examples, which is very welcome. I have some other electron-deficient coupling partners in mind, and I'd like to see if you can run the thing in some other solvent besides straight DMSO, but first I'll see how well the canonical version works for me.
The Fujita group is out with another paper using their metal-organic framework crystallization technique. In this case, they soak a palladium complex into the crystal, and they're apparently able to see (over a period of hours) changes in the electron density as a halogenation reaction proceeds.
You don't see much time-lapse X-ray work. Even seeing the starting palladium complex is a feat, since in solution it doesn't hang around very long at all (in this case, it tends to lose its acetonitrile ligand and go on to dimerize). The fleeting intermediates in this crystalline state aren't going to show up, either, since they're going to make such tiny impacts on the electron density map, but you can see the main species as they shift. I'm pretty sure that I buy their interpretation of the data, but I'd be glad to hear from X-ray folks with other views.
You'll note that the starting sulfur-complexed Pd species is a bit odd - you don't see many sulfur-complexed palladium complexes like this in organic synthesis. The reason emerges as you read the text closely. They started out with Pd(acac)2, but had to replace that ligand with the xanthate, because otherwise they got ligand exchange with the zinc iodide parts of the metal-organic framework (which I can well believe). And after that, a solvent exchange was needed, soaking out the nitrobenzene from the original crystal formation and replacing it with acetonitrile.
I find that part very interesting indeed. That's because I've done some personal follow-up in this area (as many readers will have guessed), and I can tell you that in my hands, exposing uncomplexed crystals of this metal-organic framework to acetonitrile (or any polar solvent, for that matter) leads to their rapid decomposition. Having the pores filled with a guest, as in this case, apparently gives very different results, as well it might.
The paper calls this a "crystalline flask", and we are going to need some sort of new nomenclature to describe this state. It's certainly not the solution phase (far from it), but it's not the crystalline phase as we usually think of it, either. Studying reactions this way comes with the same sorts of footnotes as any other phase, though (solution versus gas versus solid): the energies of the participating species can be (and surely are) influenced by the medium. So their geometries and the resulting product distributions need to be interpreted with that in mind, but there's still a lot to be learned.
I've actually never heard of "Ebuychem.com", but a reader sent this interesting page along to me. So if you're in the market for "octa-azacubane" (yep, that's just as crazy as it sounds, if not crazier, a solid cube o' nitrogen atoms), then they're inviting you to query their sellers about availability. If that's in their database, what isn't?
Several people have written me after noticing this article on Taco Bell's ingredients. ABC gave me a call, since their reporter remembered my take on the Buzzfeed banned-ingredients stuff, and I was glad to say that although I haven't eaten at Taco Bell for years, it's not because their tacos have some potassium chloride in them.
But I also mentioned this post by Chemjobber, where he highlights a letter that showed up recently in C&E News, because I think it makes a good point about the attitude towards food ingredients. As I said to the ABC reporter, I don't have any patience for the "Ooh, that's hard to pronounce, so it must be icky" school of thought. Chemical nomenclature sounds weird and technical to people, for sure, but if you give the IUPAC names to the compounds found in the most pristine, all-natural piece of fruit plucked from a distant wild tree, it'll also sound like a witch's brew of toxic sludge - that is, if you don't know anything about chemistry or biology. James Kennedy at Monash University has made this point vividly in his posters of food ingredients. They're excellent. Here's his banana poster, and that link will take you to more.
But my guess is that they still won't convince many people. It's that same problem I mentioned the other day, about how you can't use reason to talk someone out of a position that wasn't arrived at by reason. There's a very human "ick" reflex, and it has nothing to do with the higher brain functions at all. As that C&E News letter writer says:
". . .with the notable exceptions of salt, water, and a few necessary minerals, many people, if not most, find the use of ingredients in their food that are not derived by simple processes from living things to be offensive."
It's an emotional response, and I suspect that it has to do with a lot of very well-ingrained reflexes about what looks safe to eat, and what doesn't. Dyeing a piece of bacon green makes it less appetizing, and serving wine out of an Erlenmeyer would throw off a taste-testing pretty thoroughly. As humans, we have instinctual responses to the question "Would I eat that?", with cultural and learned-behavior ones laid on top of those. The mental picture many people have of "chemicals" as a bubbling vat of toxic waste, paired with the food-or-not instinctual response, is enough for most folks to rule out anything that sounds like it came from a lab bench.
So Taco Bell's efforts to explain their ingredients list are probably doomed. Telling people that "Hey, you've already had this in other food" or "It's just there to make the meat easier to handle" won't address those emotional responses. It's the same problem as azodicarbonamide in bread dough (another good whack at this one is to be found here in the Montreal Gazette). When the "Food Babe" (Vani Hari) goes after something like this, the entire appeal is to the emotional "ick" response, whether it's valid or not. I don't think she has any interest at all in going past that point, anyway.
The thing is, azodicarbonamide, trehalose, and other such ingredients are all in the same category: stuff that's added to industrial-quantity food to make it easier/cheaper to make, mix, handle, store, or ship. That gives these ingredients still another level of unfamiliarity, since no home cook ever uses such things. But scale-up problems are just as real in the food business as they are in the lab: what works on the kitchen counter often does not work when you're supplying 1100 franchises with a fleet of trucks. Especially when every one of those locations is expected to provide a meal that is exactly the same as every other one, every time. You can have only familiar ingredients in the food, have hundreds (or thousands) of identical locations, and serve everything for low prices - but probably not all of those at the same time.
To add to the emotional response, it's worth remembering that in the past there have indeed been toxic ingredients added to food. We can go back to the use of lead salts by the Romans to sweeten wine and work up from there. But those sorts of stories have, thankfully, mostly disappeared from the world. The food supply in the industrialized countries is, as far as I can tell, safer than it's ever been. Even hunter-gatherers, wandering through untouched forests and eating the natural diet of wholesome goodness, manage to consume some things that are borderline by our standards. There are, of course, plenty of poisons in Eden. The emotional response to a beautiful tropical beach is one thing, but the plants growing nearby (and the plankton in the water) can be producing compounds whose use would be banned by chemical weapons treaties, were they only available in enough quantity to have been listed.
Now, given a choice, I do not eat Taco Bell's food, but that's because I don't find it all that tasty. Given a choice, I'll have a tomato that I raised myself over one from the store, because (again) it's more likely to taste better. But it's not exactly tomato season in Massachusetts, especially this week, so if I want one to make my own Mexican food, I have to buy it from a big supplier who hauled it to me from somewhere warmer. More seasonally, no one sells Black-Seeded Simpson lettuce in the stores (it probably doesn't ship well), but I have some growing out in the back yard right now in a small greenhouse. I'm not eating it because it's "organic", though. There's other fast food and processed food that I'll eat with no problems at all.
But as a chemist, I'm an outlier (as are most of the regular readers of this site). I'm not going to look at some long chemical name on a list of ingredients and immediately assume that I'm being poisoned, because I know that the chemical name of (say) Vitamin C sounds pretty fearsome by those standards, too: (R)-3,4-dihydroxy-5-((S)-1,2-dihydroxyethyl)furan-2(5H)-one, anyone? But I can understand why people do react that way. They're wrong, and I'm glad to point that out and to provide details about why I think they're wrong. But I'm not optimistic that I'm changing many opinions when I do it.
I wanted to mention another crowdfunded organic chemistry effort, this one on a small (and useful) scale. A former colleague of mine, Brent Chandler, is an assistant professor of chemistry at Illinois College, and he's working with undergraduates in organic synthesis. At the moment, he's trying to get funding for a summer undergraduate to work on a new synthesis of muscone. Synthesis of these macrocyclic musk compounds has only recently become economical at all, and prices are still high, so there's an opportunity.
I got my own start in the business as a summer undergrad back in 1981 at Hendrix College, and it was a great experience. The Indiegogo site for this effort is here. Chandler is trying to find an economical route to muscone, to train a young chemist, and to demonstrate to his institution that this can be a viable way to fund targeted research projects. We'll see how it works out!
Update: the author of this paper has appeared in the comments here (and elsewhere) saying that he's withdrawing the paper. These are apparently reviewer's comments on it, although I have no way of verifying that. Many of them don't sound like the comments I might have expected. There's more here as well.
Here we have one of the oddest papers to appear in Drug Discovery Today, which is saying something. The journal has always ranged wider than some of the others in this space, but this is the furthest afield I've seen to date. The title is "DrugPrinter: print any drug instantly", and I don't think I can do better than letting the abstract speak for itself:
In drug discovery, de novo potent leads need to be synthesized for bioassay experiments in a very short time. Here, a protocol using DrugPrinter to print out any compound in just one step is proposed. The de novo compound could be designed by cloud computing big data. The computing systems could then search the optimal synthesis condition for each bond–bond interaction from databases. The compound would then be fabricated by many tiny reactors in one step. This type of fast, precise, without byproduct, reagent-sparing, environmentally friendly, small-volume, large-variety, nanofabrication technique will totally subvert the current view on the manufactured object and lead to a huge revolution in pharmaceutical companies in the very near future.
Now, you may well read that and ask yourself "What is this DrugPrinter, and how can I get one?" But note how it's all written in the conditional - lots of woulds and coulds, which should more properly be mights and maybes. Or maybe nots. The whole thing is a fantasy of atomic-level nanotechnology, which I, too, hope may be possible at some point. But to read about the DrugPrinter, you'd think that someone's ready to start prototyping. But no one is, believe me. This paper "tells" you all the "steps" that you would need to "print" a molecule, but it leaves out all the details and all the hard parts:
Thus, if DrugPrinter can one day become a reality it will be a huge step forward in drug discovery. The operator needs only to sit down in front of a computer and draw the structure of compound, which is then inputted into the computer, and the system will automatically search by cloud computing for suitable reaction conditions between bond and bond. . .
That actually captures the tone of this paper pretty well - it exists on a slightly different plane of reality, and what it's doing in Drug Discovery Today is a real mystery, because there's not much "Today" in it, for one thing. But there's something else about it, too - try this part out and see what you think:
Thus, this novel protocol only needs one step instead of the five-to-ten steps of the current synthesis process. In actual fact, it is even better than click chemistry, with lower costs and with better precision of synthesis. A world-leading group led by Lee Cronin has made advances with the technology named ‘Chemputer’. However, it is different to our concept. We specifically address the detail of how to pick up each atom and react. We also disagree that it is possible for anyone to simply download the software (app) from the internet and use it to print one's own drug. It is not feasible and should be illegal in the future.
Some of this, naturally, can be explained by non-native English usage, although the editorial staff at DDT really should have cleaned that up a bit. But there's an underlying strain of grandiose oddness about the whole manuscript. It makes for an interesting reading experience, for sure.
The paper proposes a molding process to fit the shape of the desired target molecule, which is not prima facie a crazy idea at all (templated synthesis). But remember, we're down on the atomic scale here. The only thing to build the mold out of is more atoms, at the same scale as the material filling the mold, and that's a lot harder than any macroscale molding process that you can make analogies to. The MIP (molecularly imprinted polymer) idea is the closest real-world attempt at this sort of thing, but it's been around for quite a while now without providing a quick route into molecular assembly. There is no quick route into molecular assembly, and you’re certainly not going to get one from stuff like this:
Benzene has six carbon atoms joined in a ring, with one hydrogen atom attached to each carbon atom. It can be divided into six reactors for three atoms: C, H and C (Fig. 3). After inputting the chemical structure of benzene, the system will search for the best synthesis condition for each bond. The best optimal condition will be elucidated by computer and controlled by a machine with optical tweezers to pick up the reactant and the atoms of carbon and hydrogen. The carbon atom will be picked up by optical tweezers in the right position in these tiny reactors (just like a color laser printer). DrugPrinter technology will work just like a color laser printer but instead of a four-color (red, yellow, blue and black) printer toner cartridge there will be various atoms.
Right. The computer will search for the best reaction conditions for building up benzene by individual carbon atoms? There are no best conditions for that. You can make benzene from acetylene, if you’re so minded, but you need metal catalysts (Reppe chemistry). And how are these “conditions” to work inside some sort of benzene-shaped mold? How are the intermediates (propene? butadiene?) to be held in there while another carbon atom comes in? Making benzene in this manner would be Nobel-level stuff, and this paper’s just getting warmed up:
. . .The chamber for the storage of elements is divided into three parts based on the character of each atom according to the periodic table of elements. Roughly, there are three categories: nonmetals, metals and transition metals. Of course, most drugs are organic compounds, thus it is reasonable to expect that carbon (C), hydrogen (H) and oxygen (O) will be the major consumables (just as the black toner cartridge always runs out before the other three colors in a printer). . .
I don’t know what the author’s background is, but honestly, you get the impression that it doesn’t include much organic chemistry. The whole paper is written about a world where you take individual atoms from these reservoirs and fly them down small channels “with lasers or plasma” to be caught by optical tweezers and put into the right position. Apparently, things are just going to snap together like so many molecular model pieces once that happens. Reaction mechanisms, thermodynamics, reactivity and selectivity make no appearance at all that I can see. What does make an appearance is stuff like this:
Big data is applied suddenly in any field. For DrugPrinter, we allow the user to upload their desired compound by a webserver. A cloud computing system and fast dealing and optimal of the chemical reaction must be searched immediately. All the bond–bond reactions will be collected in an intelligent system by cloud computing. Because we built a world-first intelligent cloud computing drug screening system called iScreen (http://iscreen.cmu.edu.tw/) and an integrated webserver (http://ismart.cmu.edu.tw/) including the world’s largest traditional Chinese medicine (TCM) database (http://tcm.cmu.edu.tw/), this has enabled our technology. . .
I’m not trying to be unkind here, but some of this reads rather like the spam comments that pile up on this blog and others. “The buzzword will be made by high-tech buzzword by use of buzzword systems”. None of this is real, and as speculation it’s not too exciting, either. Eric Drexler is far more interesting reading – you can certainly find many things to argue about with him (as Richard Smalley famously did), but he’s thought about these problems in a useful way, as have many others. Drexler’s name, by the way, appears nowhere in this current paper, although the whole thing reads like a smudged tenth-generation photocopy of his work from the 1980s.
And that brings up an editorial question: who reviewed this? How did the staff at Drug Discovery Today find this worth publishing in its current form? I have no problem with them running papers about speculative nanotech chemical synthesis, I should add. I like that stuff; I like reading about it. But I don’t like reading hand-waving hoo-hah illustrated with videos of traditional egg-cake molding machines (I kid you not). As published, I found this paper to be an irritating, head-shaking, eye-rolling waste of time, and I would gladly have said so in a referee report. I see that Chemjobber is baffled as well. Who wouldn’t be?
I think that several of us in medicinal chemistry have been keeping our eyes out for a chance to work in a pentafluorosulfanyl (SF5) group. I know I have - I actually have a good-sized folder on the things, and some of the intermediates as well, but I've never found the right opportunity. Yeah, I know, they're big and greasy, but since when has that ever stopped anyone in this business?
Well, here are some new routes to (pentafluorosulfanyl)difluoroacetic acid, a compound that had previously only existed in a few scattered literature reports (and those from nasty chemistry). So we all have even less of an excuse to start polluting enhancing our screening collections with these things. Who's first?
Here's a review of protein-protein interaction "hot spots" and their application to drug discovery. There have been several overviews like this over the years. This one doesn't break much new ground, but it does provide a number of recent examples, all in one place.
People approach this subject because of its intrinsic interest (how proteins interact), and in hopes of finding small molecules that can interfere. The hot spot concept meshes well with the latter - if there's some key interaction, then you have a much better chance of messing with it via a drug-like molecule, compared to the one-wrinkled-surface-approaching-another-one mode of binding. There are probably no examples at either pure end of that continuum. Alanine scanning of a protein-protein interaction will always, I think, tell you that some residues are more important than others. But are they important enough that disrupting just that one would mess up the whole binding event? And (a bigger problem) is there any reason for a small molecule to be there in the first place? That's the real kicker, because while there are probably plenty of PPIs that wouldn't take place if you jammed a 350-MW small molecule into the middle of them, there aren't as many protein surfaces offering enough binding energy for the small molecule to want to do that.
And that word "small" probably needs to be in quotation marks. One excuse for the low hit rates in screening such things has been that existing compound libraries aren't stocked with the sorts of structures that are more likely to hit. I'm not sure how valid this argument is. It's the sort of statement that's very close to tautology: the reason we didn't find any good hits in the screen is because we don't have good hit compounds - thanks! But there may well be structural biases as you go towards protein-surface binders - big lunker molecules with lots of aryl rings, if this attempt to calculate their properties is valid. Now, I don't know about your screening libraries, but the ones I've worked with already seem to have plenty of big flattish things in them, so you still wonder a bit. But it does seem as if this area has a significantly greater chance of posing PK and formulation challenges, even if you do find something. The struggle continues.
Steve Ley and co-workers have published what is surely the most ambitious flow-chemistry-based total synthesis ever attempted. Natural products spirodienal A and spirangien A methyl ester are prepared with almost every step (and purification) being done in flow mode.
The scheme shown (for one of the intermediates) will give you the idea. There are some batch-mode portage steps, such as 15 to 16, mainly because of extended reaction times that weren't adaptable to flow conditions. But the ones that could be adapted were, and it seems to have helped out with the supply of intermediates (which is always a tedious job in total synthesis, because you're either bored, when things are working like they always do, or pissed off, because something's gone wrong). Aldehyde 11 could be produced from 10 at a rate of 12 mmol/hour, for example.
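To put a number like 12 mmol/hour in perspective, here's a quick back-of-the-envelope sketch. The molecular weight used below is a made-up placeholder for illustration, not the actual MW of aldehyde 11 from the paper:

```python
# Rough flow-chemistry throughput arithmetic: converting a continuous
# output in mmol/hour into grams per day. The molecular weight here is
# an invented placeholder, not the real MW of aldehyde 11.

def grams_per_day(rate_mmol_per_h, mw_g_per_mol):
    """Convert a continuous flow output (mmol/h) to grams/day."""
    mol_per_day = rate_mmol_per_h / 1000.0 * 24.0
    return mol_per_day * mw_g_per_mol

# 12 mmol/h at an assumed MW of 250 g/mol:
print(round(grams_per_day(12, 250), 1))  # 72.0 g/day
```

Run around the clock, even a modest flow rate piles up material in a way that batch resupply rarely does - which is exactly the "make me a pile of this stuff" advantage.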
The later steps of the synthesis tend much more towards batch mode, as you might imagine, since they're pickier (and not run as many times, either, I'll bet, compared to the number of times the earlier sequences were). Flow is perfect for those "Make me a pile of this stuff" situations. Overall, this is impressive work, and demonstrates still more chemistry that can be adapted usefully to flow conditions. Given my attitude towards total synthesis, I don't care much about spirodienal A, but I certainly do care about new ways to make new compounds more easily, and that's what this paper is really aiming for.
This will be a long one. I'm going to take another look at the Science paper that stirred up so much comment here on Friday. In that post, my first objection (but certainly not my only one) was the chemical structures shown in the paper's Figure 2. A number of them are basically impossible, and I just could not imagine how this got through any sort of refereeing process. There is, for example, a cyclohexadien-one structure, shown at left, and that one just doesn't exist as such - it's phenol, and those equilibrium arrows, though very imbalanced, are still not drawn to scale.
Well, that problem is solved by those structures being intended as fragments, substructures of other molecules. But I'm still positive that no organic chemist was involved in putting that figure together, or in reviewing it, because the reason that I was confused (and many other chemists were as well) is that no one who knows organic chemistry draws substructures like this. What you want to do is put dashed bonds in there, or R groups, as shown. That does two things: it shows that you're talking about a whole class of compounds, not just the structure shown, and it also shows where things are substituted. Now, on that cyclohexadienone, there's not much doubt where it's substituted, once you realize that someone actually intended it to be a fragment. It can't exist unless that carbon is tied up, either with two R groups (as shown), or with an exo-alkene, in which case you have a class of compounds called quinone methides. We'll return to those in a bit, but first, another word about substructures and R groups.
Figure 2 also has many structures in it where the fragment structure, as drawn, is a perfectly reasonable molecule (unlike the example above). Tetrahydrofuran and imidazole appear, and there's certainly nothing wrong with either of those. But if you're going to refer to those as common fragments, leading to common effects, you have to specify where they're substituted, because that can make a world of difference. If you still want to say that they can be substituted at different points, then you can draw a THF, for example, with a "floating" R group as shown at left. That's OK, and anyone who knows organic chemistry will understand what you mean by it. If you just draw THF, though, then an organic chemist will understand that to mean just plain old THF, and thus the misunderstanding.
If the problems with this paper ended at the level of structure drawing, which many people will no doubt see as just a minor aesthetic point, then I'd be apologizing right now. Update: although it is irritating. On Twitter, I just saw that someone spotted "dihydrophyranone" on this figure, which someone figured was close enough to "dihydropyranone", I guess, and anyway, it's just chemistry. But the problems don't end there. It struck me when I first saw this work that sloppiness in organic chemistry might be symptomatic of deeper trouble, and I think that's the case. The problems just keep on coming. Let's start with those THF and imidazole rings. They're in Figure 2 because they're supposed to be substructures that lead to some consistent pathway activity in the paper's huge (and impressive) yeast screening effort. But what we're talking about is a pharmacophore, to use a term from medicinal chemistry, and just "imidazole" by itself is too small a structure, from a library of 3200 compounds, to be a likely pharmacophore. Particularly when you're not even specifying where it's substituted and how. There are all kinds of imidazoles out there, and they do all kinds of things.
So just how many imidazoles are in the library, and how many caused this particular signature? I think I've found them all. Shown at left are the four imidazoles (and there are only four) that exhibit the activity shown in Figure 2 (ergosterol depletion / effects on membrane). Note that all four of them are known antifungals - which makes sense, given that the compounds were chosen for their ability to inhibit the growth of yeast, and topical antifungals will indeed do that for you. And that phenotype is exactly what you'd expect from miconazole, et al., because that's their known mechanism of action: they mess up the synthesis of ergosterol, which is an essential part of the fungal cell membrane. It would be quite worrisome if these compounds didn't show up under that heading. (Note that miconazole is on the list twice).
But note that there are nine other imidazoles that don't have that same response signature at all - and I didn't even count the benzimidazoles, and there are many, although from that structure in Figure 2, who's to say that they shouldn't be included? What I'm saying here is that imidazole by itself is not enough. A majority of the imidazoles in this screen actually don't get binned this way. You shouldn't look at a compound's structure, see that it has an imidazole, and then decide by looking at Figure 2 that it's therefore probably going to deplete ergosterol and lead to membrane effects. (Keep in mind that those membrane effects probably aren't going to show up in mammalian cells, anyway, since we don't use ergosterol that way).
There are other imidazole-containing antifungals on the list that are not marked down for "ergosterol depletion / effects on membrane". Ketoconazole is SGTC_217 and 1066, and one of those runs gets this designation, while the other one gets signature 118. Both bifonazole and sertaconazole also inhibit the production of ergosterol - although, to be fair, bifonazole does it by a different mechanism. It gets annotated as Response Signature 19, one of the minor ones, while sertaconazole gets marked down for "plasma membrane distress". That's OK, though, because it's known to have a direct effect on fungal membranes separate from its ergosterol-depleting one, so it's believable that it ends up in a different category. But there are plenty of other antifungals on this list, some containing imidazoles and some containing triazoles, whose mechanism of action is also known to be ergosterol depletion. Fluconazole, for example, is SGTC_227, 1787 and 1788, and that's how it works. But its signature is listed as "Iron homeostasis" once and "azole and statin" twice. Itraconazole is SGTC_1076, and it's also annotated as Response Signature 19. Voriconazole is SGTC_1084, and it's down as "azole and statin". Climbazole is SGTC_2777, and it's marked as "iron homeostasis" as well. This scattering of known drugs between different categories is possibly an indicator of this screen's ability to differentiate them, or possibly an indicator of its inherent limitations.
Now we get to another big problem, the imidazolium at the bottom of Figure 2. It is, as I said on Friday, completely nuts to assign a protonated imidazole to a different category than a nonprotonated one. Note that several of the imidazole-containing compounds mentioned above are already protonated salts - they, in fact, fit the imidazolium structure drawn, rather than the imidazole one that they're assigned to. This mistake alone makes Figure 2 very problematic indeed. If the paper was, in fact, talking about protonated imidazoles (which, again, is what the authors have drawn) it would be enough to immediately call into question the whole thing, because a protonated imidazole is the same as a regular imidazole when you put it into a buffered system. In fact, if you go through the list, you find that what they're actually talking about are N-alkylimidazoliums, so the structure at the bottom of Figure 2 is wrong, and misleading. There are two compounds on the list with this signature, in case you were wondering, but the annotation may well be accurate, because some long-chain alkylimidazolium compounds (such as ionic liquid components) are already known to cause mitochondrial depolarization.
But there are several other alkylimidazolium compounds in the set (which is a bit odd, since they're not exactly drug-like). And they're not assigned to the mitochondrial distress phenotype, as Figure 2 would have you think. SGTC_1247, 179, 193, 1991, 327, and 547 all have this moiety, and they scatter between several other categories. Once again, a majority of compounds with the Figure 2 substructure don't actually map to the phenotype shown (while plenty of other structural types do). What use, exactly, is Figure 2 supposed to be?
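The complaint in the last few paragraphs is really a simple tally: of the compounds carrying a given substructure, what fraction actually land in the signature claimed for it? Here's a minimal sketch of that bookkeeping - the compound IDs and annotations below are invented placeholders, not the paper's actual data (real substructure matching would need a cheminformatics toolkit like RDKit, not string flags):

```python
# Toy version of the substructure-vs-signature tally. All entries are
# hypothetical stand-ins; the point is the arithmetic, not the data.

from collections import Counter

# (compound_id, carries_substructure, annotated_signature)
compounds = [
    ("CPD_001", True,  "mitochondrial distress"),
    ("CPD_002", True,  "iron homeostasis"),
    ("CPD_003", True,  "plasma membrane distress"),
    ("CPD_004", True,  "azole and statin"),
    ("CPD_005", True,  "mitochondrial distress"),
    ("CPD_006", False, "iron homeostasis"),
]

def signature_breakdown(entries, claimed_signature):
    """Among compounds carrying the substructure, count how many actually
    map to the signature claimed for that substructure in the figure."""
    carriers = [sig for _, has_sub, sig in entries if has_sub]
    hits = sum(1 for sig in carriers if sig == claimed_signature)
    return hits, len(carriers), Counter(carriers)

hits, total, dist = signature_breakdown(compounds, "mitochondrial distress")
print(f"{hits}/{total} carriers match the claimed signature")  # 2/5
```

If that ratio comes out as a minority, as it does for the real imidazolium and THF cases above, the substructure isn't predicting the phenotype, and the figure is advertising a correlation that isn't there.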
Let's turn to some other structures in it. The impossible/implausible ones, as mentioned above, turn out to be that way because they're supposed to have substituents on them. But look around - adamantane is on there. To put it as kindly as possible, adamantane itself is not much of a pharmacophore, having nothing going for it but an odd size and shape for grease. Tetrahydrofuran (THF) is on there, too, and similar objections apply. When attempts have been made to rank the sorts of functional groups that are likely to interact with protein binding sites, ethers always come out poorly. THF by itself is not some sort of key structural unit; highlighting it as one here is, for a medicinal chemist, distinctly weird.
What's also weird is when I search for THF-containing compounds that show this activity signature, I can't find much. The only things with a THF ring in them seem to be SGTC_2563 (the complex natural product tomatine) and SGTC_3239, and neither one of them is marked with the signature shown. There are some embedded THF rings as in the other structural fragments shown (the succinimide-derived Diels-Alder ones), but no other THFs - and as mentioned, it's truly unlikely that the ether is the key thing about these compounds, anyway. If anyone finds another THF compound annotated for tubulin folding, I'll correct this post immediately, but for now, I can't seem to track one down, even though Table S4 says that there are 65 of them. Again, what exactly is Figure 2 supposed to be telling anyone?
Now we come to some even larger concerns. The supplementary material for the paper says that 95% of the compounds on the list are "drug-like" and were filtered by the commercial suppliers to eliminate reactive compounds. They do caution that different people have different cutoffs for this sort of thing, and boy, do they ever. There are many, many compounds in this collection that I would not have bothered putting into a cell assay, for fear of hitting too many things and generating uninterpretable data. Quinone methides are a good example - as mentioned before, they're in this set. Rhodanines and similar scaffolds are well represented, and are well known to hit all over the place. Some of these things are tested at hundreds of micromolar.
I recognize that one aim of a study like this is to stress the cells by any means necessary and see what happens, but even with that in mind, I think fewer nasty compounds could have been used, and might have given cleaner data. The curves seen in the supplementary data are often, well, ugly. See the comments section from the Friday post on that, but I would be wary of interpreting many of them myself.
There's another problem with these compounds, which might very well have also led to the nastiness of the assay curves. As mentioned on Friday, how can anyone expect many of these compounds to actually be soluble at the levels shown? I've shown a selection of them here; I could go on. I just don't see any way that these compounds can be realistically assayed at these levels. Visual inspection of the wells would surely show cloudy gunk all over the place. Again, how are such assays to be interpreted?
And one final point, although it's a big one. Compound purity. Anyone who's ever ordered three thousand compounds from commercial and public collections will know, will be absolutely certain that they will not all be what they say on the label. There will be many colors and consistencies, and LC/MS checks will show many peaks for some of these. There's no way around it; that's how it is when you buy compounds. I can find no evidence in the paper or its supplementary files that any compound purity assays were undertaken at any point. This is not just bad procedure; this is something that would have caused me to reject the paper all by itself had I refereed it. This is yet another sign that no one who's used to dealing with medicinal chemistry worked on this project. No one with any experience would just bung in three thousand compounds like this and report the results as if they're all real. The hits in an assay like this, by the way, are likely to be enriched in crap, making this more of an issue than ever.
Damn it, I hate to be so hard on so many people who did so much work. But wasn't there a chemist anywhere in the room at any point?
A reader sent along a puzzled note about this paper that's out in Science. It's from a large multicenter team (at least nine departments across the US, Canada, and Europe), and it's an ambitious effort to profile 3250 small molecules in a broad chemogenomics screen in yeast. This set was selected from an earlier 50,000 compounds, since these reliably inhibited the growth of wild-type yeast. They're looking for what they call "chemogenomic fitness signatures", which are derived from screening first against 1100 heterozygous yeast strains, one gene deletion per, representing the yeast essential genome. Then there's a second round of screening against 4800 homozygous deletion strains of non-essential genes, to look for related pathways, compensation, and so on.
All in all, they identified 317 compounds that appear to perturb 121 genes, and many of these annotations are new. Overall, the responses tended to cluster in related groups, and the paper goes into detail about these signatures (and about the outliers, which are naturally interesting for their own reasons). Broad pathway effects like mitochondrial stress show up pretty clearly, for example. And unfortunately, that's all I'm going to say for now about the biology, because we need to talk about the chemistry in this paper. It isn't good.
As my correspondent (a chemist himself) mentions, a close look at Figure 2 of the paper raises some real questions. Take a look at that cyclohexadiene enamine - is that really drawn correctly, or isn't it just N-phenylbenzylamine? The problem is, that compound (drawn correctly) shows up elsewhere in Figure 2, hitting a completely different pathway. These two tautomers are not going to have different biological effects, partly because the first one would exist for about two molecular vibrations before it converted to the second. But how could both of them appear on the same figure?
And look at what they're calling "cyclohexa-2,4-dien-1-one". No such compound exists as such in the real world - we call it phenol, and we draw it as an aromatic ring with an OH coming from it. Thiazolidinedione is listed as "thiazolidine-2,4-quinone". Both of these would lead to red "X" marks on an undergraduate exam paper. It is clear that no chemist, not even someone who's been through second-year organic class, was involved in this work (or at the very least, involved in the preparation of Figure 2). Why not? Who reviewed this, anyway?
There are some unusual features from a med-chem standpoint as well. Is THF really targeting tubulin folding? Does adamantane really target ubiquinone biosynthesis? Fine, these are the cellular effects that they noted, I guess. But the weirdest thing on Figure 2's annotations is that imidazole is shown as having one profile, while protonated imidazole is shown as a completely different one. How is this possible? How could anyone who knows any chemistry look at that and not raise an eyebrow? Isn't this assay run in some sort of buffered medium? Don't yeast cells have any buffering capacity of their own? Salts of basic amine drugs are dosed all the time, and they are not considered - ever - as having totally different cellular effects. What a world it would be if that were true! Seeing this sort of thing makes a person wonder about the rest of the paper.
More subtle problems emerge when you go to the supplementary material and take a look at the list of compounds. It's a pretty mixed bag. The concentrations used for the assays vary widely - rapamycin gets run at 1 micromolar, while ketoconazole is nearly 1 millimolar. (Can you even run that compound at that concentration? Or that compound at left at 967 micromolar? Is it really soluble in the yeast wells at such levels?) There are plenty more that you can wonder about in the same way.
And I went searching for my old friends, the rhodanines, and there they were. Unfortunately, compound SGTC_2454 is 5-benzylidenerhodanine, whose activity is listed as "A dopamine receptor inhibitor" (!). But compound SGTC_1883 is also 5-benzylidenerhodanine, the same compound, run at similar concentration, but this time unannotated. The 5-thienylidenerhodanine is SGTC_30, but that one's listed as a phosphatase inhibitor. Neither of these attributions seems likely to me. There are other duplicates, but many of them are no doubt intentional (run by different parts of the team).
I hate to say this, but just a morning's look at this paper leaves me with little doubt that there are still more strange things buried in the chemistry side of this paper. But since I work for a living (dang it), I'm going to leave it right here, because what I've already noted is more than troubling enough. These mistakes are serious, and call the conclusions of the paper into question: if you can annotate imidazole and its protonated form into two different categories, or annotate two different tautomers (one of which doesn't really exist) into two different categories, what else is wrong, and how much are these annotations worth? And this isn't even the first time that Science has let something like this through. Back in 2010, they published a paper on the "Reactome" that had chemists around the world groaning. How many times does this lesson need to be learned, anyway?
Update: this situation brings up a number of larger issues, such as the divide between chemists and biologists (especially in academia?) and the place of organic chemistry in such high-profile publications (and the place of organic chemists as reviewers of it). I'll defer these to another post, but believe me, they're on my mind.
Update 2 Jake Yeston, deputy editor at Science, tells me that they're looking into this situation. More as I hear it.
Update 3: OK, if Figure 2 is just fragments, structural pieces that were common to compounds that had these signatures, then (1) these are still not acceptable structures, even as fragments, and (2), many of these don't make sense from a medicinal chemistry standpoint. It's bizarre to claim a tetrahydrofuran ring (for example) as the key driver for a class of compounds; the chance that this group is making an actual, persistent interaction with some protein site (or family of sites) is remote indeed. The imidazole/protonated imidazole pair is a good example of this: why on Earth would you pick these two groups to illustrate some chemical tendency? Again, this looks like the work of people who don't really have much chemical knowledge.
A closer look at the compounds themselves does not inspire any more confidence. There's one of them from Table S3, which showed a very large difference in IC50 across different yeast strains. It was tested at 400 micromolar. That, folks, was sold to the authors of this paper by ChemDiv, as part of a "drug-like compound" library. Try pulling some SMILES strings from that table yourself and see what you think about their drug likeness.
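If you do pull SMILES strings from that table, the usual quick sanity check is a rule-of-five style filter. In real work you'd compute the descriptors from the SMILES with a cheminformatics toolkit such as RDKit; the sketch below just counts violations from precomputed values, and the numbers plugged in at the end are invented for illustration, not taken from Table S3:

```python
# Minimal Lipinski rule-of-five violation counter. Descriptor values
# would normally come from a toolkit like RDKit; the example values
# below are hypothetical, not from the paper's supplementary table.

def lipinski_violations(mw, logp, h_donors, h_acceptors):
    """Count violations of Lipinski's rule of five:
    MW <= 500, logP <= 5, H-bond donors <= 5, H-bond acceptors <= 10."""
    return sum([
        mw > 500,
        logp > 5,
        h_donors > 5,
        h_acceptors > 10,
    ])

# A hypothetical big, greasy screening-collection compound:
print(lipinski_violations(mw=612.3, logp=6.8, h_donors=1, h_acceptors=7))  # 2
```

Two or more violations is the traditional red flag, and it's the kind of thing a vendor's "drug-like" filter is supposed to have caught before the library ever shipped.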
I'd sort of suspected this, um, breakthrough in catalysis that See Arr Oh is reporting. But how come more of my reactions don't work, eh? 'Cause there's been all kinds of crud in them, I feel pretty sure. Maybe the various crud subtypes (cruddotypes?) are canceling each other out. . .
New fluorination reactions are always welcome, and there's one out in Ang. Chem. that looks really interesting. Robert Britton's group at Simon Fraser University report using tetrabutylammonium decatungstate as a photochemistry catalyst with N-fluorobenzenesulfonimide (NFSI). This system fluorinates unsubstituted alkanes, as shown at left, and apparently tolerates several functional groups in the process.
Note that the amino acids were fluorinated as their hydrochloride salts; the free bases didn't work. There aren't any secondary or tertiary amine substrates in the paper, nor are there any heterocycles, both of which are cause to wonder whenever you see a new fluorination method. But I think I'm going to order up some tungstate, turn on the lamp, and see what I get.
Here's the sort of review that every working medicinal chemist will want to take a look at: Jeffrey Bode and graduate student Cam-Van T. Vo are looking at recent methods to prepare saturated nitrogen heterocycles. If you do drug discovery, odds are that you've worked with more piperidines, pyrrolidines, piperazines, morpholines, etc. than sticks can be shaken at. New ways to make substituted variations on these are always welcome, and it's good to see the latest work brought together into one place.
There's still an awful lot to do in this area, though. As the review mentions, a great many methods rely on nitrogen protecting groups. From personal experience, I can tell you that my heart sinks a bit when I see some nice ring-forming reaction in the literature and only then notice that the piperidine (or what have you) has a little "Ts" or "Ns" stuck to it. I know that these things can be taken off, but it's still a pain to do, and especially if you want to make a series of compounds. Protecting-group-free routes in saturated heterocyclic chemistry are welcome indeed.
An Australian reader sends this along from The Economist. Apparently xenon has been used for several years now to enhance athletic performance - who knew? Well, athletes, for one - here's an Australian cycling magazine talking about it, and Russian athletic federations have been recommending it for some time. That cycling article has a copy of a letter from the Russian Olympic committee, thanking a supplier for providing xenon to help prepare the team for the 2006 winter games in Turin.
One's first impulse would be to snort and say "Snake oil!", but one's first impulse would probably be wrong. Xenon exposure is known to set off production of the protein Hif-1-alpha, which makes sense, given that "Hif" stands for "hypoxia-inducible-factor". Increased levels are known to stimulate production of erythropoietin (a natural response to hypoxia, for sure), and xenon's effect on this whole system (demonstrated in mice and in rat cell assays) seems to be unusually long-lasting. I'd speculate that that has to do with its lipid solubility; a good strong dose of xenon probably takes longer to clear out of the tissues than you might think.
But as the Australian article goes on to argue, correctly, we don't have much reliable human data on xenon's effects on Hif-1-alpha, on the corresponding increase in EPO, and on whether those increases are enough to really affect performance. A placebo effect would need to be ruled out, at the very least. It's also not on the World Anti-Doping Agency's banned-substance list (and banning it might be tricky), so athletes competing with it are not in violation of any rules. Given that xenon is already of medical interest for preventing hypoxia-related injury, I'll bet that it won't be going away any time soon.
Over at LifeSciVC, guest blogger Jonathan Montagu talks about small molecules in drug discovery, and how we might move beyond them. Many of the themes he hits have come up around here, understandably - figuring why (and how) some huge molecules manage to have good PK properties, exploiting "natural-product-like" chemical space (again, if we can figure out a good way to do that), working with unusual mechanisms (allosteric sites, covalent inhibitors and probes), and so on. Well worth a read, even if he's more sanguine about structure-based drug discovery than I am. Most people are, come to think of it.
His take is very similar to what I've been telling people in my "state of drug discovery" presentations (at Illinois, most recently) - that we medicinal chemists need to stretch our definitions and move into biomolecule/small molecule hybrids and the like. These things need the techniques of organic chemistry, and we should be the people supplying them. Montagu goes even further than I do, saying that ". . .I believe that small molecule chemistry, as traditionally defined and practiced, has limited utility in today’s world." That may or may not be correct at the moment, but I'm willing to bet that it's going to become more and more correct in the future. We should plan accordingly.
I enjoyed this post over at Synthetic Remarks on "Five things synthetic chemists hate". And I agree; I hate all of 'em, too. Allow me to add a few to the list:
1. The Mysterious Starting Material. How many times have you looked through an experimental section only to see a synthesis start cold, from a non-commercial compound whose preparation isn't given, or even referenced? One that doesn't seem to have any foundation anywhere else in the literature, either? I think that this is a bit more common in the older literature, but it shouldn't be happening anywhere.
2. It Works on Benzaldehyde; What More Do You Want? What about those new method papers that include a wide, diverse array of examples showing how versatile the new reaction is - but when you look at the list, you realize that it's full of things like cyclohexanone, benzaldehyde. . .and then 4-methylcyclohexanone, p-fluorobenzaldehyde, and so on? Turns out that the reaction lands flat on its nose, stretched out on the sand if there's a basic amine within five hundred yards. But you have to find that out for yourself. It ain't in the text.
3. The Paper Chase. In these days of humungous supplementary info files, what excuse is there to write a paper where all the reactions use one particular reagent - and then send people back to your previous paper to learn how to make it? Sure, reference yourself. But don't march everyone back to a whole other experimental. Are authors getting some sort of nickel-a-page-view deal from the publishers now that I haven't heard about?
4. If I Don't See It, It Isn't There. When I review papers, one of the things I end up dinging people about, more than anything else, is the reluctance to cite relevant literature. In some cases, it's carelessness, but in others, well. . .everyone's seen papers that basically rework someone else's reaction without ever citing the original. And in these days of modern times, as the Firesign Theatre guys used to say, what excuse is there?
5. Subtle Is the Lord. Once in a while, you find an experimental writeup that makes you wrinkle your brow and wonder if someone's pulling your leg. The reaction gets run at -29 degrees C, for 10.46 hours, whereupon it's brought up to -9 and quenched with pH 7.94 buffer solution. That kind of thing. If you're going to put that Proustian level of detail in there, you'd better have a reason (Proust did). No one just stumbles into conditions like that - what happened when you ran your reaction like a normal human, instead of like Vladimir Nabokov on Adderall?
Via the Baran lab's Twitter feed, here's a provocative article on whether total organic synthesis has a place in the modern world or not.
One may wonder why this situation has passed undisputed for such a long time. Currently however, wide parts of the chemical community look upon total synthesis as a waste of time, resources and talents. Behind the scene, it may even be argued that the obsession to synthesize almost any natural product irrespective of its complexity and practical importance has blocked the development of other more relevant fields. Therefore, it is high time to consider a reorientation of the entire discipline.
That's a bit of a straw man in that paragraph, and I have to note it, even though I do feel odd sticking up for total synthesis (about which I've been pretty caustic myself, for many years now). I don't think that there's been an "obsession to synthesize almost any natural product", although it's true that many new synthetic methods have used some natural product or another as demonstration pieces. But the author, Johann Mulzer, came out of the Corey group in the old days, and has spent his career doing total synthesis, so he's speaking from experience here.
He goes on to argue that the field does have a place, but that it had better shape up. Short syntheses have to take priority over "first syntheses", because (let's face it), just about anything can be made if you're willing to throw enough time, money, and postdocs at it. The paper is full of examples from Mulzer's own career (and others'), and if you read it carefully, you'll see some unfavorable contrasts drawn to some Nicolaou syntheses. He finishes up:
In conclusion, this article tries to show how various strategies may be used to streamline and to shorten otherwise long synthetic routes to complex target molecules. The reader may get the impression that it pays very well to think intensively about cascade reactions, intramolecular cycloadditions, suitable starting materials and so on, instead of plunging into a brute-force and therefore mostly inefficient sequence. After all, there is an iron maxim: if a target cannot be reached within, say, 25 steps, it is better to drop it. For what you will get is a heroic synthesis, at best, but never an efficient one.
A 25-step limit would chop an awful lot out of the synthetic literature, wouldn't it? But it's not fair to apply that retrospectively. What if we apply it from here on out, though? What would the total synthetic landscape look like then?
Walensky and Bird have a Miniperspective out in J. Med. Chem. on stapled peptides, giving advice on how to increase one's chances of success in the area. Worth checking out, unless you're at Genentech or WEHI, of course. The authors might say that it's especially worth reading in those cases, come to think of it. I await the day when this dispute gets resolved, although a lot of people awaited the day that the nonclassical carbocation controversy got resolved, too, and look how long that took.
And in Science, Tehshik Yoon has a review on visible-light-catalyzed photochemistry. I like these reactions a lot, and have run a few myself. The literature has been blowing up all over the place in this field, and it's good to have an overview like this to keep things straight.
There's an interesting report from the Buchwald group using the Fujita "molecular sponge" crystallography technique. The last report on this was a correction, amid reports that the method was not as widely applicable as had been hoped, so I'm very happy to see it being used here.
They're revising the structure of a new reagent (from the Lu and Shen groups in Shanghai) for introducing the SCF3 group. It was proposed to be a hypervalent iodine (similar to other reagents in this class), but Buchwald's group found some NMR data and reactivity trends that suggested the structure might be in the open form, rather than the five-membered iodine ring one.
Soaking this reagent into the MOF crystal provided a structure, although if you read the supporting information, it wasn't easy. The compound was still somewhat disordered in the MOF lattice, and there were still nitrobenzene and cyclohexane solvent molecules present. The SCF3 reagent showed up in two crystallographically independent sites, one of them associated with residual nitrobenzene. After a good deal of work, though, they did show that the open-form structure was present. (The Shen et al. paper's conclusions on its synthetic uses, though, are all still valid; it's just that the structure doesn't fall into the same series as expected).
So the MOF crystallography method lives, although I have yet to hear of it giving a structure with a nitrogen-containing compound (which rather limits its use in drug discovery work, as you might imagine).
Just Like Cooking has an overview of some interesting new chemistry from the Hartwig group. They're using a rhodium catalyst to directly functionalize aryl rings with silyl groups (which can be used in a number of transformations downstream). One nice thing is that the selectivities are basically the opposite of the direct borylation reactions, so this could open up some isomers that are otherwise difficult to come by.
See Arr Oh makes a good point about the paper, too - it has a lot of detail in it and a lot of information. If you check out the Supplementary Information, there are about thirty pages of further details, and about sixty pages of spectral data. I particularly like the tables of various reaction conditions, hydrogen acceptors, and ligands. The main paper shows the conditions that work the best, but this gives you a chance to see under the hood at everything else that was tried. Every new methods paper should do this - in fact, every new methods paper should be required to do this. Good stuff.
I've received word that well-known organic chemist Alan Katritzky has passed away. He's famous for his work on the use of benzotriazole compounds, and a great deal of other heterocyclic chemistry besides (2,170 papers!).
I first heard him speak in the early 1990s at the Heterocycles Gordon Conference, back in its old location in New Hampshire. And although I'd been warned to sit near the back of the conference room, I still wasn't ready for the. . .vigor he brought to his presentation. Katritzky had clearly honed his lecturing style in large, unamplified halls, and could be easily heard outside on the lawn. The next day, Stuart McCombie opened the morning program by thanking him for ". . .sharing with me the last secret of benzotriazole. He sprinkled some down my throat AND I NEVER NEED A MICROPHONE AGAIN!"
Katritzky was a link to another era of chemistry (he studied under Sir Robert Robinson), but he leaves behind a huge legacy of work for the modern researcher. He may well have been too productive for his accomplishments to be easily categorized, or at least not yet (those 2,170 papers. . .), but there's no doubt that his name will live on.
The thesis is miserable. One and a half years of new substances prepared like baker’s bread rolls… and in addition, lots of negative results just where I was looking for significant results, and further, results that I cannot even publish because I fear that a competent chemist will find them and prove to me that the camel is missing its humps. One learns to be modest.
Now, Haber was definitely someone to take seriously. He's showing up in "The Chemistry Book", for sure, both for his historic ammonia process and his work in chemical warfare. He was a good enough chemist to know that his doctoral work was not all that great, although he seems to have followed my own recommended path to get that degree as soon as is consistent with honor and not making enemies.
The post's author, MB, wonders what this says about organic synthesis in general. How much of it is just baking bread rolls, and how bad is that? My own take is that the sort of thing that Haber was regretting is the lowest form of synthesis. We've all seen the sorts of papers - here is a heterocyclic core, of no particular interest that anyone has ever been able to show. Here it has an amine. Here are twenty-five amides of that amine. Here is our paper telling you about them. Part fourteen in a series. In six months, the sulfonamides. This sort of thing gets published, when it does, in the lowest tiers of the journals, and rightly so. There's nothing wrong with it (well, not usually, although this stuff isn't always the most careful work in the world). But there's nothing right with it either. It's reference data. Someone, someday, might stumble into this area of chemical space again, and when they do, they'll find a name scratched onto the wall and below it, a yellowing pile of old spectral data.
I've wondered before about what to do with those sorts of papers. There are so many compounds in the world of organic chemistry that the marginal utility of describing new random ones, while clearly not zero, is very, very close to it, especially if they're not directed towards any known use other than to make a manuscript. So if this is what's meant by baking rolls, then it's not too useful.
But I'm a medicinal chemist. When I start working on a new hit structure, I will most likely turn around and put the biggest pan of bread rolls into the biggest oven I can find. This, though, is chemistry with a purpose - there's some activity that I'm seeking, and if cranking out compounds is the best and/or fastest way to move in on it, then crank away. I'm not going to turn that blast of analogs into a paper; most (maybe all) of them will be tested, found wanting, and make their way into our compound archives. Their marginal utility is pretty low, too, given the numbers of compounds already in there, but it's still by far the best thing to do with them. Any that show activity, though, will get more attention.
I really don't mind that aspect of the synthesis I do. Setting up a row of easy reactions is actually kind of pleasant, because I know that (1) they're likely to work, and (2) they're going to tell me something I really want to know after I send them off for testing. Maybe they aren't bread rolls after all - they're bricks, and I can just possibly build something from them.
I can strongly recommend this article by Carmen Drahl in C&E News on the way that we chemists pick fights over nomenclature. She has examples of several kinds of disagreement (competing terms for the same thing, terms that overlap but are still different, competing ways to measure the same thing, and terms that are fuzzy enough that some want to eliminate them entirely).
As several of the interviewees note, these arguments are not (always) petty, and certainly not always irrational. Humans are good at reification - turning something into a "thing". Name a concept well, and it sort of shimmers into existence, giving people a way to refer to it as if it were a solid object in the world of experience. This has good and bad aspects. It's crucial to the ability to have any sort of intellectual discussion and progress, since we have to be able to speak of ideas and other entities that are not actual physical objects. But a badly fitting name can do real harm, obscuring the most valuable or useful parts of an idea and diverting thoughts about it unproductively.
My own favorite example is the use of "agonist" and "antagonist" to describe the actions of nuclear receptor ligands. This (to my way of thinking) is not only useless, but does real harm to the thinking of anyone who approaches nuclear receptors having first learned about GPCRs. Maybe the word "receptor" never should have been used for these things in the first place, although realizing that would have required supernatural powers of precognition.
There are any number of examples outside chemistry, of course. One of my own irritants is when someone says that something has been "taken to the next level". You would probably not survive watching a sports channel if that phrase were part of a drinking game. But it presupposes that some activity comes in measurable chunks, and that everyone agrees on what order they come in. I'm reminded of the old blenders with their dials clicking between a dozen arbitrary "levels", labeled with tags like "whip", "chop", and "liquify". Meaningless. It's an attempt to quantify - to reify - what should have been a smooth rheostat knob with lines around it.
OK, I'll stop before I bring up Wittgenstein. OK, too late. But he was on to something when he told people to be careful about the way language is used, and to watch out when you get out onto the "frictionless ice" of talking about constructions of thought. His final admonition in his Tractatus Logico-Philosophicus, that if we cannot speak about something, we have to pass over it in silence, has been widely quoted and widely unheeded, since we're all sure that we can, of course, speak about what we're speaking about. Can't we?
Origin-of-life studies have been a feature of chemistry for a long time, and over the years some key questions have come into focus. Astronomical and planetary science data make it clear that the common molecules of organic chemistry are more or less soaking the universe. Amino acids and simple carbohydrates are apparently part of the cloud of gunk that makes up a new solar system, with more forming all the time. But a major step is how (and why) molecules would have organized themselves into gradually more complex systems. Some parts of the process may have been modeled already; there are a number of interesting ways that primitive membranes might have formed, which would seem to be a necessary step in distinguishing the relatively concentrated inside of a proto-cell from the more watery outside.
But a new paper (discussed here as well) has a theory that says this might have been flat-out inevitable:
From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life. . .
. . .“This means clumps of atoms surrounded by a bath at some temperature, like the atmosphere or the ocean, should tend over time to arrange themselves to resonate better and better with the sources of mechanical, electromagnetic or chemical work in their environments,” England explained.
Self-replication would be an excellent way of doing this, and if England is right, then the development of self-organizing and replicating systems would be "baked in" to thermodynamics under the right conditions. Combine that with the organic chemistry that seems to obtain under astrophysical conditions, and we should, in theory, not be a bit surprised to find living creatures hopping around, full of amino acids and carbohydrates, using sunlight and chemical energy to do their thing.
England's theory is still fairly speculative, but he seems to be moving right along in applying it to living systems, at least on paper. What I like about this idea is that it would seem to be testable, in both living and nonliving systems. Perhaps something can be done at the level of bacteria, yeast, or even viruses or bacteriophages. I look forward to seeing some data!
Well, just after blasting antioxidant supplements for cancer patients (and everyone else) comes this headline: "Vitamin C Injections Ease Ovarian Cancer Treatments". Here's the study, in Science Translational Medicine. So what's going on here?
A closer look shows that this, too, appears to fit into the reactive-oxygen-species framework that I was speaking about:
Drisko and her colleagues, including cancer researcher Qi Chen, who is also at the University of Kansas, decided that the purported effects of the vitamin warranted a closer look. They noticed that earlier trials had partially relied on intravenous administration of high doses of vitamin C, or ascorbate, whereas the larger follow-up studies had used only oral doses of the drug.
This, they reasoned, could be an important difference: ascorbate is processed by the body in different ways when administered orally versus intravenously. Oral doses act as antioxidants, protecting cells from damage caused by reactive compounds that contain oxygen. But vitamin C given intravenously can have the opposite effect by promoting the formation of one of those compounds: hydrogen peroxide. Cancer cells are particularly susceptible to damage by such reactive oxygen-containing compounds.
Drisko, Chen and their colleagues found that high concentrations of vitamin C damaged DNA and promoted cell death in ovarian cancer cells grown in culture. In mice grafted with human ovarian cancer cells, treatment with intravenous vitamin C combined with conventional chemotherapy slowed tumour growth, compared to chemotherapy treatment alone.
The concentrations attained by the intravenous route are apparently necessary to get these effects, and you can't reach those by oral dosing. This 2011 review goes into the details - i.v. ascorbate reaches at least 100x the blood concentrations provided by the maximum possible oral dose, and at those levels it serves, weirdly, as a precursor of hydrogen peroxide (and a much safer one than trying to give peroxide directly, as one can well imagine). There's a good amount of evidence from animal models that this might be a useful adjunct therapy, and I'm glad to see it being tried out in the clinic.
So does this mean that Linus Pauling was right all along? Not exactly. This post at Science-Based Medicine provides an excellent overview of that question. It reviews the earlier work on intravenous Vitamin C, and also Pauling's earlier advocacy. Unfortunately, Pauling was coming at this from a completely different angle. He believed that oral Vitamin C could prevent up to 75% of cancers (his words, sad to say). His own forays into the clinic with this idea were embarrassing, and more competently run trials (several of them) have failed to turn up any benefit. Pauling had no idea that for Vitamin C to show any efficacy, it would have to be run up to millimolar concentrations in the blood, and he certainly had no idea that it would work by actually promoting reactive oxygen species. (He had several other mechanisms in mind, such as inhibition of hyaluronidase, which do not seem to be factors in the current studies at all). In fact, Pauling might well have been horrified. Promoting rampaging free radicals throughout the bloodstream was one of the last things he had in mind; he might have seen this as no better than traditional chemotherapy (since it's also based on a treatment that's slightly more toxic to tumor cells than it is to normal ones). At the same time, he also showed a remarkable ability to adapt to new data (or to ignore it), so he might well have claimed victory, anyway.
This brings up another topic - not Vitamin C, but Pauling himself. As I've been writing "The Chemistry Book" (coming along fine, by the way), one of the things I've enjoyed is a chance to re-evaluate some of the people and concepts in the field. And I've come to have an even greater appreciation of just what an amazing chemist Linus Pauling was. He seems to show up all over the 20th century, and in my judgment could have been awarded a second science Nobel, or part of one, without controversy. I mean, you have The Nature of the Chemical Bond (a tremendous accomplishment by itself), the prediction of noble gas fluorides as possible, the alpha-helix and beta-pleated sheet structures of proteins, the mechanism of sickle cell anemia (and the concept of a "molecular disease"), the suggestion that enzymes work by stabilizing transition states, and more. Pauling shows up all over the place - encouraging the earliest NMR work ("Don't listen to the physicists"), taking a good cut at working out the structure of DNA, all sorts of problems. He was the real deal, and accomplished about four or five times as much as anyone would consider a very good career.
But that makes it all the more sad to see what became of him in his later years. I well remember his last hurrah, which was being completely wrong about quasicrystals, from when I was in graduate school. But naturally, I'd also heard of his relentless advocacy for Vitamin C, which gradually (or maybe not so gradually) caused people to think that he had slightly lost his mind. Perhaps he had; there's no way of knowing. But the way he approached his Vitamin C work was a curious (and sad) mixture of the same boldness that had served him so well in the past, but now with a messianic strain that would probably have proven fatal to much of his own earlier work. Self-confidence is absolutely necessary for a great scientist, but too much of it is toxic. The only way to find out where the line stands is to cross it, but you won't realize it when you have (although others will).
We remember Isaac Newton for his extraordinary accomplishments in math and physics, not for his alchemical and religious calculations (to which he devoted much time, and which shocked John Maynard Keynes when he read Newton's manuscripts). Maybe in another century or two, Pauling will be remembered for his accomplishments, rather than for the times he went off the rails.
This morning I heard reports of formaldehyde being found in Charleston, West Virginia water samples as a result of the recent chemical spill there. My first thought, as a chemist, was "You know, that doesn't make any sense". A closer look confirmed that view, and led me to even more dubious things about this news story. Read on - there's some chemistry for a few paragraphs, and then near the end we get to the eyebrow-raising stuff.
The compound that spilled was (4-methylcyclohexane)methanol, abbreviated as 4-MCHM. That's its structure over there.
For the nonchemists in the audience, here's a chance to show how chemical nomenclature works. Those lines represent bonds between atoms, and if the atom isn't labeled with its own letter, it's a carbon (this compound has one labeled atom, that O for oxygen). These sorts of carbons take four bonds each, and that means that there are a number of hydrogens bonded to them that aren't shown. You'd add one, two, or three hydrogens as needed to bring each one up to four bonds.
The six-membered ring in the middle is "cyclohexane" in organic chemistry lingo. You'll note two things coming off it, at opposite ends of the ring. The small branch is a methyl group (one carbon), and the other one is a methyl group substituted with an alcohol (OH). The one-carbon alcohol compound (CH3OH) is methanol, and the rules of chemical naming say that the "methanol-like" part of this structure takes priority, so it's named as a methanol molecule with a ring stuck to its carbon. And that ring has another methyl group, which means that its position needs to be specified. The ring carbon that has the "methanol" gets numbered as #1 (priority again), so the one with the methyl group, counting over, is #4. So this compound's full name is (4-methylcyclohexane)methanol.
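Since we've just counted up the atoms, here's a quick back-of-the-envelope sketch (my own illustration, not anything from the original reporting): tallying the carbons, hydrogens, and the one oxygen described above and summing standard atomic masses gives the molecular weight of 4-MCHM.

```python
# Atom count for (4-methylcyclohexane)methanol, from the structure above:
# a cyclohexane ring (6 C), a methyl group (1 C), and a CH2OH group
# (1 C, 1 O); filling every carbon out to four bonds gives 16 H.
# Standard atomic masses in g/mol:
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molecular_weight(formula_counts):
    """Sum atomic masses weighted by the atom counts in the formula."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula_counts.items())

mchm = {"C": 8, "H": 16, "O": 1}  # i.e., C8H16O
print(f"4-MCHM molecular weight: {molecular_weight(mchm):.1f} g/mol")
# Comes out around 128.2 g/mol.
```

Nothing deep here, but it's a nice sanity check that the name, the drawing, and the formula C8H16O are all describing the same compound.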
I went into that naming detail because it turns out to be important. This spill, needless to say, was a terrible thing that never should have happened. Dumping a huge load of industrial solvent into a river is a crime in both the legal and moral senses of the word. Early indications are that negligence had a role in the accident, which I can easily believe, and if so, I hope that those responsible are prosecuted, both for justice to be served and as a warning to others. Handling industrial chemicals involves a great deal of responsibility, and as a working chemist it pisses me off to see people doing it so poorly. But this accident, like any news story involving any sort of chemistry, also manages to show how little anyone outside the field understands anything about chemicals at all.
I say that because among the many lawsuits being filed, there are some that show (thanks, Chemjobber!) that the lawyers appear to believe that the chemical spill was a mixture of 4-methylcyclohexane and methanol. Not so. This is a misreading of the name, a mistake that a non-chemist might make because the rest of the English language doesn't usually build up nouns the way organic chemistry does. Chemical nomenclature is way too logical and cut-and-dried to be anything like a natural language; you really can draw a complex compound's structure just by reading its name closely enough. This error is a little like deciding that a hairdryer must be a device made partly out of hair.
I'm not exaggerating. The court filing, by the law firm of Thompson and Barney, says explicitly:
30. The combination chemical 4-MCHM is artificially created by combining methylclyclohexane (sic) with methanol.
31. Two component parts of 4-MCHM are methylcyclohexane and methanol which are both known dangerous and toxic chemicals that can cause latent dread disease such as cancer.
Sure thing, guys, just like the two component parts of dogwood trees are dogs and wood. Chemically, this makes no sense whatsoever. Now, it's reasonable to ask if 4-MCHM can chemically degrade to methanol and 4-methylcyclohexane. Without going into too much detail, the answer is "No". You don't get to break carbon-carbon bonds that way, not without a lot of energy. If you ran the chemical (at high temperature) through some sort of catalytic cracking reactor at an oil refinery, you might be able to get something like that to happen (although I'd expect other things as well, probably all at the same time), but otherwise, no. For the same sorts of reasons, you're not going to be able to get formaldehyde out of this compound, either, not without similar conditions. Air and sunlight and water aren't going to do it, and if bacteria and fungi metabolize it, I'd expect things like (4-methylcyclohexane)carboxaldehyde and (4-methylcyclohexane)carboxylic acid, among others. I would not expect them to break off that single-carbon alcohol as formaldehyde.
So where does all this talk of formaldehyde come from? Well, one way that formaldehyde shows up is from oxidation of methanol, as shown in that reaction (this time I've drawn in all the hydrogens). This is, in fact, one of the reasons that methanol is toxic. In the body, it gets oxidized to formaldehyde, and that gets oxidized right away to formic acid, which shuts down an important enzyme. Exposure to formaldehyde itself is a different problem. It's so reactive that most cancers associated with exposure to it are in the upper respiratory tract; it doesn't get any further.
As that methanol oxidation reaction pathway shows, the body actually has ways of dealing with formaldehyde exposure, up to a point. In fact, it's found at low levels (around 20 to 30 nanograms/milliliter) in things like tomatoes and oranges, so we can assume that these exposure levels are easily handled. I am not aware of any environmental regulations on human exposure to orange juice or freshly cut tomatoes. So how much formaldehyde did Dr. Scott Simonton find in his Charleston water sample? Just over 30 nanograms per milliliter. Slightly above the tomato-juice level (27 ng/mL). For reference, the lowest amount that can be detected is about 6 ng/mL. Update: and the amount of formaldehyde in normal human blood is about 1 microgram/mL, which is over thirty times the levels that Simonton says he found in his water samples. This is produced by normal human metabolism (enzymatic removal of methyl groups and other reactions). Everyone has it. And another update: the amount of formaldehyde in normal human saliva can easily be one thousand times that in Simonton's water samples, especially in people who smoke or have cavities. If you went thousands of miles away from this chemical spill, found an untouched wilderness and had one of its natives spit in a collection vial, you'd find a higher concentration of formaldehyde.
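To put those comparisons on one scale, here's a small script (my own illustration, using only the figures quoted above) that lines up the reported formaldehyde levels in ng/mL:

```python
# Formaldehyde levels quoted in the post, all converted to ng/mL.
NG_PER_UG = 1000  # 1 microgram = 1000 nanograms

levels_ng_per_ml = {
    "detection limit": 6,
    "tomato juice": 27,
    "Charleston water sample": 30,
    "normal human blood": 1 * NG_PER_UG,  # ~1 microgram/mL
}

water = levels_ng_per_ml["Charleston water sample"]
for source, level in levels_ng_per_ml.items():
    # Express each level as a multiple of the reported water sample.
    print(f"{source}: {level} ng/mL ({level / water:.1f}x the water sample)")
# Normal blood works out to about 33x the reported water level -
# the "over thirty times" figure mentioned above.
```

Laid out like that, the reported water level sits right next to tomato juice and a factor of thirty-odd below what's circulating in everyone's veins already.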
But Simonton is a West Virginia water quality official, is he not? Well, not in this capacity. As this story shows, he is being paid in this matter by the law firm of Thompson and Barney to do water analysis. Yes, that's the same law firm that thinks that 4-MCHM is a mixture with methanol in it. And the water sample that he obtained was from the Vandalia Grille in Charleston, the owners of which are defendants in that Thompson and Barney lawsuit that Chemjobber found.
So let me state my opinion: this is a load of crap. The amounts of formaldehyde that Dr. Simonton states he found are within the range of ozonated drinking water as it is, and just above those of fresh tomato juice. These are levels that have never been shown to be harmful in humans. His statements about cancer and other harm coming to West Virginia residents seem to me to be irresponsible fear-mongering. The sort of irresponsible fear-mongering that someone might do if they're being paid by lawyers who don't understand any chemistry and are interested in whipping up as much panic as they can. Just my freely offered opinions. Do your own research and see what you think.
Update: I see that actual West Virginia public health officials agree.
Another update: I've had people point out that the mixture that spilled may have contained up to 1% methanol. But see this comment for why this probably doesn't have any bearing on the formaldehyde issue. Update, Jan 31: Here's the MSDS for the "crude MCHM" that was spilled. The other main constituent, 4-(methoxymethyl)cyclohexanemethanol, is also unlikely to produce formaldehyde, for the same reasons given above. The fact remains that the levels reported (and sensationalized) by Dr. Simonton are negligible by any standard.
Here's a long article from the Raleigh News and Observer (part one and part two) on the Eaton/Feldheim/Franzen dispute in nanoparticles, which some readers may already be familiar with (I haven't covered it on the blog myself). The articles are clearly driven by Franzen's continued belief that research fraud has been committed, and the paper makes the most of it.
The original 2004 publication in Science claimed that RNA solutions could influence the crystal form of palladium nanoparticles, which opened up the possibility of applying the tools of molecular biology to catalysts and other inorganic chemistry applications. Two more papers in JACS extended this to platinum and looked at in vitro evolutionary experiments. But even by 2005, Franzen's lab (which had been asked to join the collaboration between Eaton and Feldheim, by then at Colorado and a startup company) was generating disturbing data: the original hexagonal crystals (a very strange and interesting form for palladium) weren't pure palladium at all - on an elemental basis, they were mostly carbon. (Later work showed that they were unstable crystals of (roughly) Pd(dba)3, with solvated THF.) And they were produced just as well in the negative control experiments, with no RNA added at all.
N. C. State investigated the matter, and the committee agreed that the results were spurious. But they found Feldheim guilty of sloppy work, rather than fraud, saying he should have checked things out more thoroughly. Franzen continued to feel as if justice hadn't been done, though:
In fall 2009, he spent $1,334 of his own money to hire Mike Tadych, a Raleigh lawyer who specializes in public records law and who has represented The News & Observer. In 2010, the university relented and allowed Franzen into the room where the investigation records were locked away.
Franzen found the lab notebooks, which track experiments and results. As he turned the pages, he recognized that Gugliotti kept a thorough and well-organized record.
“I found an open-and-shut case of research fraud,” Franzen said.
The aqueous solution mentioned in the Science article? The experiments routinely used 50 percent solvent. The experiments only produced the hexagonal crystals when there was a high level of solvent, typically 50 percent or more. It was the solvent creating the hexagonal crystals, not the RNA.
On Page 43 of notebook 3, Franzen found what he called a “smoking gun.”
(Graduate student Lina) Gugliotti had pasted four images of hexagonal crystals, ragged around the edges. The particles were degrading at room temperature. The same degradation was present in other samples, she noted.
The Science paper claimed the RNA-templated crystals were formed in aqueous solution with 5% THF and were stable. NC State apparently offered to revoke Gugliotti's doctorate (and another from the group), but the article says that the chemistry faculty objected, saying that the professors involved should be penalized, not the students. The university isn't commenting, saying that an investigation by the NSF is still ongoing, but Franzen points out that it's been going on for five years now, a delay that has probably set a record. He's published several papers characterizing the palladium "nanocrystals", though, including this recent one with one of Eaton and Feldheim's former collaborators and co-authors. And there the matter stands.
It's interesting that Franzen pursued this all the way to the newspaper (known when I lived in North Carolina by its traditional nickname of the Nuisance and Disturber). He's clearly upset at having joined what looked like an important and fruitful avenue of research, only to find out - rather quickly - that it was based on sloppy, poorly-characterized results. And I think what really has him furious is that the originators of the idea (Feldheim and Eaton) have tried, all these years, to carry on as if nothing was wrong.
I think, though, that Franzen is having his revenge whether he realizes it or not. It's coming up on ten years now since the original RNA nanocrystal paper. If this work were going to lead somewhere, you'd think that it would have led somewhere by now. But it doesn't seem to be. The whole point of the molecular-biology-meets-materials-science aspect of this idea was that it would allow a wide variety of new materials to be made quickly, and from the looks of things, that just hasn't happened. I'll bet that if you went back and looked up the 2005 grant application for the Keck foundation that Eaton, Feldheim (and at the time, Franzen) wrote up, it would read like an alternate-history science fiction story by now.
The Baran group has published a neat olefin-coupling reaction which looks like something pretty useful. Building on heteroatom/olefin couplings from Boger, Carreira, and others, they use an iron catalyst and a silane to form carbon-carbon bonds between olefins, inter- or intra-molecularly. As long as you've got one olefin with an electron-withdrawing group on it, things seem to fall into place (no homocoupling of the other olefin, for example). Update: here are more details from the Baran group blog about how this reaction came to be.
I like several things about this setup: the reagents are easy to come by, for one thing (no nine-step glovebox procedure to make the catalyst). And they've taken care to run it on larger scales (by bench standards) to see if it holds up (that reaction of 14 to 15 was done on gram scale, for example). They've also checked and found that the reaction doesn't mind if it's under nitrogen or not, and that you don't have to dry the solvents. These are exactly the questions that people ask every time a spiffy new reaction comes up, and all too often the answers are "We don't know" or "Well, yeah, about that. . ."
The only thing that worries me, looking over the tables of reactions, is that there's only one with a basic nitrogen (where 3-vinylpyridine was used). Boc-nitrogen seems to be OK, but a lot of the examples are rather alkane-ish. I've no doubt that people will be testing the limits of the system soon, because it looks like a reaction worth running.
Organic synthesis is, as many have put it, a victim of its own success. Synthetic chemists can, it's true, pretty much make whatever plausible structures you can draw on the board, or whatever product some tropical fungus or toxic sponge thinks is a good idea. But we can make those only if constraints on time and money are removed. "Give me enough postdocs and I will move the Earth".
Those aren't realistic conditions, though. There are many types of compounds, some of them quite simple, for which no good synthetic routes are known. Under infinite-postdoc conditions, many of these can be worked out for specific cases (step 43 of the total synthesis of shootmenowicene), but (and here's my industrial bias showing), a good synthetic route is one that works on a variety of substrates, with readily available reagents, in reliably useful yields, under non-strenuous conditions. We're missing a lot of those.
But it looks like one might have been crossed off the list. This paper in Science, from UT-Southwestern and Brigham Young, reports a new method to make aziridines, including NH ones, in one step under mild conditions. There are quite a few methods to make aziridines, but most of them deliver N-substituted products, particularly N-Boc and N-tosyl ones. A direct reaction, analogous to epoxidation, that gives you an NH aziridine is pretty rare, but this seems to be the answer. It's a rhodium-catalyzed route that has been applied to a range of olefins, and it looks pretty mild and pretty general.
This should simplify routes to a number of natural products with this motif, but it should also prompt some new chemistry as we get easier access to that functional group. Congratulations to the authors!
From Monash University comes this colorful (and doubtless extremely useful) chart of Smells of Chemistry. See if you agree with its assessments - I think it's broadly correct, but I might be a bit more descriptive in some of the boxes. Although "Unique and Unpleasant" does sum up some of them pretty well, and I do like the boxes marked "Old People", "Seaweed", and "Dead Animals".
A reader sends along this mysterious glassware set, which was donated to a nonprofit that he's working with. They're thinking of selling it on EBay, if they can figure out how to list it and what it is.
Looking at it, the lack of ground-glass joints makes you think "diazomethane kit", but I don't think that's quite right. (What are those gas impinger tubes doing in there, for example?) Kjeldahl apparatus? I haven't seen one in so long that I'm not sure about that, either. If anyone has any ideas, please feel free to take a crack.
The Danishefsky group has published their totally synthetic preparation of erythropoietin. This is a work that's been in progress for ten years now (here's the commentary piece on it), and it takes organic synthesis into realms that no one's quite experienced yet:
The ability to reach a molecule of the complexity of 1 by entirely chemical means provides convincing testimony about the growing power of organic synthesis. As a result of synergistic contributions from many laboratories, the aspirations of synthesis may now include, with some degree of realism, structures hitherto referred to as “biologics”— a term used to suggest accessibility only by biological means (isolation from plants, fungi, soil samples, corals, or microorganisms, or by recombinant expression). Formidable as these methods are for the discovery, development, and manufacturing of biologics, one can foresee increasing needs and opportunities for chemical synthesis to provide the first samples of homogeneous biologics. As to production, the experiments described above must be seen as very early days. . .
I can preach that one both ways, as the old story has it. I take the point about how synthesis can provide these things in more homogeneous form than biological methods can, and it can surely provide variations on them that biological systems aren't equipped to produce. At the same time, I might put my money on improving the biological methods rather than stretching organic synthesis to this point, at least in its present form. I see the tools of molecular biology as hugely powerful, but in need of customization, whereas organic synthesis can be as custom as you like, but can (so far) only reach this sort of territory by all-out efforts like Danishefsky's. In other words, to get the most use out of such molecules, molecular biology has less far to go than organic chemistry does.
That said, I think that the most impressive part of this impressive paper is the area where we have the fewest molecular biology tools: the synthesis of the polysaccharide side chains. Assembling the peptide part was clearly no springtime stroll (and if you read the paper, you find that they experienced the heartbreak of having to go back and redesign things when the initial assembly sequence failed). But polyglycan chemistry has been a long-standing problem (and one that Danishefsky himself has been addressing for years). I think that chemical synthesis really has a much better shot at being the method of choice there. And that should tell you what state the field is in, because synthesis of those things can be beastly. If someone manages to tame the enzymatic machinery that produces them, that'll be great, but for now, we have to make these things the organic chemistry way when we dare to make them at all.
Here's some good news for open (free) access to chemical information. A company called SureChem was trying to make a business out of chemical patent information, but had to fold. They've donated their database to the EMBL folks, and now we have SureChEMBL. At the moment, that link is taking me to the former SureChem site, but no doubt that's changing shortly.
This will give access to millions of chemical structures in patents, a resource that's been hard to search without laying out some pretty noticeable money. This isn't just the database dump, either - the software has been donated, too, so things will stay up to date:
SureChEMBL takes feeds of full text patents, identifies chemical objects from either the in-line text or from images and adds 2-D chemical structures. This is then loaded into a database and is searchable by chemical structure, so you can do substructure, similarity searching and so forth - all the good things you'd expect from a chemical database. This chemical search functionality is unavailable from the public, published patent documents, and is really essential for anyone seriously using the patent literature. Oh, and the system does this live, so as patents are published, they are processed and added to the system - the delay between publication and structures being available in SureChEMBL is about a day when converted from text, and a few days when converted from image sources.
Chemical Abstracts, Reaxys, and the others in that business should take note: if they want people to keep paying for their systems, they'll need to keep providing more value for the money. Good news all around.
Chemjobber has a good post on a set of papers from Pfizer's process chemists. They're preparing filibuvir, and a key step along the way is a Dieckmann cyclization. Well, no problem, say the folks who've never run one of these things - just hit the diester compound with some base, right?
But which base? The example in CJ's post is a good one to show how much variation you can get in these things. As it turned out, LiHMDS was the base of choice, much better than NaHMDS or KHMDS. Potassium t-butoxide was just awful. And the hexamethyldisilazide was much better than LDA, too, even though those two are normally pretty close. But there were even finer distinctions to be made: it turned out that the reaction was (reproducibly) slightly better or slightly worse with LiHMDS from different suppliers. The difference came down to two processes used to prepare the reagent - via n-BuLi or via lithium metal - and the Pfizer team still isn't sure what the difference is that's making all the difference (see the link for more details).
That's pure, 100-proof process chemistry for you, chasing down these details. It's a good thing for people who don't do that kind of work at all, though, to read some of these papers, because it'll give you an appreciation of variables that otherwise you might not think of at all. When you get down to it, a lot of our reactions are balancing on some fairly wobbly tightropes strung across the energy-surface landscape, and it doesn't take much of a push to send them sliding off in different directions. Choice of cation, of Lewis acid, of solvent, of temperature, order of addition - these and other factors can be thermodynamic and kinetic game-changers. We really don't know too many details about what happens in our reaction flasks.
And a brief med-chem note, for context: filibuvir, into which all this work was put, was dropped from development earlier this year. Sometimes you have to do all the work just to get to the point where you can drop these things - that's the business.
Here's a roundup of the top chem-blog posts of the year, as picked by Nature's Sceptical Chymist blog. I made the list, but a lot of other good stuff did, too - have a look. Edit - link fixed now - sorry!
Pick an empirical formula. Now, what's the most stable compound that fits it? Not an easy question, for sure, and it's the topic of this paper in Angewandte Chemie. Most chemists will immediately realize that the first problem is the sheer number of possibilities, and the second one is figuring out their energies. A nonscientist might think that this is the sort of thing that would have been worked out a long time ago, but that definitely isn't the case. Why think about these things?
What is this “Guinness” molecule isomer search good for? Some astrochemists think in such terms when they look for molecules in interstellar space. A rule with exceptions says that the most stable isomers have a higher abundance (Astrophys. J. 2009, 696, L133), although kinetic control undoubtedly has a say in this. Pyrolysis or biotechnology processes, for example, in anaerobic biomass-to-fuel conversions, may be classified on the energy scale of their products. The fate of organic aerosols upon excitation with highly energetic radiation appears to be strongly influenced by such sequences because of ion-catalyzed chain reactions (Phys. Chem. Chem. Phys. 2013, 15, 940). The magic of protein folding is tied to the most stable atomic arrangement, although one must keep in mind that this is a minimum-energy search with hardly any chemical-bond rearrangement. We should rather not think about what happens to our proteins in a global search for their minimum-energy structure, although the peptide bond is not so bad in globally minimizing interatomic energy. Regularity can help and ab initio crystal structure prediction for organic compounds is slowly coming into reach. Again, the integrity of the underlying molecule is usually preserved in such searches.
Things get even trickier when you don't restrict yourself to single compounds. It's pointed out that the low-energy form of the hexose empirical formula (C6H12O6) might well be a mixture of methane and carbon dioxide (which sounds like the inside of a comet to me). That brings up another reason this sort of thinking is useful: if you want to sequester carbon dioxide, what's the best way to do it? What molecular assemblies are most energetically favorable, and at what temperatures do they exist, and what level of complexity? At larger scales, we'll also need to think about such things in the making of supramolecular assemblies for nanotechnology.
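It's at least worth checking that the methane/carbon dioxide suggestion is stoichiometrically clean. It is: three methanes plus three carbon dioxides account for C6H12O6 exactly. A quick sketch of that atom-bookkeeping (my own illustration, not anything from the paper):

```python
from collections import Counter

# Atom counts for the hexose empirical formula and the proposed low-energy pair.
hexose  = Counter({"C": 6, "H": 12, "O": 6})
methane = Counter({"C": 1, "H": 4})
co2     = Counter({"C": 1, "O": 2})

# Tally the atoms in a 3 CH4 + 3 CO2 mixture.
mixture = Counter()
for mol, n in ((methane, 3), (co2, 3)):
    for atom, count in mol.items():
        mixture[atom] += n * count

print(mixture == hexose)  # True: the disproportionation balances atom-for-atom
```

So a sugar really can, on paper, fall all the way downhill to swamp gas and carbon dioxide without an atom left over.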
The author, Martin Suhm of Göttingen, calls for a database of the lowest-energy species for each given formula as an invitation for people to break the records. I'd like to see someone give it a try. It would provide challenges for synthesis, spectroscopy and (especially) modeling and computational chemistry.
A look back at the way it used to be, courtesy of ChemTips. What did you do without NMR, without LC-mass spec? You tried all kinds of tricks to get solids that you could recrystallize, and liquids that you could distill. I missed out on that era of chemistry, and most readers here can say the same. But it's a good mental exercise to picture what things used to be like.
Here's a very surprising idea that looks like it can be put to an experimental test. Mao-Sheng Miao (of UCSB and the Beijing Computational Sciences Research Center) has published a paper suggesting that under high-pressure conditions, some elements could show chemical bonding behavior involving their inner-shell electrons. Specific predictions include high-pressure forms of cesium fluoride - not just your plain old CsF, but CsF3 and CsF5, and man, do I feel odd writing down those formulae.
These have completely different geometries, and should be readily identifiable should they actually form. I'm thinking of this as cesium giving up its lone valence electron, and then you're left with a xenon-like arrangement. And xenon, as Neil Bartlett showed the world in 1962, can certainly go on to form fluorides. Throw in some pressure, and (perhaps) the deed is done in cesium's case. So I very much look forward to an experimental test of this idea, which I would imagine we'll see pretty shortly.
Double Nobelist Frederick Sanger has died at 95. He is, of course, the pioneer in both protein and DNA sequencing, and he lived to see these techniques, revised and optimized beyond anyone's imagining, become foundations of modern biology.
When he and his team determined the amino acid sequence of insulin in the 1950s, no one was even sure if proteins had definite sequences or not. That work, though, established the concept for sure, and started off the era of modern protein structural studies, whose importance to biology, medicine, and biochemistry is completely impossible to overstate. The amount of work needed to sequence a protein like insulin was ferocious - this feat was just barely possible given the technology of the day, and that's even with Sanger's own inventions and insights (such as Sanger's reagent) along the way. He received a well-deserved Nobel in 1958 for having accomplished it.
In the 1970s, he made fundamental advances in sequencing DNA, such as the dideoxy chain-termination method, again with effects which really can't be overstated. This led to a share of a second chemistry Nobel in 1980 - he's still the only double laureate in chemistry, and every bit of that recognition was deserved.
Here's something that you don't see every day: an article in the New York Times praising the sophomore organic chemistry course. It's from the Education section, and it's written from the author's own experience:
Contemplating a midlife career change from science writer to doctor, I spent eight months last year at Harvard Extension School slogging through two semesters of organic chemistry, or orgo, the course widely known for weeding out pre-meds. At 42, I was an anomaly, older than most of my classmates (and both professors), out of college for two decades and with two small children. When I wasn’t hopelessly confused, I spent my time wondering what the class was actually about. Because I’m pretty sure it wasn’t just about organic chemistry. For me, the overriding question was not “Is this on the test?” but rather “What are they really testing?”
That's a worthwhile question. Organic chemistry is a famous rite of passage for pre-med students, but it's safe to say that its details don't come up all that often in medical practice, at least not in the forms one finds them in most second-year courses. Of course, there's a lot to the viewpoint expressed by Chemjobber on Twitter, that if you can't understand sophomore organic, there are probably a lot of other topics in medical science you're going to have trouble understanding, too. The article touches on this, too:
But the rules have many, many exceptions, which students find maddening. The same molecule will behave differently in acid or base, in dark or sunlight, in heat or cold, or if you sprinkle magic orgo dust on it and turn around three times. You can’t memorize all the possible answers — you have to rely on intuition, generalizing from specific examples. This skill, far more than the details of every reaction, may actually be useful for medicine.
“It seems a lot like diagnosis,” said Logan McCarty, Harvard’s director of physical sciences education, who taught the second semester. “That cognitive skill — inductive generalization from specific cases to something you’ve never seen before — that’s something you learn in orgo.”
Or it's something you should learn, anyway. Taught poorly (or learned poorly) it's a long string of reactions to be memorized - this does that, that thing goes to this thing, on and on. Now, there are subjects that have to be given this treatment - the anatomy that those med students will end up studying is a good example - but you'd think that students would want to put off as much brute-force memorization as possible, in favor of learning some general principles. But sometimes those principles don't come across, and sometimes a student's natural response to new material is just to stuff it as it comes into the hippocampus. That's not a good solution, but in some cases organic chemistry gets to be the course that teaches that lesson. I don't suppose that knowing the Friedel-Crafts reaction helps out many physicians, but having to learn it might.
There's still a case for (future) physicians to know organic chemistry for the sake of knowing organic chemistry, though. You can't have much of a grasp of biochemistry without learning organic, and it comes in rather handy for pharmacology and toxicology, too. Depending on what kind of medicine a person's practicing, these may vary in utility. But I'd rather not have anyone as a physician who doesn't give them a thought.
Medicinal chemists have long been familiar with the "magic methyl" effect. That's the dramatic change in affinity that can be seen (sometimes) with the addition of a single methyl group in just the right place. (Alliteration makes that the phrase of choice, but there are magic fluoros, magic nitrogens, and others as well). The methyl group is also particularly startling to a chemist, because it's seen as electronically neutral and devoid of polarity - it's just a bump on the side of the molecule, right?
Some bump. There's a very useful new paper in Angewandte Chemie that looks at this effect, and I have to salute the authors. They have a number of examples from the recent literature, and it couldn't have been easy to round them up. The methyl groups involved tend to change rotational barriers around particular bonds, alter the conformation of saturated rings, and/or add what is apparently just the right note of nonpolar interaction in some part of a binding site. It's important to remember just how small the energy changes need to be for things like this to happen.
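For a sense of scale, the free energy behind even a dramatic affinity shift is small. The relationship is just ΔΔG = RT ln(fold change), so a tenfold change in binding constant at room temperature is worth well under 1.5 kcal/mol - squarely in magic-methyl territory. A quick back-of-the-envelope calculation (mine, not from the paper):

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0      # room temperature, K

def ddG(fold_change):
    """Free-energy difference (kcal/mol) for a given fold change in Ki or Kd."""
    return R * T * math.log(fold_change)

for fold in (2, 10, 100):
    print(f"{fold:>4}x affinity change ~ {ddG(fold):.2f} kcal/mol")
```

A hundredfold boost is still under 3 kcal/mol, which is why one well-placed methyl, nudging a conformation or filling a small greasy pocket, can do so much.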
The latter part of the paper summarizes the techniques for directly introducing methyl groups (as opposed to going back to the beginning of the sequence with a methylated starting material). And the authors call for more research into such reactions: wouldn't it be useful to be able to just staple a methyl group in next to the nitrogen of a piperidine, for example, rather than having to redo the whole synthesis? There are ways to methylate aryl rings, via metal-catalyzed couplings or lithium chemistry, but alkyl methylations are thin on the ground. (The ones that exist tend to rely on those same sorts of mechanisms).
Methyl-group reagents of the same sort that have been found for trifluoromethyl groups in recent years would be welcome - the sorts of things you could expose a compound to and have it just methylate the most electrophilic or nucleophilic site(s) to see what you'd get. This is part of a general need for alkyl C-H activation chemistries, which people have been working on for quite a while now. It's one of the great undersolved problems in synthetic chemistry, and I hope that progress gets made. Otherwise I might have to break into verse again, and no one wants that.
I'm actually going to ignore the headline on this article at Chemistry World, although coming up with it must have made someone's day. Once I'd gotten my head back up out of my hands and read the rest of the piece, it was quite interesting.
It's a summary of this paper in Nature Chemistry, which used the ingenious system shown to measure what the alkyl-chain interactions are worth in different solvents.
The team has now used a synthetic molecular balance to measure the strength of van der Waals interactions between apolar alkyl chains in more than 30 distinct organic, fluorous and aqueous solvent environments. The balance measurements show that the interaction between alkyl chains is an order of magnitude smaller than estimates of dispersion forces derived from measurements of vaporisation enthalpies and dispersion-corrected calculations. Moreover, the team found that van der Waals interactions between the alkyl chains were strongly attenuated by competitive dispersion interactions with the surrounding solvent molecules.
There are two ways to look at this, and they're not mutually exclusive. One, which the Chemistry World article takes (in a quote from lead author Scott Cockcroft), is that this could simplify computational approaches to compound interactions, because calculating van der Waals forces is a much more intensive process. If solvent interactions are just going to cancel them out, why spend the resources? And that's true, but it brings up the other question: why did we think that vdW forces were so strong in the first place? As the quote above indicates, a lot of the experimental evidence is from gas-phase measurements, and the addition of solvent molecules clearly means that those values aren't as generalizable as had been thought. But that brings up the next question: why haven't computational methods shown before now that the gas-phase experimental data could be leading things astray?
I don't know the literature of the field well enough to answer that question, but given the sorts of exchanges that were taking place back in that recent Nobel Prize post, I'll bet that there are some people out there who can. Have there been computational methods that pointed toward the experimental data? Or have some of those efforts been directed more towards just seeing if the gas-phase data could be reproduced?
As a medicinal chemist, naturally, I'm wondering how we need to be thinking about binding of molecules to the active sites of enzymes. That's certainly not a solvent-filled environment, but inside a protein, water molecules are the bridge between being in a vacuum and being in solution. It'll depend on how many you have to worry about, what their roles are interacting with the protein and the ligand, how defined they are spatially, and how much of the molecule will be exposed to solvent. These things we already knew - will these new experimental results help us to get better at it?
Update: see the comments - lead author Scott Cockcroft says that his group is looking for computational collaborators for some of these very purposes.
Here's a neat paper from Oliver Kappe's group on diazomethane flow chemistry. They're using the gas-permeable tube-in-tube technique (as pioneered by Steve Ley's group). Flow systems have been described for using diazomethane before, but this looks like a convenient lab-scale method.
Diazomethane is the perfect reagent to apply flow chemistry to. It's small, versatile, reactive, does a lot of very interesting and useful chemistry (generally quickly and in high yields). . .and it's also volatile, extremely toxic, and a really significant explosion hazard. Generating it as you use it (and reacting it quickly) is a very appealing thought. In this system, the commercial diazomethane precursor Diazald is mixed with a KOH flow in one tube, while a THF solution of the reactants flows past it on the other side of a gas-permeable membrane. Methyl ester formation, cyclopropanation, and Arndt–Eistert reactions all worked well.
Aldrich or someone should work this up into a small commercial apparatus, a bespoke diazomethane generator for general use. I think it would sell. I suggest, free of charge, the brand name "Diazoflow".
I wanted to mention a new blog, Totally Microwave, that's set up to cover all sorts of developments in microwave-assisted chemistry. Full disclosure: it's from a former colleague of mine. I don't know of another site that's working this area, so it could be a good addition to the list - have a look!
If you're in the mood for some truly 100-proof synthetic organic chemistry, this post from Mark Peczuh at UConn is going to be just what you need. He's going through the 2002 synthesis of ingenol from the Winkler group, line by line, in an effort to show his own students how to read such highly compressed reports. Here's a bit of it, to give you the idea:
Paragraph 7: The payment for using 6 in place of 5 has come due. In paragraph 7, the authors quickly move through a series of transformations that convert 16 to 22. The key player that enables these transformations is the hydroxymethyl group attached to C6. Oxidation of that group to the corresponding aldehyde allows sequential eliminations that create the diene in 22. The authors report flatly in this paragraph that the seven steps reported are “to introduce the A ring functionality present in ingenol”. They don’t put emphasis on it, but it’s logical to think that they’d have preferred to carry an oxygenated C3 up to this point and done only one or two steps to be in a much better position than they presently are. So it goes.
That is indeed exactly the sort of thinking that you have to do to follow one of these things, and it definitely requires time and effort. This chemistry is the sort of work that I don't do (and have, in fact, questioned the utility of), but there's no doubt that it's an extreme intellectual and practical challenge, which this view can really make you appreciate.
The 2013 Nobel Prize in Chemistry has gone to Martin Karplus of Harvard, Michael Levitt of Stanford, and Arieh Warshel of USC. This year's prize is one of those that covers a field by recognizing some of its most prominent developers, and this one (for computational methods) has been anticipated for some time. It's good to see it come along, though, since Karplus is now 83, and his name has been on the "Could easily win a Nobel" lists for some years now. (Anyone who's interpreted an NMR spectrum of an organic molecule will know him for a contribution that he's not even cited for by the Nobel committee, the relationship between coupling constants and dihedral angles).
Here's the Nobel Foundation's information on this year's subject matter, and it's a good overview, as usual. This one has to cover a lot of ground, though, because the topic is a large one. The writeup emphasizes (properly) the split between classical and quantum-mechanical approaches to chemical modeling. The former is easier to accomplish (relatively!), but the latter is much more relevant (crucial, in fact) as you get down towards the scale of individual atoms and bonds. Computationally, though, it's a beast. This year's laureates pioneered some very useful techniques to try to have it both ways.
This started to come together in the 1970s, and the methods used were products of necessity. The computing power available wouldn't let you just brute-force your way past many problems, so a lot of work had to go into figuring out where best to deploy the resources you had. What approximations could you get away with? How did you use your quantum-mechanical calculations to give you classical potentials to work with? Where should the boundaries between the two be drawn? Even with today's greater computational power these are still key questions, because molecular dynamics calculations can still eat up all the processor time you can throw at them.
That's especially true when you apply these methods to biomolecules like proteins and DNA, and one thing you'll notice about all three of the prize winners is that they went after these problems very early. That took a lot of nerve, given the resources available, but that's what distinguishes really first-rate scientists: they go after hard, important problems, and if the tools to tackle such things don't exist, they invent them. How hard these problems are can be seen by what we can (and still can't) do by computational simulations here in 2013. How does a protein fold, and how does it end up in the shape it has? What parts of it move around, and by how much? What forces drive the countless interactions between proteins and ligands, other proteins, DNA and RNA molecules, and all the rest? What can we simulate, and what can we predict?
I've said some critical things about molecular modeling over the years, but those have mostly been directed at people who oversell it or don't understand its limitations. People like Karplus, Levitt, and Warshel, though, know those limitations in great detail, and they've devoted their careers to pushing them back, year after year. Congratulations to them all!
More coverage: Curious Wavefunction and C&E News. The popular press coverage of this award will surely be even worse than usual, because not many people charged with writing the headlines are going to understand what it's about.
Addendum: for almost every Nobel awarded in the sciences, there are people that miss out due to the "three laureate" rule. This year, I'd say that it was Norman Allinger, whose work bears very much on the subject of this year's prize. Another prominent computational chemist whose name comes up in Nobel discussions is Ken Houk, whose work is directed more towards mechanisms of organic reactions, and who might well be recognized the next time computational chemistry comes around in Sweden.
Second addendum: for a very dissenting view of my "Kumbaya" take on today's news, see this comment, and scroll down for reactions to it. I think its take is worth splitting out into a post of its own shortly!
Does anyone know what the MIT Press Office is getting at with this intro?
In all the centuries that humans have studied chemical reactions, just 36 basic types of reactions have been found. Now, thanks to the work of researchers at MIT and the University of Minnesota, a 37th type of reaction can be added to the list.
I don't think I've ever heard of any scheme quite like that. Looking over the paper itself, it's an interesting piece of computational work on low-temperature oxidation pathways. It shows that gamma-keto hydroperoxides (as had been hypothesized) can form a cyclic peroxide intermediate, which then fragments into a carboxylic acid and an aldehyde. This would seem to clear up some discrepancies in the production of COOH compounds in several oxidation and combustion pathways, where they show up much more often than theory predicts.
And that's all fine, but what's this 36 reaction business? It appears nowhere in the paper, which makes me wonder if someone who worked on the press release got something tangled up. Or is there some classification scheme that I haven't heard of?
(Noted on the "Ask Science" section of Reddit, where a baffled reader of the press release tried to find out what was going on.)
Element shortages are in the news these days. The US has been talking about shutting down its strategic helium reserve, and there are plenty of helium customers worried about the prospect. The price of liquid helium, not a commodity that you usually hear quoted on the afternoon financial report, has apparently more than tripled in the last year.
I think that this is more of a gap problem than a running-out-of-helium one, though. There's still a lot of helium in the world, and the natural gas boom of recent years has made even more of it potentially available. Trapping it, though, is not cheap - this is something that has to be done on a large scale to work at all, and substantial investment is needed. Air Liquide has a liquefaction plant starting up in Qatar, but that won't be running at full capacity for a while yet, it appears. I think, though, that this plant and other such efforts will end up providing enough helium for industry and research, at a price. We aren't running out of helium, but the cheap helium is going to be in short supply for a few years.
At the other end of the periodic table, though, it looks like we really are running out of plutonium-238. One's first impulse is to say "Good!", because the existing stockpiles are largely the result of nuclear weapons production in years past. But it's an excellent material to power radiothermal generators, since it has a reasonable half-life (87.7 years), a high decay energy, and is an alpha emitter (thus needing less heavy shielding). Note this picture of a pellet of the oxide glowing under its own heat. There are a number of proposed deep space missions that will only launch if they can use Pu-238 that no one seems to have. Russia sold about 16 kilos to the US in the early 1990s, but just a few years ago they backed out of a deal for another 10. No one's sure - or no one's saying - if that's because they would rather hold on to it themselves, or if they don't really have that amount left any more. To give you an idea, the proposed Europa mission to Jupiter would need about 22 kilos.
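That 87.7-year half-life is worth a quick back-of-the-envelope check, since it's the whole reason Pu-238 is so prized for deep space work. Here's a minimal sketch (the 20-year mission length is my own assumed figure, not anything from the proposals):

```python
HALF_LIFE_YEARS = 87.7  # Pu-238 half-life, as quoted above

def fraction_remaining(years: float) -> float:
    """Fraction of the Pu-238 (and so, roughly, of its decay heat)
    left after the given number of years."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

# The 22 kg quoted for the proposed Europa mission, after an assumed
# 20-year mission, is still the heat equivalent of ~18.8 kg of fresh material:
remaining_kg = 22.0 * fraction_remaining(20.0)
```

That's the sweet spot: a much shorter half-life would mean the power supply fades before the mission is over, while a much longer one would give too little heat per gram to be worth launching.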
There are efforts to restart Pu-238 production, but as you would imagine, this is not the work of a moment. As opposed to helium, which is sitting around in natural gas underground, you're not going to be mining any plutonium. It has to be made from neptunium-237, which you only get from spent nuclear fuel rods, and the process is expensive, no fun, and hot as blazes in every sense of the word. Even if the proposed restart gets going, it'll only produce about 1.5kg per year. So if you have any plans that involve large amounts of plutonium - and they'd better involve space exploration, dude - you should take this into account.
Well, nearly nothing. That's the promise of a technique that's been published by the Ernst lab from the University of Basel. They first wrote about this in 2010, in a paper looking for ligands to the myelin-associated glycoprotein (MAG). That doesn't sound much like a traditional drug target, and so it isn't. It's part of a group of immunoglobulin-like lectins, and they bind things like sialic acids and gangliosides, and they don't seem to bind them very tightly, either.
One of these sialic acids was used as their starting point, even though its affinity is only 137 micromolar. They took this structure and hung a spin label off it, with a short chain spacer. The NMR-savvy among you will already see an application of Wolfgang Jahnke's spin-label screening idea (SLAPSTIC) coming. That's based on the effect of an unpaired electron in NMR spectra - it messes with the relaxation time of protons in the vicinity, and this can be used to determine whatever might be nearby. With the right pulse sequence, you can easily detect any protons on any other molecules or residues out to about 15 or 20 Angstroms from the spin label.
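That 15-to-20 Angstrom detection radius falls out of how steeply the paramagnetic effect dies off with distance: like the NOE, it scales as 1/r^6. A geometry-only sketch (ignoring correlation times and the rest of the full Solomon-Bloembergen treatment, so take the absolute numbers loosely):

```python
def relative_pre(r_angstrom: float, r_ref: float = 10.0) -> float:
    """Paramagnetic relaxation enhancement relative to a proton sitting
    at r_ref, using only the 1/r^6 distance dependence."""
    return (r_ref / r_angstrom) ** 6

# A proton at 20 Angstroms feels about 1.6% of the effect felt by one
# at 10 Angstroms, which is why the broadening vanishes past that radius:
falloff_at_20 = relative_pre(20.0)  # 0.015625
```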
Jahnke's group at Novartis attached spin labels to proteins and used these to find ligands by NMR screening. The NMR field has a traditional bias towards bizarre acronyms, which sometimes calls for ignoring a word or two, so SLAPSTIC stands for "Spin Labels Attached to Protein Side chains as a Tool to identify Interacting Compounds". Ernst's team took their cue from yet another NMR ligand-screening idea, the Abbott "SAR by NMR" scheme. That one burst on the scene in 1996, and caused a lot of stir at the time. The idea was that you could use NMR of labeled proteins, with knowledge of their structure, to find sets of ligands at multiple binding sites, then chemically stitch these together to make a much more potent inhibitor. (This was fragment-based drug discovery before anyone was using that phrase).
The theory behind this idea is perfectly sound. It's the practice that turned out to be the hard part. While fragment linking examples have certainly appeared (including Abbott examples), the straight SAR-by-NMR technique has apparently had a very low success rate, despite (I'm told by veterans of other companies) a good deal of time, money, and effort in the late 1990s. Getting NMR-friendly proteins whose structure was worked out, finding multiple ligands at multiple sites, and (especially) getting these fragments linked together productively has not been easy at all.
But Ernst's group has brought the idea back. They did a second-site NMR screen with a library of fragments and their spin-labeled sialic thingie, and found that 5-nitroindole was bound nearby, with the 3-position pointed towards the label. That's an advantage of this idea - you get spatial and structural information without having to label the protein itself, and without having to know anything about its structure. SPR experiments showed that the nitroindole alone had affinity up in the millimolar range.
They then did something that warmed my heart. They linked the fragments by attaching a range of acetylene and azide-containing chains to the appropriate ends of the two molecules and ran a Sharpless-style in situ click reaction. I've always loved that technique, partly because it's also structure-agnostic. In this case, they did a 3x4 mixture of coupling partners, potentially forming 24 triazoles (syn and anti). After three days of incubation with the protein, a new peak showed up in the LC/MS corresponding to a particular combination. They synthesized both possible candidates, and one of them was 2 micromolar, while the other was 190 nanomolar.
That molecule is shown here - the percentages in the figure are magnetization transfer in STD experiments, with the N-acetyl set to 100% as reference. And that tells you that both ends of the molecule are indeed participating in the binding, as that greatly increased affinity would indicate. (Note that the triazole appears to be getting into the act, too). That affinity is worth thinking about - one part of this molecule was over 100 micromolar, and the other was millimolar, but the combination is 190 nanomolar. That sort of effect is why people keep coming back to fragment linking, even though it's been a brutal thing to get to work.
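That affinity jump is easier to appreciate in free-energy terms, using the standard relation between a dissociation constant and binding energy. A quick sketch (taking the "millimolar" nitroindole as 1 mM, which is my assumption - the measurement was only ballparked):

```python
import math

RT = 1.987e-3 * 298  # kcal/mol at roughly 25 C

def binding_dG(kd_molar: float) -> float:
    """Binding free energy (kcal/mol) from a dissociation constant."""
    return RT * math.log(kd_molar)

sialic_piece = binding_dG(137e-6)  # ~ -5.3 kcal/mol (137 micromolar, from the post)
nitroindole  = binding_dG(1e-3)    # ~ -4.1 kcal/mol (1 mM assumed)
linked       = binding_dG(190e-9)  # ~ -9.2 kcal/mol (the best triazole)
```

The linked compound captures nearly the sum of the two fragment binding energies, which is the whole promise of fragment linking - and exactly the part that usually goes missing when the linker holds the pieces at the wrong geometry.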
When I read this paper at the time, I thought that it was very nice, and I filed it in my "Break Glass in Case of Emergency" section for interesting and unusual screening techniques. One thing that worried me, as usual, was whether this was the only system this had ever worked on, or ever would. So I was quite happy to see a new paper from the Ernst group this summer, in which they did it again. This time, they found a ligand for E-selectin, another one of these things that you don't expect to ever find a decent small molecule for.
In this case, it's still not what an organic chemist would be likely to call a "decent small molecule", because they started with something akin to sialyl Lewis(x), which is already a funky tetrasaccharide. Their trisaccharide derivative had roughly 1 micromolar affinity, with the spin label attached. A fragment screen against E-selectin had already identified several candidates that seemed to bind to the protein, and the best guess was that they probably wouldn't be binding in the carbohydrate recognition region. Doing the second-site screen as before gave them, as fate would have it, 5-nitroindole as the best candidate. (Now my worry is that this technique only works when you run it with 5-nitroindole. . .)
They worked out the relative geometry of binding from the NMR experiments, and set about synthesizing various azide/acetylene combinations. In this case, the in situ Sharpless-style click reactions did not give any measurable products, perhaps because the wide, flat binding site wasn't able to act as much of a catalyst to bring the two compounds together. Making a library of triazoles via the copper-catalyzed route and testing those, though, gave several compounds with affinities between 20x and 50x greater than the starting structure, and with dramatically slower off-rates.
They did try to get rid of the nitro group, recognizing that it's only an invitation to trouble. But the few modifications they tried really lowered the affinity, which tells you that the nitro itself was probably an important component of the second-site binding. That, to me, is argument enough to consider not having those things in your screening collection to start with. It all depends on what you're hoping for - if you just want a ligand to use as a biophysical tool compound, then nitro on, if you so desire. But it's hard to stop there. If it's a good hit, people will want to put it into cells, into animals, into who knows what, and then the heartache will start. If you're thinking about these kinds of assays, you might well be better off not knowing about some functionality that has a very high chance of wasting your time later on. (More on this issue here, here, here, and here.) Update: here's more on trying to get rid of nitro groups.
This work, though, is the sort of thing I could read about all day. I'm very interested in ways to produce potent compounds from weak binders, ways to attack difficult low-hit-rate targets, in situ compound formation, and fragment-based methods, so these papers push several of my buttons simultaneously. And who knows, maybe I'll have a chance to do something like this all day at some point. It looks like work well worth taking seriously.
I mentioned ChemDraw for the iPad earlier this year, but the folks at PerkinElmer tell me that they've released a new version of it, and of Chem3D. (They're here and here at Apple, respectively). They've added text annotation, which seems to have been a highly requested feature, adjustable arrows, and a number of other features. Worth a look for the chemist-on-the-go.
Here's a paper from the Carreira group at the ETH, in collaboration with Roche, that falls into a category I've always enjoyed. I put these under the heading of "Synthetic routes into cute functionalized ring systems", and you can see my drug-discovery bias showing clearly.
Med-chem people like these kinds of molecules. (I have a few of them drawn here, but all the obvious variations are in the paper, too). They aren't in all the catalogs (yet), they're in no one's screening collection, and they have a particular kind of shape that might not be covered by anything else we already have in our files. There's no reason why something like this might not be the core of a bunch of useful compounds - small saturated nitrogen heterocycles fused to other rings sure do show up all over the place.
And the purpose of this sort of paper matches a drug discovery person's worldview exactly: here's a reasonable way into a large number of good-looking compounds that no one's ever screened, so go to it. (Here's an earlier paper from Carreira in the same area). The chemistry involved in making these things is good, solid stuff: it's not cutting-edge, but it doesn't have to be. It's done on a reasonable scale, and it certainly looks like it would work just fine. I can understand why readers from other branches of organic chemistry would skip over a paper like this. No theoretical concerns are addressed in the syntheses, no natural products are produced, no new catalysts are developed, and no new reactions are discovered. But new scaffolds are being made, and for a medicinal chemist, that's more than enough right there. This is chemistry that does just what it needs to do, quickly, and gets out of the way, and I wouldn't mind seeing a paper or two like this every time I open up my RSS feeds.
Acetate is used in vivo as a starting material for all sorts of ridiculously complex natural products. So here's a neat idea: why not hijack those pathways with fluoroacetate and make fluorinated things that no one's ever seen before? That's the subject of this new paper in Science, from Michelle Chang's lab at Berkeley.
There's the complication that fluoroacetate is a well-known cellular poison, so this is going to be synthetic biology all the way. (It gets processed all the way to fluorocitrate, which is a tight enough inhibitor of aconitase to bring the whole citric acid cycle to a shuddering halt, and that's enough to do the same thing to you). There's a Streptomyces species that has been found to use fluoroacetate without dying (just barely), but honestly, I think that's about it for organofluorine biology.
The paper represents a lot of painstaking work. Finding enzymes (and enzyme variants) that look like they can handle the fluorinated intermediates, expressing and purifying them, and getting them to work together ex vivo are all significant challenges. They eventually worked their way up to 6-deoxyerythronolide B synthase (DEBS), which is a natural goal since it's been the target of so much deliberate re-engineering over the years. And they've managed to produce compounds like the ones shown, which I hope are the tip of a larger fluorinated iceberg.
It turns out that you can even get away with doing this in living engineered bacteria, as long as you feed them fluoromalonate (a bit further down the chain) instead of fluoroacetate. This makes me wonder about other classes of natural products as well. Has anyone ever tried to see if terpenoids can be produced in this way? Some sort of fluorinated starting material in the mevalonate pathway, maybe? Very interesting stuff. . .
We chemists have always looked at the chemical machinery of living systems with a sense of awe. A billion years of ruthless pruning (work, or die) have left us with some bizarrely efficient molecular catalysts, the enzymes that casually make and break bonds with a grace and elegance that our own techniques have trouble even approaching. The systems around DNA replication are particularly interesting, since that's one of the parts you'd expect to be under the most selection pressure (every time a cell divides, things had better work).
But we're not content with just standing around envying the polymerase chain reaction and all the rest of the machinery. Over the years, we've tried to borrow whatever we can for our own purposes - these tools are so powerful that we can't resist finding ways to do organic chemistry with them. I've got a particular weakness for these sorts of ideas myself, and I keep a large folder of papers (electronic, these days) on the subject.
So I was interested to have a reader send along this work, which I'd missed when it came out on PLOS ONE. It's from Pehr Harbury's group at Stanford, and it's in the DNA-linked-small-molecule category (which I've written about, in other cases, here and here). Here's a good look at the pluses and minuses of this idea:
However, with increasing library complexity, the task of identifying useful ligands (the "needles in the haystack") has become increasingly difficult. In favorable cases, a bulk selection for binding to a target can enrich a ligand from non-ligands by about 1000-fold. Given a starting library of 10^10 to 10^15 different compounds, an enriched ligand will be present at only 1 part in 10^7 to 1 part in 10^12. Confidently detecting such rare molecules is hard, even with the application of next-generation sequencing techniques. The problem is exacerbated when biologically-relevant selections with fold-enrichments much smaller than 1000-fold are utilized.
Ideally, it would be possible to evolve small-molecule ligands out of DNA-linked chemical libraries in exactly the same way that biopolymer ligands are evolved from nucleic acid and protein libraries. In vitro evolution techniques overcome the "needle in the haystack" problem because they utilize multiple rounds of selection, reproductive amplification and library re-synthesis. Repetition provides unbounded fold-enrichments, even for inherently noisy selections. However, repetition also requires populations that can self-replicate.
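The arithmetic in that passage is worth making concrete. Here's an idealized sketch of why one round of selection isn't enough but a few rounds are (assuming a fixed fold-enrichment per round and ignoring saturation until the ligand dominates the pool - real selections are noisier than this):

```python
def ligand_fraction(library_size: float, fold_enrichment: float, rounds: int) -> float:
    """Pool fraction held by a single true ligand after repeated rounds of
    selection plus amplification, starting from one copy in library_size."""
    return min(1.0, fold_enrichment ** rounds / library_size)

# One 1000-fold round on a 10^12-member library leaves the hit at
# 1 part in 10^9 -- hopeless to pick out by sequencing -- while four
# rounds of select/amplify/re-synthesize take it to dominance:
one_round   = ligand_fraction(1e12, 1000.0, 1)  # 1e-9
four_rounds = ligand_fraction(1e12, 1000.0, 4)  # 1.0
```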
That it does, and that's really the Holy Grail of evolution-linked organic synthesis - being able to harness the whole process. In this sort of system, we're talking about using the DNA itself as a physical prod for chemical reactivity. That's also been a hot field, and I've written about some examples from the Liu lab at Harvard here, here, and here. But in this case, the DNA chemistry is being done with all the other enzymatic machinery in place:
The DNA brings an incipient small molecule and suitable chemical building blocks into physical proximity and induces covalent bond formation between them. In so doing, the naked DNA functions as a gene: it orchestrates the assembly of a corresponding small molecule gene product. DNA genes that program highly fit small molecules can be enriched by selection, replicated by PCR, and then re-translated into DNA-linked chemical progeny. Whereas the Lerner-Brenner style DNA-linked small-molecule libraries are sterile and can only be subjected to selective pressure over one generation, DNA-programmed libraries produce many generations of offspring suitable for breeding.
The scheme below shows how this looks. You take a wide variety of DNA sequences, and have them each attached to some small-molecule handle (like a primary amine). You then partition these out into groups by using resins that are derivatized with oligonucleotide sequences, and you plate these out into 384-well format. While the DNA end is stuck to the resin, you do chemistry on the amine end (and the resin attachment lets you get away with stuff that would normally not work if the whole DNA-attached thing had to be in solution). You put a different reacting partner in each of the 384 wells, just like in the good ol' combichem split/pool days, only with DNA as the physical separation mechanism.
In this case, the group used 240-base-pair DNA sequences, two hundred seventeen billion of them. That sentence is where you really step off the edge into molecular biology, because without its tools, generating that many different species, efficiently and in usable form, is pretty much out of the question with current technology. That's five different coding sequences, in their scheme, with 384 different ones in each of the first four (designated A through D), and ten in the last one, E. How diverse was this, really? Get ready for more molecular biology tools:
We determined the sequence of 4.6 million distinct genes from the assembled library to characterize how well it covered "genetic space". Ninety-seven percent of the gene sequences occurred only once (the mean sequence count was 1.03), and the most abundant gene sequence occurred one hundred times. Every possible codon was observed at each coding position. Codon usage, however, deviated significantly from an expectation of random sampling with equal probability. The codon usage histograms followed a log-normal distribution, with one standard deviation in log-likelihood corresponding to two-to-three fold differences in codon frequency. Importantly, no correlation existed between codon identities at any pair of coding positions. Thus, the likelihood of any particular gene sequence can be well approximated by the product of the likelihoods of its constituent codons. Based on this approximation, 36% of all possible genes would be present at 100 copies or more in a 10 picomole aliquot of library material, 78% of the genes would be present at 10 copies or more, and 4% of the genes would be absent. A typical selection experiment (10 picomoles of starting material) would thus sample most of the attainable diversity.
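Two bits of arithmetic behind those numbers, sketched out: the headline library complexity follows directly from the coding scheme, and at that complexity a 10 picomole selection input carries only a couple dozen copies of each gene on average - which is why the log-normal spread in codon usage matters as much as it does:

```python
AVOGADRO = 6.022e23

# Library complexity: four coding positions (A-D) with 384 codons each,
# plus position E with 10 codons:
library_size = 384 ** 4 * 10   # 217,432,719,360 -- the "two hundred seventeen billion"

# Copies per gene in a typical 10 picomole selection input:
molecules = 10e-12 * AVOGADRO  # ~6.0e12 DNA molecules in the aliquot
mean_copies = molecules / library_size  # ~28 copies per gene, on average
```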
The group had done something similar before with 80-codon DNA sequences, but this system has 1546, which is a different beast. But it seems to work pretty well. Control experiments showed that the hybridization specificity remained high, and that the micro/meso fluidic platform being used could return products with high yield. A test run also gave them confidence in the system: they set up a run with all the codons except one specific dropout (C37), and also prepared a "short gene", containing the C37 codon but lacking the whole D area (200 base pairs instead of 240). They mixed that in with the drop-out library (in a ratio of 1 to 384) and split it out onto a C-codon-attaching array of beads. They then did the chemical step, attaching one peptoid piece onto all of them except the C37 binding well - that one got biotin hydrazide instead. Running the lot of them past streptavidin took the ratio of the C37-containing ones from 1:384 to something over 35:1, an enhancement of at least 13,000-fold. (Subcloning and sequencing of 20 isolates showed they all had the C37 short gene in them, as you'd expect.)
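That enrichment figure checks out on the back of an envelope, and shows why "at least" was the right hedge:

```python
odds_before = 1 / 384  # C37 short gene vs. the drop-out library, going in
odds_after = 35 / 1    # "something over 35:1" coming out of the streptavidin step

# Ratio of the two odds -- 13,440-fold, i.e. "at least 13,000-fold":
fold_enrichment = odds_after / odds_before
```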
They then set up a three-step coupling of peptoid building blocks on a specific codon sequence, and this returned very good yields and specificities. (They used a fluorescein-tagged gene and digested the product with PDE1 before analyzing them at each step, which ate the DNA tags off of them to facilitate detection). The door, then, would now seem to be open:
Exploration of large chemical spaces for molecules with novel and desired activities will continue to be a useful approach in academic studies and pharmaceutical investigations. Towards this end, DNA-programmed combinatorial chemistry facilitates a more rapid and efficient search process over a larger chemical space than does conventional high-throughput screening. However, for DNA-programmed combinatorial chemistry to be widely adopted, a high-fidelity, robust and general translation system must be available. This paper demonstrates a solution to that challenge.
The parallel chemical translation process described above is flexible. The devices and procedures are modular and can be used to divide a degenerate DNA population into a number of distinct sub-pools ranging from 1 to 384 at each step. This coding capacity opens the door for a wealth of chemical options and for the inclusion of diversity elements with widely varying size, hydrophobicity, charge, rigidity, aromaticity, and heteroatom content, allowing the search for ligands in a "hypothesis-free" fashion. Alternatively, the capacity can be used to elaborate a variety of subtle changes to a known compound and exhaustively probe structure-activity relationships. In this case, some elements in a synthetic scheme can be diversified while others are conserved (for example, chemical elements known to have a particular structural or electrostatic constraint, modular chemical fragments that independently bind to a protein target, metal chelating functional groups, fluorophores). By facilitating the synthesis and testing of varied chemical collections, the tools and methods reported here should accelerate the application of "designer" small molecules to problems in basic science, industrial chemistry and medicine.
Anyone want to step through? If GSK is getting some of their DNA-coded screening to work (or at least telling us about the examples that did?), could this be a useful platform as well? Thoughts welcome in the comments.
Fragment-based screening comes up here fairly often (and if you're interested in the field, you should also have Practical Fragments on your reading list). One of the complaints both inside and outside the fragment world is that there are a lot of primary hits that fall into flat/aromatic chemical space (I know that those two don't overlap perfectly, but you know the sort of things I mean). The early fragment libraries were heavy in that sort of chemical matter, and the sort of collections you can buy still tend to be.
The UK-based 3D Fragment Consortium has a paper out now in Drug Discovery Today that brings together a lot of references to work in this field. Even if you don't do fragment-based work, I think you'll find it interesting, because many of the same issues apply to larger molecules as well. How much return do you get for putting chiral centers into your molecules, on average? What about molecules with lots of saturated atoms that are still rather squashed and shapeless, versus ones full of aromatic carbons that carve out 3D space surprisingly well? Do different collections of these various molecular types really have differences in screening hit rates, and do these vary by the target class you're screening against? How much are properties (solubility, in particular) shifting these numbers around? And so on.
The consortium's site is worth checking out as well for more on their activities. One interesting bit of information is that the teams ended up crossing off over 90% of the commercially available fragments due to flat structures, which sounds about right. And that takes them where you'd expect it to:
We have concluded that bespoke synthesis, rather than expansion through acquisition of currently available commercial fragment-sized compounds is the most appropriate way to develop the library to attain the desired profile. . .The need to synthesise novel molecules that expand biologically relevant chemical space demonstrates the significant role that academic synthetic chemistry can have in facilitating target evaluation and generating the most appropriate start points for drug discovery programs. Several groups are devising new and innovative methodologies (i.e. methyl activation, cascade reactions and enzymatic functionalisation) and techniques (e.g. flow and photochemistry) that can be harnessed to facilitate expansion of drug discovery-relevant chemical space.
And as long as they stay away from the frequent hitters/PAINS, they should end up with a good collection. I look forward to future publications from the group to see how things work out!
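As a side note for anyone wondering how "flat" gets quantified in the first place: one common crude proxy is the fraction of sp3 carbons (Fsp3). Here's a minimal sketch of that kind of filter - the atom counts are entered by hand and the 0.3 cutoff is an arbitrary choice of mine, not the consortium's actual criteria (a real pipeline would compute these from structures with a cheminformatics toolkit):

```python
# Crude "flatness" filter using fraction of sp3 carbons (Fsp3), a common proxy.
# Atom counts below are hand-entered for illustration only.

def fsp3(sp3_carbons, total_carbons):
    return sp3_carbons / total_carbons

# (name, sp3 carbon count, total carbon count) -- illustrative fragment-sized examples
fragments = [
    ("biphenyl",            0, 12),   # entirely aromatic: Fsp3 = 0.0
    ("cyclohexylamine",     6,  6),   # fully saturated:   Fsp3 = 1.0
    ("2-phenylpyrrolidine", 4, 10),   # mixed:             Fsp3 = 0.4
]

# Keep fragments at or above a chosen Fsp3 cutoff (0.3 here, arbitrary):
kept = [name for name, s, t in fragments if fsp3(s, t) >= 0.3]
print(kept)  # ['cyclohexylamine', '2-phenylpyrrolidine']
```

Fsp3 alone doesn't capture true three-dimensionality (the paper's point about squashed saturated molecules versus shapely aromatic ones), which is why the consortium also looks at actual shape descriptors.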
The Baran group at Scripps has a whopper of a total synthesis out in Science. They have a route to the natural product ingenol, which is isolated from a Euphorbia species, a genus that produces a lot of funky diterpenoids. A synthetic ester of the compound has recently been approved to treat actinic keratosis, a precancerous skin condition brought on by exposure to sunlight.
The synthesis is 14 steps long, but that certainly doesn't qualify it for the "whopper" designation that I used. There are far, far longer total syntheses in the literature, but as organic chemists are well aware, a longer synthesis is not a better one. The idea is to make a compound as quickly and elegantly as possible, and for a compound like ingenol, 14 steps is pretty darn quick.
I'll forgo the opportunity for chem-geekery on the details of the synthesis itself (here's a write-up at Chemistry World). It is, of course, a very nice approach to the compound, starting from the readily available natural product (+)-3-carene, which is a major fraction of turpentine. There's a pinacol rearrangement as a key step, and from this post at the Baran group blog, you can see that it was a beast. Most of 2012 seems to have been spent on that one reaction, and that's just what high-level total synthesis is like: you have to be prepared to spend months and months beating on reactions in every tiny, picky variation that you can imagine might help.
Let me speak metaphorically, for those outside the field or who have never had the experience. Total synthesis of a complex natural product is like. . .it's like assembling a huge balloon sculpture, all twists and turns, knots and bulges, only half of the balloons are rubber and half of them are made of blown glass. And you can't just reach in and grab the thing, either, and they don't give you any pliers or glue. What you get is a huge pile of miscellaneous stuff - bamboo poles, cricket bats, spiral-wound copper tubing, balsa-wood dowels, and several barrels of even more mixed-up junk: croquet balls, doughnuts, wadded-up aluminum foil, wobbly Frisbees, and so on.
The balloon sculpture is your molecule. The piles of junk are the available chemical methods you use to assemble it. Gradually, you work out that if you brace this part over here in a cradle of used fence posts, held together with turkey twine, you can poke this part over here into it in a way that makes it stick if you just use that right-angled metal doohickey to hold it from the right while you hit the top end of it with a thrown tennis ball at the right angle. Step by step, this is how you proceed. Some of the steps are pretty obvious, and work more or less the way you pictured them, using things that are on top of one of the junk piles. Others require you to rummage through the whole damn collection, whittling parts down and tying stuff together to assemble some tool that you don't have, maybe something that no one has ever made at all.
What I like most about this new synthesis is that it's done on a real scale. LEO Pharmaceuticals is the company that sells the ingenol gel, and they're interested in seeing if there's something better. That post from Baran's group shows people holding flasks with grams of material in them. Mind you, that's what you need to get all these reactions figured out; I can only imagine how much material they must have burned off trying to get some of these steps optimized. But now that it's worked out, real quantities of analogs can be produced. Everyone who does total synthesis talks about making analogs for testing, but the follow-through is sometimes lacking. This one looks like it'll be more robust. Congratulations to everyone involved - with any luck, you'll never have to do something like this again, unless it's by choice!
Since I was talking about microwave heating of reactions here the other week, I wanted to mention this correspondence in Angewandte Chemie. Oliver Kappe is the recognized expert on microwave heating in chemistry, and recently published an overview of the topic. One of the examples he cited was a report of some Friedel-Crafts reactions that were accelerated by microwave heating. The authors did not take this very well, and fired back with a correspondence in Ang. Chem., clearly feeling that their work had been mistreated in Kappe's article. They never claimed to be seeing some sort of nonthermal microwave effect, they say, and resent the implication that they were.
Kappe himself has replied now, and seems to feel that Dudley et al. are trying to have things both ways:
In their Correspondence, Dudley and co-workers have suggested that we attempt to impugn their credibility by associating their rationalization for the observed effect with the concept of nonthermal microwave effects. This is clearly not the case. On the contrary, we specifically state in the Essay that “The proposed effect perhaps can best be classified as a specific microwave effect involving selective heating of a strongly microwave-absorbing species in a homogeneous reaction mixture (‘molecular radiators’).” As we have already pointed out, our Essay was mainly intended to provide an overview on the current state-of-affairs regarding microwave chemistry and microwave effects research. Not surprisingly, therefore, out of the incriminated 22 uses of the word “nonthermal” in our Essay, this word was used only twice in reference to the Dudley chemistry, and in both of these instances in conjunction with the term “specific microwave effect”.
The confusion perhaps arises since in the original publication by Dudley, the authors provide no clear-cut classification (thermal, specific, nonthermal) of the microwave effect that they have observed. In fact, they do not unequivocally state that they believe the effect is connected to a purely thermal phenomenon, but rather invoke arguments about molecular collisions and the pre-exponential factor A in the Arrhenius equation (for example: “Chemical reactions arise from specific molecular collisions, which typically increase as a function of temperature but also result from incident microwave irradiation”). Statements like this that appear to separate a thermal phenomenon from a microwave irradiation event clearly invite speculation by non-experts about the involvement of microwave effects that are not purely thermal in nature. This is very apparent by the news feature in Chemistry World following the publication of the Dudley article entitled: “Magical microwave effects revived. Microwaves can accelerate reactions without heating”
Based on his own group's study of the reaction, Kappe believes that what's going on is local superheating of the solvent, not something more involved and/or mysterious. His reply is a lengthy, detailed schooling in microwave techniques - why the stated power output of a microwave reactor is largely meaningless, the importance (and difficulty) of accurate temperature measurements, and the number of variables that can influence solvent superheating. The dispute here seems to be largely a result of the original paper playing coy about microwave effects - if they'd played things down a bit, I don't think this whole affair would have blown up.
But outside of this work, on the general topic of nonthermal microwave reaction effects, I side with Kappe (and, apparently, so do Dudley and co-authors). I haven't seen any convincing evidence for microwave enhancement of reactions that doesn't come down to heating (steep gradients, localized superheating, etc.).
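For the non-specialists, it's worth seeing just how far plain heating goes. The Arrhenius equation says that a modest localized superheat buys you a surprisingly large rate acceleration, no mysterious effects required. A quick sketch (the activation energy and temperatures here are illustrative numbers of my own choosing, not from either paper):

```python
# Arrhenius sketch: what a modest solvent superheat does to a reaction rate.
# The Ea and temperatures are illustrative, not taken from either paper.
from math import exp

R = 8.314  # gas constant, J/(mol*K)

def rate_ratio(ea_j_mol, t1_k, t2_k):
    """k(T2)/k(T1) for a single-barrier Arrhenius process (same pre-exponential A)."""
    return exp(-ea_j_mol / (R * t2_k)) / exp(-ea_j_mol / (R * t1_k))

# Bulk solvent at 110 C vs. a locally superheated pocket at 130 C, Ea = 100 kJ/mol:
speedup = rate_ratio(100e3, 383.15, 403.15)
print(round(speedup, 1))  # roughly a five-fold acceleration from heat alone
```

Twenty degrees of local superheating and you've got most of a "mysterious" rate enhancement accounted for, which is Kappe's point.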
Here it is 2013, and the last shot has just now been fired in the norbornyl cation debate. I'm too young to have lived through that one, although it was still echoing around as I learned chemistry. But now we have a low-temperature crystal structure of the ion itself, and you know what? It's nonclassical. Winstein was right, Olah and Schleyer were right, and H. C. Brown was wrong.
Everyone's been pretty sure of that for a long time now, of course. But that article from Chemistry World has a great quote from J. D. Roberts, who is now an amazing 95 years old and goes back a long, long way in this debate. He's very happy to see this new structure, but says that it still wouldn't have fazed H. C. Brown: "Herb would be Herb no matter what happened", he says, and from everything I've heard about him, that seems accurate.
Here's a paper in Nature Chemistry that addresses something that isn't explicitly targeted as often as it should be: the robustness of new reactions. The authors, I think, are right on target with this:
We believe a major hurdle to the application of a new chemical methodology to real synthetic problems is a lack of information regarding its application beyond the idealized conditions of the seminal report. Two major considerations in this respect are the functional group tolerance of a reaction and the stability of specific chemical motifs under reaction conditions. . .
Taking into account the limitations of the current methods, we propose that a lack of understanding regarding the application of a given reaction to non-idealized synthetic problems can result in a reluctance to apply new methodology. Confidence in the utility of a new reaction develops over time—often over a number of years—as the reaction is gradually applied within total syntheses, follow-up methodological papers are published, or personal experience is developed. Unfortunately, even when this information has evolved, it is often widely dispersed, fragmented and difficult to locate. To address this problem, both the tolerance of a reaction to chemical functionality and of the chemical functionality to the reaction conditions must be established when appropriate, and reported in an easily accessible manner, preferably alongside the new methodology.
This is as opposed to the current standard of one or two short tables of different substrates, and then a quick application to some natural product framework. Even those papers, I have to say, are better than some of the stuff in the literature, but we still could be doing better. This paper proposes an additional test: running the reaction in the presence of various added compounds, and reporting the % product that forms under these conditions, the % starting material remaining, and the % additive remaining as well. (The authors suggest using a simple, robust method like GC to get these numbers, which is good advice). This technique will give an idea of the tolerance of the reagents and catalysts to other functional groups, without incorporating them into new substrates, and can tell you if the reaction is just slowed down, or if something about the additive stops everything dead.
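The bookkeeping they're proposing is simple enough to sketch out. Something like the following, where every additive, number, and threshold is hypothetical - this is just to show the shape of the report, not the paper's actual data:

```python
# Hypothetical bookkeeping for the proposed robustness screen: for each additive,
# report % product formed, % starting material left, and % additive remaining
# (e.g. from calibrated GC areas against an internal standard).

def classify(pct_product, pct_sm, pct_additive):
    """Crude reading of one robustness-screen entry (thresholds are arbitrary)."""
    if pct_product >= 80:
        verdict = "tolerated"
    elif pct_product >= 20:
        verdict = "slowed"
    else:
        verdict = "killed"
    if pct_additive < 80:
        verdict += ", additive degraded"
    return verdict

# (additive, %product, %SM, %additive) -- made-up numbers for illustration
screen = [
    ("none (control)",  95,  2, 100),
    ("benzyl alcohol",   3, 90,  95),
    ("methyl benzoate", 55, 40,  97),
    ("phenylacetylene", 92,  5,  98),
]
for additive, p, sm, a in screen:
    print(f"{additive:16s} {classify(p, sm, a)}")
```

The three numbers together are what make this informative: low product with high starting material means the reaction just stalled, while low product and a degraded additive means something is actively being consumed.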
Applying this setup to a classic Buchwald amination reaction shows that free aliphatic and aromatic alcohols and amines kill the reaction. Esters and ketones are moderately tolerated. Extraneous heterocycles can slow things down, but not in all cases. But alkynes, nitriles, and amides come through fine: the product forms, and the additives aren't degraded.
I like this idea, and I hope it catches on. But I think that the only way it will is if editors and reviewers start asking for it. Otherwise, it'll be put in the "More work" category, which is easy for authors to ignore. If something like this became the standard, though, all of us synthetic chemists would be better off.
Here's a neat bit of reaction optimization from the Aubé lab at Kansas. (Update: left the link out before - sorry!) They're trying to make one of their workhorse reactions, the intramolecular Schmidt, a bit less nasty by cutting down on the amount of acid catalyst. The problem with that is product inhibition: the amide that's formed in the reaction tends to vacuum up any Lewis acid around, so you've typically had to use that reagent in excess, which is not a lot of fun on scale.
By varying a number of conditions, they've found a new catalyst/solvent system that's quite a bit friendlier. I keep meaning to try some of these reactions out (they make some interesting molecular frameworks), and maybe this is my entry into them. But the general problem here is one that every working organic chemist has faced: reactions that, for whatever reason, stop partway through. In this situation, there's at least a reasonable hypothesis for why things grind to a halt, and there's always been a less-than-elegant way around it (dump in more Lewis acid).
I'm sure, though, that everyone out there at the bench has had reactions that just. . .stop, for reasons unknown, and can't be pushed forward by addition of more anything. I've always wondered what's going on in those situations (probably a lot of things, from case to case), and they're always a reminder of just how little we sometimes really understand about what's going on inside our reaction flasks. Aggregates or other supramolecular complexes? Solubility problems? Adsorption onto heterogeneous reactants? Getting a handle on these things isn't easy, and most people don't bother doing it, unless they're full-out process chemists in industry.
ChemBark has an interesting question here: who's the most respected and influential chemist, among chemists? He was taking nominations on Twitter, and has settled on Roald Hoffmann as his choice. Other strong contenders included Nocera, Corey, Whitesides, Sharpless, Kroto, Grubbs, Gray, Herschbach, Zare, and Stoddart. Anyone over here have names to add to the list? Note again that we're talking influence and fame inside the field, because if you go to "among the general public", you pretty much cut everyone out right there, unfortunately. . .
Ionic liquids (molten salts at relatively low temperatures) have been a big feature of the chemical literature for the last ten or fifteen years - enough of a feature to have attracted a few disparaging comments here, from me and from readers. There's a good article out now that talks about the early days of the field and how it grew, and it has some food for thought in it.
The initial reports in the field didn't get much attention (as is often the case). What seems to have made things take off was the possibility of replacing organic solvents with reusable, non-volatile, and (relatively) non-toxic alternatives. "Green chemistry" was (and to an extent still is) a magnet for funding, and it was the combination of this with ionic liquid (IL) work that made the field. But not all of this was helpful:
The link with green chemistry during the development of the IL field, propelled both fields forward, but at times the link was detrimental to both fields when overgeneralizations eroded confidence. ILs were originally considered as green since many of these liquid salts possess a negligible vapor pressure and might replace the use of volatile organic solvents known to result in airborne chemical contamination. The reported water stability and non-volatility led to the misconception that these salts were inherently safe and environmentally friendly. This was exacerbated by the many unsubstantiated claims that ILs were ‘green’ in introductions meant to provide the motivation for the study, even if the study itself had nothing to do with green chemistry. While it is true that the replacement of a volatile organic compound (VOC) might be preferred, proper knowledge of the chemistry of the ions must also be taken into account before classifying anything as green. Nonetheless, the statement “Ionic Liquids are green” was widely published (and can still be found in papers published today). Given the number and nature of the possible ions comprising ILs, these statements are similar to “Water is green, therefore all solvents are green.”
There were many misunderstandings at the chemical level as well:
However, just as the myriad of molecular solvents (or any compounds) can have dramatic differences in chemical, physical, and biological properties based on their chemical identity, so too can ILs. With the potential for 10^18 ion combinations, a single crystal structure of one compound is not a good representation of the chemistry of the entire class of salts which melt below 100 °C and would be analogous to considering carbon tetrachloride as a model system for all known molecular solvents.
The realization that hexafluorophosphate counterions can indeed generate HF under the right conditions helped bring a dose of reality back to the field, although (as the authors point out), not without a clueless backlash that decided, for a while, that all ionic liquids were therefore intrinsically toxic and corrosive. The impression one gets is that the field has settled down, and that its practitioners are more closely limited to people who know what they are talking about, rather than having quite so many who are doing it because it's hot and publishable. And that's a good thing.
The topic of making hit compounds, leads, and drug candidates that are less flat/aromatic has come up several times around here, and constantly around the industry. A reader sent along the following question: supposing that you wanted to obtain a decent collection of molecules with a greater-than-normal number of nonaromatic carbons and chiral centers, where would you find them?
Are there some suppliers that have done a better job than others of rising to the demand for this sort of thing? If anyone has nominations for good sources, or for places that are at least showing signs of moving in that direction, they'd be welcome. My guess is that fragment-sized molecules would be a good place to start, since they're (presumably) more synthetically accessible, and have advantages in the amount of chemical space that can be covered per number of compounds, but all comers will be considered. . .
Chiral catalyst reactions seem to show up on both lists when you talk about new reactions: the list of "Man, we sure do need more of those" and the "If I see one more paper on that I'm going to do something reckless" list.
I sympathize with the latter viewpoint, but the former is closer to reality. What we don't need are more lousy chiral catalyst papers, though, on that I think we can all agree. So I wanted to mention a good one, from Erick Carreira's group at the ETH. They're trying something that we're probably going to be seeing more of in the future: a "dual-catalyst" approach:
In a conceptually different construct aimed at the synthesis of compounds with a pair of stereogenic centers, two chiral catalysts employed concurrently could dictate the configuration of the stereocenters in the product. Ideally, these would operate independently and set both configurations in a single transition state with minimal matched/mismatched interactions. Herein, we report the realization of this concept in the development of a method for the stereodivergent dual-catalytic α-allylation of aldehydes.
Shown is a typical reaction scheme. They're doing iridium-catalysed allylation reactions, which are already known via the work of Hartwig and others, but with a chiral catalyst to activate the nucleophilic end of the reaction and a separate one for the electrophilic end. That lets you more or less dial in the stereocenters you want in the product. It looks like the allyl alcohol needs some sort of aryl group, although they can get it to work with a variety of those. The aldehyde component can vary more widely.
You'd expect a scheme like this to have some combinations that work great, but other mismatched ones that struggle a bit. But in this case the yields stay at 60 to 80%, and the ee values are >99% across the board as they switch things around, which is why we're reading this in Science rather than in, well, you can fill in the names of some other journals as well as I can. Making a quaternary chiral center next to a tertiary one in whatever configuration you want is not something you see every day.
I think that chiral multi-catalytic systems will be taking up even more journal pages than ever in the future. It really seems like a way to get things to perform, and there's certainly enough in the idea to keep a lot of people occupied for a long time. Those of us doing drug discovery should resist the urge to flip the pages too quickly, too, because if we really mean all that stuff about making more three-dimensional molecules, we're going to have to do better with chirality than "Run it down an SFC and throw half of it away".
If you're an iPad sort of chemist (one of Baran's customers?), you may well already know that app versions of ChemDraw and Chem3D came out yesterday for that platform. I haven't tried them out myself, not (yet) being a swipe-and-poke sort of guy, but at $10 for the ChemDraw app (and Chem3D for free), it could be a good way to get chemical structures going on your own tablet.
Andre the Chemist has a writeup on his experiences here. As an inorganic chemist, he's run into difficulties with text labels, but for good ol' organic structures, things should be working fine. I'd be interested in hearing hands-on reviews of the software in the comments: how does the touch-screen interface work out for drawing? Seems like it could be a good fit. . .
I see that Neil Withers is trying to start up a new discussion in that "Kudzu of Chemistry" comment thread. The main topic is what reactions and chemistry we see too much of, but he's wondering what we should see more of. It's a worthwhile question, but I wonder if it'll be hard to answer. Personally, I'd like to see more reactions that let me attach primary and secondary amines directly into unactivated alkyl CH bonds, but I'm not going to arrange my schedule around that waiting period.
So maybe we should stick with reactions (or reaction types) that have been reported, but don't seem to be used as much as they should. What are the unsung chemistries that should be more famous? What reactions have you seen that you can't figure out why no one's ever followed up on them? I'll try to add some of my own in the comments as the day goes on.
Chemistry, like any other human-run endeavor, goes through cycles and fads. At one point in the late 1970s, it seemed as if half the synthetic organic chemists in the world had made cis-jasmone. Later on, a good chunk of them switched to triquinane synthesis. More recently, ionic liquids were all over the literature for a while, and while it's not like they've disappeared, they're past their publishing peak (which might be a good thing for the field).
So what's the kudzu of chemistry these days? One of my colleagues swears that you can apparently get anything published these days that has to do with a BODIPY ligand, and looking at my RSS journal feeds, I don't think I have enough data to refute him. There are still an awful lot of nanostructure papers, but I think that it's a bit harder, compared to a few years ago, to just publish whatever you trip over in that field. The rows of glowing fluorescent vials might just have eased off a tiny bit (unless, of course, that's a BODIPY compound doing the fluorescing!) Any other nominations? What are we seeing way too much of?
For those who are into total synthesis of natural products, Arash Soheili has a Twitter account (Total_Synthesis) that keeps track of all the reports in the major journals. He's emailed me with a link to a searchable database of all these, which brings a lot of not-so-easily-collated information together into one place. Have a look! (Mostly, when I see these, I'm very glad that I'm not still doing them, but that's just me).
It's molecular imaging week! See Arr Oh and others have sent along this paper from Science, a really wonderful example of atomic-level work. (For those without journal access, Wired and PhysOrg have good summaries).
As that image shows, what this team has done is take a starting (poly) phenylacetylene compound and let it cyclize to a variety of products. And they can distinguish the resulting frameworks by direct imaging with an atomic force microscope (using a carbon monoxide molecule as the tip, as in this work), in what is surely the most dramatic example yet of this technique's application to small-molecule structure determination. (The first use I know of, from 2010, is here). The two main products are shown, but they pick up several others, including exotica like stable diradicals (compound 10 in the paper).
There are some important things to keep in mind here. For one, the only way to get a decent structure by this technique is if your molecules can lie flat. These are all sitting on the face of a silver crystal, but if a structure starts poking up, the contrast in the AFM data can be very hard to interpret. The authors of this study had this happen with their compound 9, which curls up from the surface and whose structure is unclear. Another thing to note is that the product distribution is surely altered by the AFM conditions: a molecule in solution will probably find different things to do with itself than one stuck face-on to a metal surface.
But these considerations aside, I find this to be a remarkable piece of work. I hope that some enterprising nanotechnologists will eventually make some sort of array version of the AFM, with multiple tips splayed out from each other, with each CO molecule feeding to a different channel. Such an AFM "hand" might be able to deconvolute more three-dimensional structures (and perhaps sense chirality directly?) Easy for me to propose - I don't have to get it to work!
Here's a question for the organic chemists in the crowd, and not just those in the drug industry, either. Over the last few years, there's been a lot of discussion about how drug company compound libraries have too many compounds with too many aromatic rings in them. Here are some examples of just the sort of thing I have in mind. As mentioned here recently, when you look at real day-to-day reactions from the drug labs, you sure do see an awful lot of metal-catalyzed couplings of aryl rings (and the rest of the time seems to be occupied with making amides to link more of them together).
Now, it's worth remembering that some of the studies on this sort of thing have been criticized for stacking the deck. But at the same time, it's undeniable that the proportion of "flat stuff" has been increasing over the years, to the point that several companies seem to be openly worried about the state of their screening collections.
So here's the question: if you're trying to break out of this, and go to more three-dimensional structures with more saturated rings, what are the best ways to do that? The Diels-Alder reaction has come up here as an example of the kind of transformation that doesn't get run so often in drug research, and it has to be noted that it provides you with instant 3-D character in the products. What we could really use are reactions that somehow annulate pyrrolidines or tetrahydropyrans onto other systems in one swoop, or reliably graft on spiro systems where there was a carbonyl, say.
I know that there are some reactions like these out there, but it would be worthwhile, I think, to hear what people think of when they think of making saturated heterocyclic ring systems. Forget the indoles, the quinolines, the pyrazines and the biphenyls: how do you break into the tetrahydropyrans, the homopiperazines, and the saturated 5,5 systems? Embrace the stereochemistry! (This impinges on the topic of natural-product-like scaffolds, too).
My own nomination, for what it's worth, is to use D-glucal as a starting material. If you hydrogenate that double bond, you now have a chiral tetrahydropyran triol, with differential reactivity, ready to be functionalized. Alternatively, you can go after that double bond to make new fused rings, without falling back into making sugars. My carbohydrate-based synthesis PhD work is showing here, but I'm not talking about embarking on a 27-step route to a natural product here (one of those per lifetime is enough, thanks). But I think the potential for library synthesis in this area is underappreciated.
There's a new paper out today in Nature on a very unusual way to determine the chirality of organic molecules. It uses an exotic effect of microwave spectroscopy, and I will immediately confess that the physics is (as of this morning, anyway) outside my range.
This is going to be one of those posts that comes across as gibberish to the non-chemists in the audience. Chirality seems to be a concept that confuses people pretty rapidly, even though the examples of right and left shoes or gloves (or right and left-handed screw threads) are familiar from everyday objects, and exactly the same principles apply to molecules. But the further you dig into the concept, the trickier it gets, and when you start dragging the physics of it in, you start shedding your audience quickly. Get a dozen chemists together and ask them how, exactly, chiral compounds rotate plane-polarized light and see how that goes. (I wouldn't distinguish myself by the clarity of my explanation, either).
But this paper is something else again. Here, see how you do:
Here we extend this class of approaches by carrying out nonlinear resonant phase-sensitive microwave spectroscopy of gas phase samples in the presence of an adiabatically switched non-resonant orthogonal electric field; we use this technique to map the enantiomer-dependent sign of an electric dipole Rabi frequency onto the phase of emitted microwave radiation.
The best I can do with this is that the two enantiomers have the same dipole moment, but that the electric field interacts with them in a manner that gives different signs. This shows up in the phase of the emitted microwaves, and (as long as the sample is cooled down, to cut back on the possible rotational states), it seems to give a very clear signal. This is a completely different way to determine chirality from the existing polarized-light ones, or the use of anomalous dispersion in X-ray data (although that one can be tricky).
Here's a rundown on this new paper from Chemistry World. My guess is that this is going to be one of those techniques that will be used rarely, but when it comes up it'll be because nothing else will work at all. I also wonder if, possibly, the effect might be noticed on molecules in interstellar space under the right conditions, giving us a read on chirality from a distance?
Put this one in the category of "reactions you probably wouldn't have thought of". There's a new paper in Organic Letters on cleaving a carbon-carbon triple bond, yielding the two halves as their own separate nitriles.
It seems to be a reasonable reaction, and someone may well find a use for it. I just enjoyed it because it was totally outside the way that I think about breaking and forming bonds. And it makes me wonder about the reverse: will someone find a way to take two nitriles and turn them into a linked alkyne? Formally, that gives off nitrogen, so you'd think that there would be some way to make it happen. . .
Speaking about open-source drug discovery (such as it is) and sharing of data sets (such as they are), I really should mention a significant example in this area: the GSK Published Kinase Inhibitor Set. (It was mentioned in the comments to this post). The company has made 367 compounds available to any academic investigator working in the kinase field, as long as they make their results publicly available (at ChEMBL, for example). The people at GSK doing this are David Drewry and William Zuercher, for the record - here's a recent paper from them and their co-workers on the compound set and its behavior in reporter-gene assays.
Why are they doing this? To seed discovery in the field. There's an awful lot of chemical biology to be done in the kinase field, far more than any one organization could take on, and the more sets of eyes (and cerebral cortices) that are on these problems, the better. So far, there have been about 80 collaborations, mostly in Europe and North America, all the way from broad high-content phenotypic screening to targeted efforts against rare tumor types.
The plan is to continue to firm up the collection, making more data available for each compound as work is done on them, and to add more compounds with different selectivity profiles and chemotypes. Now, the compounds so far are all things that have been published on by GSK in the past, obviating concerns about IP. There are, though, a multitude of other compounds in the literature from other companies, and you have to think that some of these would be useful additions to the set. How, though, does one get this to happen? That's the stage that things are in now. Beyond that, there's the possibility of some sort of open network to optimize entirely new probes and tools, but there's plenty that could be done even before getting to that stage.
So if you're in academia, and interested in kinase pathways, you absolutely need to take a look at this compound set. And for those of us in industry, we need to think about the benefits that we could get by helping to expand it, or by starting similar efforts of our own in other fields. The science is big enough for it. Any takers?
I wanted to mention a new reaction that's come out in a paper in Science. It's from the Betley lab at Harvard, and it's a new way to make densely substituted saturated nitrogen heterocycles (pyrrolidines, in particular).
You start from a four-carbon chain with an azide at one end, and you end up with a Boc-protected pyrrolidine, by direct activation/substitution of the CH bond at the other end of the chain. Longer chains give you mixtures of different ring sizes (4, 5, and 6), depending on where the catalyst feels like inserting the new bond. I'd like to see how many other functional groups this chemistry is compatible with (can you have another tertiary amine in there somewhere, or a hydroxy?) But we have a huge lack of carbon-hydrogen functionalization reactions in this business, and this is a welcome addition to a rather short list.
There was a paper last year from the Groves group at Princeton on fluorination of aliphatic CH bonds using a manganese porphyrin complex. These two papers are similar in my mind - they're modeling themselves on the CYP enzymes, using high-valent metals to accomplish things that normally we wouldn't think of being able to do easily. The more of this sort of thing, the better, as far as I'm concerned: new reactions will make us think of entirely new things.
Over at the Baran group's "Open Flask" blog, there's a post on the number of total synthesis papers that show up in the Journal of the American Chemical Society. I'm reproducing one of the figures below, the percentage of JACS papers with the phrase "total synthesis" in their title.
You can see that the heights of the early 1980s have never been reached again, and that post-2000 there has been a marked drought. As the post notes, JACS seems to have begun publishing many more papers in total around that time (anyone notice this or know anything about it?), and it appears that they certainly didn't fill the new pages with total synthesis. 2013, though, already looks like an outlier, and it's only May.
My own feelings about total synthesis are a matter of record, and have been for some time, if anyone cares. So I'm not that surprised to see the trend in this chart, if trend it is.
But that said, it would be worth running the same analysis on a few other likely journal titles. Has the absolute number of total synthesis papers gone down? Or have they merely migrated (except for the really exceptional ones) to the lower-impact journals? Do fewer papers put the phrase "Total synthesis of. . ." in their titles as compared to years ago? Those are a few of the confounding variables I can think of, and there are probably more. But I think, overall, that the statement "JACS doesn't publish nearly as much total synthesis as it used to" seems to be absolutely correct. Is this a good thing, a bad thing, or some of each?
I wanted to mention a project of Prof. Phil Baran of Scripps and his co-authors, Yoshihiro Ishihara and Ana Montero. It's called the Portable Chemist's Consultant, and it's available for iPads here. And here's a web-based look at its features. Baran was good enough to send me an evaluation copy, so I've had a chance to look through it in detail.
It's clearly based on his course in heterocyclic chemistry, and the chapters on pyridines and other heterocycles read like very well-thought-out review articles. But they also take advantage of the iPad's interface, in that specific transformations are shown in detail (with color and animation), and each of these can be expanded to a wider presentation and a thorough list of references (which are linked in their turn). The "Consumer Reports" style tables of recommended synthetic methods at the end of each section seem very useful, too, although they might need some notation for how much experimental support there is for each combination. For an overview of these topics, though, I doubt if anyone could do this better; I became a more literate heterocyclic chemist just by flipping through things. (Here's a video clip of some of these features in action).
So, do I have any reservations? A few. One of the bigger ones (which I'm told that Baran and his team are addressing) might sound trivial: I'm not sure about the title. As it stands, "The Portable Heterocyclic Chemistry Consultant" would be a much more accurate one, because there are large swaths of chemistry that fall within its current subtitle ("A Survival Guide for Discovery, Process, and Radiolabeling") which are not even touched on. For example, scale-up chemistry is mentioned on the cover, but in the current version of the book I didn't really see anything that was of particular relevance to actual scale-up work (things like the feasibility of solvent switching, heat transfer effects and reaction thermodynamics, run-to-run variability and potential purification methods, reagent sourcing, etc.) For medicinal chemists, I can say that the focus is completely on just the synthetic organic end of things; there's nothing on the behavior of any of the heterocyclic systems in vivo (pharmacokinetic trends, routes of metabolism, known toxicity problems, and so on). There's also nothing on spectral characterization, or any analytical chemistry of any sort, and I found no mention of radiolabeling (although I'd be glad to be corrected on that).
So for these reasons, it's a very academic work, but a very good one of its type. And Prof. Baran tells me that it's being revised constantly (at no charge to previous purchasers), and that these sorts of topics are in the works for later versions. If this book is indeed one of those gifts that keeps on giving, then it's a bargain as it stands, but (at the same time) I think that potential buyers should be aware of what they're getting in the current version.
My second reservation is technological. The book is only available on the iPad, and I'm not completely sure that this is a good idea. There's no way that it could be as useful in print, but a web-based interface would still be fine. (Managing ownership and sales is a lot easier in Apple's ecosystem, to be sure). And I'm not sure how many organic chemists own iPads yet. Baran himself seemed a bit surprised when he found out that I don't own one myself (I borrowed a colleague's to have a look). The most common reaction I've had when I tell people about the "PCC" is to say that they don't own an iPad, either, and to ask if there's any other way they can read it. Another problem is that the people who do have iPads certainly don't take them to the lab bench, which is where a work like this would be most useful. On the other hand, plain old computers are ubiquitous at the bench, thanks to electronic lab notebooks and the like.
All this said, though, if you do own an iPad and need to know about heterocyclic chemistry, you should have a look at this work immediately. If not, well, it's well worth keeping an eye on - these are early days.
There's another paper in the Nature Chemical Biology special issue that I wanted to mention, this one on "Translational Synthetic Chemistry". I can't say that I like the title, which seems to me to have a problem with reification (the process of trying to make something a thing which isn't necessarily a thing at all). I'm not so sure that there is a separate thing called "Translational Synthetic Chemistry", and I'm a bit worried that it might become a catch phrase all its own, which I think might lead to grief.
But that said, I still enjoyed the article. The authors are from H3 Biomedicine in Cambridge, which as I understand it is an offshoot of the Broad Institute and has several Schreiber-trained chemists on board. That means Diversity-Oriented Synthesis, of course, which is an area that I've expressed reservations about before. But the paper also discusses the use of natural product scaffolds as starting materials for new chemical libraries (a topic that's come up here and here), and the synthesis of diverse fragment collections beyond what we usually see. "Fragments versus DOS" has been set up before as a sort of cage match, but I don't think that has to be the case. And "Natural products versus DOS" has also been taken as a showdown, but I'm not so sure about that, either. These aren't either/or cases, and I don't think that the issues are illuminated by pretending that they are.
The authors end up calling for more new compound libraries, made by more new synthetic techniques, and assayed by newer and better high-throughput screens. Coming out against such recommendations makes a person feel as if they're standing up to make objections against motherhood and apple pies. And it's not that I think that these are bad ideas, but I just wonder if they're sufficient. Chemical space, as we were discussing the other day, is vast - crazily, incomprehensibly vast. Trying to blast off into it at random (which is what the pure DOS approaches have always seemed like to me) looks like something that a person could do for a century or two without seeing much return.
So if there are ways to increase the odds, I'm all for them. Natural-product-like molecules look like as good a way as any to do this, since they at least have the track record of evolution on their side. Things that are in roughly these kinds of chemical space, but which living organisms haven't gotten around to making, are still part of a wildly huge chemical space, but one that might have somewhat higher hit rates in screening. So Paul Hergenrother at Illinois might have the right idea when he uses natural products themselves as starting materials and makes new compound libraries from them.
So, who else is doing something like that? And what other methods do we have to make "natural-product-like" structures? Suggestions are welcome, and I'll assemble them and any ideas I have into another post.
Here's another look at the vast universe of things that no chemist has ever made. Estimates of the number of compounds with molecular weights under 500 run as high as ten to the sixtieth, which is an incomprehensibly huge number. We're not going to be able to put any sort of dent in that figure even if we convert the whole mass of the solar system into compound sample vials, so the problem remains: what's out there in that territory, and how do we best approach it?
Well, numbers of that magnitude are going to need some serious computational paring-down before we can take a crack at them, and that's what this latest paper tries to do. I'll refer interested readers to it (and to its supplementary information) for the details, but in brief, it takes a seed structure or two, adds atoms to them, goes through rounds of mutations and parings (according to filters that can be set for functional groups, properties, etc.) and then sends the whole set back around for more. This is going to rapidly explode in size, naturally, so at each stage the program picks a maximally diverse subset to go on with and discards the rest.
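That mutate-filter-prune cycle can be mocked up in a few lines. This is strictly a toy sketch: real implementations of this kind work on molecular graphs with chemistry-aware mutations and filters, while the atom strings, the "distance" measure, and all the parameters here are stand-ins I've invented for illustration.

```python
ATOMS = ["C", "N", "O", "F", "S"]

def mutate(mol):
    """Toy 'mutation': append or substitute one atom symbol (a stand-in
    for real graph edits such as bond insertion or ring formation)."""
    out = set()
    for a in ATOMS:
        out.add(mol + a)              # grow the chain
        if mol:
            out.add(mol[:-1] + a)     # swap the last atom
    return out

def passes_filters(mol, max_len=6):
    """Stand-in for the property/functional-group filters."""
    return len(mol) <= max_len

def distance(a, b):
    """Crude dissimilarity: difference in atom-count profiles."""
    return sum(abs(a.count(x) - b.count(x)) for x in ATOMS) + abs(len(a) - len(b))

def maxmin_subset(pool, k):
    """Greedy MaxMin picking: keep k mutually dissimilar members."""
    pool = sorted(pool)
    chosen = [pool[0]]
    while len(chosen) < k and len(chosen) < len(pool):
        best = max((m for m in pool if m not in chosen),
                   key=lambda m: min(distance(m, c) for c in chosen))
        chosen.append(best)
    return chosen

def enumerate_space(seeds, rounds=3, keep=10):
    current = list(seeds)
    for _ in range(rounds):
        candidates = set()
        for mol in current:
            candidates |= {m for m in mutate(mol) if passes_filters(m)}
        # Prune the explosion to a maximally diverse subset each round
        current = maxmin_subset(candidates, keep)
    return current

print(enumerate_space(["C"]))
```

The pruning step is what keeps the whole thing tractable; without it, each round multiplies the candidate pool and you're back to staring at 10^60.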
Here are some of the compounds that come out, just to give you the idea. And they're right; I never would have thought of some of these, and I hope some of them never cross my mind again. I presume that this set has been run with rather permissive structural filters, because there are things there that (1) I don't know how to make, (2) I'm not sure if anyone else knows how to make yet, and (3) I'm not sure how stable and isolable they'd be even if anyone did. My first reaction is that there sure are a lot of acetals, ketals, hemithioketals and so on in this set, but I'm sure that's an artifact of some sort. Any selection from a set of 10^60 compounds is an artifact of some sort.
So my next question is, what might people use such a program for? Ideas that they wouldn't have come up with, something to stir the imagination? Synthetic challenges to try for, to realize some of these compounds? The authors point out that neither nature nor man has ever really taken advantage of chemical diversity, not compared to what's possible. And that's true, but the possible numbers of compounds are still so terrifying that I wonder what we'll accomplish with drops in the bucket. (There's another paper that bears on this that I'll comment on later this week; this theme will return shortly!)
Now here's one that I didn't know about: a reader sends along word that the former clinical candidate GW501516 is enjoying some popularity on the black market among cyclists and other athletes.
I remember that compound well from the days when I did PPAR nuclear receptor research. It's the very model of a PPAR-delta ligand - GlaxoSmithKline had it in the clinic for some time, until it slowly disappeared from their roster. In 2007, the Evans lab at the Salk Institute published a paper suggesting that the compound increased endurance, and that sent it right into the athletic underworld. I have no idea if it does what its users want, but I do know that I wouldn't touch the stuff. The PPAR compounds have a very, very wide range of effects, and unraveling those proved to be very difficult indeed. Long-term effects of a compound like this one are unknown - all we know is that GSK dropped it from the clinic, and that could well have been for tox. Taking this stuff to gain some time in a bicycle race is sheer foolhardiness.
And since that last post was about sirtuins, here's a new paper in press at J. Med. Chem. from the Sirtris folks (or the Sirtris folks that were, depending on who's making the move down to PA). They report a number of potent new sirtuin inhibitor compounds, which certainly do look drug-like, and there are several X-ray structures of them bound to SIRT3. It seems that they're mostly SIRT1/2/3 pan-inhibitors; if they have selective compounds, they're not publishing on them yet.
I should also note, after this morning's post, that the activities of these compounds were characterized by a modified mass spec assay! I would expect sirtuin researchers in other labs to gladly take up some of these compounds for their own uses. . .
Note: I should make it clear that these are more compounds produced via the DNA-encoded library technology, and that they represent yet another chemotype from this work.
Over the years, I've probably had more hits on my "Sand Won't Save You This Time" post than on any other single one on the site. That details the fun you can have with chlorine trifluoride, and believe me, it continues (along with its neighbor, bromine trifluoride) to be on the "Things I Won't Work With" list. The only time I see either of them in the synthetic chemistry literature is when a paper by Shlomo Rozen pops up (for example), but despite his efforts on its behalf, I still won't touch the stuff.
And if anyone needs any more proof as to why, I present this video, made at some point by some French lunatics. You may observe the mild reactivity of this gentle substance as it encounters various common laboratory materials, and draw your own conclusions. We have Plexiglas, a rubber glove, clean leather, not-so-clean leather, a gas mask, a piece of wood, and a wet glove. Some of this, under ordinary circumstances, might be considered protective equipment. But not here.
The reaction discovery field continues to increase its throughput, on ever-smaller amounts of material. (That link has several previous discussions here embedded in it). The latest report uses laser-assisted mass spec to analyze aliquots (less than a microliter each) of 696 different reactions and controls, pulled directly from the 96-well plates with no purification. That took the MALDI-TOF machine about two hours, in case you're wondering - setting up the experiments definitely took a lot longer (!)
The key to getting this to work was having a pyrene moiety attached to the back end of the substrate(s) for reaction discovery. This serves as a mass spec label - it ionizes very efficiently under the laser conditions, and allows excellent signal/noise coming out of all the other reaction gunk that might be in there. You can monitor the disappearance of the starting material and/or the appearance of new products, as you wish. In this case, the test bed was an electron-rich alkyne starting material, exposed to a variety of reacting partners and various metal catalysts. The screen picked up two previously unknown annulations, which were then optimized in a second round of experiments.
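The hit-flagging logic behind that kind of screen is simple enough to sketch. Here's a toy version, assuming per-well spectra already reduced to the pyrene-tagged peak intensities; the m/z values, well data, and conversion threshold are all invented for illustration, not taken from the paper.

```python
# Each well's spectrum: {m/z: intensity}, restricted to pyrene-tagged ions
# (the tag makes those dominate the spectrum). All numbers are hypothetical.
SM_MZ = 430.2   # tagged alkyne starting material (made-up mass)

wells = {
    "A1": {430.2: 9500.0},                    # no reaction
    "A2": {430.2: 400.0, 588.3: 8100.0},      # SM consumed, new tagged product
    "A3": {430.2: 8800.0, 588.3: 300.0},      # trace product only
}

def hits(wells, sm_mz, min_conversion=0.5):
    """Flag wells where the tagged starting material is mostly consumed
    AND a new tagged ion carries the signal instead."""
    out = []
    for well, spec in wells.items():
        total = sum(spec.values())
        sm = spec.get(sm_mz, 0.0)
        conversion = 1.0 - sm / total if total else 0.0
        if conversion >= min_conversion and len(spec) > 1:
            out.append(well)
    return out

print(hits(wells, SM_MZ))   # → ['A2']
```

Requiring both starting-material disappearance and a new tagged peak is what keeps simple decomposition (material gone, nothing new appearing) from registering as a hit.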
I continue to think that this sort of work has the potential to remake synthetic chemistry. Whenever there's some potential for new reactions to be found (and metal-catalyzed systems are a prime example) these techniques will let us survey the landscape much more quickly. There's no reason to think that we've managed to find even a good fraction of the useful chemistry out there.
Looks like the long-running ChemBark blog is shutting down. Paul Bracher has a new academic position to get off the ground, a move to another part of the country, labs to set up, and grant applications to write, all of which are good enough reasons by themselves. I hope that once he gets settled into the new position that he returns to the chem-blogging world under a new banner - his site has been well worth reading over the years.
Update: I see Paul has come clean this afternoon in the comments section to his post. Congratulations to him on a very convincing April 1 job - I now, naturally, retract all the nice things I said about him (!)
I wrote here about DNA-barcoding of huge (massively, crazily huge) combichem libraries, a technology that apparently works, although one can think of a lot of reasons why it shouldn't. This is something that GlaxoSmithKline bought by acquiring Praecis some years ago, and there are others working in the same space.
For outsiders, the question has long been "What's come out of this work?" And there is now at least one answer, published in a place where one might not notice it: this paper in Prostaglandins and Other Lipid Mediators. It's not a journal whose contents I regularly scan. But this is a paper from GSK on a soluble epoxide hydrolase inhibitor, and therein one finds:
sEH inhibitors were identified by screening large libraries of drug-like molecules, each attached to a DNA “bar code”, utilizing DNA-encoded library technology developed by Praecis Pharmaceuticals, now part of GlaxoSmithKline. The initial hits were then synthesized off of DNA, and hit-to-lead chemistry was carried out to identify key features of the sEH pharmacophore. The lead series were then optimized for potency at the target, selectivity and developability parameters such as aqueous solubility and oral bioavailability, resulting in GSK2256294A. . .
That's the sum of the med-chem in the article, which certainly compresses things, and I hope that we see a more complete writeup at some point from a chemistry perspective. Looking at the structure, though, this is a triaminotriazine-derived compound (as in the earlier work linked to in the first paragraph), so yes, you apparently can get interesting leads that way. How different this compound is from the screening hit is a good question, but it's noteworthy that a diaminotriazine's worth of its heritage is still present. Perhaps we'll eventually see the results of the later-generation chemistry (non-triazine).
I've written several times about flow chemistry here, and a new paper in J. Med. Chem. prompts me to return to the subject. This, though, is the next stage in flow chemistry - more like flow med-chem:
Here, we report the application of a flow technology platform integrating the key elements of structure–activity relationship (SAR) generation to the discovery of novel Abl kinase inhibitors. The platform utilizes flow chemistry for rapid in-line synthesis, automated purification, and analysis coupled with bioassay. The combination of activity prediction using Random-Forest regression with chemical space sampling algorithms allows the construction of an activity model that refines itself after every iteration of synthesis and biological result.
Now, this is the point at which people start to get either excited or fearful. (I sometimes have trouble telling the difference, myself). We're talking about the entire early-stage optimization cycle here, and the vision is of someone topping up a bunch of solvent reservoirs, hitting a button, and leaving for the weekend in the expectation of finding a nanomolar compound waiting on Monday. I'll bet you could sell that to AstraZeneca for some serious cash, and to be fair, they're not the only ones who would bite, given a sufficiently impressive demo and slide deck.
But how close to this Lab of the Future does this work get? Digging into the paper, we have this:
Initially, this approach mirrors that of a traditional hit-to-lead program, namely, hit generation activities via, for example, high-throughput screening (HTS), other screening approaches, or prior art review. From this, the virtual chemical space of target molecules is constructed that defines the boundaries of an SAR heat map. An initial activity model is then built using data available from a screening campaign or the literature against the defined biological target. This model is used to decide which analogue is made during each iteration of synthesis and testing, and the model is updated after each individual compound assay to incorporate the new data. Typically the coupled design, synthesis, and assay times are 1–2 h per iteration.
Among the key things that already have to be in place, though, are reliable chemistry (fit to generate a wide range of structures) and some clue about where to start. Those are not givens, but they're certainly not impossible barriers, either. In this case, the team (three UK groups) is looking for BCR-Abl inhibitors, a perfectly reasonable test bed. A look through the literature suggested coupling hinge-binding motifs to DFG-loop binders through an acetylene linker, as in Ariad's ponatinib. This, while not a strategy that will earn you a big raise, is not one that's going to get you fired, either. Virtual screening around the structure, followed by eyeballing by real humans, narrowed down some possibilities for new structures. Further possibilities were suggested by looking at PDB structures of homologous binding sites and seeing what sorts of things bound to them.
So already, what we're looking at is less Automatic Lead Discovery than Automatic Patent Busting. But there's a place for that, too. Ten DFG pieces were synthesized, in Sonogashira-couplable form, and 27 hinge-binding motifs with alkynes on them were readied on the other end. Then they pressed the button and went home for the weekend. Well, not quite. They set things up to try two different optimization routines, once the compounds were synthesized, run through a column, and through the assay (all in flow). One will be familiar to anyone who's been in the drug industry for more than about five minutes, because it's called "Chase Potency". The other one, "Most Active Under Sampled", tries to even out the distributions of reactants by favoring the ones that haven't been used as often. (These strategies can also be mixed). In each case, the model was seeded with binding constants of literature structures, to get things going.
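For the curious, the two selection strategies are easy to mock up. This is a toy active-learning loop: a crude shared-fragment average stands in for the paper's Random-Forest model, the fragment names and activity surface are invented, and the "assay" is just a table lookup, so none of this is the actual platform - just the shape of the algorithm.

```python
import itertools, random

random.seed(42)

DFG = [f"D{i}" for i in range(5)]       # hypothetical DFG-pocket fragments
HINGE = [f"H{j}" for j in range(8)]     # hypothetical hinge binders

# Hidden "true" pIC50 surface the robot is trying to map (made up here;
# in the real platform this is the in-line biochemical assay).
truth = {(d, h): 5.0 + random.random() * 4
         for d, h in itertools.product(DFG, HINGE)}

measured = {}

def predict(pair):
    """Crude surrogate model (stand-in for Random-Forest regression):
    average activity of measured compounds sharing either fragment."""
    shared = [v for (d, h), v in measured.items()
              if d == pair[0] or h == pair[1]]
    return sum(shared) / len(shared) if shared else 7.0   # optimistic prior

def chase_potency(candidates):
    """Always make the compound the model likes best."""
    return max(candidates, key=predict)

def under_sampled(candidates):
    """Favor reactants used least often so far, breaking ties by potency."""
    def usage(pair):
        return sum(d == pair[0] for d, _ in measured) + \
               sum(h == pair[1] for _, h in measured)
    return min(candidates, key=lambda p: (usage(p), -predict(p)))

def run(strategy, iterations=20):
    measured.clear()
    for _ in range(iterations):
        candidates = [p for p in truth if p not in measured]
        pick = strategy(candidates)
        measured[pick] = truth[pick]   # "synthesize and assay" in flow
    return max(measured.values())

best = run(chase_potency)
print(f"best pIC50 after 20 loops: {best:.2f}")
```

The interesting design point is that the model retrains (here, trivially) after every single compound rather than per batch, which is exactly what the tight synthesis-assay loop makes possible.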
The first run, which took about 30 hours, used the "Under Sampled" algorithm to spit out 22 new compounds (there were six chemistry failures) and a corresponding SAR heat map. Another run was done with "Chase Potency" in place, generating 14 more compounds. That was followed by a combined-strategy run, which cranked out 28 more compounds (with 13 failures in synthesis). Overall, there were 90 loops through the process, producing 64 new products. The best of these were nanomolar or below.
But shouldn't they have been? The deck already has to be stacked to some degree for this technique to work at all in the present stage of development. Getting potent inhibitors from these sorts of starting points isn't impressive by itself. I think the main advantage to this is the time needed to generate the compound and the assay data. Having the synthesis, purification, and assay platform all right next to each other, with compound being pumped right from one to the other, is a much tighter loop than the usual drug discovery organization runs. The usual, if you haven't experienced it, is more like "Run the reaction. Work up the reaction. Run it through a column (or have the purification group run it through a column for you). Get your fractions. Evaporate them. Check the compound by LC/MS and NMR. Code it into the system and get it into a vial. Send it over to the assay folks for the weekly run. Wait a couple of days for the batch of data to be processed. Repeat."
The science-fictional extension of this is when we move to a wider variety of possible chemistries, and perhaps incorporate the modeling/docking into the loop as well, when it's trustworthy enough to do so. Now that would be something to see. You come back in a few days and find that the machine has unexpectedly veered off into photochemical 2+2 additions with a range of alkenes, because the Chase Potency module couldn't pass up a great cyclobutane hit that the modeling software predicted. And all while you were doing something else. And that something else, by this point, is. . .what, exactly? Food for thought.
I wanted to mention that Phil Baran's group at Scripps now has a blog, where group members are putting up posts on several topics, from recent syntheses to jelly beans. Well worth keeping an eye on. And I'd like to take the opportunity to welcome Baran and his group to the blogging world.
Which reminds me: the blogroll at left is (as usual) wildly overdue for updating. I've got a list of sites that I'll put up there, but I'd be glad to take recommendations, because it wouldn't surprise me if I've missed some, too. Please add some to the comments, and I promise to actually clean things up this week (!)
I wanted to point out what looks like the resolution of the Blog Syn story about IBX oxidations. See Arr Oh seems to have discovered the discrepancy that's been kicking the results around all over the place: water in the IBX itself. So it looks like this whole effort has ended up discovering something important that we didn't know about the reaction, and nailed down yet another variable. Congratulations!
While I'm on the subject of editorials, Takashi Tsukamoto of Johns Hopkins has one out in ACS Medicinal Chemistry Letters. Part of it is a follow-up to my own trumpet call in the journal last year (check the top of the charts here; the royalties are just flowing in like a river of gold, I can tell you). Tsukamoto is wondering, though, if we aren't exploring chemical space the way that we should:
One of the concerns is that the likelihood of identifying drug-like ligands for a given therapeutic target, the so-called “druggability” of the target, has been defined by these compounds, representing a small section of drug-like chemical space. Are aminergic G protein coupled receptors (GPCRs) actually more druggable than other types of targets? Or are we simply overconcentrating on the area of chemical space which contains compounds likely to hit aminergic GPCRs? Is it impossible to disrupt protein–protein interactions with a small molecule? Or do we keep missing the yet unexplored chemical space for protein–protein interaction modulators because we continue making compounds similar to those already synthesized?
. . .If penicillin-binding proteins are presented as new therapeutic targets (without the knowledge of penicillin) today, we would have a slim chance of discovering β-lactams through our current medicinal chemistry practices. Penicillin-binding proteins would be unanimously considered as undruggable targets. I sometimes wonder how many other potentially significant therapeutic targets have been labeled as undruggable just because the chemical space representing their ligands has never been explored. . .
Good questions. I (and others) have had similar thoughts. And I'm always glad to see people pushing into under-represented chemical space (macrocycles being a good example).
The problem is, chemical space is large, and time (and money) are short. Given the pressures that research has been under, it's no surprise that everyone has been reaching for whatever will generate the most compounds in the shortest time - which trend, Tsukamoto notes, makes the whole med-chem enterprise that much easier to outsource to places with cheaper labor. (After all, if there's not so much skill involved in cranking out amides and palladium couplings, why not?)
My advice in the earlier editorial about giving employers something they can't buy in China and India still holds, but (as Tsukamoto says), maybe one of those things could (or should) be "complicated chemistry that makes unusual structures". Here's a similar perspective from Derek Tan at Sloan-Kettering, also referenced by Tsukamoto. It's an appealing thought, that we can save medicinal chemistry by getting back to medicinal chemistry. It may even be true. Let's hope so.
Last year I mentioned the "good ol' Diels-Alder reaction", and talked about how it doesn't get used as much in drug discovery and industrial chemistry as one might think.
Now Stefan Abele from Actelion (in Switzerland) sends along this new paper, which will tell you pretty much all you need to know about the reaction's industrial side. The scarcity of D-A chemistry on scale that I'd noticed was no illusion (links below added by me):
According to a survey by Dugger et al. in 2005 of the type of reaction scaled in a research facility at Pfizer, and an analysis of the reactions used for the preparation of drug candidate molecules by Carey et al. in 2006, the DA reaction falls into the “miscellaneous” category that accounts for only 5 to 11 % of C-C bond-forming reactions performed under Good Manufacturing Practice. This observation mirrors the finding that C-C bond-forming reactions account for 11.5% of the entire reaction repertoire used by medicinal chemists in the pursuit of drug candidates. In this group, palladium-catalyzed reactions represent about 60% of the occurrences, while the “other” category, into which the DA reaction falls, represents only 1.8% of the total number of reactions. Careful examination of the top 200 pharmaceutical products by US retail sales in 2010 revealed that only one marketed drug, namely Buprenorphine, is produced industrially by using the DA reaction. Two other drugs were identified in the top 200 generic drugs of US retail sales in 2008: Calcitriol and its precursor Calciferol. Since 2002, Liu and co-workers have been compiling the new drugs introduced each year to the market. From 2002 to 2010, 174 new chemical entities were reported. Among them, two examples (Varenicline from Pfizer in 2006 and Peramivir by Shionogi in 2010) have been explicitly manufactured through a DA reaction. Similarly, and not surprisingly, our consultation with a large corpus of peers, colleagues, and experts in industry and academia worldwide revealed that the knowledge of such examples of the DA reaction run on a large scale is scarce, except perhaps in the field of fragrance chemistry.
But pretty much every reaction that has been run on large scale is in this review, so if you're leaning that way, this is the place to go. It doesn't shy away from the potential problems (chief among them being potential polymerization of one or both of the starting materials, which would really ruin your afternoon). But it's a powerful enough reaction that it really would seem to have more use than it gets.
The topic of compound purity has come up here before, as well it should. Every experienced medicinal chemist knows that when you have an interesting new hit compound, one of the first things to do is go back and make sure that it really is what it says on the label. Re-order it from the archive (in both powder and DMSO stock), re-order it from the vendor if it's a commercial compound, and run it through the LC/MS and the NMR. (And as one of those links above says, if you have any thought that metal reagents were used to make the compound, check for those, too - they can be transparent to LC and NMR).
Recently, we selected a random set of commercial fragment compounds for analysis, and closely examined those that failed to better understand the reasons behind it. The most common reason for QC failure was insolubility (47%), followed by degradation or impurities (39%), and then spectral mismatch (17%) [Note: Compounds can acquire multiple QC designations, hence total incidences > 100% ]. Less than 4% of all compounds assayed failed due to solvent peak overlap or lack of non-exchangeable protons, both requirements for NMR screening. Failure rates were as high as 33% per individual vendor, with an overall average of 16%. . .
I very much wish that they'd named the vendor with that 33% failure rate. But overall, they're suggesting that 10 to 15% of compounds will wipe out, regardless of source. Now, you may not feel that solubility is a key criterion for your work, because you're not doing NMR assays. (That's one that will only get worse as you move out of fragment-sized space, too). But that "degradation or impurities" category is still pretty significant. What are your estimates for commercial-crap-in-a-vial rates?
Here's a paper at the intersection of two useful areas: natural products and fragments. Dan Erlanson over at Practical Fragments has a good, detailed look at the work. The authors have tried to break down known natural product structures into fragment-sized pieces, and to cluster those together for guidance in assembling new screening libraries.
I'm sympathetic to that goal. I like fragment-based techniques, and I think that too many fragment libraries tend to be top-heavy with aromatic and heteroaromatic groups. Something with more polarity, more hydrogen-bonding character, and more three-dimensional structures would be useful, and natural products certainly fit that space. (Some of you may be familiar with a similar approach, the deCODE/Emerald "Fragments of Life", which Dan blogged about here). Synthetically, these fragments turn out to be a mixed bag, which is either a bug or a feature depending on your point of view (and what you have funding for or a mandate to pursue):
The natural-product-derived fragments are often far less complex structurally than the guiding natural products themselves. However, their synthesis will often still require considerable synthetic effort, and for widespread access to the full set of natural-product-derived fragments, the development of novel, efficient synthesis methodologies is required. However, the syntheses of natural-product-derived fragments will by no means have to meet the level of difficulty encountered in the multi-step synthesis of genuine natural products.
But take a look at Dan's post for the real downside:
Looking at the structures of some of the phosphatase inhibitors, however, I started to worry. One strong point of the paper is that it is very complete: the chemical structures of all 193 tested fragments are provided in the supplementary information. Unfortunately, the list contains some truly dreadful members; 17 of the worst are shown here, with the nasty bits shown in red. All of these are PAINS that will nonspecifically interfere with many different assays.
Boy, is he right about that, as you'll see when you take a look at the structures. They remind me of this beast, blogged about here back last fall. These structures should not be allowed into a fragment screening library; there are a lot of other things one could use instead, and their chances of leading only to heartbreak are just too high.
I linked recently to the latest reaction check at Blog Syn, benzylic oxidation by IBX. Now Prof. Baran (a co-author on the original paper, from his Nicolaou days) has written See Arr Oh with a detailed repeat of the experiment. He gets it to work, so I think it's fair to say that (1) the reaction is doable, but (2) it's not as easy to reproduce right out of the box as it might be.
I'd like to congratulate him for responding like this. The whole idea of publicly rechecking literature reactions is still fairly new, and (as the comments here have shown), there's a wide range of opinion on it. Getting a detailed, prompt, and civil response from the Baran lab is the best outcome, I think. After all, the point of a published procedure - the point of science - is reproducibility. The IBX reaction is now better known than it was, the details that could make it hard to run are now there for people who want to try it, and Prof. Baran's already high reputation as a scientist actually goes up a bit among the people who've been following this story.
Public reproducibility is an idea whose time, I think, has come, and Blog Syn is only one part of it. When you think about the increasingly well-known problems with reproducing big new biological discoveries, things that could lead to tens and hundreds of millions being spent on clinical research, reproducing organic chemistry reactions shouldn't be controversial at all. As they say to novelists, if you're afraid of bad reviews, there's only one solution: don't show anyone your book.
I wanted to mention that there are two more entries up on Blog Syn: one of them covering this paper on alkenylation of pyridines. (It's sort of like a Heck reaction, only you don't have to have an iodo or triflate on the pyridine; it just goes right into the CH bond). The short answer: the reaction works, but there are variables that seem crucial for its success that were under-reported in the original paper (and have been supplied, in part, by responses from the original author to the Blog Syn post). Anyone thinking about running this reaction definitely needs to be aware of this information.
The latest is a re-evaluation of an older paper on the use of IBX to (among many other things) oxidize arylmethane centers. It's notable for a couple of reasons: it's been claimed that this particular reaction completely fails across multiple substrates, and the reaction itself is from the Nicolaou lab (with Phil Baran as a co-author). Here's the current literature situation:
A day in the library can save you a week in the lab, so let’s examine this paper’s impact using SciFinder: it's been cited 179 times from 2002-2013. Using the “Get Reactions" tool, coupled with SciFinder’s convenient new “Group by Transformation” feature, we identified 54 reactions from the citing articles that can be classified as “Oxidations of Arylmethanes to Aldehydes/Ketones" (the original reaction's designation). Of these 54 reactions, only four (4) use the conditions reported in this paper, and all four of those come from one article: Binder, J. T.; Kirsch, S. F. Org. Lett. 2006, 8, 2151–2153, which describes IBX as “an excellent reagent for the selective oxidation to generate synthetically useful 5-formylpyrroles.” Kirsch's yields range from 53-79% for relatively complex substrates, not too shabby.
I'll send you over to Blog Syn for the further details, but let's just say that not many NMR peaks are being observed around 10 ppm. Phil Baran himself makes an appearance with more details about his recollection of the work (to his credit). Several issues remain, well, unresolved. (If any readers here have ever tried the reaction, or have experience with IBX in general, I'm sure comments would be very welcome over there as well).
Well, Cambridge is quiet today, as are many workplaces across the US. My plan is to go out for some good Chinese food and then spend the afternoon in here with my family; my kids haven't been there for at least a couple of years now.
And that brings up a thought that I know many chemists have had: how ill-served chemistry is by museums, science centers, and so on. Physics has a better time of it, or at least some parts of it. You can demo Newtonian mechanics with a lot of hands-on stuff, and there's plenty to do with light, electricity and magnetism and so on. (Quantum mechanics and particle physics, well, not so much). Biology at least can have some live creatures (large and small), and natural-history type exhibits, but its problems for public display really kick in when it shades over to biochemistry.
Chemistry, though, is a tough sell. Displays of the elements aren't bad, but many of them are silvery metals that can't be told apart by the naked eye. Crystals are always good, so perhaps we can claim some of the mineral displays for our own. But physical chemistry, organic chemistry, and analytical chemistry are difficult to show off. The time scales tend to be either too fast or too slow for human perception, or the changes aren't noticeable except with the help of instrumentation. There are still some good demonstrations, but many of these have to be run with freshly prepared materials, and by a single trained person. You can't just turn everyone loose with the stuff, and it's hard to come up with an automated, foolproof display that can run behind glass (and still attract anyone's interest). An interactive "add potassium to water to see what happens" display would be very popular, but rather hard to stage, both practically and from an insurance standpoint. You'd also run through a lot of potassium, come to think of it.
Another problem is that chemistry tends to deal with topics that people either don't see, or don't notice. Cooking food, for example, is sheer chemistry, but no one thinks of it like that - well, except Harold McGee and now the molecular gastronomy people. (Speaking of which, if any of you are crazy enough to order this from Amazon, I'll be very impressed indeed). Washing with soap or detergent, starting a fire, using paint or dye - there are plenty of everyday processes that illustrate chemistry, but they're so familiar that it's hard to use them as demonstrations. Products as various as distilled liquor, plastic containers, gasoline, and (of course) drugs of all sorts are pure examples of all sorts of chemical ideas, but again, it's hard to show them as such. They're either too well-known (think of Dustin Hoffman being advised to go into plastics), or too esoteric (medicinal chemistry, for most people).
So I started asking myself, what would I do if I had to put up some chemistry exhibits in a museum? How would I make them interesting? For med-chem, I'm imagining some big video display that starts out with a molecule and lets people choose from some changes they can make to it (oxidation, adding a fluorine, changing a carbon to nitrogen, etc.) The parts of the molecule where these changes are allowed could glow or something when an option is chosen; then, when you make the change, the structure snazzily shifts and the display tells you if you made a better drug, a worse one, something inactive, or a flat-out poison. You'd have to choose your options and structures carefully, but you might be able to come up with something.
But other things would just have to be seen and demonstrated, which is tricky. Seen on a screen, the Belousov-Zhabotinskii reaction just looks like a special effect, and a rather cheap one at that. Seeing it done by mixing up real chemicals and solutions right in front of you is much more impressive, but it's hard for me to think of a way to stage that often enough (and on a large enough scale) for people to see it that wouldn't cost too much to run (supplies, staff, flippin' insurance, etc.)
If you had to build out the chemistry hallway at the museum, then, what would you fill it with? Suggestions welcome.
My grad school work was chiral-pool synthesis: trying to make a complex natural product from carbohydrate starting materials. There was quite a bit of that around in those days, but you have to wonder about its place in the world by now. It's true that everyone likes to be able to buy their chiral centers, especially if they're from the naturally occurring series (nobody's keen to use L-glucose as their starting material if they can avoid it!) We certainly love to do that in the drug industry, and we can often get away with such syntheses, since our compounds generally don't have too many chiral centers.
But how developed are the multicenter methods? I certainly did not enjoy manipulating the multiple chiral centers on a sugar molecule, although you can (with care and attention) do some interesting gymnastics on that framework. But I think that asymmetric synthesis, especially catalytic variations, is more widely used today to build things up, rather than starting with a lot of chirality and working it around to what you want. The synthetic difficulties of that latter method often seem to get out of hand, and the methods aren't as general as the build-up-your-chirality ones.
Is my impression correct? And if so, is this the way things should be? My tendency is to say "yes" to both questions, but I'd like to see what the general opinions are.
I wrote here and here about Luca Turin's theory that our perception of smell is partly formed by sensing vibrational modes. (Turin is the author of an entertaining book on the subject of olfaction, The Secret of Scent, and also co-author of Perfumes: The A-Z Guide). His theory is still controversial, to say the least, but Turin and co-workers have a new paper out trying to shore it up.
A previous report from Vosshall and Keller at Rockefeller University had shown that human subjects were unable to distinguish acetophenone from its deuterated analog, which is not what you'd expect if we were sensing bond vibrations. Interestingly, this new paper confirms that result. (References to all these studies are in the original paper, which is open-access, being in PLoS ONE):
In principle, odorant isotopomers provide a possible test of shape vs. vibration mechanisms: replacing, for example, hydrogen with deuterium in an odorant leaves the ground-state conformation of the molecule unaltered while doubling atomic mass and so altering the frequency of all its vibrational modes to a greater or lesser extent. To first order, deuteration should therefore have little or no effect on the smell character of an odorant recognized by shape, whereas deuterated isotopomers should smell different if a vibrational mechanism is involved.
The experimental evidence on this question to date is contradictory. Drosophila appears able to recognize the presence of deuterium in odorant isotopomers by a vibrational mechanism. Partial deuteration of insect pheromones reduces electroantennogram response amplitudes. Fish have been reported to be able to distinguish isotopomers of glycine by smell. However, human trials using commercially available deuterated odorants [benzaldehyde and acetophenone] have yielded conflicting results, both positive and negative. Here, using GC-pure samples and a different experimental technique, we fully confirm Keller and Vosshall’s finding that humans, both naive and trained subjects, are unable to discriminate between acetophenone isotopomers.
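The physics behind the quoted deuteration argument is plain harmonic-oscillator stuff: vibrational frequency goes as the square root of the force constant over the reduced mass, and isotope substitution changes only the mass. A minimal sketch (the numbers are mine, for a generic aliphatic C-H stretch, not from the paper):

```python
import math

# Harmonic-oscillator estimate of how deuteration shifts a bond vibration:
# nu is proportional to sqrt(k/mu), the force constant k is unchanged by
# isotope substitution, so only the reduced mass mu moves.

def reduced_mass(m1: float, m2: float) -> float:
    """Reduced mass of a diatomic pair, in atomic mass units."""
    return m1 * m2 / (m1 + m2)

def isotope_shift(nu_cm: float, m1: float, m2_old: float, m2_new: float) -> float:
    """Frequency (cm^-1) after swapping one atom's mass, same force constant."""
    return nu_cm * math.sqrt(reduced_mass(m1, m2_old) / reduced_mass(m1, m2_new))

# A typical aliphatic C-H stretch near 2900 cm^-1 drops to roughly
# 2100-2200 cm^-1 on deuteration - well out of the C-H region, which is
# why isotopomers are such a clean test of a vibrational mechanism.
nu_CD = isotope_shift(2900.0, 12.0, 1.008, 2.014)
print(round(nu_CD))  # ~2130
```

The shape of the molecule, to first order, doesn't change at all, which is the whole point of the experiment.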
But the paper goes on to show that humans apparently are able to discriminate deuterated musk compounds from their H-analogs. Cyclopentadecanone, for example, was deuterated to >95% next to the carbonyl, and to 90% at the other methylenes. It and three other commercial musks were purified and checked versus their native forms:
After silica gel purification, aliquots of the deuterated musks were diluted in ethanol and their odor character assessed on smelling strips. The parent compounds have classic powerful musk odor characters, with secondary perfumer descriptors as follows: animalic [Exaltone], sweet [Exaltolide], oily [Astrotone] and sweet [Tonalid]. In all the deuterated musks, the musk character, though still present was much reduced, and a new character appeared, variously described by the trained evaluators [NR, DG, LT and Christina Koutsoudaki, Vioryl SA] as “burnt,” “roasted,” “toasted,” or “nutty.” Naive subjects most commonly described the additional common character as “burnt.”
They found, by stopping the deuterium exchange early, that this smell showed up even at around 50% D-exchange or less. For more rigorous tests, they went to a "smelling GC", and double-blinded the tests. This gave clean compound peaks, and they were able to diminish the need to keep a memory of the previous smell in mind by capturing the eluted peak vapors in Eppendorf tubes for side-by-side comparison.
This protocol showed that people are indeed unable to discriminate deuterated acetophenone from undeuterated - the Keller and Vosshall paper stands up, which will come as a relief to the author of the unusually celebratory editorial in Nature Neuroscience that accompanied it. To be sure, it also makes moot Turin's own objections to their work at the time, which questioned its experimental design and rigor.
But the deuterated musk experiments done this way are quite interesting. I'm going to just quote the entire section here:
All trials were performed with GC-pure catalytically deuterated [D fraction >90%] cyclopentadecanone [See Methods]. Each trial consisted of the assessment of 4 pairs of odorants, one deuterated and one sham-deuterated. The subjects were presented with a deuterated sample and their attention was drawn to the “burnt, nutty, roasted” character of the deuterated compound. Several other sample pairs were presented until the subjects were sure they could tell the difference between the two sample types.
The Eppendorf tubes were heated in a solid heating block to 50C. The samples were arranged in rows according to their type. The experimenter randomized the order of the tubes within the rows by means of two flips of a coin (first flip: first or second two positions, second flip: first or second spot within those). The rows were then mixed randomly by a further coin flip per d/H pair (heads: swap positions, tails leave in situ).
Subjects were first given a training pair and told which was deuterated and which sham-deuterated. The experimenter then left to watch the experiment through a window. Subjects were then presented with the unlabeled, position-randomized pairs of deuterated and sham-deuterated GC-pure samples and asked to say which was which.
The subject, wearing nitrile gloves to avoid contamination, smelled first one and then the other sample. Multiple sniffs at each sample were allowed. The subject was asked to identify the deuterated sample and to place it to one side. After four trials the tubes were placed under the UV light source and identified. The subject was not informed of the outcome. To avoid habituation, the subject then rested for 15 minutes before attempting the next trial.
The results are shown in table 2. Eleven subjects were used. Two subjects tired before reaching the desired number of 12 trials. Two were able to go beyond to 13 and 17 trials respectively. The binomial p values range between 0.109 [6/8 correct] to 7.62×10−6 [17/17 trials]. These are independent trials, and an aggregate probability for all trials [119/132 correct] can be calculated: it is equal to 5.9×10−23.
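Those figures look like exact binomial point probabilities under a fair-coin (50/50 guessing) null, and they're easy to reproduce in a few lines (the function name here is mine, not the paper's):

```python
from math import comb

# Probability of getting exactly this many trials right by pure guessing.
def binom_point(successes: int, trials: int, p: float = 0.5) -> float:
    """Binomial point probability under a p = 0.5 (coin-flip) null."""
    return comb(trials, successes) * p**successes * (1 - p)**(trials - successes)

print(binom_point(6, 8))      # 0.109375 - the quoted 0.109
print(binom_point(17, 17))    # 7.6294e-06 - the quoted 7.62e-6
print(binom_point(119, 132))  # ~5.9e-23 - the quoted aggregate
```

(A statistician would probably prefer the cumulative "this many or more correct" tail, but the point values are what reproduce the numbers quoted above.)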
As it happens, musks are at nearly the top of the molecular weight range for odorant compounds. The paper mentions a rule of thumb among fragrance chemists that compounds with more than 18 carbons rarely have any perceptible odor, even when heated (and different people's noses can top out even before that). Musks tend to smell quite similar even with rather different structures, which suggests that a small number of receptors are involved in their perception. Here's Turin's unified theory of musk:
We suggest therefore that a musk odor is achieved when three conditions are simultaneously fulfilled: First, the molecule is so large that only one or a very few receptors are activated. Second, one or more of these receptors detects vibrations in the 1380–1550 cm-1 range. Third, the molecule has intense bands in that region, caused either by a few nitro groups or, equivalently, many CH2 groups. A properly quantitative account of musk odor will require better understanding of the shape selectivity of the receptors at the upper end of the molecular weight scale, and of the selection rules of a biological IETS spectrometer to calculate the intensity of vibrational modes.
It's safe to say that this controversy is very much alive, no matter what the explanation might be. Leslie Vosshall of Rockefeller has already commented on this latest paper, wondering if compounds might be enzymatically altered in the nose (which would also be expected to show a large difference with deuterated compounds). I'll await the next round with interest!
We medicinal chemists talk a good game when it comes to the hydrophobic effect. It's the way that non-water-soluble molecules (or parts of molecules) like to associate with each other, right? Sure thing. And it works because of. . .well, van der Waals forces. Or displacement of water molecules from protein surfaces. Or entropic effects. Or all of those, plus some other stuff that's, um, complicated to explain. Something like that.
Here's a paper in Angewandte Chemie that really bears down on the topic. The authors study the binding of simple ligands to thermolysin, a well-worked-out system for which very high-resolution X-ray structures are available. And what they find is, well, that things really are complicated to explain:
In summary, there are no universally valid reasons why the hydrophobic effect should be predominantly “entropic” or “enthalpic”; small structural changes in the binding features of water molecules on the molecular level determine whether hydrophobic binding is enthalpically or entropically driven.
Admittedly, this study reaches the limits of experimental accuracy accomplishable in contemporary protein–ligand structural work. . .Surprising pairwise systematic changes in the thermodynamic data are experienced for complexes of related ligands, and they are convincingly well reflected by the structural properties. The present study unravels small but important details. Computational methods simulate molecular properties at the atomic level, and are usually determined by the summation of many small details. However, details such as those observed here are usually not regarded by these computational methods as relevant, simply because we are not fully aware of their importance for protein–ligand binding, structure–activity relationships, and rational drug design in general. . .
I think that there are a lot of things in this area of which we're not fully aware. There are many others that we treat as unified phenomena, because we've given them names that make us imagine that they are. The hydrophobic effect is definitely one of these - George Whitesides is right when he says that there are many of them. But when all of these effects, on closer inspection, break down into tiny, shifting, tricky arrays of conflicting components, can you blame us for simplifying?
Here's a recent paper in J. Med. Chem. on halogen bonding in medicinal chemistry. I find the topic interesting, because it's an effect that certainly appears to be real, but is rarely (if ever) exploited in any kind of systematic way.
Halogens, especially the lighter fluorine and chlorine, are widely used substituents in medicinal chemistry. Until recently, they were merely perceived as hydrophobic moieties and Lewis bases in accordance with their electronegativities. Much in contrast to this perception, compounds containing chlorine, bromine, or iodine can also form directed close contacts of the type R–X···Y–R′, where the halogen X acts as a Lewis acid and Y can be any electron donor moiety. . .
What seems to be happening is that the electron density around the halogen atom is not as smooth as most of us picture it. You'd imagine a solid cloud of electrons around the bromine atom of a bromoaromatic, but in reality, there seems to be a region of slight positive charge (the "sigma hole") out on the far end. (As a side effect, this gives you more of a circular stripe of negative charge as well). Both these effects have been observed experimentally.
Now, you're not going to see this with fluorine; that one is more like most of us picture it (and to be honest, fluorine's weird enough already). But as you get heavier, things become more pronounced. That gives me (and probably a lot of you) an uneasy feeling, because traditionally we've been leery of putting the heavier halogens into our molecules. "Too much weight and too much hydrophobicity for too little payback" has been the usual thinking, and often that's true. But it seems that these substituents can actually earn out their advance in some cases, and we should be ready to exploit those, because we need all the help we can get.
Interestingly, you can increase the effect by adding more fluorines to the haloaromatic, which emphasizes the sigma hole. So you have that option, or you can take a deep breath, close your eyes, and consider. . .iodos:
Interestingly, the introduction of two fluorines into a chlorobenzene scaffold makes the halogen bond strength comparable to that of unsubstituted bromobenzene, and 1,3-difluoro-5-bromobenzene and unsubstituted iodobenzene also have a comparable halogen bond strength. While bromo and chloro groups are widely employed substituents in current medicinal chemistry, iodo groups are often perceived as problematic. Substituting an iodoarene core by a substituted bromoarene scaffold might therefore be a feasible strategy to retain affinity by tuning the Br···LB (Lewis base) halogen bond to similar levels as the original I···LB halogen bond.
As someone who values ligand efficiency, the idea of putting in an iodine gives me the shivers. A fluoro-bromo combo doesn't seem much more attractive, although almost anything looks good compared to a single atom that adds 127 mass units at a single whack. But I might have to learn to love one someday.
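For anyone who wants numbers behind that shiver: the standard heavy-atom ligand efficiency metric barely notices an iodine, but a molecular-weight-based metric like the binding efficiency index takes it on the chin. The formulas below are the usual textbook ones; the compound numbers are invented for illustration, not from any real molecule.

```python
# Heavy-atom ligand efficiency: binding free energy per heavy atom,
# LE ~ 1.37 * pKd / N_heavy in kcal/mol (1.37 is RT*ln10 at ~298 K).
def ligand_efficiency(p_kd: float, heavy_atoms: int) -> float:
    return 1.37 * p_kd / heavy_atoms

# Binding efficiency index: potency per unit of molecular weight.
def bei(p_kd: float, mw_daltons: float) -> float:
    return p_kd / (mw_daltons / 1000.0)

# A hypothetical 350 Da, 25-heavy-atom ligand at pKd 7.0:
print(round(bei(7.0, 350), 1))  # 20.0
# Swapping an H for an I adds ~126 Da but only one heavy atom, so the
# weight-based metric drops hard even though the atom count barely moves:
print(round(bei(7.0, 476), 1))  # 14.7
```

So the iodine has to buy real extra potency (a few tenths of a log unit at least, on these toy numbers) just to break even on efficiency.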
The paper includes a number of examples of groups that seem to be capable of interacting with halogens, and some specific success stories from recent literature. It's probably worth thinking about these things similarly to the way we think about hydrogen bonds - valuable, but hard to obtain on purpose. They're both directional, and trying to pick up either one can cause more harm than good if you miss. But keep an eye out for something in your binding site that might like a bit of positive charge poking at it. Because I can bet that you never thought to address it with a bromine atom!
Update: in the spirit of scientific inquiry, I've just sent in an iodo intermediate from my current work for testing in the primary assay. It's not something I would have considered doing otherwise, but if anyone gives me any grief, I'll tell them that it's 2013 already and I'm following the latest trends in medicinal chemistry.
Here's something I've been following for the last couple of weeks in the chemical blogging world, and now it's up on its own site: "Blog Syn", an initiative of the well-known chemblogger See Arr Oh. The idea here is to take interesting new reactions that appear in the literature, and. . .well, see if they actually work. (A radical concept, I know, but stick with me here).
The first example is this recent paper in JACS, which shows an unusual iron-sulfur reaction that ends up generating a benz-azole directly from an active methyl group in one pot. There are now three repeats of the reaction, and the verdict (so far) is that it works, but not quite as well as hoped for. You probably have to be careful to exclude oxygen (the paper itself just says "under argon"), and the yield of the test reaction is not as high as reported. As you'll see, there are spectral data, sources of reagents, photos of experimental setups - everything you'd need to see how this reaction is actually being (re)run.
I like this idea very much, and I look forward to seeing it applied to new reactions as they appear (and I hope to contribute the occasional run myself, when possible). They're accepting nominations for the next reaction to test, so if you have something you've seen that you're wondering about, put it into the hopper.
If you haven't seen it yet, this video tour through the DayGlo company's facilities is quite a sight. For us chemists, be sure to check out things at about the 5:30 mark, where they head into the wet chemistry area. You'll see some of the most well-used batch reactors you can picture (their largest one was bought used in the early 1970s, to give you some idea). As the chemist giving the tour says, "This is not like the pharmaceutical industry. . ."
This looks like an interesting reaction; let's see what gets made of it. David Milstein's group at the Weizmann Institute in Israel report a new catalytic system to oxidize alcohols to carboxylic acids, with water as the oxygen donor (as shown from labeling experiments). Hydrogen gas bubbles out of the mixture. The catalyst is a ruthenium complex, and although the reaction is not especially fast (18 hour timescale), the turnover numbers seem to be good (0.2% catalyst loading). Interestingly, oxygen actually seems to hurt the catalyst; the system runs better under argon. One possible drawback is that the ruthenium catalyst can serve as a hydrogenation catalyst - alkenes are reduced, what with all the hydrogen around.
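The turnover arithmetic is simple enough to sketch: TON is just moles of product per mole of catalyst, so a 0.2 mol% loading at full conversion implies roughly 500 turnovers. (The function below is my illustration of that arithmetic; the only figure taken from the paper is the loading.)

```python
# Back-of-the-envelope turnover number: moles of product formed
# per mole of catalyst charged.
def turnover_number(conversion: float, catalyst_loading: float) -> float:
    """Both arguments as fractions, e.g. 0.95 conversion, 0.002 loading."""
    return conversion / catalyst_loading

print(turnover_number(1.0, 0.002))  # 500.0 at full conversion
print(turnover_number(0.9, 0.002))  # 450.0 at 90% conversion
```

Not dazzling by industrial-catalysis standards, but respectable for a first report of a new transformation.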
Getting rid of (most of) the metals and the high-valent reagents will be worth the trouble industrially, as will getting rid of the need for pressurized oxygen. As it is now, many carboxylic acid compounds are produced on scale from alkenes (hydroformylation followed by oxidation of the aldehydes with a catalyst, or carbonylation), from alkanes via nonselective oxidation in air, or from alcohols via carbonylation.
We're still a long way from ditching the current processes, but if this reaction is robust enough, it could open up some new industrial feedstock routes. One that I wonder about is replacing the current route to adipic acid, used in Nylon production - it's currently made through a rather foul nitric acid process - if there's enough hexanediol in the world. (Not sure if that's feasible, though - it looks like most of the hexanediol is made instead by reducing adipic acid! Makes you wonder if there's a potential biological route, as there is for butanediol. Edit - fixed this part; I dropped some carbons between my brain and the keyboard this morning.) Someone may also find a nice use for the hydrogen that's given off, and get some sort of two-for-one process. At the very least, this is a reminder of just how much more metal-catalyzed chemistry there is to be discovered. . .
Update: one of the paper's authors has dropped by the comments section, with interesting further details. . .
Here's an interesting challenge: over at Synthetic Remarks, there's a need for a couple of grams of 3,4-difluorothiophene. But you can't buy that much, and the literature has very little useful to say about how one would make it. So is there a practical route to the stuff (at least on paper) that's worth trying? Note that Dr. Freddy stipulates "No Sandmeyer crap, for heaven's sake", so no Balz-Schiemann chemistry, folks.
The prize? Any chemistry book worth up to $150 from Amazon, sent to your door. (Just think of the possibilities) So if any of you have any bright fluorination ideas, have a crack at it, and good luck!
This is a lesson that everyone should have learned many times before, but those colorful atoms are just so. . .colorful and everything. If anyone knows what element is supposed to be colored "silvery purple", See Arr Oh would like to hear from you.
Here's a funny-looking compound for you - ivorenolide A. Isolated from mahogany tree bark, it has an 18-membered ring with conjugated acetylenes in it. That makes the 3-D structure quite weird; it's nearly flat. And it has biological activity, too (immunosuppression, as measured by T-cell and B-cell proliferation assays in vitro). Got anything that looks like this in your compound libraries? Me neither.
You've probably seen the story that a substantial quantity (roughly fifty pounds!) of gold dust seems to have gone missing from Pfizer's labs in St. Louis. No report I've seen has any details, though, on just what Pfizer was doing with that much gold dust - the company isn't saying. I can tell you that I've never found a laboratory use for it myself, dang it all.
So let's speculate! Why would a drug company need gold dust on that scale? Buying it in that form makes you think that a large surface area might have been important, unless there was some gold refinery running Double Coupon Wednesday on the stuff. Making a proprietary catalyst? Starting material for functionalized gold nanoparticles? Solid support(s) for some biophysical assay? Classy replacement for Celite for those difficult filtrations? Your ideas are welcome in the comments. . .
Update: out of many good comments, my favorite so far is: "Knowing Pfizer, I'm guessing they were planning on turning it into lead."
Word comes that Fluorous is shutting down. The company had been trying for several years to make a go of it with its polyfluorinated materials, used for purification and reaction partitioning, but the commercial side of the business has apparently been struggling for a while. It's a tough market, and there hasn't, as far as I know, been what the software people would call a "killer app" for fluorous techniques - they're interested, often useful, but it's been hard to persuade enough people to take a crack at them.
The company is still taking orders for its remaining stock, and the link above will allow you to download their database of literature references for fluorous techniques, among other things. I wish the people involved the best, and I wish that things had worked out better.
Here's another next-generation X-ray crystallography paper, this time using a free electron laser X-ray source. That's powerful enough to cause very fast and significant radiation damage to any crystals you put in its way, so the team used a flow system, with a stream of small crystals of T. brucei cathepsin B enzyme being exposed in random orientations to very short pulses of extremely intense X-rays. (Here's an earlier paper where the same team used this technique to obtain a structure of the Photosystem I complex). Note that this was done at room temperature, instead of cryogenically. The other key feature is that the crystals were actually those formed inside Sf9 insect cells via baculovirus overexpression, not purified protein that was then crystallized in vitro.
Nearly 4 million of these snapshots were obtained, with almost 300,000 of them showing diffraction. 60% of these were used to refine the structure, which came out at 2.1 Angstroms, and clearly showed many useful features of the enzyme. (Like others in its class, it starts out inhibited by a propeptide, which is later cleaved - that's one of the things that makes it a challenge to get an X-ray structure by traditional means).
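The data-reduction arithmetic in this kind of serial crystallography is simple but worth making explicit; here's a sketch using the rounded counts from the paragraph above.

```python
# Rounded counts from the paper: ~4 million snapshots collected,
# ~300,000 showing diffraction, 60% of those used in the refinement.
snapshots = 4_000_000
diffracting = 300_000
used_fraction = 0.60

hit_rate = diffracting / snapshots        # fraction of pulses that hit a crystal
patterns_used = int(diffracting * used_fraction)

print(f"hit rate: {hit_rate:.1%}")        # 7.5%
print(f"patterns refined: {patterns_used}")  # 180000
```

A hit rate under 10% is part of why these experiments need millions of pulses in the first place.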
I'm always happy to see bizarre new techniques used to generate X-ray structures. Although I'm well aware of their limitations, such structures are still tremendous opportunities to learn about protein functions and how our small molecules interact with them. I wrote about the instrument used in these papers here, before it came on line, and it's good to see data coming out of it.
Via Chemjobber, we have here an excellent example of how much detail you have to get into if you're seriously making a drug for the market. When you have to account for every impurity, and come up with procedures that generate the same ones within the same tight limits every time, this is the sort of thing you have to pay attention to: how you dry your compound. And how long. And why. Because if you don't, huge amounts of money (time, lost revenue, regulatory trouble, lawsuits) are waiting. . .
We have a late entry in this year's "Least Soluble Molecule - Dosed In Vivo Division" award. Try feeding that into your cLogP program and see what it tells you about its polarity. (This would be a good ChemDraw challenge, too). What we're looking at, I'd say, is a sort of three-dimensional asphalt, decorated around its edges with festive scoops of lard.
The thing is, such structures are perfectly plausible building blocks for various sorts of nanotechnology. It would not, though, have occurred to me to feed any to a rodent. But that's what the authors of this new paper managed to do. The compound shown is wildly fluorescent (as well you might think), and the paper explores its possibilities as an imaging agent. The problem with many - well, most - fluorescent species is photobleaching. That's just the destruction of your glowing molecule by the light used to excite it, and it's a fact of life for almost all the commonly used fluorescent tags. Beat on them enough, and they'll stop emitting light for you.
But this beast is apparently more resistant to photobleaching. (I'll bet it's resistant to a lot of things). Its NMR spectrum is rather unusual - those two protons on the central triptycene show up at 8.26 and 8.91, for example. And in case you're wondering, the M+1 peak in the mass spec comes in at a good solid 2429 mass units, a region of the detector that I'm willing to bet most of us have never explored, or not willingly. The melting point is reported as ">300 C", which is sort of disappointing - I was hoping for something in the four figures.
The paper says, rather drily, that "To direct the biological application of our 3D nanographene, water solubilization is necessary", but that's no small feat. They ended up using Pluronic surfactant, which gave them 100nm particles of the stuff, and they tried these out on both cells and mice. The particles showed very low cytotoxicity (not a foregone conclusion by any means), and were actually internalized to some degree. Subcutaneous injection showed that the compound accumulated in several organs, especially the liver, which is just where you'd expect something like this to pile up. How long it would take to get out of the liver, though, is a good question.
The paper ends with the usual sort of language about using this as a platform for chemotherapy, etc., but I take that as the "insert technologically optimistic conclusion here" macro that a lot of people seem to have loaded into their word processing programs. The main reason this caught my eye is that this is quite possibly the least drug-like molecule I've ever seen actually dosed in an animal. When will we see its like again?
I've decided this year that I'll be posting some recommendations for science-themed gifts, since this is the season that people will be looking around for them. This article at Smithsonian has a look at the history of the good ol' chemistry set. As I mentioned in this old post, I had one as a boy, augmented by a number of extra reagents, some of which (potassium permanganate!) were in rather too high an oxidation state for a ten-year-old. I can't report that I did much in the way of systematic experiments with all my material, but I did have a good time with it. Once in a while some combination of reagents will remind me of the smell of those bottles, and I'm instantly transported back to the early 1970s, out in a corner of the shop building in back of our house. (Elemental sulfur is a component of that smell; the rest I'm not sure about).
The Smithsonian article mentions that Thames and Kosmos chemistry sets get good reviews from people who've seen them. So if you're in the market for a gift for the kids, that might be a line to try! The potassium permanganate I'll leave up to individual discretion. . .
As mentioned the other day, this will be a post for people to ask questions directly to Philip Skinner (SDBioBrit) of Perkin-Elmer/Cambridgesoft. He's doing technical support for ChemDraw, ChemDraw4Excel, E-Notebook, Inventory, Registration, Spotfire, Chem3D, etc., and will be monitoring the comments and posting there. Hope it helps some people out!
Note - he's out on the West Coast of the US, so allow the poor guy time to get up and get some coffee in him!
I don't know how many readers have been following this, but there's been some interesting work over the last few years in using streptavidin (a protein that's an old friend of chemical biologists everywhere) as a platform for new catalyst systems. This paper in Science (from groups at Basel and Colorado State) has some new results in the area, along with a good set of leading references. (One of the authors has also published an overview in Accounts of Chemical Research). Interestingly, this whole idea seems to trace back to a George Whitesides paper from back in 1978, if you can believe that.
(Strept)avidin has an extremely well-characterized binding site, and its very tight interaction with biotin has been used as a sort of molecular duct tape in more experiments than anyone can count. Whitesides realized back during the Carter administration that the site was large enough to accommodate a metal catalyst center, and this new paper is the latest in a string of refinements of that idea, this time using a rhodium-catalyzed C-H activation reaction.
A biotinylated version of the catalyst did indeed bind streptavidin, but this system showed very low activity. It's known, though, that the reaction needs a base to work, so the next step was to engineer a weakly basic residue nearby in the protein. A glutamate sped things up, and an aspartate even more (with the closely related asparagine showing up just as poorly as the original system, which suggests that the carboxylate really is doing the job). A lysine/glutamate double mutant gave even better results.
The authors then fine-tuned that system for enantioselectivity, mutating other residues nearby. Introducing aromatic groups increased both the yield and the selectivity, as it turned out, and the eventual winner was run across a range of substrates. These varied quite a bit, with some combinations showing very good yields and pretty impressive enantioselectivities for this reaction, which has never until now been performed asymmetrically, but others not performing as well.
And that's the promise (and the difficulty) of enzyme systems. Working on that scale, you're really bumping up against individual parts of your substrates on an atomic level, so results tend, as you push them, to bin into Wonderful and Terrible. An enzymatic reaction that delivers great results across a huge range of substrates is nearly a contradiction in terms; the great results come when everything fits just so. (Thus the Codexis-style enzyme optimization efforts). There's still a lot of brute force involved in this sort of work, which makes techniques to speed up the brutal parts very worthwhile. As this paper shows, there's still no substitute for Just Trying Things Out. The structure can give you valuable clues about where to do that empirical work (otherwise the possibilities are nearly endless), but at some point, you have to let the system tell you what's going on, rather than the other way around.
We'll start off with a little extraterrestrial chemistry. As many will have heard, there are all sorts of hints being dropped that the sample analyzing equipment on the Mars Curiosity rover has detected something very interesting. We'll have to wait until the first week of December to find out what it is, but my money is on polycyclic aromatic hydrocarbons or some other complex abiotic organics.
Here's a detailed look at the issue. The Martian surface has a pretty vigorous amount of perchlorate in it, which was not realized for a long time (and rather complicates the interpretation of some of the past experiments on it). But Curiosity's analytical suite was designed to deal with this, and my guess is that these techniques have worked and that organic material has been detected.
I would very much bet against any sort of strong signature of life-as-we-know-it, though. For one thing, finding that in a random sand dune would seem pretty unlikely. Actually, finding good traces anywhere in the top layer of Martian rock and dust seems unlikely (as opposed to deeper underground, where I'm willing to speculate freely on the possible existence/persistence of bacteria and such). And I'm not sure the Curiosity would be well equipped to discriminate abiotic versus biotic compounds, anyway.
But organic compounds in general, absolutely. This brings up an interestingly false idea that underlies a lot of casual thinking about Mars (and space in general). Many people have this mental picture of everywhere outside Earth being sort of like the surface of our moon. It leads to a false dichotomy: here we have temperate air, liquid water, life and the byproducts of life (oil and coal, for example). Out there is all cold barren rock directly exposed to vacuum and hard radiation. We associate "space" with clean, barren, surfaces and knife-edge shadows, whereas "down here" it's all wet and messy. Not so.
There's plenty of irradiated rock, true, but there's water all over the outer solar system, in huge amounts. And while what we see out there is frozen, it's a near-certainty that there are massive oceans of the liquid stuff down under the various crusts of the larger outer-planet moons. All those alien-invasion movies, the ones with the extraterrestrials after our planet's water, are fun but ridiculous examples of that false dichotomy in action. There's plenty of organic chemistry, too - I've written before about how the colors of Jupiter's clouds remind me of reaction byproducts, and it's no coincidence that they do. The gas giant planets are absolutely full of organic chemicals of all varieties, and they're getting heated, pressurized, mixed, irradiated, and zapped by huge lightning storms all the hours of their days. What isn't in there?
Everything came that way. The solar system has plenty of hydrocarbons, plenty of small carbohydrates, and plenty of amines and other nitrogen-containing compounds in it. The carbonaceous chondrites are physical evidence that's fallen to Earth - some of these have clearly never been heated since their formation (since they're full of water and volatile organics), so the universe would seem to be awash in small-molecule gorp. There's another false dichotomy, that the materials for life are very rare and precious and only found down here on Earth. But they're everywhere.
Via a reader, here's an excellent YouTube video for those of you who use ChemDraw. I've been using the software since it came out, and there are several useful tricks here that I didn't know were even in the software. Did you know that you could give your common structures nicknames, so that the program would immediately draw them when you typed in the name? Or how to use the "Sprout" tool for drawing bonds without going to the bond-drawing tool? There's also a detailed look at customizing hotkeys, which for a heavy ChemDraw user will make you look like you have magic powers. Well worth a look. Update: see the comments for more if you're into this sort of thing!
I'd still like to see how quickly all these would allow you to draw something like this (well, other than giving it a nickname - I'd suggest "Jabba" or "Chemzilla" - and having it appear instantly). Of course, those of us old enough to remember the pre-ChemDraw (or any-other-draw) days will have a different perspective on the whole field. I remember the first time I saw the program being used, which would have been 1986, not an awful long time after it came out (see the timeline of computers in chemistry here). Like every other practicing organic chemist, as soon as I saw the program I knew that I had to have it. It was, as they say, a "killer app", and ChemDraw sold Macs, albeit on a smaller scale than VisiCalc sold Apple IIs. But it's hard to get across how those programs felt, unless you've actually rubbed Helvetica capital letters from a transfer sheet into an ink-drawn chair-conformation ring to make a drawing of a carbohydrate, or had to go back and manually erase (and write in) half a column of figures because you had to recalculate them. It feels like, instead of hitting "Print", being given instead a slab of hardwood and some sharp tools with which to start carving out a block for inking. Or instead of hitting "Send", having someone bring you a horse.
Here's a paper that I missed in Organic Process Research and Development earlier this year, extolling the virtues of sulfolane as a high-temperature polar solvent. I have to say, I've never used it, although I hear of it being used once in a while, mainly by people who are really having to crank the temperature on some poor reaction.
The only bad thing I've heard about it is its difficulty of removal. The high-boiling polar aprotic solvents all have this problem, of course (DMSO is no treat to get out of your sample sometimes, either, although it's so water-soluble that you always have extraction on your side). But sulfolane is higher-boiling than all the rest (287C!), and it also freezes at about 28C, which could be a problem, too. (The paper notes that small amounts of water lower the freezing temperature substantially, and that 97/3 sulfolane/water is an article of commerce itself, probably for that reason). It has an unusual advantage, though, from a safety standpoint: it stands out from all the other polar aprotics as having remarkably poor skin penetration (as contrasted very much with DMSO, for example). It's more toxic than the others, but the poor skin penetration makes up for that, as long as you're not ingesting it some other way, which is Not Advised.
The paper gives a number of examples where this solvent proved to be just the thing, so I'll have to keep it in mind. Anyone out there care to share any hands-on experiences?
For those wanting a timeline of the whole hexacyclinol business, with links to the articles, blogs, and commentary that's surrounded it, allow me to recommend Carmen Drahl's "History of the Hexacyclinol Hoo-Hah". (And no, the whole thing is not written in alliteration; for that, you'll be wanting this).
The retraction has been agreed due to lack of sufficient Supporting Information. In particular, the lack of experimental procedures and characterization data for the synthetic intermediates as well as copies of salient NMR spectra prevents validation of the synthetic claims. The author acknowledges this shortcoming and its potential impact on the community.
Potential? After six years? There were people taking their first undergraduate organic course when this controversy hit who are now thinking about how to start tying together their PhD dissertations. It seems that Angewandte Chemie is very loath to go the full-retraction route (there haven't been many), but that retraction notice doesn't bring up anything that wasn't apparent after the first ten minutes of reading the paper.
Here's a general organic chemistry question for the crowd, inspired by a recent discussion among colleagues. We were whiteboarding around some structures, and the statement was made that "By this time in the history of organic chemistry, unknown heterocycles are probably unknown for a very good reason". So, true or false? Are the rings that we haven't made yet mostly unmade because they're very hard (or impossible), or mostly because no one's ever cared about them (or realized that they'd made them at all)?
Note that this problem was the subject of some thorough theme-and-variations work a few years ago. That paper would suggest that as many as 90% of the unknown heterocycles are simply not feasible to make, but that still leaves you with three thousand or so that are. So the answer to the question above might turn out to be "Both at the same time. . ."
We're getting closer to real-time X-ray structures of protein function, and I think I speak for a lot of chemists and biologists when I say that this has been a longstanding dream. X-ray structures, when they work well, can give you atomic-level structural data, but they've been limited to static time scales. In the old, old days, structures of small molecules were a lot of work, and the structure of a protein took years of hard labor and was obvious Nobel Prize material. As time went on, brighter X-ray sources and much better detectors sped things up (since a lot of the X-rays deflected from a large compound are of very low intensity), and computing power came along to crunch through the piles of data thus generated. These days, X-ray structures are generated for systems of huge complexity and importance. Working at that level is no stroll through the garden, but more tractable protein structures are generated almost routinely (although growing good protein crystals is still something of a dark art, and is accomplished through what can accurately be called enlightened brute force).
But even with synchrotron X-ray sources blasting your crystals, you're still getting a static picture. And proteins are not static objects; the whole point of them is how they move (and for enzymes, how they get other molecules to move in their active sites). I've heard Barry Sharpless quoted to the effect that understanding an enzyme by studying its X-ray structures is like trying to get to know a person by visiting their corpse. I haven't heard him say that (although it sounds like him!), but whoever said it was correct.
Comes now this paper in PNAS, a multinational effort with the latest on the attempts to change that situation. The team is looking at photoactive yellow protein (PYP), a blue-light receptor protein from a purple sulfur bacterium. Those guys vigorously swim away from blue light, which they find harmful, and this seems to be the receptor that alerts them to its presence. And the inner workings of the protein are known, to some extent. There's a p-coumaric acid in there, bound to a Cys residue, and when blue light hits it, the double bond switches from trans to cis. The resulting conformational change is the signaling event.
But while knowing things at that level is fine (and took no small amount of work), there are still a lot of questions left unanswered. The actual isomerization is a single-photon event and happens in a picosecond or two. But the protein changes that happen after that, well, those are a mess. A lot of work has gone into trying to unravel what moves where, and when, and how that translates into a cellular signal. And although this is a mere purple sulfur bacterium (What's so mere? They've been on this planet a lot longer than we have), these questions are exactly the ones that get asked about protein conformational signaling all through living systems. The rods and cones in your eyes are doing something very similar as you read this blog post, as are the neurotransmitter receptors in your optic nerves, and so on.
This technique, variations of which have been coming on for some years now, uses multiple wavelengths of X-rays simultaneously, and scans them across large protein crystals. Adjusting the timing of the X-ray pulse compared to the light pulse that sets off the protein motion gives you time-resolved spectra - that is, if you have extremely good equipment, world-class technique, and vast amounts of patience. (For one thing, this has to be done over and over again from many different angles).
And here's what's happening: first off, the cis structure is quite weird. The carbonyl is 90 degrees out of the plane, making (among other things) a very transient hydrogen bond with a backbone nitrogen. Several dihedral angles have to be distorted to accommodate this, and it's a testament to the weirdness of protein active sites that it exists at all. It then twangs back to a planar conformation, but at the cost of breaking another hydrogen bond back at the phenolate end of things. That leaves another kind of strain in the system, which is relieved by a shift to yet another intermediate structure through a dihedral rotation, and that one in turn goes through a truly messy transition to a blue-shifted intermediate. That involves four hydrogen bonds and a 180-degree rotation in a dihedral angle, and seems to be the weak link in the whole process - about half the transitions fail and flop back to the ground state at that point. That also lets a crucial water molecule into the mix, which sets up the transition to the actual signaling state of the protein.
If you want more details, the paper is open-access, and includes movie files of these transitions and much more detail on what's going on. What we're seeing is light energy being converted (and channeled) into structural strain energy. I find this sort of thing fascinating, and I hope that the technique can be extended in the way the authors describe:
The time-resolved methodology developed for this study of PYP is, in principle, applicable to any other crystallizable protein whose function can be directly or indirectly triggered with a pulse of light. Indeed, it may prove possible to extend this capability to the study of enzymes, and literally watch an enzyme as it functions in real time with near-atomic spatial resolution. By capturing the structure and temporal evolution of key reaction intermediates, picosecond time-resolved Laue crystallography can provide an unprecedented view into the relations between protein structure, dynamics, and function. Such detailed information is crucial to properly assess the validity of theoretical and computational approaches in biophysics. By combining incisive experiments and theory, we move closer to resolving reaction pathways that are at the heart of biological functions.
Speed the day. That's the sort of thing we chemists need to really understand what's going on at the molecular level, and to start making our own enzymes to do things that Nature never dreamed of.
That title should bring in the hits. But don't get your hopes up! This is medicinal chemistry, after all.
"Can't you just put the group in your molecule that does such-and-such?" Medicinal chemists sometimes hear variations of that question from people outside of chemistry - hopeful sorts who believe that we might have some effective and instantly applicable techniques for fixing selectivity, brain penetration, toxicity, and all those other properties we're always trying to align.
Mostly, though, we just have general guidelines - not so big, not so greasy (maybe not so polar, either, depending on what you're after), and avoid a few of the weirder functional groups. After that, it's art and science and hard work. A recent J. Med. Chem. paper illustrates just that point - the authors are looking at the phenomenon of molecular promiscuity. That shows up sometimes when one compound is reasonably selective, but a seemingly closely related one hits several other targets. Is there any way to predict this sort of thing?
"Probably not", is the answer. The authors looked at a range of matched molecular pairs (MMPs), structures that were mostly identical but varied only in one region. Their data set is list of compounds in this paper from the Broad Institute, which I blogged about here. There are over 15,000 compounds from three sources - vendors, natural product collections, and Schreiber-style diversity-oriented synthesis. The MMPs are things like chloro-for-methoxy on an aryl ring, or thiophene-for-pyridyl with other substituents the same. That is, they're just the sort of combinations that show up when medicinal chemists work out a series of analogs.
The Broad data set yielded 30954 matched pairs, involving over 8000 compounds and over seven thousand different transformations. Comparing these compounds' reported selectivity across 100 different targets (also in the original paper) showed that most of them behaved "normally" - over half were active against the same targets that their partners were active against. But at the other end of the scale, 829 compounds showed different activity over at least ten targets, and 126 of those differed in activity by fifty targets or more. 33 of them differed by over ninety targets! So there really are some sudden changes out there waiting to be tripped over; they're not frequent, but they're dramatic.
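To make the bookkeeping concrete, here's a toy sketch of how the activity difference between one matched pair might be counted. The compound names and target sets are invented; only the counting (targets hit by one partner but not the other) reflects the kind of analysis described above.

```python
# Toy matched molecular pair: same scaffold, chloro vs. methoxy substituent.
# The target-activity profiles below are invented illustrative data.
activity = {
    "scaffold_Cl":  {"t1", "t2"},
    "scaffold_OMe": {"t1", "t2", "t3", "t4", "t5", "t6",
                     "t7", "t8", "t9", "t10", "t11", "t12"},
}

def promiscuity_difference(c1: str, c2: str) -> int:
    """Count targets hit by exactly one member of the pair (symmetric difference)."""
    return len(activity[c1] ^ activity[c2])

diff = promiscuity_difference("scaffold_Cl", "scaffold_OMe")
print(diff)  # 10 differing targets - this pair would clear the >=10 threshold
```

Run over all ~31,000 pairs, a tally like this is what sorts the well-behaved majority from the 829 pairs with ten-plus differing targets.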
How about correlations between these "promiscuity cliff" compounds and physical properties, such as molecular weight, logP, donor/acceptor count, and so on? I'd have guessed that a change to higher logP would have accompanied this sort of thing over a broad data set, but the matched pairs don't really show that (nor a shift in molecular weight). On the other hand, most of the highly promiscuous compounds are in the high cLogP range, which is reassuring from the standpoint of Received Med-Chem Wisdom. There are still plenty of selective high-logP compounds, but the ones that hit dozens of targets are almost invariably logP > 6.
Structurally, though, no particular substructure (or transformation of substructures) was found to be associated with sudden onset of promiscuity, so to this approximation, there's no actionable "avoid sticking this thing on" rule to be drawn. (Note that this does not, to me at least, say that there are no such things as frequent-hitting structures - we're talking about changes within some larger structure, not the hits you'd get when screening 500 small rhodanine phenols or the like). In fact, I don't think the Broad data set even included many functional groups of that sort to start with.
On the basis of the data available to us, it is not possible to conclude with certainty to what extent highly promiscuous compounds engage in specific and/or nonspecific interactions with targets. It is of course unlikely that a compound might form specific interactions with 90 or more diverse targets, even if the interactions were clearly detectable under the given experimental conditions. . .
. . .it has remained largely unclear from a medicinal chemistry perspective thus far whether certain molecular frameworks carry an intrinsic likelihood of promiscuity and/or might have frequent hitter character. After all, promiscuity is determined for compounds, not their frameworks. Importantly, the findings presented herein do not promote a framework-centric view of promiscuity. Thus, for the evaluation and prioritization of compound series for medicinal chemistry, frameworks should not primarily be considered as an intrinsic source of promiscuity and potential lack of compound specificity. Rather, we demonstrate that small chemical modifications can trigger large-magnitude promiscuity effects. Importantly, these effects depend on the specific structural environment in which these modifications occur. On the basis of our analysis, substitutions that induce promiscuity in any structural environment were not identified. Thus, in medicinal chemistry, it is important to evaluate promiscuity for individual compounds in series that are preferred from an SAR perspective; observed specificity of certain analogs within a series does not guarantee that others are not highly promiscuous.
Point taken. I continue to think, though, that some structures should trigger those evaluations with more urgency than others, although it's important never to take anything for granted with molecules you really care about.
That's the word-for-word title of a provocative article by Rolf Carlson and Tomas Hudlicky in Helvetica Chimica Acta. That journal's usually not quite this exciting, but it is proud of its reputation for compound characterization and experimental accuracy. That probably helped this manuscript find a home there, where it's part of a Festschrift issue in honor of Dieter Seebach's 75th birthday.
The authors don't hold back much (and Hudlicky has not been shy about these issues, either, as some readers will know). So, for the three categories of malfeasance described in the title, the first (hype) includes the overblown titling of many papers:
As long as the foolish use of various metrics continues there is little hope of return to integrity. Young scientists entering academia and competing for resources and recognition are easily infected with the mantra of importance of
publishing in 'high-impact journals' and, therefore, strive to make their work as noticeable as possible by employing excess hype.
It is the reader, not the author, of papers describing synthetic method who should evaluate its merits. Therefore, self-promoting words like 'novel', 'new', 'efficient', 'simple', 'high-yielding', 'versatile', 'optimum' should not be used in the title of the paper if such qualities are not covered by the actual content of the paper.
It also includes the inflation of reaction yields (see that link in the second paragraph above for more on that topic). This is another one that's going to be hard to fix:
Unfortunately, the community has chosen and continues to choose the yield values in submitted manuscripts as a measure of overall quality and/or utility of the report. This, of course, encourages the 'adjustment' in the values in order to avoid critique. An additional problem in the reported values is the fact that synthesis is performed on small scales, thanks to advances in NMR and other techniques available for structure determination. On milligram scales it is extremely difficult to accurately determine weight and content of a sample, given the equipment available in typical academic laboratory.
The second category, malpractice, is sloppy work, but not outright fraud:
Malpractice, as explained above, is usually not deliberate and derives primarily from ignorance or professional incompetence. The most frequent cases involve improper experimental protocols, improper methods used in characterization of compounds, and the lack of correct citations to previous work.
For example, the authors point out that very, very rarely are any new synthetic methods given a proper optimization. One-variable-at-a-time changes are worthwhile, but they're not sufficient to explore a reaction manifold, not when these changes can interact with each other. As process chemists in industry know, the only way to explore such landscapes is with techniques such as Design of Experiments (DoE), which try to find out what factors in a multivariate system produce the greatest change in results. Here's an example; the process chemistry literature furnishes many more.
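For readers who haven't run one, the arithmetic behind a minimal DoE screen is simple enough to sketch. Here's a two-factor, two-level full factorial with invented yields (the factors, levels, and numbers are all hypothetical, purely for illustration); the point is that the interaction term, which one-variable-at-a-time experiments can never see, falls right out of four runs:

```python
# Minimal sketch of a 2^2 full factorial design (Design of Experiments).
# Hypothetical reaction screen: factor A = temperature, factor B = catalyst
# loading, each coded at -1 (low) / +1 (high). Yields are invented.
yields = {
    (-1, -1): 42.0,
    (+1, -1): 55.0,
    (-1, +1): 48.0,
    (+1, +1): 81.0,
}

def factorial_effects(y):
    """Main effects of A and B, plus the A*B interaction, from coded runs."""
    half = len(y) / 2
    eff_a = sum(a * v for (a, b), v in y.items()) / half
    eff_b = sum(b * v for (a, b), v in y.items()) / half
    eff_ab = sum(a * b * v for (a, b), v in y.items()) / half
    return eff_a, eff_b, eff_ab

ea, eb, eab = factorial_effects(yields)
print(ea, eb, eab)  # 23.0 16.0 10.0
```

Note that varying temperature alone at low catalyst loading would suggest a temperature effect of only 13 points, while the design reveals a 10-point A*B interaction: the two factors together do something neither does alone, which is exactly what one-at-a-time optimization misses.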
And finally, you have outright scientific misconduct - fraud, poaching of ideas from grant applications and the like, plagiarism in publications, etc. It's hard to get a handle on these - they seem to be increasing, but the techniques to find and expose them are also getting better. Over time, though, these techniques might just have the effect of making fraud more sophisticated; that would be in line with human behavior as I understand it, and with selection pressure as well. The motives for such acts are with us still, and do not seem to be abating much, so I tend to think that determined miscreants will find ways to do what they want to do.
Thoughts? Some of this paper's points could be put in the "grumblings about the good old days" category, but I think that a lot of it is accurate. I'm not sure how good the old days were, myself, since they were also filled with human beings, but the pressures found today do seem to be bringing on a lot of behaviors we could do without.
The recent discussions here about ugly tool compounds have prompted an alert reader to send in this example, a putative agonist of the orphan receptor GPR35. Will someone rise to the defense of this one?
My post the other day on a very unattractive screening hit/tool compound prompted a reader to mention this paper. It's one from industry this time (AstraZeneca), and at first it looks like similarly foul chemical matter. But I think it's worth a closer look, to see how they dealt with what they'd been given by screening.
This team was looking for hits against PIM kinases, and the compound shown was a 160nM hit from high-throughput screening. That's hard to ignore, but on the other hand, it's another one of those structures that tell you that you have work to do. It's actually quite similar to the hit from the previous post - similar heterocycle, alkylidene branching to a polyphenol.
So why am I happier reading this paper than the previous one? For one, this structure does have a small leg up, because this thiazolidinedione heterocycle doesn't have a thioamide in it, and it's actually been in drugs that have been used in humans. TZDs are certainly not my first choice, but they're not at the bottom of the list, either. On the other hand, I can't think of a situation where a thioamide shouldn't set off the warning bells, and not just for a compound's chances of becoming a drug. The chances of becoming a useful tool compound are lower, too, for the same reasons (potential reactivity / lack of selectivity). Note that these compounds are fragment-sized, unlike the diepoxide we were talking about the other day, which means that they're likely to be able to fit into more binding sites.
But there's still that aromatic ring. In this case, though, the very first thing this paper says after stating that they decided to pursue this scaffold is: "We were interested to determine whether or not we could remove the phenol from the series, as phenols often give poor pharmacokinetic and drug-like properties." And that's what they set about doing, making a whole series of substituted aryls with less troublesome groups on them. Basic amines branching off from the ortho position led to very good potency, as it turned out, and they were able to ditch the phenol/catechol functionality completely while getting well into (or below) single-digit nanomolar potency. With these compounds, they also did something else important: they tested the lead structures against a panel of over four hundred other kinases to get an idea of their selectivity. This is just the sort of treatment that I think the Tdp-1 inhibitor from the Minnesota/NIH group needs.
To be fair, that other paper did show a number of attempts to get rid of the thioamide head group (all unsuccessful), and they did try a wide range of aryl substituents (the polyphenols were by far the most potent). And it's not like the Minnesota/NIH group was trying to produce a clinical candidate; they're not a drug company. A good tool compound to figure out what selective Tdp-1 inhibition does is what they were after, and it's a worthy goal (there's a lot of unknown biology there). If that had been a drug company effort, those two SAR trends taken together would have been enough to kill the chemical series (for any use) in most departments. But even the brave groups who might want to take it further would have immediately profiled their best chemical matter in as many assays as possible. Nasty functional groups and lack of selectivity would surely have doomed the series anywhere.
And it would doom it as a tool compound as well. Tool compounds don't have to have good whole-animal PK, and they don't have to be scalable to pilot plant equipment, and they don't have to be checked for hERG and all the other in vivo tox screens. But they do have to be selective - otherwise, how do you interpret their results in an assay? The whole-cell extract work that the group reported is an important first step to address that issue, but it's just barely the beginning. And I think that sums up my thoughts when I saw the paper: if it had been titled "A Problematic Possible Tool Compound for Tdp-1", I would have applauded it for its accuracy.
The authors say that they're working on some of these exact questions, and I look forward to seeing what comes out of that work. I'd have probably liked it better if that had been part of the original manuscript, but we'll see how it goes.
S. Raj Govindarajan makes his case (some of which recapitulates points made by readers here). I doubt if he's convinced anyone who holds the view of the original authors, but I think he's on target with this part:
Should your son be forced to take chemistry? Absolutely. But the curriculum needs to be rethought in a way that would instill practical knowledge, curiosity about the world, and an appetite for at least understanding scientific achievement and its necessity/implications.
People don’t have to become scientists if they don’t want to, but they should have a fundamental understanding of scientific concepts. That way, people like myself need not be terrified that an ignorant public will vote to slash funding for scientific research and understanding. . .
Here's a blog post at The Washington Post in which a parent asks the musical question: "Why Are You Forcing My Son to Take Chemistry?"
It's short, but it can be summarized as My son will not be a chemist. He will not be a scientist. A year of chemistry class will do nothing for him but make him miserable. He could be taking something else that would be doing him more good. And this father is probably right about his son, who's 15, not becoming any sort of scientist. But his argument breaks down a bit after that.
That's because the same objections could apply to most other things that his son could be taking. He says that his son "could be really good at" public speaking, or music, or creative writing, for example. Or not. Perhaps one of them would make him miserable, or simply do nothing for him, and be an opportunity cost as well. The difference is that the boy (and/or his father) are already pretty sure that chemistry will be a waste, and they haven't had the chance to find that out about the others yet.
But again, I take him at his word when he says that his son will be lousy at chemistry (leaving aside the self-fulfilling prophecy aspect, although that's definitely something to consider). This gets back to questions that I wrote about here, namely: how much science should people know? How much should they get in school? How much will do them some good? I think, in this case, that everyone should know that there are such things as chemical elements, and that they combine to form compounds. They should know about reactions like combustion, and a bit about energy and thermodynamics. Knowing an acid from a base would be nice, but the list just keeps on going from there, and where does one draw the line?
I think, after a basic list of facts and concepts, that what I'd like for kids to get out of a science class is the broader idea of experimentation - that the world runs by physical laws which can be interrogated. Isolating variables, varying conditions, generating new hypotheses: these are habits of mind that actually do come in handy in the real world, whether you remember what an s orbital is or not. I'm not sure how well these concepts get across, though.
Do you need a year of high school chemistry to learn these things? I doubt it. A lot of it will be balancing acid-base equations, learning about the columns and rows of the periodic table, oxidation states, Lewis structures, and so on. And the son probably will have no use for any of that - the father has no memory of any of it himself, and although I'd like people to know some of these things, I wonder if not knowing them has harmed him all that much. What might have harmed him, though, is a lack of knowledge of those broader points. Or a general attitude that science is That Stuff Those Other People Understand. You make yourself vulnerable to being taken in if you carry that worldview around with you, because claiming scientific backing is a well-used ploy. You should know enough to at least not be taken in easily.
There may be no more R. B. Woodwards, but never let it be said that there's nothing more to be found in organic synthesis. Until we can make natural products the way that they're made in nature, at room temperature, atom by atom, our skills don't stand comparison with what we know is possible. But that's not going to be the work of a single genius, for sure, although applications are always being accepted.
New reactions, though, are always out there. Here's an example of one, in a field (the Diels-Alder reaction) that you'd think would have been pretty well worked over. This will win no Nobels, and only synthetic organic chemists will pay attention. But I'm always glad to see discoveries like this, and to know that they're still out there.
Explaining R. B. Woodward to a non-chemist, via jazz. And they were probably right, too. But that brings up an interesting question, one that applies to organic synthesis as well as every other human activity. At what point does a field become incapable of supporting Titans?
Consider organic chemistry. There were many major discoveries that had to be made before we (as a civilization) even understood what was going on. Atomic theory itself, valences, tetrahedral carbons, spectroscopy: without these and similar foundational work, you're not going to get very far. But at some point you've got enough material for the next genius to come along and make the most out of it, and I'd put Woodward in that category. He had just enough tools to make his work barely possible, and he had to invent quite a few more along the way. This gave him plenty of room to demonstrate just how good he was at organic synthesis. Complex molecules that would have been beyond structural determination in years past were now there (in theory) for the taking, but these were still well out of reach for all but the most inspired.
To switch fields of achievement, I recall reading someone's opinion once that Bach wasn't all that good, all things considered. I didn't care for that when I saw it (I like Bach very much), but the argument was that he got there "firstest with the mostest", as they said in the Gold Rush, and did such a thorough job on the musical styles of his day (counterpoint, the fugue form, etc.) that no one could ever stand on his level again. You couldn't, because Bach had already Been There and Composed That. Now, that undermines the author's original point, I think, because only a musical genius could have covered so much ground so well, but his second point stands: once Bach had done it, it was done. Anyone who composes a theme-and-variations in contrapuntal style will be compared to him, and probably unfavorably.
Similar arguments can be made across the arts and sciences. But the sciences have the advantage of not being subject so much to the whims of fashion. Picasso, I've long thought, helped create the art world in which he would be considered a great painter. (It reminds me of the way that successful organisms set up a positive feedback loop with their environment, helping to induce the conditions in which they can thrive). There are catastrophic events in both ecologies, of course, that change the requirements of fitness - Burne-Jones (to pick one example) went so far out of fashion by the 1950s and 60s that people were throwing his paintings away with the trash. But the art world has set itself up with fashion as part of its motor. Styles of painting come and go, because styles of painting have to come and go. But Newton's discoveries stand right where they were when he made them - si monumentum requiris, circumspice. Newtonian mechanics do not go out of fashion.
The only thing that can be done to alter great scientific discoveries of the past is to show how they fit into previously-unrealized larger contexts (as Einstein did with Newton). That, naturally, tends to get harder and harder as time goes on. Once the brush is cleared in science, it tends to stay cleared. That process can uncover new problems, but those are the tougher ones. This line of thought brings on talk of the End Of Science, as John Horgan put it, with which you may contrast Vannevar Bush's "Endless Frontier" (which helped establish the modern funding system for academic science in the US). My own take is that the frontier is endless for practical purposes for the foreseeable future, but not similarly endless in every direction at once.
There will, I'd say, never be another R.B. Woodward. Heraclitus aside, we have stepped into that river already, and crossed it. That's not to say that there are not great challenges in synthetic organic chemistry - there are - but it means that there is much less scope for a sky-filling fireworks display like Woodward's career. Anyone trying to recapitulate it is wasting time and effort that could be better applied.
Update: Wavefunction has thoughts on the issue here.
A deserved Nobel? Absolutely. But the grousing has already started. The 2012 Nobel Prize for Chemistry has gone to Bob Lefkowitz (Duke) and Brian Kobilka (Stanford) for GPCRs, G-protein coupled receptors.
Everyone who's done drug discovery knows what GPCRs are, and most of us have worked on molecules to target them at one point or another. At least a third of marketed drugs, after all, are GPCR ligands, so their importance is hard to overstate. That's why I say that this Nobel is completely deserved (and has been anticipated for some time now). I've written about them numerous times here over the years, and I'm going to forgo the chance to explain them in detail again. For more information I can recommend the Nobel site's popular background and their more detailed scientific background - they've already done the explanatory work.
I will say a bit about where GPCRs fit into the world of drug targets, though, since they've been so important to pharma R&D. Everyone had realized, for decades (more like centuries), that cells had to be able to send signals to each other somehow. But how was this done? No matter what, there had to be some sort of transducer mechanism, because any signal would arrive on the outside of the cell membrane and then (somehow) be carried across and set off activity inside the cell. As it became clear that small molecules (both the body's own and artificial ones from outside) could have signaling effects, the idea of a "receptor" became inescapable. But it's worth remembering that up until the mid-1970s you could find people - in print, no less - warning readers that the idea of a receptor as a distinct physical object was unproven and could be an unwarranted assumption. Everyone knew that molecular signals were being handled somehow, but it was very unclear what (or how many) pieces there were to the process. This year's award recognizes the lifting of that fog.
It also recognizes something else very important, and here I want to rally my fellow chemists. As I mentioned above, the complaints are already starting that this is yet another chemistry prize that's been given to the biologists. But this is looking at things the wrong way around. Biology isn't invading chemistry - biology is turning into chemistry. Giving the prize this year to Lefkowitz and Kobilka takes us from the first cloning of a GPCR (biology, biology all the way) to a detailed understanding of their molecular structure (chemistry!) And that's the story of molecular biology for you, right there. As it lives up to its name, its practitioners have had to start thinking of their tools and targets as real, distinct molecules. They have shapes, they have functional groups, they have stereochemistry and localized charges and conformations. They're chemicals. That's what kept occurring to me at the recent chemical biology conference I attended: anyone who's serious about understanding this stuff has to understand it in terms of chemistry, not in terms of "this square interacts with this circle, which has an arrow to this box over here, which cycles to this oval over here with a name in the middle of it. . ." Those old schematics will only take you so far.
So, my fellow chemists, cheer the hell up already. Vast new territories are opening up to our expertise and our ways of looking at the world, and we're going to be needed to understand what to do next. Too many people are making me think of those who objected to the Louisiana Purchase or the annexation of California, who wondered what we could possibly ever want with those trackless wastelands to the West and how they could ever be part of the country. Looking at molecular biology and sighing "But it's not chemistry. . ." misses the point. I've had to come around to this view myself, but more and more I'm thinking it's the right one.
I wanted to take a moment to mention this conference, coming up on November 6 at Northeastern in Boston. They have a wide-ranging program on drug discovery scheduled, with some people that I know from experience to be good speakers. Worth a look if you're in the area.
I can't resist pointing out this compound, which recently showed up in J. Med. Chem. Now, that's a Bcl-2/Bcl-xl inhibitor, the star of the protein-protein interaction world, and there's probably never going to be a nice-looking compound that does the job in that system. The interacting surfaces are too wide and too shallow; it's a real triumph that people have compounds for this system at all. But people have, and there are compounds in the clinic.
But man, will you look at the things. This is one from Bristol-Myers Squibb and the University of Michigan, and it is a beast in all directions. It weighs in at 811 daltons, and is actually one of the more svelte compounds in the paper. Solubility, formulation, absorption, clearance. . .it all looks like fun. But we may well have to start learning how to deal with compounds like these, so we'd better steel ourselves.
According to the Wall Street Journal, the periodic table is now cool. It's shown up as a design, uh, element in TV shows, on T-shirts, and so on. (The article even gets quotes from Tom Lehrer, who I'm glad to hear is still with us). And Theodore Gray's coffee-table book The Elements has now sold 650,000 copies (one of them to me - I recommend it). Of course, Gray has the ultimate periodic-table fan item, if you can afford it:
People who lacked patience for a chemistry set can now buy periodic table shower curtains, T-shirts, coffee mugs and even a periodic coffee table. The furniture piece, made of burred oak with samples of inlaid elements, costs $8,550, plus shipping, which gets pricey. For safety reasons, fluorine, chlorine and bromine are forbidden on airplanes, says Max Whitby in London, who produces the table.
I'd add my own, if I had 9 long ones to spend on one of these. Thick-walled ampoules would do the job, although the fluorine would still present a problem (doesn't it always?) But I suppose most of the radioactive ones (except depleted uranium) are still out. Hand-rubbed varnish would probably stop alpha particles, but not much else.
Since we've been talking about the ACS around here recently, I wanted to highlight a decision in a long-running court case the society has been involved in, American Chemical Society v. Leadscope. Rich Apodaca has a summary here of the earlier phases of the suit, which is now in its tenth year in the courts. Basically, three employees of Chemical Abstracts left to form their own chemical information company, and ended up with a patent on a particular variety of software that would display structure-activity and structure-property relationships. The ACS felt that this was too similar to the (discontinued) Pathfinder software they'd developed, and sued.
The ACS lost in a jury trial - in fact, they did more than just lose. The jury found that the society had competed unfairly, filing suit maliciously and defaming Leadscope in the process, and they awarded the latter company $26.5 million in damages. The ACS then lost in the Court of Appeals (and the damages were increased). So they took things all the way to the Ohio Supreme Court, and now they've lost there, too. The defamation ruling (and award) was reversed, and will be vacated by the lower court, but the finding of unfair competition stands. It looks like the society still owes $26.5 million. As this post by an IP lawyer shows, they were going all out:
As for the issue of ACS's subjective intent, the Supreme Court found ample support for the jury's finding that ACS had the intent to injure Leadscope and its founders. It noted that ACS's president had closely monitored Leadscope and had even sent out an email to then-Ohio-Governor Robert Taft to abort a visit by the governor to Leadscope's offices. ACS's former information technology director also provided damaging testimony documenting ACS's president's hostility towards Leadscope. In addition, ACS took actions or made statements that interfered with Leadscope's ability to get funding (for example, by dissuading a venture capitalist interested in investing in Leadscope by telling him that there were legal issues with Leadscope's technology) and took actions in the litigation to disrupt Leadscope's ability to get insurance coverage for the dispute.
As detailed here at ChemBark, it's not like there's been a lot of coverage about this (I've never written about it myself). These are things that every member of the ACS should at least be aware of, but it's not like the ACS is going to do that job, for obvious reasons. One of the main venues for such stories would be. . .Chemical and Engineering News, so that's not going to happen. And it's not a story that resonates much with a general newspaper/magazine readership, so what does that leave us with? Well, mentions like that Nature News article to get the word out, and the blogs to go into the details.
That ChemBark post has a whole series of questions that would be very much worth answering. How did the ACS get into this fix in the first place? Was the original suit ill-advised? How much will that $26.5 million affect the society's finances - is that a big deal, or not? How much further money went down the drain in legal fees along the way? Are there any lessons to be learned from all this, or could the same thing start happening again next month?
And beyond those immediate questions, there are the bigger ones that the ACS (and other scientific societies) should be asking. Can a single entity be (A) a publisher of a large stable of high-profile scientific journals, and (B) the curator and disseminator of the (very profitable) primary database of all the reported chemical matter in the world, and (C) the voice of its own membership, who are simultaneously paying money for access to A and B, and (D) the lobbying organization for chemistry in general, as well as (E) a scientific society dedicated to the spread of knowledge? I'm not sure that all these are possible, at the same time, for the same organization. But sites like ChemBark (and this one, and the rest of the chemical blogworld) are the only places that seem to be available to talk about these things.
There's a paper out in Science from a team led by the IBM-Zürich folks, who have been pushing the capabilities of atomic-force microscopy for some time now. These are the people who published the paper in 2009 with those images of pentacene, and now they're back with even higher resolution.
One of their images is shown here. This is a big polycyclic aromatic hydrocarbon, hexabenzocoronene. One of the things that students note when they first try drawing such things is where the "holes" are. Aromatic benzene rings are special (different electron densities, different bond lengths), and if you connect one to another by a single bond (biphenyl), that connecting bond is of ordinary length. But a structure like this one - is it six benzene rings connected by a network of those ordinary bonds? Or are the electrons spread out over the whole surface in a great big delocalized cloud? Or something in between?
Calculations suggest that "in between, but still different" is the right answer, with some of the bonds having more double-bond character than others. And that's what this paper has determined by reaching down and feeling the bonds with an AFM tip. There's a single CO molecule at the end of the probe, and they've gotten to the point where they can see that they get greater sensitivity if that carbon monoxide molecule is tilted over rather than pointing straight down. I am not making that up. Running this single-molecule finger over the surface of hexabenzocoronene gives you the images shown.
"A" is the structure of the molecule, with the two different kinds of bond (i-bonds and j-bonds) noted. "B" is an AFM image at a constant height of 0.35 angstrom, which is really putting your atomic thumb down. The dark parts of the image correspond to attractive forces (van der Waals), and the light parts correspond to repulsive push-back. In this case, the pushback is due to the Pauli exclusion principle - those electrons cannot occupy the same quantum states, and they are quite adamant about that when you try to force them together. The electron density is highest around the outer part of the structure, but you can clearly see the bonds all the way through the internal structure as well. Take a look at the central aromatic ring - its bonds show up more more clearly than the bonds leading out from it, reflecting the greater electron density in there. "C" is an AFM image at 3.5A height in a "pseudo-3d representation", and "D" is the calculated electron density in between these two heights (at 2.5A above the molecule). Note that the two different kinds of bonds are also apparent in panel C, where some of them are brighter and shorter.
This kind of thing continues to give me a funny feeling when I read about it. Actually using things like Pauli repulsion to make pictures of molecules, well. . .maybe I am living in someone's science fiction novel, at that.
When I was clearing a space on my desk the other day, I came across this paper, which I'd printed several months ago to read later. Later's finally here! A brief look at the manuscript will make clear why I didn't immediately dig into it - it's titled "Modifying Chemical Landscapes by Coupling to Vacuum Fields", and it's about as physics-heavy as anything that Angewandte Chemie would be willing to publish. The scary part is, this is one of a pair of papers from the same group (Thomas Ebbesen's at Strasbourg), and it's the other one that really gets into the physics. (If you can't get the first paper, here's a summary of it, the only mention I've been able to find of this work).
But it's worth a bit of digging, because this is very strange and interesting work. So bear with me for a paragraph - I always thought that someone should write a textbook titled "Quantum Mechanics: A Hand-Waving Approach", and that's what you're about to get here. The theory tells us, among many other weird things, that the vacuum between molecules is not what we might think it is. That's more properly the quantum electrodynamic vacuum, the ground state of electromagnetic fields. Because the Planck constant is not zero - tiny, but crucially not zero - the QED vacuum is not the empty nothingness that we think of classically. It's a dielectric, it's diamagnetic, and its properties can be altered. The theory that tells us such odd things is to be taken very seriously indeed, though, since it has made some of the most detailed and accurate predictions in the history of science.
And the vacuum-field fluctuation part of the theory has to be taken very seriously, too, because these effects have actually been measured. This was first accomplished via the Lamb shift, and the Casimir effect is the latest poster child. That effect relates to the properties of the vacuum between two very closely spaced physical plates, and we're now at the point, technologically, where we can actually make structures of this kind, measure their sizes and compositions, and determine what's going on inside them.
So what, those few of you who are still reading would like to know, does this have to do with chemistry? Well, when a real molecule is placed between such plates, its energy levels behave in strange ways. And this latest paper demonstrates that with a photochemical rearrangement - the reaction rates change completely depending on whether or not the starting material is confined in the right sort of space, and they change exactly as the cavity is tuned more closely to the absorption taking place. In effect, the molecule is now part of a completely new system (molecule-plus-cavity), and this new system has different energy levels - and can do different chemistry.
The photochemistry shown is not exciting per se, but the fact that it can be altered just by putting the molecule in a very tiny box is exciting indeed:
The rearrangement of the molecular energy levels by coupling to the vacuum field has numerous important consequences for molecular and material sciences. As we have shown here, it can be used to modify chemical energy landscapes and in turn the reaction rates and yields. Strong coupling can either speed up or slow down a reaction depending on the reorganization of specific energy levels relative to the overall energy landscape. Both rates and the thermodynamics of the reaction will be modified. . .The coupling was done here to an electronic transition but it could also be done to a specific vibrational transition for instance to modify the reactivity of a bond. In this way it can be seen as analogous to a catalyst which changes the reaction rate by modifying the energy landscape.
I look forward to seeing how this field develops. If we end up being able to make reactions go the way we want them to by coupling our starting materials to the actual fabric of space, I will officially decide that I am, in fact, living in someone's science fiction novel, and I will be very happy about that. I can picture a vacuum-field flow chemistry machine, pumping reactants through various ridiculously small and convoluted lattices, as someone turns a chrome-plated crank on the side to adjust the geometry of the cavities to change the product distributions. OK, there are perhaps a couple of engineering challenges there, but you get the idea.
And speaking as an organic chemist, I have a few other questions: can these vacuum field effects occur in some of the other confined spaces that we're more used to thinking about? The insides of zeolites, for example? The interior of a cyclodextrin? Between sheets of graphene? Inside the active site of an enzyme? I'm sure that there are reasons why not all of these would be able to show such an effect (irregular geometry, just to pick one), but it does make you wonder.
Well, an alert commenter to this post sent along this link to the Cancer Prevention Research Institute of Texas grant site. And if you search for the phrase "R12KCN", you'll see six million dollars set aside for "Recruitment of Established Faculty", with Nicolaou's name attached.
So if this is going to happen, is it a good idea? I'm not asking if it's a good idea for K. C. Nicolaou; he's more than capable of looking after his own career. Is it a good idea for Rice, and for the CPRIT? The answer to that one depends on what everyone is looking for. If Rice is looking to make a big splash, that'll work just fine. But as another comment a bit further down in that above thread notes, this would be a departure for their chemistry department, because they'd actually de-emphasized organic synthesis a while back. Bringing in KCN will certainly re-emphasize it for them, if that's what they're after.
It's not where I would put my money, but (fortunately) I am not in charge of laying out millions to stock up a chemistry department. I've written several times (most recently here) about what I think of total synthesis at this point in the history of the science. If malevolent aliens suddenly filled our skies, threatening to vaporize the planet if we did not synthesize maitotoxin, I would unhesitatingly vote to give K. C. Nicolaou unlimited funding. That's what he does, and he's damn good at it. I just don't think - without the alien pressure and all - that it provides as much return for the time and money as other areas of science.
There's a rumor making the rounds that K. C. Nicolaou is leaving Scripps (La Jolla), with the most often-mentioned destination being Rice University. That's striking many people as a bit unlikely, unless Rice has decided to really throw the money (and facilities spending) around, and has decided to start off with a big splash. But there is at least a bit of a Scripps-to-South migration going on, with M. G. Finn heading to Georgia Tech. So we shall see. . .anyone heard more?
As we head into Nobel Season, Chembark and Wavefunction have their latest odds up. The biology side of the chemistry prize seems to be getting a lot of betting this year, with nuclear hormone signaling, chaperone proteins, oncogenes, Western/Southern blotting, and various bioinorganic discoveries all being mentioned. I'll do a full post on my own predictions (and what I wouldn't like to see get the prize), but there's a lot of good material in those two posts to start thinking about.
Here you go, from IKA. If you can make it up to about 1:52 or so, that's when the traditional hard-sell starts. But up until then, it's pretty painful, and not least because the model playing a chemist is evaporating a bright green solution (sure thing) and the receiving flask is light blue (oh yeah). More unlikely colors are to be seen in the sales-pitch part of the video that follows, but at least there's no acting, or whatever that's supposed to be. Yikes.
There's an odd retraction in the synthetic chemistry literature. A synthesis of the lundurine alkaloid core from the Martin group at Texas was published last year, and its centerpiece was a double-ring-closing olefin metathesis reaction. (Coincidentally, that reaction was one of the "Black Swan" examples in the paper I blogged about yesterday - the initial reports of it from the 1960s weren't appreciated by the synthetic organic community for many years).
Now the notice says that the paper is being retracted because that RCM reaction is "not reproducible". (The cynical among you will already be wondering when that became a criterion for retraction in the literature - if it works once, it's in, right?)
There are more details at The Heterocyclist, a blog by the well-known synthetic chemist Will Pearson that I've been remiss in not highlighting before now. While you're there, fans of the sorts of chemicals I write about in "Things I Won't Work With" might enjoy this post on the high explosive RDX, and the Michigan chemist (Werner Bachmann) who figured out how to synthesize it on scale during World War II.
What's a Black Swan Event in chemistry? Longtime industrial chemist Bill Nugent has a very interesting article in Angewandte Chemie with that theme, and it's well worth a look. He details several examples of things that all organic chemists thought they knew that turned out not to be so, and traces the counterexamples back to their first appearances in the literature. For example, the idea that gold (and gold complexes) were uninteresting catalysts:
I completed my graduate studies with Prof. Jay Kochi at Indiana University in 1976. Although research for my thesis focused on organomercury chemistry, there was an active program on organogold chemistry, and our perspective was typical for its time. Gold was regarded as a lethargic and overweight version of catalytically interesting copper. Moreover, in the presence of water, gold(I) complexes have a nasty tendency to disproportionate to gold(III) and colloidal gold(0). Gold, it was thought, could provide insight into the workings of copper catalysis but was simply too inert to serve as a useful catalyst itself. Yet, during the decade after I completed my Ph.D. in 1976 there were tantalizing hints in the literature that this was not the case.
One of these was a high-temperature rearrangement reported in 1976, and there was a 1983 report on gold-catalyzed oxidation of sulfides to sulfoxides. Neither of these got much attention, as Nugent's own chart of the literature on the subject shows. (I don't pay much attention when someone oxidizes a sulfide, myself). Apparently, though, a few people had reason to know that something was going on:
However, analytical chemists in the gold-mining industry have long harnessed the ability of gold to catalyze the oxidation of certain organic dyes as a means of assaying ore samples. At least one of these reports actually predates the (1983) Natile publication. Significantly, it could be shown that other precious metals do not catalyze the same reactions; the assays are specific for gold. It is safe to say that the synthetic community was not familiar with this report.
I'll bet not. It wasn't until 1998 that a paper appeared that really got people interested, and you can see the effect on that chart. Nugent has a number of other similar examples of chemistry that appeared years before its potential was recognized. Pd-catalyzed C-N bond formation, monodentate asymmetric hydrogenation catalysts, the use of olefin metathesis in organic synthesis, non-aqueous enzyme chemistry, and many others.
The phrase “Black Swan event” comes from the writings of the statistician and philosopher Nassim Nicholas Taleb. The term derives from a Latin metaphor that for many centuries simply meant something that does not exist. But also implicit in the phrase is the vulnerability of any system of thought to conflicting data. The phrase's underlying logic could be undone by the observation of a single black swan.
In 1697, the Dutch explorer Willem de Vlamingh discovered black swans on the Swan River in Western Australia. Not surprisingly, the phrase underwent a metamorphosis and came to mean a perceived impossibility that might later be disproven. It is in this sense that Taleb employs it. In his view: “What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct an explanation for its occurrence after the fact, making it explainable and predictable.”
Taleb has documented this last point about human nature through historical and psychological evidence. His ideas remain controversial but seem to make a great deal of sense when one attempts to understand the lengthy interludes between the literature antecedents and the disruptive breakthroughs shown. . .At the very least, his ideas represent a heads up as to how we read and mentally process the chemical literature.
I have no doubt that unwarranted assumptions persist in the conventional wisdom of organic synthesis. (Indeed, to believe otherwise would suggest that disruptive breakthroughs will no longer occur in the future.) The goal, it would seem, is to recognize such assumptions for what they are and to minimize the time lag between the appearance of Black Swans and the breakthroughs that follow.
One difference between Nugent's examples and Taleb's is the "extreme impact" part. I think that Taleb has in mind events in the financial industry like the real estate collapse of 2007-2008 (recommended reading here), or the currency events that led to the wipeout of Long-Term Capital Management in 1998. The scientific literature works differently. As this paper shows, big events in organic chemistry don't come on as sudden, unexpected waves that sweep everything before them. Our swans are mute. They slip into the water so quietly that no one notices them for years, and they're often small enough that people mistake them for some other bird entirely. Thus the time lag.
How to shorten that? It'll be hard, because a lot of the dark-colored birds you see in the scientific literature aren't amazing black swans; they're crows and grackles. (And closer inspection shows that some of them are engaged in such unusual swan-like behavior because they're floating inertly on their sides). The sheer size of the literature now is another problem - interesting outliers are carried along in a flood tide of stuff that's not quite so interesting. (This paper mentions that very problem, along with a recommendation to still try to browse the literature - rather than only doing targeted searches - because otherwise you'll never see any oddities at all).
Then there's the way that we deal with such things even when we do encounter them. Nugent's recommendation is to think hard about whether you really know as much as you think you do when you try to rationalize away some odd report. (And rationalizing them away is the usual response). The conventional wisdom may not be as solid as it appears; you can probably put your foot through it in numerous places with a well-aimed kick. As the paper puts it: "Ultimately, the fact that something has never been done is the flimsiest of evidence that it cannot be done."
That's worth thinking about in terms of medicinal chemistry, as well as organic synthesis. Look, for example, at Rule-Of-Five type criteria. We've had a lot of discussions about these around here (those links are just some of the more recent ones), and I'll freely admit that I've been more in the camp that says "Time and money are fleeting, bias your work towards friendly chemical space". But it's for sure that there are compounds that break all kinds of rules and still work. Maybe more time and money should go into figuring out what it is about those drugs, and whether there are any general lessons we can learn about how to break the rules wisely. It's not that work in this area hasn't been done, but we still have a poor understanding of what's going on.
Over at Chemistry Blog, there's a post by Quintus on the synthesis of a complex natural product, FR-182877. The route is interesting in that it features a key Diels-Alder reaction, and the post mentions that this isn't a reaction that gets used much in industry.
True enough - that one and the Claisen rearrangement are the first reactions I think of in the category of "taught in every organic chemistry course, haven't run one in years". In the case of the Claisen, the number of years is now getting up to. . .hmm, about 26, I think. The Diels-Alder has shown up a bit more often for me, and someone in my lab was running one last year, but it was the first time she'd ever done it (after many years of drug discovery experience).
Why is that? The post I linked to suggested a good reason that one isn't done too often on scale: it can be unpredictably exothermic, and some of the reactants can decide to polymerize instead, which you don't want, either. That can be very exothermic, too, and leaves you with a reactor full of useless plastic gunk which will have to be removed with tools ranging from a scoop to a saw. This is a good time to adduce the benefits of flow chemistry, which has been successfully applied in such cases, and is worth thinking about any time you have a batch reaction that might take off on you.
But to scale something up, you need to have an interest in that structure to start with. There's another reason that you don't see so many Diels-Alders in drug synthesis, and it has to do with the sorts of molecules we tend to make. The cycloaddition gives you a three-dimensional structure with stereocenters, and medicinal chemistry, notoriously, tends to favor flat aromatic rings, sometimes very much to its detriment. Many drug discovery departments have taken the pledge over the years to try to cut back on the flatness and introduce more sp3 carbons, but it doesn't always take. (For one thing, if your leads are coming out of your screening collection, odds are you'll be starting with something on the flat end of the scale, because that's what your past projects filled the files with).
I think that fragment-based drug discovery has a better chance of giving you 3-D leads, but only if you pay attention while you're working on it. Those hits can sometimes be prosecuted in the flat-and-aryl style, too, if you insist. And I think it's fair to say that a lot of fragment hits have an aryl (especially a heteroaryl) ring in them, which might reflect the ease of assembling a fragment-sized library of compounds full of such. Even the fragment folks have been talking over the years about the need to get more three-dimensionality into the collections, and vendors have been pitching this as a feature of their offerings.
The other rap on the classic Diels-Alder reaction is that it gives you substituted cyclohexanes, which aren't always the first place you look for drug leads. But the hetero-Diels-Alder reactions can give you a lot of interesting compounds that look more drug-like, and I think that they deserve more play than they get in this business. I'll go ahead and take a public pledge to run a series of them before the year is out!
I didn't even know that you could make those things - no doubt someone will be inspired to try the three-boron version next. Diarylmethanes aren't the most preferred drug structures in the world (that carbon is just waiting to be oxidized), but I can't say that I've always avoided them on those grounds. I was on a project where we made a whole series of the things, actually - didn't work out so well for the intended target, but the compounds went on to hit in a completely different assay, so the company did probably get its money's worth.
This paper from GlaxoSmithKline uses a technology that I find very interesting, but it's one that I still have many questions about. It's applied in this case to ADAMTS-5, a metalloprotease enzyme, but I'm not going to talk about the target at all, but rather, the techniques used to screen it. The paper's acronym for it is ELT, Encoded Library Technology, but that "E" could just as well stand for "Enormous".
That's because they screened a four-billion-member library against the enzyme. That is many times the number of discrete chemical species that have been described in the entire scientific literature, in case you're wondering. This is done, as some of you may have already guessed, by DNA encoding. There's really no other way; no one has a multibillion-member library formatted in screening plates and ready to go.
So what's DNA encoding? What you do, roughly, is produce a combinatorial diversity set of compounds while they're attached to a length of DNA. Each synthetic step along the way is marked by adding another DNA sequence to the tag, so (in theory) every compound in the collection ends up with a unique oligonucleotide "bar code" attached to it. You screen this collection, narrow down on which compound (or compounds) are hits, and then use PCR and sequencing to figure out what their structures must have been.
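To make the bar-code idea concrete, here's a toy sketch in Python. Everything in it is invented for illustration — the building-block names, the four-base "codons", and the tag lengths (real DEL tags are far longer); the point is just the append-a-tag-per-step, read-it-back-later logic:

```python
# Toy DNA-encoded library bookkeeping. Each synthetic step appends a
# short DNA "codon" to the tag, so the finished tag records the
# compound's synthetic history and can be decoded after sequencing.
# All names and sequences here are hypothetical.

CODONS = {
    "triazine-core": "ACGT",
    "amine-A": "TTGA",
    "amine-B": "GGCA",
    "acid-X": "CATC",
    "acid-Y": "TGGT",
}
DECODE = {seq: name for name, seq in CODONS.items()}

def encode(building_blocks):
    """Concatenate one codon per synthetic step (like ligating tags)."""
    return "".join(CODONS[b] for b in building_blocks)

def decode(tag, codon_len=4):
    """Split the tag back into codons and look up each step."""
    codons = [tag[i:i + codon_len] for i in range(0, len(tag), codon_len)]
    return [DECODE[c] for c in codons]

tag = encode(["triazine-core", "amine-A", "acid-Y"])
print(tag)          # ACGTTTGATGGT
print(decode(tag))  # ['triazine-core', 'amine-A', 'acid-Y']
```

In the real workflow, of course, the decoding step runs on whatever tiny amount of DNA survives the affinity selection, after PCR amplification and sequencing — which is exactly why the tags have to be readable from vanishingly small quantities.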
As you can see, the only way this can work is through the magic of molecular biology. There are so many enzymatic methods for manipulating DNA sequences, and they work so well compared with standard organic chemistry, that ridiculously small amounts of DNA can be detected, amplified, sequenced, and worked with. And that's what lets you make a billion-member library; none of the components can be present in very much quantity (!)
This particular library comes off of a 1,3,5-triazine, which is not exactly the most cutting-edge chemical scaffold out there (I well recall people making collections of such things back in about 1992). But here's where one of the big questions comes up: what if you have four billion of the things? What sort of low hit rate can you not overcome by that kind of brute force? My thought whenever I see these gigantic encoded libraries is that the whole field might as well be called "Return of Combichem: This Time It Works", and that's what I'd like to know: does it?
There are other questions. I've always wondered about the behavior of these tagged molecules in screening assays, since I picture the organic molecule itself as about the size of a window air conditioner poking out from the side of a two-story house of DNA. It seems strange to me that these beasts can interact with protein targets in ways that can be reliably reproduced once the huge wad of DNA is no longer present, but I've been assured by several people that this is indeed the case.
In this example, two particular lineages of compounds stood out as hits, which makes you much happier than a collection of random singletons. When the team prepared a selection of these as off-DNA "real organic compounds", many of them were indeed nanomolar hits, although a few dropped out. Interestingly, none of the compounds had the sorts of zinc-binding groups that you'd expect against the metalloprotease target. The rest of the paper is a more traditional SAR exploration of these, leading to what one has to infer are more tool/target validation compounds rather than drug candidates per se.
I know that GSK has been doing this sort of thing for a while, and from the looks of it, this work itself was done a while ago. For one thing, it's in J. Med. Chem., which is not where anything hot off the lab bench appears. For another, several of the authors of the paper appear with "Present Address" footnotes, so there has been time for a number of people on this project to have moved on completely. And that brings up the last set of questions, for now: has this been a worthwhile effort for GSK? Are they still doing it? Are we just seeing the tip of a large and interesting iceberg, or are we seeing the best that they've been able to do? That's the drug industry for you; you never know how many cards have been turned over, or why.
The controversy I wrote about last week, about whether (some) enzymes work by using extremely fast movements (rather than by putting things into their place and letting them do their thing) may remind some folks of the supposed medieval arguments about angels dancing on the heads of pins. But it also reminds me a bit of some other arguments in organic chemistry over the years. The horrible prototype is, of course, the norbornyl cation.
There was a time when people would simply leave the room when that topic came up, because they knew that they were in for another round of fruitless wrangling. Was its structure that of two rapidly interconverting standard carbocations, or a single bridged "non-classical" one that broke the previously accepted rules? George Olah and H. C. Brown, Nobel laureates both, were on opposite sides of that one, but every physical organic chemist from about 1950 to about 1980 probably had to take a stand one way or the other. (It is commonly accepted that Olah's side won, but the arguments got pretty esoteric by the end.) Update: the battle was first joined by Saul Winstein, who did not live to see his proposal vindicated by Olah's spectroscopic studies.
Another one, which came along a few years later, was the "synchronous / asynchronous" mechanism of the Diels-Alder reaction. Do the new bonds in that one form at the same time, or does one form, and then the other? That one involved the physical organic people again, as well as plenty of computational chemists. I stopped following the debate after a while, but I believe that the final reckoning was that most standard Diels-Alder reactions were synchronous, within the limits of detection, but that messing with the electron density of the two reactants could easily push the reaction into asynchronous (or flat-out stepwise) territory.
So why does this level of detail matter? The problem is, chemistry is all about things like bond formation and bond breaking, and about interactions between individual molecules (and parts of molecules) that change the energies of the systems involved. And those things are nothing but picky details, all the way down. Thermodynamics, which runs chemical reactions and runs the rest of the universe, is the most rigorous branch of accounting there is. Totaling up those energies to see which side of the ledger wins out can easily come down to the fate of single water molecules, or even single protons, and you don't get much pickier than that.
This sort of thing is one argument used against the feasibility of molecular nanotechnology. How are we to harness such fine distinctions, at such levels? But it's worth remembering that we ourselves, and every other living creature, are nanotech machines at heart. Our enzymes are constantly breaking bonds, twisting single molecules, altering reaction rates, and generating specific, defined molecular products. If they weren't, we'd fall right over. We eventually fall over anyway, because none of these machines work perfectly. But they work pretty well, and they make our own chemical efforts look like stone axes and deer-bone hammers.
So we may find getting down to this level of things to be a lot of work, and hard to understand, and frustrating to deal with. But that's where we're going to have to be if we're ever going to do real chemistry, the kind that's indistinguishable from magic.
How do enzymes work? People have been trying to answer that, in detail, for decades. There's no point in trying to do it without running down all those details, either, because we already know the broad picture: enzymes work by bringing reactive groups together under extremely favorable conditions so that reaction rates speed up tremendously. Great! But how do they bring those things together, how does their reactivity change, and what kinds of favorable conditions are we talking about here?
And some of this we know, too. You can see, in many enzyme active sites, that the protein is stabilizing the transition state of the reaction, lowering its energy so it's easier to jump over the hump to product. It wouldn't surprise me to see the energies of some starting materials being raised to effect that same barrier-lowering, although I don't know of any examples of that off the top of my head. But even this level of detail raises still more questions: what interactions are these that lower and raise these energies? How much of a price is paid, thermodynamically, to do these things, and how does that break out into entropic and enthalpic terms?
Some of those answers are known, to some degree, in some systems. But still more questions remain. One of the big ones has been the degree to which protein motion contributes to enzyme action. Now, we can see some big conformational changes taking place with some proteins, but what about the normal background motions? Intellectually, it makes sense that enzymes would have learned, over the millennia, to take advantage of this, since it's for sure that their structures are always vibrating. But proving that is another thing entirely.
Modern spectroscopy may have done the trick. This new paper from groups at Manchester and Oxford reports painstaking studies on B-12 dependent ethanolamine ammonia lyase. Not an enzyme I'd ever heard of, that one, but "enzymes I've never heard of" is a rather roomy category. It's an interesting one, though, partly because it goes through a free radical mechanism, and partly because it manages to speed things up by about a trillion-fold over the plain solution rate.
Just how it does that has been a mystery. There's no sign of any major enzyme conformational change as the substrate binds, for one thing. But using stopped-flow techniques with IR spectroscopy, as well as ultrafast time-resolved IR, there seem to be structural changes going on at the time scale of the actual reaction. It's hard to see this stuff, but it appears to be there - so what is it? Isotopic labeling experiments seem to say that these IR peaks represent a change in the protein, not the B12 cofactor. (There are plenty of cofactor changes going on, too, and teasing these new peaks out of all that signal was no small feat).
So this could be evidence for protein motion being important right at the enzymatic reaction itself. But I should point out that not everyone's buying that. Nature Chemistry had two back-to-back articles earlier this year, the first advocating this idea, and the second shooting it down. The case against this proposal - which would modify transition-state theory as it's usually understood - is that there can be a number of conformations with different reactivities, some of which take advantage of quantum-mechanical tunneling effects, but all of which perform "traditional" transition-state chemistry, each in their own way. Invoking fast motions (on the femtosecond time scale) to explain things is, in this view, a layer of complexity too far.
I realize that all this can sound pretty esoteric - it does even to full-time chemists, and if you're not a chemist, you probably stopped reading quite a while ago. But we really do need to figure out exactly how enzymes do their jobs, because we'd like to be able to do the same thing. Enzymatic reactions are, in most cases, so vastly superior to our own ways of doing chemistry that learning to make them to order would revolutionize things in several fields at once. We know this chemistry can be done - we see it happen, and the fact that we're alive and walking around depends on it - but we can't do it ourselves. Yet.
After mentioning the natural product Shootmenowicene yesterday, I note that See Arr Oh is reporting that the total synthesis of this compound is now down to only 47 steps. I think the purity could be improved with a prep GC of one of the early intermediates (or perhaps a spinning band distillation), but that's about all his synthesis is missing. . .
Here are two papers in Angewandte Chemie on "rewiring" synthetic chemistry. Bartosz Grzybowski and co-workers at Northwestern have been modeling the landscape of synthetic organic chemistry for some time now, looking at how various reactions and families of reactions are connected. Now they're trying to use that information to design (and redesign) synthetic sequences.
This is a graph theory problem, and a rather large one, if you assign chemical structures to nodes and transformations to the edges connecting them. And it quickly turns into one that is rather computationally demanding, as all these "find the shortest path" problems are, but that doesn't mean that you can't run through a lot of possibilities and find a lot of things that you couldn't by eyeballing things. That's especially true when you add in the price and availability of the starting materials, as the second paper linked above does. If you're a total synthesis chemist, and you didn't feel at least a tiny chill running down your back, you probably need to think about the implications of all this again. People have been trying to automate synthetic chemistry planning since the days of E. J. Corey's LHASA program, but we're getting closer to the real deal here:
We first consider the optimization of syntheses leading to one specified target molecule. In this case, possible syntheses are examined using a recursive algorithm that back-propagates on the network starting from the target. At the first backward step, the algorithm examines all reactions leading to the target and calculates the minimum cost (given by the cost function discussed above) associated with each of them. This calculation, in turn, depends on the minimum costs of the associated reactants that may be purchased or synthesized. In this way, the cost calculation continues recursively, moving backward from the target until a critical search depth is reached (for algorithm details, see the Supporting Information, Section 2.3). Provided each branch of the synthesis is independent of the others (good approximation for individual targets, not for multiple targets), this algorithm rapidly identifies the synthetic plan which minimizes the cost criterion.
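The recursive back-propagation in that passage can be sketched in a few lines of Python. To be clear, the toy reaction network, the prices, and the molecule names below are all invented for illustration, and the sketch bakes in the same approximation the authors flag: each branch is costed independently, so a starting material used in two branches gets bought twice.

```python
# Minimal sketch of recursive min-cost retrosynthesis on a reaction
# network. REACTIONS maps a product to the ways of making it; each way
# is (list of required reactants, cost of that reaction step). PRICES
# lists purchasable starting materials. All data here is hypothetical.

REACTIONS = {
    "target": [(["intA", "intB"], 2.0), (["intC"], 5.0)],
    "intA":   [(["sm1"], 1.0)],
    "intB":   [(["sm1", "sm2"], 1.5)],
    "intC":   [(["sm3"], 0.5)],
}
PRICES = {"sm1": 1.0, "sm2": 2.0, "sm3": 10.0}

def min_cost(molecule, depth=5, memo=None):
    """Cheapest way to obtain a molecule: buy it outright, or make it
    from the cheapest reactant set, recursing backward from the target
    until a fixed search depth is reached."""
    if memo is None:
        memo = {}
    key = (molecule, depth)
    if key in memo:
        return memo[key]
    best = PRICES.get(molecule, float("inf"))  # option 1: purchase
    if depth > 0:                              # option 2: synthesize
        for reactants, step_cost in REACTIONS.get(molecule, []):
            cost = step_cost + sum(min_cost(r, depth - 1, memo)
                                   for r in reactants)
            best = min(best, cost)
    memo[key] = best
    return best

# Route via intA + intB: 2.0 + (1.0 + 1.0) + (1.5 + 1.0 + 2.0) = 8.5,
# beating the intC route at 5.0 + (0.5 + 10.0) = 15.5.
print(min_cost("target"))  # 8.5
```

The branch-independence approximation shows up right in the arithmetic: sm1 is priced into both intA and intB, which is fine for one target but, as the quote notes, wrong once multiple targets start sharing intermediates.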
That said, how well does all this work so far? Grzybowski owns a chemical company (ProChimia), so this work examined 51 of its products to see if they could be made easily and/or more cheaply. And it looks like this optimization worked, partly by identifying new routes and partly by sending more of the syntheses through shared starting materials and intermediates. The company seems to have implemented many of the suggestions.
The other paper linked in the first paragraph is a similar exercise, but this time looking for one-pot reaction sequences. They've added filters for chemical compatibility of functional groups, reagents, and solvents (miscibility, oxidizing versus reducing conditions, sensitivity to water, acid/base reactions, hydride reagents versus protic conditions, and so on). The program tries to get around these problems, when possible, by changing the order of addition, and can also evaluate its suggestions versus the cost and commercial availability of the reagents involved.
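The compatibility screen itself is conceptually simple: a two-step sequence survives only if no condition of one step clashes with the other. Here's a hypothetical sketch of that kind of filter - the tags and the conflict table are my own illustrative stand-ins, not the authors' actual rule set.

```python
# Illustrative incompatibility pairs (the real system encodes many more:
# solvent miscibility, water sensitivity, redox clashes, and so on).
CONFLICTS = {
    ("oxidizing", "reducing"),
    ("strong_acid", "strong_base"),
    ("hydride", "protic_solvent"),
    ("hydride", "aqueous"),
    ("water_sensitive", "aqueous"),
}

def conflicting(tags_a, tags_b):
    """True if any condition tag of one step clashes with the other,
    checking both orderings of each pair."""
    return any((x, y) in CONFLICTS or (y, x) in CONFLICTS
               for x in tags_a for y in tags_b)

def one_pot_ok(step1_tags, step2_tags):
    # For sequential addition, step 2's conditions still meet whatever
    # is left over from step 1, so the check runs across both steps.
    return not conflicting(step1_tags, step2_tags)

# An aqueous-base coupling followed by a hydride reduction gets flagged:
print(one_pot_ok({"aqueous", "strong_base"}, {"hydride"}))  # False
```

Reordering the additions, as the program tries to do, amounts to re-running this check with the steps swapped or with the offending reagent quenched before the next one goes in.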
Of course, the true value of any theoretical–chemical algorithm is in experimental validation. In principle, the method can be tested to identify one-pot reactions from among any of the possible 1.8 billion two-step sequences present within the NOC (Network of Organic Chemistry). While our algorithm has already identified over a million (and counting!) possible sequences, such randomly chosen reactions might be of no real-world interest, and so herein we chose to illustrate the performance of the method by “wiring” reaction sequences within classes of compounds that are of popular interest and/or practical importance.
They show a range of reaction sequences involving substituted quinolines and thiophenes, with many combinations of halogenation/amine displacement/Suzuki/Sonogashira reactions. None of these are particularly surprising, but it would have been quite tedious to work out all the possibilities by hand. Looking over the yields (given in the Supporting Information), it appears that in almost every case the one-pot sequences identified by the program are equal to or better than the stepwise yields (sometimes by substantial margins). It doesn't always work, though:
Having discussed the success cases, it is important to outline the pitfalls of the method. While our algorithm has so far generated over a million structurally diverse one-pot sequences, it is clearly impossible to validate all of them experimentally. Instead, we estimated the likelihood of false-positive predictions by closely inspecting about 500 predicted sequences and cross-checking them against the original research describing the constituent/individual reactions. In few percent of cases, the predicted sequences turned out to be unfeasible because the underlying chemical databases did not report, or reported incorrectly, the key reagents or reaction conditions present in the original reports. This result underscores the need for faithful translation of the literature data into chemical database content. A much less frequent source of errors (only few cases we encountered so far) is the algorithm's incomplete “knowledge” of the mechanistic details of the reactions to be wired. One illustrative example is included in the Supporting Information, Section 5, where a predicted sequence failed experimentally because of an unforeseen transformation of Lawesson's reagent into species reactive toward one of the intermediates. We recognize that there is an ongoing need to improve the filters/rules that our algorithm uses; the goal is that such improvements will ultimately render the algorithm on a par with the detailed synthetic knowledge of experienced organic chemists. . .
And you know, I don't see any reason at all why that can't happen, or why it won't. It might be this program, or one of its later versions, or someone else's software entirely, but I truly don't see how this technology can fail. Depending on the speed with which that happens, it could transform the way that synthetic chemistry is done. The software is only going to get better - every failed sequence adds to its abilities to avoid that sort of thing next time; every successful one gets a star next to it in the lookup table. Crappy reactions from the literature that don't actually work will get weeded out. The more it gets used, the more useful it becomes. Even if these papers are presenting the rosiest picture possible, I still think that we're looking at the future here.
Put all this together with the automated random-reaction-discovery work that I've blogged about, and you can picture a very different world, where reactions get discovered, validated, and entered into the synthetic armamentarium with less and less human input. You may not like that world very much - I'm not sure what I think about it myself - but it's looking more and more likely to be the world we find ourselves in.
Now here's one of those structures that you don't see very often in a drug molecule. It wasn't intended to be a drug, though - it's a photolabel tool compound based on the general anesthetic mephobarbital, which is what that trifluoromethyldiazirine group is doing in there. (When those are exposed to light, nitrogen gas takes off, leaving behind a reactive carbene that generally attacks something nearby as quickly as possible).
But when the two enantiomers were tested, it turns out that one of them is about as potent as the best compounds in its class, while the other (the R enantiomer) is ten-fold better. And when used for its intended purpose, as a photolabeling agent, it does show up stuck to specific sites on human GABA receptors, as hoped. So this should provide some interesting information about barbiturate binding, although I rather doubt that anyone's going to try to develop it into a general anesthetic all on its own.
In a related topic, note that the model for this series, mephobarbital itself, is disappearing from the market. It's one of those ancient compounds that never really went through the modern regulatory process, but the FDA has stated that it's not going to let it be grandfathered in. Its manufacturer, Lundbeck, said earlier this year (PDF) that it saw no path forward other than a completely new NDA filing, which didn't seem feasible, so it was abandoning the product. Existing stocks have expired by now, so mephobarbital is no more, at least in the US.
Looks like my "Things I Won't Work With" series (and John Clark's book "Ignition") has inspired science fiction author Charles Stross - check out this story, and prepare to see several compounds that you never expected to see mixed together (!)
I wrote here about the Cronin lab at Glasgow and their work on using 3-D printing technology to make small chemical reactors. Now there's an article on this research in the Observer that's getting some press attention (several people have e-mailed it to me). Unfortunately, the headline gets across the tone of the whole piece: "The 'Chemputer' That Could Print Out Any Drug".
To be fair, this was a team effort. As the reporter notes, Prof. Cronin "has a gift for extrapolation", and that seems to be a fair statement. I think that such gifts have to be watched carefully in the presence of journalists, though. The whole story is a mixture of wonderful-things-coming-soon! and still-early-days-lots-of-work-to-be-done, and these two ingredients keep trying to separate and form different layers:
So far Cronin's lab has been creating quite straightforward reaction chambers, and simple three-step sequences of reactions to "print" inorganic molecules. The next stage, also successfully demonstrated, and where things start to get interesting, is the ability to "print" catalysts into the walls of the reactionware. Much further down the line – Cronin has a gift for extrapolation – he envisages far more complex reactor environments, which would enable chemistry to be done "in the presence of a liver cell that has cancer, or a newly identified superbug", with all the implications that might have for drug research.
In the shorter term, his team is looking at ways in which relatively simple drugs – ibuprofen is the example they are using – might be successfully produced in their 3D printer or portable "chemputer". If that principle can be established, then the possibilities suddenly seem endless. "Imagine your printer like a refrigerator that is full of all the ingredients you might require to make any dish in Jamie Oliver's new book," Cronin says. "Jamie has made all those recipes in his own kitchen and validated them. If you apply that idea to making drugs, you have all your ingredients and you follow a recipe that a drug company gives you. They will have validated that recipe in their lab. And when you have downloaded it and enabled the printer to read the software it will work. The value is in the recipe, not in the manufacture. It is an app, essentially."
What would this mean? Well for a start it would potentially democratise complex chemistry, and allow drugs not only to be distributed anywhere in the world but created at the point of need. It could reverse the trend, Cronin suggests, for ineffective counterfeit drugs (often anti-malarials or anti-retrovirals) that have flooded some markets in the developing world, by offering a cheap medicine-making platform that could validate a drug made according to the pharmaceutical company's "software". Crucially, it would potentially enable a greater range of drugs to be produced. "There are loads of drugs out there that aren't available," Cronin says, "because the population that needs them is not big enough, or not rich enough. This model changes that economy of scale; it could make any drug cost effective."
Not surprisingly Cronin is excited by these prospects, though he continually adds the caveat that they are still essentially at the "science fiction" stage of this process. . .
Unfortunately, "science fiction" isn't necessarily a "stage" in some implied process. Sometimes things just stay fictional. Cronin's ideas are not crazy, but there are a lot of details between here and there, and if you don't know much organic chemistry (as many of the readers of the original article won't), then you probably won't realize how much work remains to be done. Here's just a bit; many readers of this blog will have thought of these and more:
First, you have to get a process worked out for each of these compounds, which will require quite a bit of experimentation. Not all reagents and solvents are compatible with the silicone material that these microreactors are being fabricated from. Then you have to ask yourself, where do the reagents and raw materials come in? Printer cartridges full of acetic anhydride and the like? Is it better to have these shipped around and stored than it is to have the end product? In what form is the final drug produced? Does it drip out the end of the microreactor (and in what solvent?), or is it a smear on some solid matrix? Is it suitable for dosing? How do you know how much you've produced? How do you check purity from batch to batch - in other words, is there any way of knowing if something has gone wrong? What about medicines that need to be micronized, coated, or treated in the many other ways that pills are prepared for human use?
And those are just the practical considerations - some of them. Backing up to some of Prof. Cronin's earlier statements, what exactly are those "loads of drugs out there that aren't available because the population that needs them is not big enough, or not rich enough"? Those would be ones that haven't been discovered yet, because it's not like we in the industry have the shelves lined with compounds that work that we aren't doing anything with for some reason. (Lots of people seem to think that, though). Even if these microreactors turn out to be a good way to make compounds, though, making compounds has not been the rate-limiting step in discovering new drugs. I'd say that biological understanding is a bigger one, or (short of that), just having truly useful assays to find the compounds you really want.
Cronin has some speculations on that, too - he wonders about the possibility of having these microreactors in some sort of cellular or tissue environment, thus speeding up the whole synthesis/assay loop. That would be a good thing, but the number of steps that have to be filled in to get that to work is even larger than for the drug-manufacture-on-site idea. I think it's well worth working on - but I also think it's well worth keeping out of the newspapers just yet, too, until there's something more to report.
I'm pressed for time this morning, so I wanted to put up a quick link to Adam Feuerstein's thoughts on media embargoes of scientific results (and how they're becoming increasingly useless).
And I also wanted to note this odd bit of news: I'll bet you thought that fluorine, elemental gaseous fluorine, wasn't found in nature. Too reactive, right? But we're all wrong: it's found in tiny cavities in an unusually stinky mineral. And part (or all) of that smell is fluorine itself, which I'll bet that very few people have smelled in the lab. I hope not, anyway.
A lot of natural product structures have been misassigned over the years. In the old days, it was a wonder when you were able to assign a complex one at all. Structure determination, pre-NMR, could be an intellectual challenge at the highest level, something like trying to reconstruct a position on a chess board in the dark, based on acrostic clues in a language you don't speak. The advent of modern spectroscopy turned on the lights, which is definitely a good thing, but many people who'd made their careers under the old system missed the thrill of the old hunt when it was gone.
But even now, it's possible to get structures wrong - even with high-field 2-D NMR, even with X-ray crystallography. Natural products can be startlingly weird by the standards of human chemistry, and I still have a lot of sympathy for anyone who's figuring them out. My sympathy goes only so far, though.
Specifically, this case. I have to agree with the BRSM Blog, which says: "I have to say that I think I could have done a better job myself. Drunk." Think that's harsh? Check out the structures. The proposed structure had two naphthalenes, with two methoxys and four phenols. But the real natural product, as it turns out, has one methoxy and one phenol. And no naphthyls. And four flipping bromine atoms. Why the vengeful spirit of R. B. Woodward hasn't appeared, shooting lightning bolts and breaking Scotch bottles over people's heads, I just can't figure.
It's not anything to shake the earth, but I'm actually happy to see new variations being discovered for ancient reactions like the Friedel-Crafts. It makes sense that an activated amide could participate in the reaction, but it looks like no one's ever quite explored the idea like this.
And yes, I know that a large Friedel-Crafts can be a pain, what with all that aluminum gunk. The biggest one I ever ran used protic conditions (methanesulfonic acid), so I haven't had the complete experience, but I've still managed to work my way through some gooey aluminum milkshakes. But it's still a useful reaction on the bench scale.
And somehow, I can read a paper like this one and be pleased, while a paper on yet another way to dehydrate an oxime to a nitrile makes me roll my eyes. I'm still trying to work out why that might be - a bit broader scope? More possible utility? Just the fact that this was something that no one had quite thought of, as opposed to another way to take the same starting material to the same product? I should figure out what my boundaries are.
Those of you who are fans of high-throughput reaction discovery have another paper to check out - and those who aren't have another reason to grit your teeth. (Previous examples here, here, and here). The authors, a collaboration between the Bellomo lab at Penn and the Merck process group at Rahway, have gotten the reaction screen size down to 20 microliters with 1 mg of compound, which allows you to go through 96-well plates pretty rapidly.
Their test bed was a pyrimidone synthesis reaction. They screened 475 reaction conditions (95 different additives/catalysts in 5 different solvents). Each of them got one hour at 60 °C to show what it could do, and the entire analysis was completed in one day. A phenanthroline/copper bromide catalyst in dioxane showed the best results from that run, so it was taken to a separate series of experiments, to see how low the loading could go, what similar solvents might work out, how low they could push the reaction temperature, and so on. As it turned out, 2-methyl THF at RT overnight with 5% catalyst gave an 84% yield, which already represented a significant improvement over the known conditions (which used no catalyst at 140 °C).
Moving to different substrates, they found that these conditions gave product each time, but in varying yields. A further catalyst screen, with 112 phosphine ligands, gave another set of conditions that could also be applied to the diverse substrates. A re-screen of solvents together with the best phosphine gave (along with the initial optimization conditions) high-yielding reactions for each of the new substrates. There was no set of one-size-fits-all conditions, though, which certainly fits with organic synthesis as I know it.
With these in hand, they did some work on the mechanism. It appears that the copper is participating in a single-electron transfer reaction, but further details aren't clear. It's not the sort of thing that you would have been able to think your way to on a blackboard, which (to me) is the whole point of doing chemistry this way. As the authors put it:
We envision that similar, generally useful platform tools will soon become more widely available, thus dramatically impacting chemistry development and enabling increased access to chemical diversity and lower-cost synthesis. Most importantly, we believe that such platforms will lead to the discovery of new and potentially useful chemical reactivity and reaction mechanisms.
Exactly. We should be finding easier ways to make compounds, and new ways to make compounds we've never been able to prepare. I think that searching for them in this way is an efficient way to do that, and will also open up new areas of research as we stumble across things we never realized were even there. If there's a downside here, I'm not seeing it.
Here's an excellent article, with copious references, tracing the history of what we now know as the metal-catalyzed coupling field. Victor Snieckus of Queen's University, Thomas Colacot (Johnson Matthey) and co-authors go back to the Wurtz and Glaser reactions of the 1850s and 60s, up through the Ullmann reaction (1891, and still very much with us) and Kharasch and Cadiot-Chodkiewicz couplings (1940s) before breaking into the world of palladium with the Wacker oxidation.
Along the way, one learns that the discoverer of palladium (Wollaston) could never interest anyone in the metal, and almost all of it that he'd extracted was still sitting on the shelf, unsold, at his death. Time vindicated him, and how - it's now perhaps the most essential catalytic metal in the world. The late 1960s were a turning point:
Entry of Richard Heck: Following post-doctoral studies, Heck accepted a position at Hercules Powder Co where he was afforded freedom that is seldom experienced by the modern industrial chemist. Briefed with the task of “doing something with transition metals,” Heck investigated the chemistry of cobalt carbonyl complexes. Although this work generated many interesting observations, finding profitable applications for his research proved difficult. Inspired by his colleague Pat Henry's work on the Wacker oxidation, Heck's attention turned in the direction of arylpalladium chemistry.
He tried Wacker-type conditions with other reagents around to try to intercept the palladium intermediate, and organomercurials obliged with an immediate reaction. The story from there is a trip through a good swath of the periodic table, and the development of an awful lot of knowledge and expertise in metal complexes. Enter then Mizoroki, Kumada, Sonogashira, Negishi, Stille, Suzuki and many others. It's a long, complex story, but this paper should serve as the definitive overview, and an excellent look at how chemistry (and science in general) go about discovering and developing things.
I've written here before about reaction discovery schemes, and the reaction to those reactions has been, well, mixed. I like them, some other people like them, but some other people are quite offended by the "random search" mentality behind these ideas.
Well, prepare yourselves for another technology for exploring the wild blue yonder. A new paper in Angewandte Chemie from a group at the CEA (Gif sur Yvette, France) outlines an immunological detection scheme. They have antibodies to an imidazole derivative, and antibodies to a phenolic moiety as well. So both structures are attached to a range of functional groups and combined with heat and/or metal catalysts to see if anything happens. A sandwich assay at the end with the different antibodies gives you a yellow color only if a compound has been formed that has both ends present; that is, if a coupling reaction of some sort has occurred.
They ran 3360 reactions, each on a 100 nmol scale (there's the sensitivity of the antibodies for you). Two new reactions were discovered - an isourea synthesis (which can lead to benzoxazoles) and an alkyne reaction leading to thiazole derivatives. Neither of those is going to set the world of organic chemistry ablaze, but as a proof of concept, I'm convinced that this technique can work. So what do you do with it next?
One plan looks to be discovering new bioorthogonal reactions, couplings that can take place either inside or on the surface of living cells. The immunological detection is so sensitive that products can be teased out of all sorts of messy mixtures, apparently even cell lysates. I'd also encourage them to try some other conditions, such as various photochemical setups, to see what might be out there - it's a much less explored field than copper-catalyzed coupling reactions.
Like it or not, I think we're going to be seeing more of this sort of work. We might as well make the most of it!
Now these are the funkiest structures I've seen in quite a while. I won't spoil the surprise; if you're an organic chemist, go ahead and click on the link. This is one of those "No one's made compounds like this, so let's see if they do anything" papers, and I'd say that if you're going to do that sort of thing, you should go pretty far off the beaten path. That they have.
These compounds are - not surprisingly - said to be cytotoxic, with activity against a range of cancer cell lines. A couple of passes through the paper, and I haven't found any normal cells used as controls for all that cytotoxicity. Sad to say, the betting would be that there's no window at all. But at least I've seen a class of compounds that I'll bet has never made it into J. Med. Chem. before.
I'd be interested in hearing people's thoughts on this technology, from the Cronin group at the University of Glasgow. (Here's a press release, and a piece from Chemistry World if you can't get in to Nature Chemistry).
They're adapting 3-D printing technology to make small reaction vessels out of silicone polymer. The design of these can be changed to directly alter the mixing, timing, and stoichiometry of reactions, and they've also gone as far as incorporating palladium catalyst into the walls of the newly formed reactors, making them active for hydrogenation reactions.
I can see this eventually being useful for multistep flow chemistry, a micro-scale analog of the sorts of systems that Steve Ley's group has published on. Perhaps an array of identical vessels could be used in parallel for scale-up if the design is taking advantage of the small size of the chambers (again, as is done in industrial flow applications). The speed with which new doped polymeric materials could be prototyped seems to be a real advantage as well, which should allow experimentation with immobilized reagents and catalysts which would be incompatible with each other in solution. Other ideas?
We've all been hearing for a while about "virtual biotechs". The term usually refers to a company with only a handful of employees and no real laboratory space of its own. All the work is contracted out. That means that what's left back at the tiny headquarters (which in a couple of cases is as small as one person's spare bedroom) is the IP. What else could it be? There's hardly any physical property at all. It's as pure a split as you can get between intellectual property (ideas, skills, actual patents) and everything else. Here's a 2010 look at the field in San Diego, and here's a more recent look from Xconomy. (I last wrote about the topic here).
Obviously, this gets easier to do earlier in the whole drug development process, where less money is involved. That said, there are difficulties at both ends. A large number of these stories seem to involve people who were at a larger company when it ran out of money, but still had some projects worth looking at. The rest of the cases seem to come out of academia. In other words, the ideas themselves (the key part of the whole business) were generated somewhere with more infrastructure and funding. Trying to get one of these off the ground otherwise would be a real bootstrapping problem.
And at the other end of the process, getting something all the way through the clinic like this also seems unlikely. The usual end point is licensing out to someone with more resources, as this piece from Xconomy makes clear:
In the meantime, one biotech model gaining traction is the single asset, infrastructure-lite, development model, which deploys modest amounts of capital to develop a single compound to an early clinical data package which can be partnered with pharma. The asset resides within an LLC, and following the license transaction, the LLC is wound down and distributes the upfront, milestone and royalty payments to the LLC members on a pro rata basis. The key to success in this model is choosing the appropriate asset/indication – one where it is possible to get to a clinical data package on limited capital. This approach excludes many molecules and indications often favored by biotech, and tends to drive towards clinical studies using biomarkers – directly in line with one of pharma’s favored strategies.
This is a much different model, of course, than the "We're going to have an IPO and become our own drug company!" one. But the chances of that happening have been dwindling over the years, and the current funding environment makes it harder than ever, Verastem aside. It's even a rough environment to get acquired in. So licensing is the more common path, and (as this FierceBiotech story says), that's bound to have an effect on the composition of the industry. People aren't holding on to assets for as long as they used to, and they're trying to get by with as little of their own money as they can. Will we end up with a "field of fireflies" model, with dozens, hundreds of tiny companies flickering on and off? What will the business look like after another ten years of this - better, or worse?
Update: for the non-chemists in the audience who are wondering why one doesn't stroll in as advertised, check out what happens when you deal with the nastier end of fluorine chemistry. This new chemistry isn't anything like those examples - thank goodness - but it'll give you some idea of why we respect and fear the fluorine.
C&E News has an article on some of the recent fluorination methods that have been appearing in the literature. (Some of these have come up on this site here, here, and here).
These methods are all quite interesting (I've tried some of them out myself, with success), but what I also found interesting was the sociological angle that the article brought in. Organofluorine chemistry has not, over the years, been the sort of thing that one takes up lightly, for a lot of good reasons. Some of the real advances in the field have come from making it more accessible to more chemists. Very few people will use elemental fluorine other than at near-gunpoint, and some of the other classic reagents are still quite unfriendly, tending to leave cursing chemists swearing never to touch them again.
But making the field more open makes it, well, more open. And some of the people who've been there a while aren't quite sure what to make of the newcomers. They don't always cite the literature in appropriate depth, which is a real concern, and there can be a general feeling that they haven't paid their fluorine dues. (But the whole point is to keep people from paying those in the first place).
Since I'm not having to make my reputation discovering fluorination conditions, though, I'm just happy to deal with the results of all this work, both from the hardy pioneers as well as from the flashy new immigrants. These are useful reactions, and the rest of us are glad to have 'em.
Courtesy of C&E News, here's an interesting look inside the Chinese labs of HEC Pharm, a company making APIs and generics. The facilities look good. I have to say, that's an awful lot of HPLC capacity, starting at 0:41.
The idea of company housing, though, is a bit harder to get used to. . .
Whoever's behind the Journal of Apocryphal Chemistry is trying to do everyone a good deed before we get into allergy season. After detailing the ever-more-stringent controls on the sale of pseudoephedrine, they propose a synthetic route based on a more readily available starting material: methamphetamine.
A quick search of several neighborhoods of the United States revealed that while pseudoephedrine is difficult to obtain, N-methylmethamphetamine can be procured at almost any time on short notice and in quantities sufficient for synthesis of useful amounts of the desired material. Moreover, according to government statistics, N-methylmethamphetamine is becoming an increasingly attractive starting material for pseudoephedrine, as the availability of N-methylmethamphetamine has remained high while prices have dropped and purity has increased. We present here a convenient series of transformations using reagents which can be found in most well stocked organic chemistry laboratories. . .
Their route, based on a 1985 paper in J. Chem. Soc. Chem. Comm., is not exactly trailer-park chemistry, though. (I note that they have the reference a bit wrong as well; there was no plain J. Chem. Soc. in 1985). It involves a chromium carbonyl complex of the aryl ring, formation of a chiral lithium dianion, and oxidation of that with MoOPH, which would give you pseudoephedrine after decomplexation. There's no way to tell if these reactions have actually been run, of course. Based on the literature precedent, it might work, although I'd be worried about maintaining the chirality of the dianion. (For what it's worth, the authors are also aware of this problem, and claim that the selectivity was unaffected).
Their larger point stands. I look forward to seeing more from this paper's authors, O. Hai and I. B. Hakkenshit. I see less interesting stuff in my RSS feed every day of the week.
Stuart Cantrill has a post on one of those vast dendrimer structures - you know, those mandala-like things that weigh as much as a beer truck. He says that if you can draw the structure on his page in ChemDraw (or the like) in under three hours, you are clearly a wonder-worker.
He's asking on his Twitter feed for examples of the worst chemical structure anyone's had to draw, so I thought I'd throw the same question out to the crowd. You're going to have had to have led an evil past life to be able to beat his dendrimer, though.
I don't know how many of you out there like to form azides, but if you do, you've probably used (or thought about using) imidazole-1-sulfonyl azide hydrochloride. This reagent appeared in Organic Letters a few years ago as a safe-to-handle shelf-stable azide transfer reagent, and seems to have found popularity. (I've used it myself).
So it was with some alarm that I noted this new paper on the stability and handling characteristics of the reagent. It's a collaboration between the University of Western Australia (where the reagent was developed, partly by the guy whose lab bench I took over in grad school back in 1983, Bob Stick), the University of British Columbia, and the Klapötke group at Munich. That last bunch is known to readers of "Things I Won't Work With", as experts in energetic materials, and when I saw that name I knew I'd better read the paper pronto.
As it turns out, the hydrochloride isn't as well-behaved as everyone thought. It's impact-sensitive, for one thing, and not shelf-stable. The new paper mentions that it decomposes with an odor of hydrazoic acid on storage - you don't want odors of hydrazoic acid, believe me - and I thought while reading that, "Hmm. My bottle of the stuff is white crystalline powder; that's strange." But then I realized that I hadn't looked at my bottle for a few months. And as if by magic, there it was, turning dark and gooey. I had the irrational thought that the act of reading this paper had suddenly turned my reagent into hazardous waste, but no, it's been doing that slowly on its own.
So if you have some of this reagent around, take care. The latest work suggests that the hydrogensulfate salt, and especially the fluoroborate, are less sensitive and more stable alternatives to the hydrochloride, and I guess I'll have to make some at some point. (They also made the perchlorate - just for the sake of science, y'know - and report, to no one's surprise, that it "should not be prepared by those without expertise in handling energetic materials"). But it needs no ghost come from the grave to tell us this.
So, back to my lab and my waste-disposal problem! And here's a note on the literature. We have the original prep of the reagent, a follow-up note on stability problems, and this latest paper on alternatives. But when you go back to the original paper, there is no mention of the later hazard information. Shouldn't there be a note, a link, or something? Why isn't there? Anyone at Organic Letters or the ACS care to comment on that?
Update: I've successfully opened my bottle, with tongs and behind a blast shield, just to be on the safe side, and defanged the stuff by dilution.
Here's a YouTube look at a periodic table, laid out with high-quality samples of the real elements. I want one, although I'm willing to compromise on some of the radioactive items; completeness can be taken a bit too far.
Looking through the literature this morning, I thought about another technique that, although you see it published on, no organic chemist I know has ever actually used: electrochemistry. There are all sorts of odd reactions that can apparently be made to go at electrode surfaces, but what synthetic organic chemist has ever run one, besides someone in a group that concentrates on publishing papers on electrochemical reactions? Other than a few inconclusive cyclic voltammetry scans back in 1984, I sure haven't.
That's more harsh-sounding than I intended. I definitely don't think that the technique is useless, but it surely doesn't get used much. One problem is that there are so many different conditions - solvents, electrolytes, electrode materials, voltage/current regimens. If you've never done the stuff before, it's hard to know where to start. And that leads to the next problem, which is that so much of the equipment in the field has been home-made. That makes the activation barrier to trying it yourself that much higher: do you want to do this reaction enough to want to build your own apparatus and troubleshoot it? Or do you have something else to do? If someone sold a standard electrochemistry kit (controller box to run different conditions, set of different electrode materials, etc.), that would free some people up to find out what it could do for them, rather than wondering if they've built a decent setup.
Then there's the scale-up problem. When you're working at a surface to do your chemistry, that's always going to be a concern. What's the throughput? Enough to meet your needs? And if not, how exactly are you going to increase it, without having to rebuild the whole apparatus? There's probably a way to integrate flow chemistry with electrochemistry, which might solve that problem. But that mixture is, as yet, still in the realm of a few dedicated tinkerers - which is what one could say, sometimes, of the whole electrochemical field.
For a really stunning electron micrograph of the thinnest possible layer of glass, see here. (If you don't have journal access, here's a release with some details). What's even more striking is that the semi-random arrangement of atoms is basically an exact match of a hypothesis from 1932 by W. H. Zachariasen at Chicago.
And maybe it's just me, but high-resolution images of molecular structure like this still give me the shivers. I mean, I've seen all sorts of electron density maps from X-ray crystallography, but somehow this sort of thing gives one a more direct feeling of looking at the individual atoms. And for some reason, that seems like something Man Was Not Meant to Do - perhaps it's all those old elementary school textbooks that told me that atoms could never be seen. (Then again, philosopher Mortimer Adler made the same assumption, as I found to my surprise when I read his Ten Philosophical Mistakes, on page 184 if you're keeping score at home.)
Noted chem-blogger Milkshake seems to have had a close call with a fire started by a tiny potassium hydride residue. It looks like he made it through without serious injury, but that sort of thing will definitely shake a person up.
I hate potassium hydride. Its relative sodium hydride is a common reagent, but it's much tamer (and even so, can cause interesting fires - I knew someone who ignited a heap of it on the pan of a balance while he was weighing it out, which slowed things down a bit). Sodium hydride is usually sold as a 60% dispersion, a dark grey powder soaked with mineral oil to keep it from deteriorating too quickly (and to keep it from setting everything on fire). You can buy 95% sodium hydride, the dry stuff, and there are people who swear by it, but I tend to swear at it. You never know if it's been stored properly; you may be adding a slug of sodium hydroxide to your reaction without knowing it. And there's the fire part. You'll want to move briskly if you're using the 95%, and I'd pick a day when the humidity is low.
But potassium hydride, that's another beast entirely. It makes the sodium compound look like corn meal, in terms of how forgiving it is. You can't get away with the clumpy oily powder form at all - traditionally, KH is sold as a gooey dispersion of grey powder sitting under a few inches of mineral oil. If it's well dispersed, it's supposed to be 35%. You shake the stuff up until you think it's evenly mixed, then pipet out the amount of gunk that corresponds to the KH contained therein. Sure you do. What actually happens is that you pipet out the stuff, noticing while you do that it's already settling out inside the pipet, thereby to clog it up when you try to transfer it. No fun.
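For anyone who hasn't had the pleasure, the "amount of gunk that corresponds to the KH" is a one-line calculation: divide the mass of KH you need by the weight fraction of the dispersion. Here's a minimal sketch in Python (the function name is mine, not from any vendor, and it assumes the dispersion really is uniformly mixed at 35%, which, as noted above, is a generous assumption):

```python
# Back-of-the-envelope dosing math for KH supplied as a 35 wt% dispersion
# in mineral oil. Assumes the dispersion is uniformly mixed, which in
# practice it rarely stays for long.

KH_MW = 40.11  # g/mol, molecular weight of potassium hydride

def dispersion_needed(mmol_kh: float, wt_fraction: float = 0.35) -> float:
    """Grams of dispersion to transfer to deliver `mmol_kh` of actual KH."""
    grams_kh = mmol_kh / 1000.0 * KH_MW
    return grams_kh / wt_fraction

# For 10 mmol of KH, you'd pipet out roughly 1.15 g of dispersion:
print(round(dispersion_needed(10), 2))
```

The same arithmetic works for sodium hydride's 60% dispersion by swapping in `wt_fraction=0.60` and NaH's molecular weight.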
It's becoming available now dispersed in a block of wax, which is not such a bad idea at all. Wax isn't any harder to get out of your reaction than oil is, and you can carve off chunks and weigh them without so many what-am-I-doing moments. But Milkshake worries that this ease of use will lead to more fires during workups (which is where his reaction ran into trouble), and he may well be right. If you're going to use KH, don't let your guard down.
A new paper in Angewandte Chemie tries to open another front in relations between academic and drug industry chemists. It's from several authors at GSK-Stevenage, and it proposes something they're calling "Lead-Oriented Synthesis". So what's that?
Well, the paper itself starts out as a quick tutorial on the state and practice of medicinal chemistry. That's a good plan, since Angewandte Chemie is not primarily a med-chem journal (he said with a straight face). Actually, it has the opposite reputation, a forum where high-end academic chemistry gets showcased. So the authors start off by reminding the readership of what drug discovery entails. And although we've had plenty of discussions around here about these topics, I think that most people can agree on the main points laid out:
1. Physical properties influence a drug's behavior.
2. Among those properties, logP may well be the most important single descriptor.
3. Most successful drugs have logP values between 1 and perhaps 4 or 5. Pushing the lipophilicity end of things is, generally speaking, asking for trouble.
4. Since optimization of lead compounds almost always adds molecular weight, and very frequently adds lipophilicity, lead compounds are better found in (and past) the low ends of these property ranges, to reduce the risk of making an unwieldy final compound.
As the authors take pains to say, though, there are many successful drugs that fall outside these ranges. But many of those turn out to have some special features - antibacterial compounds (for example) tend to be more polar outliers, for reasons that are still being debated. There is, though, to my knowledge, no similar class of successful drugs that are less polar than usual. If you're starting a program against a target that you have no reason to think is an outlier, and assuming you want an oral drug for it, then your chances for success do seem to be higher within the known property ranges.
So, overall, the GSK folks maintain that lead compounds for drug discovery are most desirable with logP values between -1 and 3, molecular weights from around 200 to 350, and no problematic functional groups (redox-active and so on). And I have to agree; given the choice, that's where I'd like to start, too. So why are they telling all this to the readers of Angewandte Chemie? Because these aren't the sorts of compounds that academic chemists are interested in making.
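Those cutoffs amount to a simple property filter, of the sort the authors then run over the published literature. Here's a toy sketch in Python of what such a screen looks like - the compound records and the flagged-group list are invented for illustration, and GSK's actual structural filters are far more elaborate than a three-item set:

```python
# A toy lead-likeness filter along the lines the paper describes:
# cLogP between -1 and 3, MW between 200 and 350, and no problematic
# functional groups. The flagged groups and compounds below are
# illustrative placeholders, not GSK's real filter set.

FLAGGED_GROUPS = {"nitro", "michael_acceptor", "quinone"}  # illustrative only

def is_lead_like(mw, clogp, groups):
    return (200 <= mw <= 350
            and -1 <= clogp <= 3
            and not (groups & FLAGGED_GROUPS))

compounds = [
    ("cmpd-1", 310.4, 2.1, set()),        # passes all three criteria
    ("cmpd-2", 480.6, 4.7, set()),        # fails both MW and cLogP
    ("cmpd-3", 250.3, 1.0, {"nitro"}),    # fails on a flagged group
]

survivors = [name for name, mw, clogp, groups in compounds
             if is_lead_like(mw, clogp, groups)]
print(survivors)  # only cmpd-1 makes the cut
```

In the real analysis the properties would come from computed descriptors (cheminformatics toolkits such as RDKit can supply MW and cLogP from structures), but the screening logic is no more complicated than this.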
For example, a survey of the 2009 issues of the Journal of Organic Chemistry found about 32,700 compounds indexed with the word "preparation" in Chemical Abstracts, after organometallics, isotopically labeled compounds, and commercially available ones were stripped out. 60% of those are outside the molecular weight criteria for lead-like compounds. Over half the remainder fail cLogP, and most of the remaining ones fail the internal GSK structural filters for problematic functional groups. Overall, only about 2% of the JOC compounds from that year would be called "lead-like". A similar analysis across seven other synthetic organic journals led to almost the same results.
Looking at array/library synthesis, as reported in the Journal of Combinatorial Chemistry and from inside GSK's own labs, the authors quantify something else that most chemists suspected: the more polar structures tend to drop out as the work goes on. This "cLogP drift" seems to be due to incompatible chemistries or difficulties in isolation and purification, and this could also illustrate why many new synthetic methods aren't applied in lead-like chemical space: they don't work as well there.
So that's what underlies the call for "lead-oriented synthesis". This paper is asking for the development of robust reactions which will work across a variety of structural types, will be tolerant of polar functionalities, and will generate compounds without such potentially problematic groups as Michael acceptors, nitros, and the like. That's not so easy, when you actually try to do it, and the hope is that it's enough of a challenge to attract people who are trying to develop new chemistry.
Just getting a high-profile paper of this sort out into the literature could help, because it's something to reference in (say) grant applications, to show that the proposed research is really filling a need. Academic chemists tend, broadly, to work on what will advance or maintain their positions and careers, and if coming up with new reactions of this kind can be seen as doing that, then people will step up and try it. And the converse applies, too, and how: if there's no perceived need for it, no one will bother. That's especially true when you're talking about making molecules that are smaller than the usual big-and-complex synthetic targets, and made via harder-than-it-looks chemistry.
Thoughts from the industrial end of things? I'd be happy to see more work like this being done, although I think it's going to take more than one paper like this to get it going. That said, the intersection with popular fragment-based drug design ideas, which are already having an effect in the purely academic world of diversity-oriented synthesis, might give an extra impetus to all this.
Most readers here will remember the fatal lab accident at UCLA in 2009 involving t-butyllithium, which took the life of graduate student Sheri Sangji. Well, there's a new sequel to that: the professor involved, Patrick Harran, has been charged along with UCLA with a felony: "willfully violating occupational health and safety standards". A warrant has been issued for his arrest; he plans to turn himself in when he returns from out of town. The University could face fines of up to $1.5 million per charge; Harran faces possible jail time.
This is the first time I've heard of such a case going to criminal prosecution, and I'm still not sure what I think about it. It's true that the lab was found to have several safety violations in an inspection before the accident - but, on the other hand, many working labs do, depending on what sort of standards are being applied. But it would also appear that Sangji herself was not properly prepared for handling t-butyllithium, which (as all organic chemists should know) bursts into flames spontaneously on exposure to air. She was wearing flammable clothing and no lab coat; no one should be allowed to start working with t-BuLi under those conditions. Being inexperienced, she should have been warned much more thoroughly than she appears to have been.
So something most definitely went wrong here, and the LA County DA's office has decided to treat it as a criminal matter. Well, negligence can rise to that level, under the law, so perhaps they have a point. Thoughts?
Update: here's a post that rounds up the responses to this across the blogging world.
For some comic relief, here's a list that was going around on Twitter: Chemistry, The Movie. What titles would you suggest? To give you the idea, some of the ones that have already come up include "Boron Free", "The Wizard of Osmium", and "The Bench Connection". More at the link (and on Twitter, #ChemistryTheMovie), if you can stand it. But if you can't take, for example, "Weekend at Swernie's", you'd be advised to click somewhere else (!)
In case people haven't seen it, this trifluoromethylation method from the MacMillan lab looks quite interesting. Now, not everyone loves the idea of sticking CF3 groups all over their molecules, and if you're a medicinal chemist you'll want to exercise restraint, but it's still an inarguably useful group. And the chemistry is interesting, too, using visible-light photoredox chemistry, an area that's been getting a lot of attention recently and seems pretty promising.
There's quite a list of reactions that have been done via this route, usually involving ruthenium or iridium catalysts and either fluorescent light or blue LEDs. (A trivia note: that ruthenium compound linked to looks more like good saffron powder, both in solid form and solution, than anything I've ever seen. It's all that Iranian food I get at home, I guess). Labs to watch include MacMillan's at Princeton, Corey Stephenson's at BU, and Tehshik Yoon's at Wisconsin, among others. Photochemistry has been a neglected field in many ways - perhaps taking it out of the ultraviolet and finding useful new reactions will slowly bring it back into the usual toolkit.