About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
November 26, 2008
The pace of research has noticeably slowed today here in the US. Most industrial labs will be empty tomorrow, Friday, and through the weekend, and even the academic labs will have fewer grad students and post-docs hanging out in them. I'll be cleaning up some previously run reactions, setting up anything that can comfortably go for a few days, and otherwise getting ready for Monday myself. This is not a day to try any tricky chemistry.
I also have a manuscript that I'm working on, and it would be a good use of my time to try to finish up its experimental section. The paper will likely be of interest to the readership here, so I'll be sure to note when it makes it into print. It'll be good to hit the scientific literature again; everything that's gone onto my list for the last year or two has been residual stuff from the Wonder Drug Factory, and there's not much of that left, naturally.
And I'll be observing a blog holiday until Monday as well, unless of course, something big happens. (I rather doubt that anything will, and considering what "something big" usually means, I rather hope nothing does). I'd like to wish all the US readers a happy Thanksgiving, and if anyone in the rest of the readership wants to try cooking a turkey, well, it's not as hard as it's cracked up to be. If you soak it in some salt water beforehand, it's quite tasty (my wife and I usually buy a kosher turkey, since they've already been salted). Allow me to finish up by furnishing the details of last night's synthetic work, at home in the kitchen with my two children:
Melt 3 tablespoons (43 grams) of butter and two squares of unsweetened baking chocolate (I used a coffee cup set in a pan of boiling water). Beat 3 eggs in a good-sized bowl. Then, in a small saucepan, combine 1 cup (240 mL) of corn syrup and 1/2 cup table sugar (100 g), and bring the mixture to a boil for about two minutes. (It doesn't look at first as if the sugar will go into solution, but it will - you naturally don't want this to cool down, though, once it has). Add the butter/chocolate mixture to the sugar syrup (they're not all that miscible, but do what you can), and add this gemisch slowly to the beaten eggs, stirring vigorously. (As I explained to my kids, if you were to dump these together with no stirring, you'd end up with chocolate-covered scrambled eggs; I try to teach them some technique along the way). Stir in a teaspoon (5 mL) of vanilla extract, 1 1/4 cups of pecan pieces (about 130 grams, I think), and pour the resulting slurry into a pie crust, your own or the store's. Bake about 45 minutes at 375F (190C, gas mark 5 for you subjects of the Queen). Yield: one chocolate pecan pie.
Category: Blog Housekeeping
November 25, 2008
Avandia (rosiglitazone) has been under suspicion for the last couple of years, after data appeared suggesting a higher rate of cardiovascular problems with its use. GlaxoSmithKline has been disputing this association all the way, as well they might, but today there’s yet more information to dispute.
A retrospective study in the Archives of Internal Medicine looked at about 14,000 patients on Medicare (older than 65) who were prescribed Avandia between 2000 and 2005. Now, looking backwards at the data is always a tricky business. For example, comparing these patients to another group that didn’t get the drug could be quite misleading – the obvious mistake there is that if someone has been prescribed Avandia, then they’re likely getting it because they’ve got Type II diabetes (or metabolic syndrome at least). Comparing that cohort to a group that isn’t showing such symptoms would be wildly misleading.
But this study compared the Avandia patients to 14,000 who were getting its direct competitor, Actos (pioglitazone). Now that’s more like it. The two drugs are indicated for the same patient population, for the same reasons. Their mechanism of action is supposed to be the same, too, as much as anyone can tell with the PPAR-gamma compounds. I wrote about that here – the problem with these drugs is that they affect the transcription of hundreds of genes, making their effects very hard to work out. Rosi and pio overlap quite a bit, but there are definitely (PDF) genes that each of them affect alone, and many others that they affect to different levels. Clinically, though, they are in theory doing the exact same thing.
But are they? This study found that the patients who started on Avandia had a 15 per cent higher deaths-from-all-causes rate than the Actos group. To me, that’s a startlingly high number, and it really calls for an explanation. The Avandia group had a 13 per cent higher rate of heart failure, but no difference in strokes and heart attacks, oddly. The authors believe that these latter two causes of death are likely to be undercounted in this population, though – there’s a significant no-cause-reported group in the data.
The authors also claim that the two populations were “surprisingly similar”, strengthening their conclusions. I think that that’s likely to be the case, given the similarities between the two drugs. GlaxoSmithKline, for their part, is saying that these numbers don’t match the safety data they’ve collected, and that a randomized clinical trial is the best way to settle such issues.
Well, yeah: a randomized clinical trial is the best way to settle a lot of medical questions. But neither GSK nor Takeda and Lilly (the makers of Actos) have seen fit to go head-to-head in one, have they? My guess is that both companies felt that the chances of showing a major clinical difference between the two were small, and that the size, length, and expense of such a trial would likely not justify its results. And if we’re talking about the beneficial mechanisms of action here, that’s probably true. You’d have quite a time showing daylight between the two drugs on things like insulin sensitivity, glycosylated hemoglobin, and other measures of diabetes. Individual patients may well show differences, and that's useful in practice - but that's a hard thing to show in a large averaged set of data. But how about nasty side effects? Maybe there's some room there - but in a murky field like PPAR-gamma, you'd have to have a lot of nerve to run a trial hoping to see something bad in your competitor's compound, while still being sure enough of your own. No, it's disingenuous to talk about how these questions need to be answered by a clinical trial, when you haven't done one, haven't planned one, and have (what seemed to be) good reasons not to.
This kind of study is the best rosi-to-pio comparison we're likely to get. And it does not look good for Avandia. GSK is going to have to live with that - and in fact, they already are.
Category: Clinical Trials | Diabetes and Obesity | Toxicology
November 24, 2008
Since I was talking about Nitromed on Friday, let me mention another attempt to combine two known drugs into a new therapy. Another Cambridge company whose front doors I walk by once in a while is CombinatoRx. If they'd had that name back in the early 1990s, you'd have assumed that they did combinatorial chemistry, but their plan is to take approved drugs and find greater-than-the-sum-of-their-parts combinations to approve as a single pill.
That's not easy. It's hard enough figuring out just how single drugs behave in the real world, and any physician will tell you all about what fun it is to deal with drug interactions. Finding beneficial drug interactions, especially unknown ones, is a real uphill climb. But CombinatoRx thought they had one in the mixture of low-dose prednisolone and dipyridamole.
Prednisolone is a well-known corticosteroid which is used to suppress inflammation and the immune response. Dipyridamole is a multi-mechanism drug that increases the free concentration of adenosine, and it's been used to inhibit clotting and lower pulmonary hypertension. Blood pressure problems are common with prednisolone, and the company believed that the prednisolone dose could be taken down to non-side-effect levels in the presence of the other drug. So they formulated a combination pill (Synavive, CRx-102) to test this out in osteoarthritis patients. The stakes were high - here's a writeup from before the results came out last month.
Things did not work out. The Phase IIb study definitively missed its endpoints. Not only did Synavive fail to outperform prednisolone alone, it didn't reach statistical significance versus the placebo group, either. The stock dropped 72% the next day, and the company has now announced layoffs that total 65% of its workforce.
What I have to wonder, though, is how things would have worked out in the long run even if the trial had succeeded. As Nitromed's experience shows, it's a hard business convincing insurers to pay a premium for two generic drugs just because they're now available in one pill. I know that CombinatoRx was making much out of their proprietary formulation, no doubt anticipating such objections. But I wonder if a company in this space would have to actually run a head-to-head against the two-generic-pill dosing regimen to really convince people that it had something to offer. And that would take nerves of steel, for sure. . .
Category: Business and Markets | Clinical Trials
November 21, 2008
Nitromed has been in trouble for several years now. They're a perfect example of a dog that caught a car: a company that was demolished by actually getting its drug on the market. No one wanted to pay for it, though, and after all that expense the company was left worse off than before.
Their remnants, though, are still listed on the NASDAQ, and that's more than a lot of more promising companies can say. One of those is Archemix, a company that I see every day on my way to work in Cambridge. They're working on aptamer-based therapies, and are at the stage in their life where (under normal conditions) they'd be thinking about an IPO. Well, they were thinking about that, but. . .these aren't exactly normal conditions, and the company recently announced that it was shelving that plan for now.
Enter Nitromed! The two companies have now announced a merger (a 70:30 ownership split) which will keep the Archemix name and be listed on the stock exchange like Nitromed. Just add water, and you have an IPO, albeit with some dilution compared to the traditional way. Interestingly, the CEO of Nitromed will be the CEO of the new company, which I gather is coming as a bit of a surprise to the folks in the trenches at Archemix, as you might imagine.
But good luck to them all - in this environment, we all need it. Aptamers are an interesting and risky business, but not as risky as developing cardiovascular drugs that no one wants. . .
Category: Business and Markets
November 20, 2008
A colleague e-mailed me last night with an observation that he’d heard recently: “Have you noticed,” he said, “that the numbers we use get less and less precise, the farther away they get from the chemists?”
Thinking about it, I’d have to say that’s right, although I don’t think that we can claim any particular credit. After all, we have our feet planted in physics. Our molecular weights are based on the weights of the elementary particles, which are known. . .pretty exactly. And we’ve got a pretty good handle on molecular formulae, too, so we can go around getting mass spectra out to four decimal places and learning all kinds of things from them.
But then when these compounds get run through the primary assays, purified enzymes or the like, the numbers start getting fuzzier. Protein preps are all subtly different – ideally, they should be different in ways that make no difference, but then there’s the actual running of the assay to consider. Reproducibility varies, but no one gets worked up about a compound that shows, say, a three nanomolar inhibition in one assay and a six nanomolar in another. “Single digit nanomolar” is all we need to know, and it’s good odds that the next one will split the difference and come in at four or five, anyway.
But then you go to cellular assays, and things get more complicated. Cells are ridiculously more complex than enzymes, and there are so many more things that can kick around your data. Where did this batch of cells come from? How many times have they divided? What stage of their life cycles are they in, on average? What are they growing on, and in? Are they clean (no nasty mycoplasma)? Even if you’ve got all those things under control, your compounds are going to be exposed to untold numbers of other proteins now, all with potential binding sites and activities of their own. And that’s if they can even get past the cell membrane at all – many don’t, for reasons that are not always clear. No, your cellular numbers are always going to have a pretty good spread in them.
But then you go to whole animals, which have all those problems and more. Absorption from the gut and later metabolism are tricky and poorly understood processes, and they’re affected by a bewildering number of variables. Is your compound crystalline? Same way each time? What’s the particle size? How much water does that powder have in it? What are you taking the compound up in to dose it? Have the rats eaten recently? What time of day are they getting the compound? Male rats, or female? Nothing bothering them, no loud noises or change in lighting? Every single one of these things can throw your data around all over the place.
But now you’re up to clinical trials, and animal data is as orderly as a brick wall compared to human data. All those variables listed above still obtain, although you've presumably controlled for several of them by the time you're in the clinic. But that's more than made up for by the heterogeneity of your human volunteers and that of your all-too-human clinical staff. (Ask anyone who's worked up close with clinical data, and you'll hear all about it).
So we start from chemistry, where if we make a compound once we assume that we can always make it again - not always a warranted assumption, mind you, but mostly true. Then we move to in vitro assays, where you really need to have n-of-3, at least, so you can get error bars on your numbers. And we end up in human trials with hundreds (or thousands) of people taking the resulting drug, desperately hoping all the while that we'll be able to pick out an interpretable signal in all the noise. That's the business, all right.
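The n-of-3 arithmetic mentioned above is simple enough to sketch out. Here's a minimal Python example with invented triplicate potency readings (the numbers are hypothetical, chosen only to show how the mean and error bars fall out of a small replicate set):

```python
# Hypothetical triplicate IC50 readings (nM) for one compound.
# These values are made up for illustration, not real assay data.
readings_nM = [3.1, 4.8, 6.2]

n = len(readings_nM)
mean = sum(readings_nM) / n
# Sample standard deviation (n - 1 in the denominator).
variance = sum((x - mean) ** 2 for x in readings_nM) / (n - 1)
std_dev = variance ** 0.5
std_err = std_dev / n ** 0.5

print(f"mean = {mean:.1f} nM, s.d. = {std_dev:.1f}, s.e.m. = {std_err:.1f}")
```

With only three replicates, the error bars stay wide, which is exactly why "single digit nanomolar" is often about as much precision as the data can honestly support.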
Category: Drug Development
November 19, 2008
I know that it’s not necessarily fair to drag out old press releases, but let’s do it anyway. Many readers will remember a few years back when Novartis was making its big research move into Cambridge, renovating the old Necco candy building and hiring like mad. (We’ll pause for a bit of somber nostalgia at the memory of a large drug company actually hiring hordes of scientists).
While that was going on, there was a lot of talk about the way their research site was going to be run. Under its new research head, Mark Fishman, Novartis would "reinvent the way drugs are discovered" (I quote from an August 2003 article from the Boston Globe, behind their subscriber wall now, which irritated me quite a bit at the time). There was a lot of talk about Gleevec, and how this was going to be some sort of model for the future of drug discovery in the organization. (I could never quite follow that one, but I was willing to give them the benefit of the doubt). The whole thing would be a "research operation vastly different from traditional pharmaceutical research", to quote another old Globe article (May 2002).
Well, some years on now, the obvious question is: did any of this happen? Novartis as a company is doing fairly well, particularly in comparison to some of its peers. And they haven’t had any massive layoffs, to my knowledge, which puts them ahead of the game these days. So overall, the company has been successful: but is the Cambridge site the sort of place it was supposed to be, according to the original PR?
My impression is that it isn’t, at least not to the extent that we were all hearing about back then. I know a number of people who work there, and from the outside, at least, it seems to be pretty much like any other large drug research operation, albeit with less elbow room than usual in some of the labs and offices (a deliberate decision, apparently). I hear the usual talk and the usual complaints. Nothing that goes on over there strikes me as very different from other outfits of that size.
And there’s nothing wrong with that. This isn't a slap at Novartis, at Mark Fishman, or at anyone over there - it's a very good research organization. But I do wonder where all that transformational talk went. Is it still a work in progress (which seems to be the official viewpoint)? Did the organization try to change things, and fail? Was there even a clear idea of what this change was to consist of? Was there a decision made at some point that since things seemed to be going reasonably well, that the company should just leave the site to develop as it was? Or was all that talk at the beginning nothing more than, well, talk? I wondered about this at the time, and I suppose I'm still wondering now. . .
Category: Drug Industry History
November 18, 2008
One of the more wide-ranging entries on my “Lowe’s Laws of the Lab” list is this: The secret of success in synthetic chemistry is knowing what you can afford not to worry about.
That’s because you have to have a cutoff somewhere. There are so many potential things that can affect an experiment, and if you have to sweat every one of them out every time, you’re never going to get anything done. So you need to understand enough to know which parts are crucial and which parts aren’t. I think the beginnings of this law came from my days as a teaching assistant, watching undergraduates carefully weigh out a fivefold excess of reagent. Hmm. Did it matter if they were throwing in 4.75 equivalents or 5.25? Well, no, probably not. So why measure it out drop by drop?
Tom Goodwin, the professor responsible for teaching me in my first organic chemistry course, once advanced his own solution to this problem. Growing weary of the seemingly endless stream of lab students asking him “Dr. Goodwin, I added X by mistake instead of Y. . .will that make a difference?”, he proposed creating “Goodwin’s Book of Tolerances.” I think he envisioned this as a thick volume like one of those old unabridged dictionaries, something that would live on its own special stand down the hall. “That way,” he told me, “when some student comes up and says ‘Dr. Goodwin, I added cheese dip instead of HCl – will that make a difference?’, I can walk over, flip to page thousand-and-whatever, and say ‘No. Cheese dip is fine.’”
According to him, a solid majority of these questions ended with the ritual phrase “Will that make a difference?” And that’s just what a working chemist needs to know: what will, and what won’t. The challenge comes when you’re not sure what the key features of your system are, which is the case in a lot of medicinal chemistry. Then you have to feel your way along, and be prepared to do some things (and make some compounds) that in retrospect will look ridiculous. (As I’ve said before, though, if you’re not willing to look like a fool, you’re probably never going to discover anything interesting at all).
Another challenge is when the parts of the system you thought were secure start to turn on you. We see that all the time in drug discovery projects – that methyl group is just what you need, until you make some change at the other end of the molecule. Suddenly it’s suboptimal – and you really should run some checks on these things as you go, rather than assuming that all your structure-activity relationships make sense. Most of them don’t, at some point. An extreme example of having a feature that should have been solid turn into a variable would be that business I wrote about the other week, where active substances turned out to be leaching out of plastic labware.
But if you spend all your time wondering if your vials are messing up your reactions, you'll freeze up completely. Everything could cause your reaction to go wrong, and your idea to keel over. Realize it, be ready for it - but find a way not to worry about it until you have to.
Category: Lowe's Laws of the Lab | Who Discovers and Why
November 17, 2008
There was a legal ruling last week in California that we’re going to hear a lot more of in this business. Conte v. Wyeth. This case involved metoclopramide, which was sold by Wyeth as Reglan before going off-patent in 1982. The plaintiff had been prescribed the generic version of the drug, was affected by a rare and serious neurological side effect (tardive dyskinesia, familiar to people who’ve worked with CNS drugs) and sued.
But as you can see from the name of the case, this wasn’t a suit against her physician, or against the generic manufacturer. It was a suit against Wyeth, the original producer of the drug, and that’s where things have gotten innovative. As Beck and Herrmann put it at the Drug and Device Law Blog:
The prescribing doctor denied reading any of the generic manufacturer's warnings but was wishy-washy about whether he might have read the pioneer manufacturer's labeling at some point in the more distant past.
Well, since the dawn of product liability, we thought we knew the answer to that question. You can only sue the manufacturer of the product that injured you. Only the manufacturer made a profit from selling the product, and only the manufacturer controls the safety of the product it makes, so only the manufacturer can be liable.
Not any more, it seems. The First District Court of Appeals in San Francisco ruled that Wyeth (and other drug companies) are also liable for harm caused by the generic versions of their drugs. At first glance, you might think “Well, sure – it’s the same drug, and if it causes harm, it causes harm, and the people who put it on the market should bear responsibility”. But these are generic drugs we’re talking about here – they’ve already been on the market for years. Their behavior, their benefits, and their risks are pretty well worked out by the time the patents expire, so we’re not talking about something new or unexpected popping up. (And in this case, we're talking about a drug that has been generic for twenty-six years).
The prescribing information and labeling has been settled for a long time, too, you’d think. At any rate, that’s worked out between the generic manufacturers and the FDA. How Wyeth can be held liable for the use of a product that it did not manufacture, did not label, and did not sell is a mystery to me.
Over at Law and More, a parallel is drawn between this ruling and the history of public nuisance law during the controversy over lead paint; the implication is that this ruling will stand up and be with us for a good long while. But at Cal Biz Lit, the betting is that “this all goes away at the California Supreme Court”. We’ll see, because that’s exactly where it’s headed and maybe beyond that, eventually.
And if this holds up? Well, Beck and Herrmann lay it out in their extensive follow-up post on the issue, which I recommend to those with a legal interest:
Conte-style liability can only drive up the cost of new drugs – all of them. Generic drugs are cheaper precisely because their manufacturers did not incur the cost of drug development – costs which run into the hundreds of millions of dollars for each successful FDA approval. Because they are cheap, generics typically drive the pioneer manufacturer’s drug off the market (or into a very small market share) within a few years, if not sooner. Generic drugs will stay cheap under Conte. But imposing liability in perpetuity upon pioneer manufacturers for products they no longer sell or get any profit from means that the pioneer manufacturers (being for-profit entities) have to recoup that liability expense somewhere. There’s only one place it can come from. That’s as an add-on to the costs of new drugs that still enjoy patent protection.
Exactly right. This decision establishes a fishing license for people to go after the deepest-pocketed defendants. Let’s hope it’s reversed.
Category: Regulatory Affairs | The Central Nervous System | Toxicology
November 14, 2008
So, you’re making an enzyme inhibitor drug, some compound that’s going to go into the protein’s active site and gum up the works. You usually want these things to be potent, so you can be sure that you’ve knocked down the enzyme, so you can give people a tiny, convenient pill, and so you don’t have to make heaps of the compound to sell. How potent is potent? And how potent can you get?
Well, we’d like nanomolar. For the non-chemists in the crowd, that’s a concentration measure based on the molecular weight of the compound. If the molecular weight of the drug is 400, which is more typical than perhaps it should be, then 400 grams of the stuff is one mole. And 400 grams dissolved in enough solvent to make a liter of solution would then give you a one molar (1 M) solution. (The original version of this post didn't make that important distinction, which I'll chalk up to my not being completely awake on the train ride first thing in the morning. The final volume you get on taking large amounts of things up in a given amount of solvent can vary quite a bit, but concentration is based, naturally, on what you end up with. And it’s a pretty flippin’ unusual drug substance that can be dissolved in water to that concentration, let me tell you right up front). So, four grams in a liter would be 0.01 M, or 10 millimolar, and four hundred milligrams per liter would be a 1 millimolar solution. A one micromolar solution would be 400 micrograms (0.0004 grams) per liter, and a one nanomolar solution would be 400 nanograms (400 billionths of a gram) per liter. And that’s the concentration that we’d like to get to show good enzyme inhibition. Pretty potent, eh?
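If you'd rather let the computer keep track of the zeroes, that arithmetic is a one-liner. A quick Python sketch, using the same hypothetical molecular weight of 400:

```python
# Molarity arithmetic for a hypothetical drug of molecular weight 400 g/mol,
# reproducing the numbers worked out in the text above.
mw = 400.0  # g/mol (an illustrative value, not any particular drug)

def grams_per_liter(conc_molar):
    """Mass in grams needed per liter of solution for a given molar concentration."""
    return conc_molar * mw

for name, conc in [("1 M", 1.0), ("10 mM", 1e-2), ("1 mM", 1e-3),
                   ("1 uM", 1e-6), ("1 nM", 1e-9)]:
    print(f"{name:>5}: {grams_per_liter(conc):g} g per liter")
```

The last line of that loop is the punch line: a nanomolar solution of a 400-molecular-weight compound is 400 billionths of a gram per liter.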
But you can do better – if you want to, which is a real question. Taking it all the way, your drug can go in and attach itself to the active site of its target by a real chemical bond. Some of those bond-forming reactions are reversible, and some of them aren’t. Even the reversible ones are a lot tighter than your usual run of inhibitor.
You can often recognize them by their time-dependent inhibition. With a normal drug, it doesn’t take all that long for things to equilibrate. If you leave the compound on for ten, twenty, thirty minutes, it usually doesn’t make a huge difference in the binding constant, because it’s already done what it can do and reached the balance it’s going to reach. But a covalent inhibitor, that’ll appear to get more and more potent the longer it stays in there, since more and more of the binding sites are being wiped out. (One test for reversibility after seeing that behavior is to let the protein equilibrate with fresh blank buffer solution for a while, to see if its activity ever comes back). You can get into hair-splitting arguments if your compound binds so tightly that it might as well be covalent; at some point they're functionally equivalent.
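That time-dependence can be sketched with a toy model. The Python snippet below assumes simple pseudo-first-order inactivation, a standard way of describing covalent inhibition kinetics; the k_inact and K_I numbers are invented for illustration, not taken from any real enzyme:

```python
import math

# Assumed toy parameters, for illustration only:
k_inact = 0.1   # per minute: maximal inactivation rate
K_I = 100.0     # nM: inhibitor concentration giving half-maximal k_obs

def active_fraction(inhibitor_nM, minutes):
    """Fraction of enzyme still active after preincubating with a covalent inhibitor.
    Pseudo-first-order inactivation: k_obs = k_inact * [I] / (K_I + [I])."""
    k_obs = k_inact * inhibitor_nM / (K_I + inhibitor_nM)
    return math.exp(-k_obs * minutes)

for t in (10, 20, 30):
    print(f"after {t} min at 100 nM inhibitor: {active_fraction(100.0, t):.2f} active")
```

The active fraction keeps falling the longer the preincubation runs, so the compound looks more and more potent with time - exactly the behavior that flags a covalent (or covalent-acting) inhibitor in an assay.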
There are several drugs that do this kind of thing, but they’re an interesting lot. You have the penicillins and their kin – that’s what that weirdo four-membered lactam ring is doing, spring-loaded for trouble once it gets into the enzyme. The exact same trick is used in Alli (orlistat), the pancreatic lipase inhibitor. And there are some oncology drugs that covalently attach to their targets (and, in some cases, to everything else they hit, too). But you’ll notice that there’s a bias toward compounds that hit bacterial enzymes (instead of circulating human ones), don’t get out of the gut, or are toxic and used as a last resort.
Those classes don’t cover all the covalent drugs, but there’s enough of that sort of thing to make people nervous. If your compound has some sort of red-hot functional group on it, like some of those nasty older cancer compounds, you’re surely going to mess up a lot of other proteins that you would rather have left alone. And what happens to the target protein after you’ve stapled your drug to it, anyway? One fear has been that it might present enough of a different appearance to set off an immune response, and you don’t want that, either.
But covalent inhibition is actually a part of normal biochemistry. If you had a compound with a not-so-lively group, one that only reacted with the protein when it got right into the right spot – well, that might be selective, and worth a look. The Cravatt lab at Scripps has been looking into what kinds of functional groups react with various proteins, and as we get a better handle on this sort of thing, covalency could make a comeback. Some people maintain that it never left!
Category: Drug Assays | Toxicology
November 13, 2008
Organic chemistry can be a real high-wire act. If you’re taking a compound along over a multistep sequence, everything has to work, at least to some extent: a twelve-step route to a compound whose last step can’t be made to work isn’t a route to the compound at all. To get the overall yield you multiply all the individual ones, and a zero will naturally take care of everything that came before it.
Even very respectable yields will creep up on you if you have the misfortune to be doing a long enough synthesis. It’s just math – if you have an average 90% yield, which shouldn’t usually be cause for distress, that means that you’re only going to get about 35% of what you theoretically could have after ten steps (0.9 to the tenth). An average 95% yield will run that up to 60% over the same sequence, and there you have one of the biggest reasons for the importance of process chemistry groups. Their whole reason to live is to change those numbers, to make sure that they stay that way every time, and without having to do anything crazier than necessary along the way.
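That multiplication is worth seeing laid out. A two-line Python check of the numbers above:

```python
# Overall yield of a linear synthetic sequence: the product of the step yields.
def overall_yield(step_yield, n_steps):
    return step_yield ** n_steps

print(f"90% per step, 10 steps: {overall_yield(0.90, 10):.0%}")  # about 35%
print(f"95% per step, 10 steps: {overall_yield(0.95, 10):.0%}")  # about 60%
```

Push the sequence out to twenty or thirty steps (academic total-synthesis territory) and even the 95%-per-step case erodes quickly, which is the whole argument of the next paragraph.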
When you’re involved in something like this and you know you’re going to be approaching a tricky step, the natural temptation is to try it out on something else first. Model systems, though, can be the road to heartbreak. In the end, there are no perfect models, of anything. If you’re lucky, the conditions you’ve worked out by using your more-easily-available model compound will translate to your precious one. But as was explained to me years ago in grad school, the problem is that if you run your model and it works, you go on to the real system. And if you run your model and it doesn’t work, well. . .you might just go on to the real system anyway, because you’re not sure if your model is a fair one or not. So what’s the point?
This gets to be a real problem in some labs. While ten steps is medium to long for a commercial drug synthesis, it’s just the warmup for a lot of academic ones. Making natural products by total synthesis can take you on up into the twenty- and thirty-step levels, and some go beyond that, most horribly for everyone concerned. In such cases, you’d much rather have several segments of the big honking molecule built separately and then hooked together, rather than run everything in a row.
But what if you spend all that time on the segments, but you can’t put the things together? The most famous example of that I know happened in Nicolaou’s synthesis of Brevetoxin B. The initial disconnection of this terrible molecule into two nearly-as-awful pieces turned out to have been a mistake. Despite repeated attempts, no way could be found to couple the two laboriously prepared pieces to make the whole molecule, and untold man-hours of grad-student and post-doc slave labor had to be ditched for a new approach. If you w