About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
November 30, 2012
A few years ago, I asked the readership for the best books on the practice of medicinal chemistry and drug discovery itself. These may not be exactly stocking stuffers, at least not for most people, but I wanted to mention these again, and to solicit nominations for more recent titles to add to the list. So, here's what I have at the moment:
For general medicinal chemistry, you have Bob Rydzewski's Real World Drug Discovery: A Chemist's Guide to Biotech and Pharmaceutical Research. Many votes also were cast for Camille Wermuth's The Practice of Medicinal Chemistry. For getting up to speed, several readers recommend Graham Patrick's An Introduction to Medicinal Chemistry. And an older text that has some fans is Richard Silverman's The Organic Chemistry of Drug Design and Drug Action.
Process chemistry is its own world with its own issues. Recommended texts here are Practical Process Research & Development by Neal Anderson and Process Development: Fine Chemicals from Grams to Kilograms by Stan Lee (no, not that Stan Lee) and Graham Robinson.
Case histories of successful past projects are found in Drugs: From Discovery to Approval by Rick Ng and also in Walter Sneader's Drug Discovery: A History.
Another book that focuses on a particular (important) area of drug discovery is Robert Copeland's Evaluation of Enzyme Inhibitors in Drug Discovery.
For chemists who want to brush up on their biology, readers recommend Terrence Kenakin's A Pharmacology Primer, Third Edition: Theory, Application and Methods and Molecular Biology in Medicinal Chemistry by Nogrady and Weaver.
Overall, one of the most highly recommended books across the board comes from the PK end of things: Drug-like Properties: Concepts, Structure Design and Methods: from ADME to Toxicity Optimization by Kerns and Di. For getting up to speed in this area, there's Pharmacokinetics Made Easy by Donald Birkett.
In a related field, the standard desk reference for toxicology seems to be Casarett & Doull's Toxicology: The Basic Science of Poisons. Since all of us make a fair number of poisons (as we eventually discover), it's worth a look.
As mentioned, titles to add to the list are welcome - I'll watch the comments for ideas!
Category: Book Recommendations | Science Gifts
There's a paper out in Drug Discovery Today with the title "Is Poor Research the Cause of Declining Productivity in the Drug Industry?" After reviewing the literature on phenotypic versus target-based drug discovery, the author (Frank Sams-Dodd) asks (and has asked before):
The consensus of these studies is that drug discovery based on the target-based approach is less likely to result in an approved drug compared to projects based on the physiology-based approach. However, from a theoretical and scientific perspective, the target-based approach appears sound, so why is it not more successful?
He makes the points that the target-based approach has the advantages of (1) seeming more rational and scientific to its practitioners, especially in light of the advances in molecular biology over the last 25 years, and (2) seeming more rational and scientific to the investors:
". . .it presents drug discovery as a rational, systematic process, where the researcher is in charge and where it is possible to screen thousands of compounds every week. It gives the image of industrialisation of applied medical research. By contrast, the physiology-based approach is based on the screening of compounds in often rather complex systems with a low throughput and without a specific theory on how the drugs should act. In a commercial enterprise with investors and share-holders demanding a fast return on investment it is natural that the drug discovery efforts will drift towards the target-based approach, because it is so much easier to explain the process to others and because it is possible to make nice diagrams of the large numbers of compounds being screened.
This is the "Brute Force bias". And he goes on to another key observation: that this industrialization (or apparent industrialization) meant that there were a number of processes that could be (in theory) optimized. Anyone who's been close to a business degree knows how dear process optimization is to the heart of many management theorists, consultants, and so on. And there's something to that, if you're talking about a defined process like, say, assembling pickup trucks or packaging cat litter. This is where your six-sigma folks come in, your Pareto analysis, your Continuous Improvement people, and all the others. All these things are predicated on the idea that there is a Process out there.
See if this might sound familiar to anyone:
". . .the drug dis- covery paradigm used by the pharmaceutical industry changed from a disease-focus to a process-focus, that is, the implementation and organisation of the drug discovery process. This meant that process-arguments became very important, often to the point where they had priority over scientific considerations, and in many companies it became a requirement that projects could conform to this process to be accepted. Therefore, what started as a very sensible approach to drug discovery ended up becoming the requirement that all drug dis- covery programmes had to conform to this approach – independently of whether or not sufficient information was available to select a good target. This led to dogmatic approaches to drug discovery and a culture developed, where new projects must be presented in a certain manner, that is, the target, mode-of-action, tar- get-validation and screening cascade, and where the clinical manifestation of the disease and the biological basis of the disease at systems-level, that is, the entire organism, were deliberately left out of the process, because of its complexity and variability.
But are we asking too much when we declare that our drugs need to work through single defined targets? Beyond that, are we even asking too much when we declare that we need to understand the details of how they work at all? Many of you will have had such thoughts (and they've been expressed around here as well), but they can tend to sound heretical, especially that second one. But that gets to the real issue, the uncomfortable, foot-shuffling, rather-think-about-something-else question: are we trying to understand things, or are we trying to find drugs?
"False dichotomy!", I can hear people shouting. "We're trying to do both! Understanding how things work is the best way to find drugs!" In the abstract, I agree. But given the amount there is to understand, I think we need to be open to pushing ahead with things that look valuable, even if we're not sure why they do what they do. There were, after all, plenty of drugs discovered in just that fashion. A relentless target-based environment, though, keeps you from finding these things at all.
What it does do, though, is provide vast opportunities for keeping everyone busy. And not just "busy" in the sense of working on trivia, either: working out biological mechanisms is very, very hard, and in no area (despite decades of beavering away) can we say we've reached the end and achieved anything like a complete picture. There are plenty of areas that can and will soak up all the time and effort you can throw at them, and yield precious little in the way of drugs at the end of it. But everyone was working hard, doing good science, and doing what looked like the right thing.
This new paper spends quite a bit of time on the mode-of-action question. It makes the point that understanding the MoA is something that we've imposed on drug discovery, not an intrinsic part of it. I've gotten some funny looks over the years when I've told people that there is no FDA requirement for details of a drug's mechanism. I'm sure it helps, but in the end, it's efficacy and safety that carry the day, and both of those are determined empirically: did the people in the clinical trials get better, or worse?
And as for those times when we do have mode-of-action information, well, here are some fighting words for you:
". . .the ‘evidence’ usually involves schematic drawings and flow-diagrams of receptor complexes involving the target. How- ever, it is almost never understood how changes at the receptor or cellular level affect the phy- siology of the organism or interfere with the actual disease process. Also, interactions between components at the receptor level are known to be exceedingly complex, but a simple set of diagrams and arrows are often accepted as validation for the target and its role in disease treatment even though the true interactions are never understood. What this in real life boils down to is that we for almost all drug discovery programmes only have minimal insight into the mode-of-action of a drug and the biological basis of a disease, meaning that our choices are essentially pure guess-work.
I might add at this point that the emphasis on defined targets and mode of action has been so much a part of drug discovery in recent times that it's convinced many outside observers that target ID is really all there is to it. Finding and defining the molecular target is seen as the key step in the whole process; everything past that is just some minor engineering (and marketing, naturally). The fact that this point of view is a load of fertilizer has not slowed it down much.
I think that if one were to extract a key section from this whole paper, though, this one would be a good candidate:
". . .it is not the target-based approach itself that is flawed, but that the focus has shifted from disease to process. This has given the target-based approach a dogmatic status such that the steps of the validation process are often conducted in a highly ritualised manner without proper scientific analysis and questioning whether the target-based approach is optimal for the project in question.
That's one of those "Don't take this in the wrong way, but. . ." statements, which are, naturally, always going to be taken in just that wrong way. But how many people can deny that there's something to it? Almost no one denies that there's something not quite right, with plenty of room for improvement.
What Sams-Dodd has in mind for improvement is a shift towards looking at diseases, rather than targets or mechanisms. For many people, that's going to be one of those "Speak English, man!" moments, because for them, finding targets is looking at diseases. But that's not necessarily so. We would have to turn some things on their heads a bit, though:
In recent years there have been considerable advances in the use of automated processes for cell-culture work, automated imaging systems for in vivo models and complex cellular systems, among others, and these developments are making it increasingly possible to combine the process-strengths of the target-based approach with the disease-focus of the physiology-based approach, but again these technologies must be adapted to the research question, not the other way around.
One big question is whether the investors funding our work will put up with such a change, or with such an environment even if we did establish it. And that gets back to the discussion of Andrew Lo's securitization idea, the talk around here about private versus public financing, and many other topics. Those I'll reserve for another post. . .
Category: Drug Assays | Drug Development | Drug Industry History | Who Discovers and Why
November 29, 2012
An awful lot of people are using an awful lot of bad language in Cambridge, MA right now. At about 4:25 PM (EST), the power flickered and went out in a large swath of East Cambridge, out to somewhere near Harvard Square. That takes out MIT and more technology-based companies than you'd care to count, so everyone is getting the chance to find out how their backup power supplies work (or don't), and how their expensive, finicky equipment takes to having the current lurch around.
I was in my office when things browned down and went out, and it soon became clear that the whole area had gone dark. Public transit was working (when I got on it, anyway), and my commute home is the same as always (for better or worse!), but that won't be the case for people depending on spotty streetlights and the like. Not to mention the various homeward-bound folks who are presumably sitting, none too happily, in elevators right now.
Servers, NMR machines, LC/MS units, -80 degree freezers, lab fridges, automation of all sorts are to be found in heaps in that part of town; it's probably got one of the densest concentrations of such equipment anywhere. Getting it all running again will not be enjoyable.
Category: Current Events
For those connoisseurs of things that have gone wrong, here's a list of the worst drug launches of recent years. And there are some rough ones in there, such as Benlysta, Provenge, and (of course) Makena. And from an aesthetic standpoint, it's hard not to think that if you name your drug Krystexxa, you deserve what you get. Read up and try to avoid being part of such a list yourself. . .
Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices
In my post the other day on do-it-at-home science experiments and demonstrations, I left out Theo Gray's Mad Science. That's because, although it looks like a very fun book, it seems to require a number of things that most people don't have lying around the house, like a Van de Graaff generator. (If you're in the market, though, you can get one here - I'm starting to wonder what it is that Amazon doesn't sell).
But Gray's The Elements, which I've recommended before, is an excellent thing to have for anyone who's curious about the periodic table or chemistry in general. I remember as a child browsing through the old Time-Life book on the elements (my grandparents had a copy; I'd read it every time we visited them). This is the 21st century version. He's done a follow-up, the Elements Vault, which is more of a tour of the Periodic Table by columns, rather than by rows.
And I'm ordering The Elements Puzzle for the rest of the family for Christmas. (My kids don't read my site, or at least not yet). It's a 1000-piece jigsaw puzzle that produces a three-foot-wide periodic table, with information and photographs of each element. They're bound to learn something by putting it together!
This is a good time to note that this blog is an Amazon affiliate. I get a small cut of whatever's ordered through these links (at no charge to the buyer). And yes, Amazon sends me a W-2 on the yearly total, so I do pay taxes on it!
Category: Science Gifts
Another drug repurposing initiative is underway, this one between Roche and the Broad Institute. The company is providing 300 failed clinical candidates to be run through new assays, in the hopes of finding a use for them.
I hope something falls out of this, because any such compounds will naturally have a substantial edge in further development. They should all have been through toxicity testing, they've had some formulations work done on them, a decent scale-up route has been identified, and so on. And many of these candidates fell out in Phase II, so they've even been through human pharmacokinetic studies.
On the other hand (there's always another hand), you could also say that this is just another set of 300 plausible-looking compounds, and what does a 300-compound screening set get you? The counterargument to this is that these structures have not only been shown to have good absorption and distribution properties (no small thing!), they've also been shown to bind well to at least one target, which means that they may well be capable of binding well to other similar motifs in other active sites. But the counterargument to that is that now you've removed some of those advantages in the paragraph above, because any hits will now come with selectivity worries, since they come with guaranteed activity against something else.
This means that the best case for any repurposed compound is for its original target to be good for something unanticipated. So that Roche collection of compounds might also be thought of as a collection of failed targets, although I doubt if there are a full 300 of those in there. Short of that, every repurposing attempt is going to come with its own issues. It's not that I think these shouldn't be tried - why not, as long as it doesn't cost too much - but things could quickly get more complicated than they might have seemed. And that's a feeling that any drug discovery researcher will recognize like an old, er, friend.
For more on the trickiness of drug repurposing, see John LaMattina here and here. And the points he raises get to the "as long as it doesn't cost too much" line in the last paragraph. There's opportunity cost involved here, too, of course. When the Broad Institute (or Stanford, or the NIH) screens old pharma candidates for new uses, they're doing what a drug company might do itself, and therefore possibly taking away from work that only they could be doing instead. Now, I think that the Broad (for example) already has a large panel of interesting screens set up, so running the Roche compounds through them couldn't hurt, and might not take that much more time or effort. So why not? But trying to push repurposing too far could end up giving us the worst of both worlds. . .
Category: Drug Assays | Drug Development | Drug Industry History
November 28, 2012
Via Chemjobber, we have here an excellent example of how much detail you have to get into if you're seriously making a drug for the market. When you have to account for every impurity, and come up with procedures that generate the same ones within the same tight limits every time, this is the sort of thing you have to pay attention to: how you dry your compound. And how long. And why. Because if you don't, huge amounts of money (time, lost revenue, regulatory trouble, lawsuits) are waiting. . .
Category: Analytical Chemistry | Chemical News | Drug Development
So here's a question that a lot of people around here will have strong opinions on. I've heard from someone in an academic group that's looking into doing some high-throughput screening. As they put it, they don't want to end up as "one of those groups", so they're looking for advice on how to get into this sensibly.
I applaud that; I think it's an excellent idea to look over the potential pitfalls before you hop into an area like this. My first advice would be to think carefully about why you're doing the screening. Are you looking for tool compounds? Do they need to get into cells? Are you thinking of following up with in vivo experiments? Are you (God help you) looking for potential drug candidates? Each of these requires a somewhat different view of the world.
No matter what, I'd say that you should curate the sorts of structures that you're letting in. Consider the literature on frequent-hitter structures (here's a good starting point, blogged here), and decide how much you want to get hits versus being able to follow up on them. I'd also say to keep in mind the Shoichet work on aggregators (most recently blogged here), especially the lesson that these have to be dealt with assay-by-assay. Compounds that behave normally in one system can be trouble in others - make no assumptions.
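To make the "curate what you let in" advice concrete, here's a minimal sketch of what a property-based pre-filter over a screening list might look like. The descriptor values, the `pains_flag` field, and the cutoffs are all invented for illustration - real curation would lean on the frequent-hitter and aggregator literature linked above, not on these particular numbers.

```python
# Illustrative pre-filter for a screening list. Each compound is a dict of
# precomputed descriptors; values and thresholds here are invented for the
# example, not recommendations.

def passes_prefilter(cpd, max_mw=500, max_logp=5.0, max_tpsa=140):
    """Return True if a compound clears some simple property cutoffs."""
    return (cpd["mw"] <= max_mw
            and cpd["clogp"] <= max_logp
            and cpd["tpsa"] <= max_tpsa
            and not cpd.get("pains_flag", False))  # known frequent-hitter substructure

library = [
    {"id": "cpd-1", "mw": 342.4, "clogp": 2.1, "tpsa": 78.0},
    {"id": "cpd-2", "mw": 612.7, "clogp": 6.3, "tpsa": 95.0},  # too big, too greasy
    {"id": "cpd-3", "mw": 288.3, "clogp": 1.4, "tpsa": 64.0,
     "pains_flag": True},                                      # frequent hitter
]

kept = [c["id"] for c in library if passes_prefilter(c)]
print(kept)  # only cpd-1 survives
```

The point of writing it this way is that the thresholds are arguments, not constants: the right cutoffs depend on whether you're after tool compounds, cell-active probes, or in vivo leads, per the questions above.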
But there's a lot more to say about this. What would all of you recommend?
Category: Academia (vs. Industry) | Drug Assays
We have a late entry in this year's "Least Soluble Molecule - Dosed In Vivo Division" award. Try feeding that into your cLogP program and see what it tells you about its polarity. (This would be a good ChemDraw challenge, too). What we're looking at, I'd say, is a sort of three-dimensional asphalt, decorated around its edges with festive scoops of lard.
The thing is, such structures are perfectly plausible building blocks for various sorts of nanotechnology. It would not, though, have occurred to me to feed any to a rodent. But that's what the authors of this new paper managed to do. The compound shown is wildly fluorescent (as well you might think), and the paper explores its possibilities as an imaging agent. The problem with many - well, most - fluorescent species is photobleaching. That's just the destruction of your glowing molecule by the light used to excite it, and it's a fact of life for almost all the commonly used fluorescent tags. Beat on them enough, and they'll stop emitting light for you.
But this beast is apparently more resistant to photobleaching. (I'll bet it's resistant to a lot of things). Its NMR spectrum is rather unusual - those two protons on the central triptycene show up at 8.26 and 8.91 ppm, for example. And in case you're wondering, the M+1 peak in the mass spec comes in at a good solid 2429 mass units, a region of the detector that I'm willing to bet most of us have never explored, or not willingly. The melting point is reported as ">300 C", which is sort of disappointing - I was hoping for something in the four figures.
The paper says, rather drily, that "To direct the biological application of our 3D nanographene, water solubilization is necessary", but that's no small feat. They ended up using Pluronic surfactant, which gave them 100 nm particles of the stuff, and they tried these out on both cells and mice. The particles showed very low cytotoxicity (not a foregone conclusion by any means), and were actually internalized to some degree. Subcutaneous injection showed that the compound accumulated in several organs, especially the liver, which is just where you'd expect something like this to pile up. How long it would take to get out of the liver, though, is a good question.
The paper ends with the usual sort of language about using this as a platform for chemotherapy, etc., but I take that as the "insert technologically optimistic conclusion here" macro that a lot of people seem to have loaded into their word processing programs. The main reason this caught my eye is that this is quite possibly the least drug-like molecule I've ever seen actually dosed in an animal. When will we see its like again?
Category: Chemical News | Drug Assays
November 27, 2012
Category: Science Gifts
There's an interesting paper out in PLoS One, called "Inside the Mind of a Medicinal Chemist". Now, that's not necessarily a place that everyone wants to go - mine is not exactly a tourist trap, I can tell you - but the authors are a group from Novartis, so they knew what they were getting into. The questions they were trying to answer on this spelunking expedition were:
1) How and to what extent do chemists simplify the problem of identifying promising chemical fragments to move forward in the discovery process? 2) Do different chemists use the same criteria for such decisions? 3) Can chemists accurately report the criteria they use for such decisions?
They took 19 lucky chemists from the Novartis labs and asked them to go through 8 batches of 500 fragments each and select the desirable compounds. For those of you outside the field, that is, unfortunately, a realistic test. We often have to work through lists of this type, for several reasons: "We have X dollars to spend on the screening collection - which compounds should we buy?" "Which of these compounds we already own should still be in the collection, and which should we get rid of?" "Here's the list of screening hits for Enzyme Y: which of these look like useful starting points?" I found myself just yesterday going through about 350 compounds for just this sort of purpose.
They also asked the chemists which of a set of factors they used to make their decisions. These included polarity, size, lipophilicity, rings versus chains, charge, particular functional groups, and so on. Interestingly, once the 19 chemists had made their choices (and reported the criteria they used in doing so), the authors went through the selections using two computational classification algorithms, semi-naïve Bayesian (SNB) and Random Forest (RF). This showed that most of the chemists actually used only one or two categories as important filters, a result that ties in with studies in other fields on how experts in a given subject make decisions. Reducing the complexity of a multifactorial problem is a key step for the human brain to deal with it; how well this reduction is done (trading accuracy for speed) is what can distinguish an expert from someone who's never faced a particular problem before.
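As a toy illustration of what "most chemists actually used only one or two categories as important filters" means in practice, here's a much cruder stand-in for the paper's SNB/RF analysis: score each descriptor by the best accuracy any single-threshold rule on it can reach against a chemist's picks. The compounds and labels are invented; the paper's classifiers are far more sophisticated than this.

```python
# For each descriptor, find the best accuracy achievable by a rule of the
# form "pick iff value <= t" (or its complement, "pick iff value > t").
# A descriptor that scores near 1.0 by itself is doing most of the filtering.

def best_threshold_accuracy(values, labels):
    """Best accuracy over all single thresholds t (either rule direction)."""
    best = 0.0
    for t in sorted(set(values)):
        correct = sum((v <= t) == lab for v, lab in zip(values, labels))
        best = max(best, correct / len(labels),
                   (len(labels) - correct) / len(labels))
    return best

# Toy data: this imaginary chemist is really just filtering on size (mw),
# whatever they might report about their criteria.
compounds = [
    {"mw": 180, "clogp": 3.9, "picked": True},
    {"mw": 210, "clogp": 0.8, "picked": True},
    {"mw": 250, "clogp": 2.2, "picked": True},
    {"mw": 320, "clogp": 1.1, "picked": False},
    {"mw": 410, "clogp": 2.5, "picked": False},
]

labels = [c["picked"] for c in compounds]
for feat in ("mw", "clogp"):
    acc = best_threshold_accuracy([c[feat] for c in compounds], labels)
    print(feat, acc)  # mw separates the picks perfectly here; clogp does not
```

The expert-decision-making result mentioned above is exactly this pattern: one descriptor carries nearly all of the signal, even when the chemist reports weighing several.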
But the chemists in this sample didn't all zoom in on the same factors. One chemist showed a strong preference away from the compounds with a higher polar surface area, for example, while another seemed to make size the most important descriptor. The ones using functional groups to pick compounds also showed some individual preferences - one chemist, for example, seemed to downgrade heteroaromatic compounds, unless they also had a carboxylic acid, in which case they moved back up the list. Overall, the most common one-factor preference was ring topology, followed by functional groups and hydrogen bond donors/acceptors.
Comparing structural preferences across the chemists revealed many differences of opinion as well. One of them seemed to like fused six-membered aromatic rings (that would not have been me, had I been in the data set!), while others marked those down. Some tricyclic structures were strongly favored by one chemist, and strongly disfavored by another, which makes me wonder if the authors were tempted to get the two of them together and let them fight it out.
How about the number of compounds passed? Here's the breakdown:
One simple metric of agreement is the fraction of compounds selected by each chemist per batch. The fraction of compounds deemed suitable to carry forward varied widely between chemists, ranging from 7% to 97% (average = 45%), though each chemist was relatively consistent from batch to batch. . .This variance between chemists was not related to their ideal library size (Fig. S7A) nor linearly related to the number of targets a chemist had previously worked on (R2 = 0.05, Fig. S7B). The fraction passed could, however, be explained by each chemist’s reported selection strategy (Fig. S7C). Chemists who reported selecting only the “best” fragments passed a lower fraction of compounds (0.13±0.07) than chemists that reported excluding only the “worst” fragments (0.61±0.34); those who reported intermediate strategies passed an intermediate fraction of compounds (0.39±0.25).
Then comes a key question: how similar were the chemists' picks to each other, or to their own previous selections? A well-known paper from a few years ago suggested that the same chemists, looking at the same list after the passage of time (and more lists!), would pick rather different sets of compounds. (Update: see the comments for some interesting inside information on this work.) Here, the authors sprinkled in a couple of hundred compounds that were present in more than one list to test this out. And I'd say that the earlier results were replicated fairly well. Comparing chemists' picks to themselves, the average similarity was only 0.52, which the authors describe, perhaps charitably, as "moderately internally consistent".
But that's a unanimous chorus compared to the consensus between chemists. These had similarities ranging from 0.05 (!) to 0.52, with an average of 0.28. Overall, only 8% of the compounds had the same judgement passed on them by at least 75% of the chemists. And the great majority of those agreements were on bad compounds, as opposed to good ones: only 1% of the compounds were deemed good by at least 75% of the group!
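For anyone wondering what a pick-similarity of 0.28 actually means, a set-overlap measure like the Jaccard index makes it concrete. The fragment IDs below are hypothetical, and the paper's own similarity metric may differ in detail; this is just the general shape of the calculation.

```python
# Similarity of two chemists' selections from a shared batch, as the
# Jaccard index: |shared picks| / |all distinct picks|. Data is invented.

def jaccard(picks_a, picks_b):
    a, b = set(picks_a), set(picks_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

chemist_1 = ["frag-02", "frag-05", "frag-09", "frag-11"]
chemist_2 = ["frag-05", "frag-09", "frag-17", "frag-20", "frag-23"]

print(round(jaccard(chemist_1, chemist_2), 2))  # 0.29: two shared picks out of seven distinct
```

On this scale, the reported 0.05 means two chemists agreed on almost nothing, and even the self-consistency figure of 0.52 means a chemist shared only about half of their own earlier picks.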
There's one other interesting result to consider: recall that the chemists were asked to state what factors they used in making their decisions. How did those compare to what they actually seemed to find important? (An economist would call this a case of stated preference versus revealed preference). The authors call this an assessment of the chemists' self-awareness, which in my experience, is often a swampy area indeed. And that's what it turned out to be here as well: ". . .every single chemist reported properties that were never identified as important by our SNB or RF classifiers. . .chemist 3 reported that several properties were important, but failed to report that size played any role during selections. Our SNB and RF classifiers both revealed that size, an especially straightforward parameter to assess, was the most important."
So, what to make of all this? I'd say that it's more proof that we medicinal chemists all come to the lab bench with our own sets of prejudices, based on our own experiences. We're not always aware of them, but they're certainly with us, "sewn into the lining of our lab coats", as Tom Wolfe might have put it. The tricky part is figuring out which of these quirks are actually useful, and how often. . .
Category: Drug Assays | Life in the Drug Labs
November 26, 2012
I've decided this year that I'll be posting some recommendations for science-themed gifts, since this is the season that people will be looking around for them. This article at Smithsonian has a look at the history of the good ol' chemistry set. As I mentioned in this old post, I had one as a boy, augmented by a number of extra reagents, some of which (potassium permanganate!) were in rather too high an oxidation state for a ten-year-old. I can't report that I did much in the way of systematic experiments with all my material, but I did have a good time with it. Once in a while some combination of reagents will remind me of the smell of those bottles, and I'm instantly transported back to the early 1970s, out in a corner of the shop building in back of our house. (Elemental sulfur is a component of that smell; the rest I'm not sure about).
The Smithsonian article mentions that Thames and Kosmos chemistry sets get good reviews from people who've seen them. So if you're in the market for a gift for the kids, that might be a line to try! The potassium permanganate I'll leave up to individual discretion. . .
Category: Chemical News | Science Gifts
As mentioned the other day, this will be a post for people to ask questions directly to Philip Skinner (SDBioBrit) of Perkin-Elmer/Cambridgesoft. He's doing technical support for ChemDraw, ChemDraw4Excel, E-Notebook, Inventory, Registration, Spotfire, Chem3D, etc., and will be monitoring the comments and posting there. Hope it helps some people out!
Note - he's out on the West Coast of the US, so allow the poor guy time to get up and get some coffee in him!
Category: Chemical News | In Silico
I don't know how many readers have been following this, but there's been some interesting work over the last few years in using streptavidin (a protein that's an old friend of chemical biologists everywhere) as a platform for new catalyst systems. This paper in Science (from groups at Basel and Colorado State) has some new results in the area, along with a good set of leading references. (One of the authors has also published an overview in Accounts of Chemical Research). Interestingly, this whole idea seems to trace back to a George Whitesides paper from back in 1978, if you can believe that.
(Strept)avidin has an extremely well-characterized binding site, and its very tight interaction with biotin has been used as a sort of molecular duct tape in more experiments than anyone can count. Whitesides realized back during the Carter administration that the site was large enough to accommodate a metal catalyst center, and this latest paper is the latest in a string of refinements of that idea, this time using a rhodium-catalyzed C-H activation reaction.
A biotinylated version of the catalyst did indeed bind streptavidin, but this system showed very low activity. It's known, though, that the reaction needs a base to work, so the next step was to engineer a weakly basic residue nearby in the protein. A glutamate sped things up, and an aspartate even more (with the closely related asparagine showing up just as poorly as the original system, which suggests that the carboxylate really is doing the job). A lysine/glutamate double mutant gave even better results.
The authors then fine-tuned that system for enantioselectivity, mutating other residues nearby. Introducing aromatic groups increased both the yield and the selectivity, as it turned out, and the eventual winner was run across a range of substrates. These varied quite a bit, with some combinations showing very good yields and pretty impressive enantioselectivities for this reaction, which has never until now been performed asymmetrically, but others not performing as well.
And that's the promise (and the difficulty) of enzyme systems. Working on that scale, you're really bumping up against individual parts of your substrates on an atomic level, so results tend, as you push them, to bin into Wonderful and Terrible. An enzymatic reaction that delivers great results across a huge range of substrates is nearly a contradiction in terms; the great results come when everything fits just so. (Thus the Codexis-style enzyme optimization efforts). There's still a lot of brute force involved in this sort of work, which makes techniques to speed up the brutal parts very worthwhile. As this paper shows, there's still no substitute for Just Trying Things Out. The structure can give you valuable clues about where to do that empirical work (otherwise the possibilities are nearly endless), but at some point, you have to let the system tell you what's going on, rather than the other way around.
Category: Chemical Biology | Chemical News
November 23, 2012
After that ChemDraw post from a few days ago, I had some contact from Philip Skinner, one of the Perkin-Elmer employees who helps support their chemical software (ChemDraw, ChemDraw4Excel, E-Notebook, Inventory, Registration, Spotfire, Chem3D, and so on). He's agreed to hang around on Monday here at the site to answer whatever questions people might have about the programs - I'll start a post on the subject, and he'll handle things in the comment thread. So if you have some technical or usage questions for those programs, be sure to stop by!
Category: Blog Housekeeping
I wanted to mention that the crowdfunded CNS research that I mentioned here is now in its final 48 hours for donations. Money seems to be picking up, but it'll be close to see if they can make their target. If you're interested, donations can be made here.
Category: The Central Nervous System
November 21, 2012
Well, I know that it's an odd time for me to be posting here, but I'm up working on some Thanksgiving food for tomorrow. The chocolate pecan pie recipe that I've posted here is back, by popular demand of my family (and me), and one of them is going to be coming out of the oven in about ten minutes. I've made a pumpkin pie as well; America is all about having a multitude of options.
Tomorrow we'll be roasting a large turkey (we buy a kosher one, which takes care of the brining step that really improves the bird). And there will be stuffing - my Iranian mother-in-law's recipe, which features seasoned bread cubes, onion, celery, cranberries, and pepperoni (trust me, it works). Alongside this will be homemade mashed potatoes (with turkey gravy), sweet potatoes, green beans cooked with some Tennessee country ham, creamed onions with thyme, pan-roasted Brussels sprouts, and a huge Iranian basmati rice pullao with saffron, orange zest, pistachios, and tart red zereshk berries.
That should pretty much hold everyone. If not, well, there's not much more I can do. Never trust an organic chemist who can't cook.
Category: Blog Housekeeping
I can't even count the number of e-mails I've gotten over the last few years asking about TauRx and their Alzheimer's program, which made a big splash back in 2008. Finally, there's some news to report. The company is starting Phase III clinical trials, and has announced new financing to see these through. The company is based in Singapore, and they're getting money from a large multinational company in the region.
Good for them. The tau-based therapy they're working on is a very interesting idea, and (of course) extremely significant if it actually works. I'm happy to see that it's going to get a real chance to prove itself, and I look forward to seeing the results. Their earlier compound ("Rember") was reformulated methylene blue, but they now seem to have an improved version to go ahead with (and not just in Alzheimer's, apparently).
I know I'll get more mail about this, but let me save time by telling those interested to go here, to a site run by TauRx about their clinical trials. It seems that they have started enrolling patients in North America.
Category: Alzheimer's Disease | Clinical Trials
We'll start off with a little extraterrestrial chemistry. As many will have heard, there are all sorts of hints being dropped that the sample analyzing equipment on the Mars Curiosity rover has detected something very interesting. We'll have to wait until the first week of December to find out what it is, but my money is on polycyclic aromatic hydrocarbons or some other complex abiotic organics.
Here's a detailed look at the issue. The Martian surface has a pretty vigorous amount of perchlorate in it, which was not realized for a long time (and rather complicates the interpretation of some of the past experiments on it). But Curiosity's analytical suite was designed to deal with this, and my guess is that these techniques have worked and that organic material has been detected.
I would very much bet against any sort of strong signature of life-as-we-know-it, though. For one thing, finding that in a random sand dune would seem pretty unlikely. Actually, finding good traces anywhere in the top layer of Martian rock and dust seems unlikely (as opposed to deeper underground, where I'm willing to speculate freely on the possible existence/persistence of bacteria and such). And I'm not sure Curiosity is well equipped to discriminate abiotic from biotic compounds, anyway.
But organic compounds in general, absolutely. This brings up an interestingly false idea that underlies a lot of casual thinking about Mars (and space in general). Many people have this mental picture of everywhere outside Earth being sort of like the surface of our moon. It leads to a false dichotomy: here we have temperate air, liquid water, life and the byproducts of life (oil and coal, for example). Out there is all cold barren rock directly exposed to vacuum and hard radiation. We associate "space" with clean, barren, surfaces and knife-edge shadows, whereas "down here" it's all wet and messy. Not so.
There's plenty of irradiated rock, true, but there's water all over the outer solar system, in huge amounts. And while what we see out there is frozen, it's a near-certainty that there are massive oceans of the liquid stuff down under the various crusts of the larger outer-planet moons. All those alien-invasion movies, the ones with the extraterrestrials after our planet's water, are fun but ridiculous examples of that false dichotomy in action. There's plenty of organic chemistry, too - I've written before about how the colors of Jupiter's clouds remind me of reaction byproducts, and it's no coincidence that they do. The gas giant planets are absolutely full of organic chemicals of all varieties, and they're getting heated, pressurized, mixed, irradiated, and zapped by huge lightning storms all the hours of their days. What isn't in there?
Everything came that way. The solar system has plenty of hydrocarbons, plenty of small carbohydrates, and plenty of amines and other nitrogen-containing compounds in it. The carbonaceous chondrites are physical evidence that's fallen to Earth - some of these have clearly never been heated since their formation (since they're full of water and volatile organics), so the universe would seem to be awash in small-molecule gorp. There's another false dichotomy, that the materials for life are very rare and precious and only found down here on Earth. But they're everywhere.
Category: Chemical News | Life As We (Don't) Know It
November 20, 2012
Here's something from just this morning, a whopping large case on illegal trading in Wyeth and Elan stock. This one involves a hedge fund manager, Mathew Martoma, and (quite disturbingly), Dr. Sidney Gilman of the University of Michigan, who was the lead investigator on a very large bapineuzumab trial for Alzheimer's. His conduct appears, from the text of the complaint, to be completely inexcusable, just a total, raw tipoff of confidential information.
I blogged at the time about the trial results, not knowing, of course, that someone had been pre-warned and was trading 20 per cent of Elan's stock volume on the news (and at least ten per cent of Wyeth's). So I take back anything I said about insider trading cases becoming more small-time over the years; this case has jerked the average right back up.
Update: Adam Feuerstein on Twitter: "Gilman's presentation of bapi data at 2008 ICAD meeting was so poorly done. It was shockingly bad. Now we know why."
Category: Alzheimer's Disease | Business and Markets | Clinical Trials | The Dark Side
The recent entry here on a phenotypic screen got some discussion going in the comments, and I thought I'd bring that out here for more. Some readers objected to the paper being characterized as a phenotypic screen at all, saying that it was just a cell-based screen. That got me to thinking about how I use the term, and to judge from the comments, there are at least two schools of thought on this.
The first uses "phenotypic" to mean something like "Screening for some desired effect in a living system, independent of any defined target". That's where I come from as well, since I've spent so much of my career doing target-based drug discovery. In a target-based program, you have cell assays, too - but they're downstream of the biochemical/pharmacological assay, and are there to answer two key questions: (1) does hitting the desired target do the right things to the cells, and (2) do the compounds break out into new SAR categories in cells that aren't apparent from their activity against the target? That last part can mean that some of the compounds are cytotoxic (while others aren't), or some of them seem to get into cells a lot better than others, and so on. But they're all subordinated to the original target idea, which drives the whole project.
The other definition of phenotypic screen would be something more like: "Screening simultaneously for a broad range of effects in a living system, independent of any defined target". I would call that, personally, a "high-content" screen (or more precisely, a high-content phenotypic screen), but (as mentioned) opinions vary on this. To the people who think this way, that Broad Institute paper I blogged on was merely a cell assay that looked at the most boring endpoint of all (cell death), and hardly lifted its head beyond that. But to a target-based person, everything that involves throwing compounds onto cells, with no defined target in mind, just to see what happens. . .well, that sure isn't target-based drug discovery, so it must be a phenotypic screen. And death is a phenotype, too, you know.
I like both kinds of screening, just for the record. But they're done for different purposes. High-content screening is a great way to harvest a lot of data and generate a lot of hypotheses, but for drug discovery, it can be a bit too much like a firehose water fountain. A more narrowed-down approach (such as "We want to find some compounds that make these kinds of cells perform Action X") is closer to actionable drug discovery efforts.
At any rate, a reader sent along some good high-content-screening work, and I'll blog about that separately. More comparisons will come up then.
Category: Drug Assays
Public biopharma companies have to put in a lot of effort to safeguard sensitive information. Since we have so many big, important binary events in our business (clinical trial results, sales figures for individual drugs, and so on), you really have to keep that stuff from getting out and around.
Which means that there's also a strong incentive for such things to leak. One could do very well for one's self, if one were not so concerned with being forced to disgorge all of one's profits, and even spending one's time in the slammer. And those factors completely neglect one's sense of ethics, assuming that one has any. These concerns are brushed aside strictly on a risk basis, one understands:
John Lazorchak, 42, director of financial reporting at Celgene, regularly tipped others to nonpublic information on acquisitions, quarterly earnings results and regulatory news, according to a Federal Bureau of Investigation complaint filed yesterday in federal court in Newark, New Jersey.
Mark Cupo, 51, the director of accounting and reporting at Sanofi-Aventis, now known as Sanofi; and Mark Foldy, 42, a marketing executive at Stryker Corp., also were charged. Prosecutors said most of the profit went to Lawrence Grum, 48, and Michael Castelli, 48, who also tipped friends and family. The case involves two sets of high school friends and at least one witness who secretly recorded Grum for the FBI.
Oh, dear. The total profit, in this instance, is about $1.5 million, and standards vary, but even if I had no ethical problems with it, I wouldn't run such risks for a share of that amount. Or the full amount, either. But as this Bloomberg story details, insider trading seems to have become a rather more democratic activity over the years, and the amounts of money involved have changed accordingly. Perhaps the people involved are thinking that these sums are too small to be noticed, by the standards of Wall Street and the SEC, and that they'll have a better chance of getting away with the trades.
Not so. I knew someone once who was having a dispute with the IRS, and was (by my standards) insufficiently concerned about his situation. "I'm just a little guy", was the response, "they don't care about someone like me". What I told him was "Whales eat plankton, you know". In that spirit, that second link gives the grim details of a case involving an employee at Seattle Genetics, and it could serve as the template for many others like it. It's a sad story. Most of them are.
Category: Business and Markets | The Dark Side
November 19, 2012
If you want to see the effects of (a) patent expirations on big-selling small molecules and (b) the lack of patent expiration effects on biologics (for now), take a look at the likely list of best-selling drugs of 2012. There are three small molecule therapies in the top ten: Advair, Crestor, and Lipitor, all of which are getting rather elderly. More show up below that point, but it's going to be hard to dislodge those antibodies from the upper reaches of the list. . .
Via Rich Apodaca's Twitter feed.
Category: Business and Markets
This would seem to be inviting the wrath of the Drug Development Gods, and man, are they a testy bunch: "Novartis could produce 14 or more new big-selling 'blockbuster' drugs within five years . . ."
I'll certainly wish them luck on that, and it certainly seems true that Novartis research has been productive. But think back - how many press releases have you seen over the years where Drug Company A predicts X number of big product launches in the next Y years? And how many of those schedules have ever quite worked out? The most egregious examples of this take the form of claiming that your new strategy/platform/native genius/good looks have now allowed you to deliver these things on some sort of regular schedule. When you hear someone talking about how even though they haven't been able to do anything like it in the past, they're going to start unleashing a great new drug product launch every year (or every 18 months, what have you) from here on out, run.
Now, Novartis isn't talking like this, and they have a much better chance of delivering on this than most, but still. Might it not be better just to creep up on people with all those great new products in hand, rather than risk disappointment?
Category: Business and Markets | Drug Development
Via a reader, here's an excellent YouTube video for those of you who use ChemDraw. I've been using the software since it came out, and there are several useful tricks here that I didn't know were even in the software. Did you know that you could give your common structures nicknames, so that the program would immediately draw them when you typed in the name? Or how to use the "Sprout" tool for drawing bonds without going to the bond-drawing tool? There's also a detailed look at customizing hotkeys, which for a heavy ChemDraw user will make you look like you have magic powers. Well worth a look. Update: see the comments for more if you're into this sort of thing!
I'd still like to see how quickly all these would allow you to draw something like this (well, other than giving it a nickname - I'd suggest "Jabba" or "Chemzilla" - and having it appear instantly). Of course, those of us old enough to remember the pre-ChemDraw (or any-other-draw) days will have a different perspective on the whole field. I remember the first time I saw the program being used, which would have been 1986, not an awfully long time after it came out (see the timeline of computers in chemistry here). Like every other practicing organic chemist, as soon as I saw the program I knew that I had to have it. It was, as they say, a "killer app", and ChemDraw sold Macs, albeit on a smaller scale than VisiCalc sold Apple IIs. But it's hard to get across how those programs felt, unless you've actually rubbed Helvetica capital letters from a transfer sheet into an ink-drawn chair-conformation ring to make a drawing of a carbohydrate, or had to go back and manually erase (and write in) half a column of figures because you had to recalculate them. It feels like, instead of hitting "Print", being given instead a slab of hardwood and some sharp tools with which to start carving out a block for inking. Or instead of hitting "Send", having someone bring you a horse.
Category: Chemical News
November 16, 2012
Here's a paper that I missed in Organic Process Research and Development earlier this year, extolling the virtues of sulfolane as a high-temperature polar solvent. I have to say, I've never used it, although I hear of it being used once in a while, mainly by people who are really having to crank the temperature on some poor reaction.
The only bad thing I've heard about it is its difficulty of removal. The high-boiling polar aprotics all have this problem, of course (DMSO is no treat to get out of your sample sometimes, either, although it's so water-soluble that you always have aqueous extraction on your side). But sulfolane is higher-boiling than all the rest (287 °C!), and it also freezes at about 28 °C, which could be a problem, too. (The paper notes that small amounts of water lower the freezing temperature substantially, and that 97/3 sulfolane/water is an article of commerce itself, probably for that reason). It has an unusual advantage, though, from a safety standpoint: it stands out from all the other polar aprotics as having remarkably poor skin penetration (as contrasted very much with DMSO, for example). It's more toxic than the others, but the poor skin penetration makes up for that, as long as you're not ingesting it some other way, which is Not Advised.
The paper gives a number of examples where this solvent proved to be just the thing, so I'll have to keep it in mind. Anyone out there care to share any hands-on experiences?
Category: Chemical News | Life in the Drug Labs
And here's another item, sent in by a reader, who noted this publication in Bioorganic and Medicinal Chemistry Letters. I have no problem with the work at all, and certainly no problem with the people who did it (some of whom I know), but Part Eleven? I'm trying to figure out why this would be sliced quite so thinly - the only thing that comes to mind is to scatter a wide group of co-authors across several publications, so as to give everyone something on their CV. But how much does a multipart BOMCL count for - heck, while we're on the subject, how much does most any publication count for in today's hiring environment? Update: note that this is not one of those multiyear series things - most of these appear to be in press right now.
Category: The Scientific Literature
Continuing with some more short links for today, those of you who are interested in what small-stock operators can get up to will enjoy this one, from Adam Feuerstein. What should have been about a $50 million infusion of cash for a small nutritional-supplement company turned into an $18 million infusion of cash. Where, you ask, did the rest of the money go? Read the fine print, and remember, this sort of thing goes on a lot, although it's rarely quite so blatant as this cynical rip-off. Something to keep in mind when you hear about a distressed small company being "rescued".
Category: Business and Markets
Alan Dove has it right here:
In a groundbreaking new study, scientists at Some University have discovered that a single molecule may drive people to perform that complex behavior we’ve all observed. Though other researchers consider the results of the small, poorly structured experiment misleading, a well-written press release ensures that their criticisms will be restricted to brief quotes buried near the bottoms of most news stories on the work, if they’re included at all.
There's more at the link, and believe me, you've seen releases that conform to this template so perfectly, it's eerie. I'm reminded of this famous BBC news report. . .
Category: Business and Markets | The Scientific Literature
For those wanting a timeline of the whole hexacyclinol business, with links to the articles, blogs, and commentary that's surrounded it, allow me to recommend Carmen Drahl's "History of the Hexacyclinol Hoo-Hah". (And no, the whole thing is not written in alliteration; for that, you'll be wanting this).
Category: Chemical News | The Scientific Literature
November 15, 2012
I like to highlight phenotypic screening efforts here sometimes, because there's evidence that they can lead to drugs at a higher-than-usual rate. And who couldn't use some of that? Here's a new example from a team at the Broad Institute.
They're looking at the very popular idea of "cancer stem cells" (CSCs), a population of cells in some tumors that appear to be disproportionately resistant to current therapies (and disproportionately responsible for tumor relapse and regrowth). This screen uses a surrogate breast cell line, with E-cadherin knocked down, which seems to give the dedifferentiated phenotype you'd want to target. That's a bit risky, using an artificial system like that, but as the authors correctly point out, isolating a pure population of the real CSCs is difficult-to-impossible, and they're very poorly behaved in cell culture. So until those problems are solved, you have your choice - work on something that might translate over to the real system, or ditch the screening idea for now entirely. I think the first is worth a shot, as long as its limitations are kept in mind.
This paper does go on to do something very important, though - they use an isogenic cell line as a counterscreen, very close to the target cells. If you find compounds that hit the targets but not these controls, you have a lot more confidence that you're getting at some difference that's tied to the loss of E-cadherin. Using some other cell line as a control leaves too many doors open too wide; you could see "confirmed hits" that are taking advantage of totally irrelevant differences between the cell lines instead.
They ran a library of about 300,000 compounds (the MLSMR collection) past the CSC model cells, and about 3200 had the desired toxic effect on them. At this point, the team removed the compounds that were flagged in PubChem as toxic to normal mammalian cell lines, and also removed compounds that had hit in more than 10% of the assays they'd been through, both of which I'd say are prudent moves. Retesting the remaining 2200 compounds gave a weird result: at the highest concentration (20 micromolar), 97 per cent of them were active. I probably would have gotten nervous at that point, wondering if something had gone haywire with the assay, and I'll bet that a few folks at the Broad felt the same way.
But when they used the isogenic cell line, things narrowed down rather quickly. Only 26 compounds showed reasonable potency on the target cells along with at least a 25-fold window for toxicity to the isogenic cells. (Without that screen, then, you'd have been chasing an awful lot of junk). Then they ordered up fresh samples of these, which is another step that, believe me, you don't want to neglect. A number of compounds appear to have not been quite what they were supposed to be (not an uncommon problem in a big screening collection; you trust the labels unconditionally at your own peril).
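The triage funnel described above - drop the PubChem-flagged cytotoxins, drop the frequent hitters (active in more than 10% of prior assays), then keep only compounds with at least a 25-fold selectivity window against the isogenic control line - can be sketched as a simple filter. This is purely an illustrative sketch: the field names, thresholds as cutoff logic, and example potency numbers are my own invented stand-ins, not data from the paper.

```python
def triage_hits(hits):
    """Filter a primary-screen hit list.

    Each hit is a dict with (hypothetical) keys:
      'id'             - compound identifier
      'toxic_flag'     - PubChem-style general-cytotoxicity annotation
      'assay_hit_rate' - fraction of prior assays the compound hit in
      'ic50_target'    - potency (uM) vs. the CSC-model cell line
      'ic50_control'   - potency (uM) vs. the isogenic control line
    """
    survivors = []
    for h in hits:
        if h["toxic_flag"]:                # drop known general cytotoxins
            continue
        if h["assay_hit_rate"] > 0.10:     # drop promiscuous/frequent hitters
            continue
        # require >= 25-fold selectivity window over the control line
        if h["ic50_control"] / h["ic50_target"] < 25:
            continue
        survivors.append(h["id"])
    return survivors

example = [
    {"id": "cmpd-1", "toxic_flag": False, "assay_hit_rate": 0.02,
     "ic50_target": 0.5, "ic50_control": 20.0},   # 40-fold window: kept
    {"id": "cmpd-2", "toxic_flag": False, "assay_hit_rate": 0.30,
     "ic50_target": 0.5, "ic50_control": 20.0},   # frequent hitter: dropped
    {"id": "cmpd-3", "toxic_flag": True, "assay_hit_rate": 0.01,
     "ic50_target": 0.1, "ic50_control": 50.0},   # flagged toxic: dropped
]
print(triage_hits(example))  # → ['cmpd-1']
```

The point of ordering the filters this way is the same as in the paper: cheap annotation-based cuts first, and the expensive selectivity comparison only on what survives.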
In the end, two acylhydrazone compounds ended up retaining their selectivity after rechecking. So you can see how things narrow down in these situations: 300K to 2K to 26 to 2, and that's not such an unusual progression at all. The team made a series of analogs around the lead chemical matter, and then settled on the acylhydrazone compound shown (ML239) as the best in show. It's not a beauty. There seems to be some rule that the more rigorous and unusual a phenotypic screen is, the uglier the compounds that emerge from it. I'm only half kidding, or maybe a bit less - there are some issues to think about in there, and that topic is worth a post of its own.
More specifically, the obvious concern is that fulvene-looking pyrrole thingie on the right (I use "thingie" in its strict technical sense here). That's not a happy-looking (that is, particularly stable-looking) group. The acylhydrazine part might raise eyebrows with some people, but Rimonabant (among other compounds) shows that that functional group can be part of a drug. Admittedly, Rimonabant went down with all hands, but it wasn't because of the acylhydrazine. And the trichloroaryl group isn't anyone's favorite, either, but in this context, it's just sort of a dessert topping, in an inverse sense.
But the compound appears to be the real thing, as a pharmacological tool. It was also toxic to another type of breast cancer cell that had had its E-cadherin disrupted, and to a further nonengineered breast cancer cell line. Now comes the question: how does this happen? Gene expression profiling showed a variety of significant changes, with all sorts of cell death and free radical scavenging things altered. By contrast, when they did the same profiling on the isogenic controls, only five genes were altered to any significant extent, and none of those overlapped with the target cells. This is very strong evidence that something specific and important is being targeted here. A closer analysis of all the genes suggests the NF-kappaB system, and within that, perhaps a protein called TRIB3. Further experiments will have to be done to nail that down, but it's a good start. (And yes, in case you were wondering, TRIB3 does, in fact, stand for "tribble-3", and yes, that name did originate with the Drosophila research community, and how did you ever guess?)
So overall, I'd say that this is a very solid example of how phenotypic screening is supposed to work. I recommend it to people who are interested in the topic - and to people who aren't, either, because hey, you never know when it might come in handy. This is how a lot of new biology gets found, through identifying useful chemical matter, and we can never have too much of it.
Category: Cancer | Chemical Biology | Drug Assays
November 14, 2012
Via Chemjobber's Twitter feed comes news of this: the formal retraction of the LaClair hexacyclinol synthesis.
The retraction has been agreed due to lack of sufficient Supporting Information. In particular, the lack of experimental procedures and characterization data for the synthetic intermediates as well as copies of salient NMR spectra prevents validation of the synthetic claims. The author acknowledges this shortcoming and its potential impact on the community
Potential? After six years? There were people taking their first undergraduate organic course when this controversy hit who are now thinking about how to start tying together their PhD dissertations. It seems that Angewandte Chemie is very loath to go the full-retraction route (there haven't been many), but that retraction notice doesn't bring up anything that wasn't apparent after the first ten minutes of reading the paper.
Update: Wavefunction isn't too impressed, either.
Category: Chemical News | The Scientific Literature
Note: politics ahead. This will not be a regular feature around here, but when events warrant, it'll rear its scaly head.
BioCentury has an interesting piece this week on the growing budget impasse and its implications for both academic and industrial biomedical research. It's already widely known that the so-called "Fiscal Cliff", the budget sequestration process that will trigger if no better deal is reached, will perforce come after funding for both the NIH and the FDA. It's always tricky to figure out the impact of such spending cuts, due to the well-known "Washington Monument" tactic. (That refers to the way that if you try to cut the budget for, say, the Park Service, the first thing they'll do is close the Washington Monument. After all, you are having to save money, right? And if you can do it in a way that causes the most outrage and inconvenience, thus increasing the chance that your budget will be restored, well, why wouldn't you?)
So that means that I don't necessarily believe all the predictions for what sequestration would do to any given agency's budget. But there's no doubt that it would have a powerful effect. At the very least, current plans for increased services or expanded programs would immediately go into the freezer, and there would be layoffs and program cancellations on top of that. New NIH grants would surely be hit, and the approval process at the FDA would slow down. Budget sequestration would not mean The End of Science in America, but we'd feel it, all right.
The flip side of budget-cutting is raising revenue. And for that, we can (among many other places) turn back to the deals made with PhRMA when the Affordable Care Act (aka "Obamacare") was passed. Says BioCentury:
Many of the deficit reduction playbooks Congress and the White House will consult include recommendations to suck money out of the pharmaceutical industry. These include a number of proposals that were taken off the table in the PhRMA deal to support the Affordable Care Act.
Near the top of the list: Imposing rebates on drugs purchased under Medicare Part D by so-called “dual-eligibles,” individuals who are eligible for both Medicare and Medicaid.
The Obama administration’s proposed fiscal 2013 budget projected $135 billion in revenues over a decade from dual-eligibles rebates. The idea, which is anathema to PhRMA, was also endorsed by the National Commission on Fiscal Responsibility and Reform chaired by Alan Simpson, a former Republican senator from Wyoming, and Erskine Bowles, President Clinton’s chief of staff.
The White House is also likely to continue to press for reducing the exclusivity period for biologics to seven years from the 12 years established when Congress created a biosimilars pathway in the Affordable Care Act.
Some readers may recall that I predicted something like this. There's a quote from the head of a health-care consulting firm, who says that "Everything that was taken off the table is back", and I can't say that I'm surprised. The twelve-year exclusivity idea had already been on the block to be chopped; I assume that one way or another, it's a goner.
Here's another provision of the Affordable Care Act that could affect the pharma industry. Starting in 2014, health insurance plans will have a defined "minimum level of coverage", which will be determined state-by-state. Late last year, the Department of Health and Human Services said that it plans to require that "essential" will mean one drug in each therapeutic class, with that one drug to be determined by some process I can only imagine. That idea hasn't been popular, with either drug companies or patients, and one might expect to see it altered. But not without a huge amount of wrangling, that's for sure.
Category: Business and Markets | Current Events | Regulatory Affairs
November 13, 2012
There's an interesting article posted on Nassim Taleb's web site, titled "Understanding is a Poor Substitute for Convexity (Antifragility)". It was recommended to me by a friend, and I've been reading it over for its thoughts on how we do drug research. (This would appear to be an excerpt from, or summary of, some of the arguments in the new book Antifragile: Things That Gain from Disorder, which is coming out later this month).
Taleb, of course, is the author of The Black Swan and Fooled by Randomness, which (along with his opinions about the recent financial crises) have made him quite famous.
So this latest article is certainly worth reading, although much of it reads like the title, that is, written in fluent and magisterial Talebian. This blog post is being written partly for my own benefit, so that I make sure to go to the trouble of a translation into my own language and style. I've got my idiosyncrasies, for sure, but I can at least understand my own stuff. (And, to be honest, a number of my blog posts are written in that spirit, of explaining things to myself in the process of explaining them to others).
Taleb starts off by comparing two different narratives of scientific discovery: luck versus planning. Any number of works contrast those two. I'd say that the classic examples of each (although Taleb doesn't reference them in this way) are the discovery of penicillin and the Manhattan Project. Not that I agree with either of those categorizations - Alexander Fleming, as it turns out, was an excellent microbiologist, very skilled and observant, and he always checked old culture dishes before throwing them out just to see what might turn up. And, it has to be added, he knew what something interesting might look like when he saw it, a clear example of Pasteur's quote about fortune and the prepared mind. On the other hand, the Manhattan Project was a tremendous feat of applied engineering, rather than scientific discovery per se. The moon landings, often used as a similar example, are exactly the same sort of thing. The underlying principles of nuclear fission had been worked out; the question was how to purify uranium isotopes to the degree needed, and then how to bring a mass of the stuff together quickly and cleanly enough. These processes needed a tremendous amount of work (it wasn't obvious how to do either one, and multiple approaches were tried under pressure of time), but the laws of (say) gaseous diffusion were already known.
But when you look over the history of science, you see many more examples of fortunate discoveries than you see of planned ones. Here's Taleb:
The luck versus knowledge story is as follows. Ironically, we have vastly more evidence for results linked to luck than to those coming from the teleological, outside physics —even after discounting for the sensationalism. In some opaque and nonlinear fields, like medicine or engineering, the teleological exceptions are in the minority, such as a small number of designer drugs. This makes us live in the contradiction that we largely got here to where we are thanks to undirected chance, but we build research programs going forward based on direction and narratives. And, what is worse, we are fully conscious of the inconsistency.
"Opaque and nonlinear" just about sums up a lot of drug discovery and development, let me tell you. But Taleb goes on to say that "trial and error" is a misleading phrase, because it tends to make the two sound equivalent. What's needed is an asymmetry: the errors need to be as painless as possible, compared to the payoffs of the successes. The mathematical equivalent of this property is called convexity; a nonlinear convex function is one with larger gains than losses. (If they're equal, the function is linear). In research, this is what allows us to "harvest randomness", as the article puts it.
An example of such a process is biological evolution: most mutations are harmless and silent. Even the harmful ones will generally just kill off the one organism with the misfortune to bear them. But a successful mutation, one that enhances survival and reproduction, can spread widely. The payoff is much larger than the downside, and the mutations themselves come along for free, since some looseness is built into the replication process. It's a perfect situation for blind tinkering to pay off: the winners take over, and the losers disappear.
Taleb goes on to say that "optionality" is another key part of the process. We're under no obligation to follow up on any particular experiment; we can pick the one that worked best and toss the rest. This has its own complications, since we have our own biases and errors of judgment to contend with, as opposed to the straightforward questions of evolution ("Did you survive? Did you breed?"). But overall, it's an important advantage.
The article then introduces the "convexity bias", which is defined as the difference between a system with equal benefit and harm for trial and error (linear) and one where the upsides are higher (nonlinear). The greater the split between those two, the greater the convexity bias, and the more volatile the environment, the greater the bias is as well. This is where Taleb introduces another term, "antifragile", for phenomena that have this convexity bias, because they're equipped to actually gain from disorder and volatility. (His background in financial options is apparent here). What I think of at this point is Maxwell's demon, extracting useful work from randomness by making decisions about which molecules to let through his gate. We scientists are, in this way of thinking, members of the same trade union as Maxwell's busy creature, since we're watching the chaos of experimental trials and natural phenomena and letting pass the results we find useful. (I think Taleb would enjoy that analogy). The demon is, in fact, optionality manifested and running around on two tiny legs.
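A quick way to make that convexity bias concrete (a sketch of my own, not something from Taleb's article) is to compare the average payoff of a convex bet with the payoff of the average outcome. With losses floored at zero and gains kept in full, the average payoff comes out ahead, and the gap widens as the volatility goes up:

```python
import random

def convexity_bias(sigma, trials=200_000):
    """Average payoff of a convex bet minus the payoff of the average outcome.

    The payoff max(x, 0) is convex: downside capped, upside unlimited."""
    random.seed(0)  # same underlying draws at every volatility level
    xs = [random.gauss(0.0, sigma) for _ in range(trials)]
    avg_payoff = sum(max(x, 0.0) for x in xs) / trials
    payoff_of_avg = max(sum(xs) / trials, 0.0)  # the linear-world benchmark, near zero
    return avg_payoff - payoff_of_avg

calm = convexity_bias(0.5)
wild = convexity_bias(2.0)  # more volatility, larger bias
```

This is just Jensen's inequality with dice: the bias is positive for any convex payoff, and a more volatile environment makes it bigger, which is the sense in which such a system gains from disorder.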
Meanwhile, a more teleological (that is, aimed and coherent) approach is damaged under these same conditions. Uncertainty and randomness mess up the timelines and complicate the decision trees, and it just gets worse and worse as things go on. It is, by these terms, fragile.
Taleb ends up with seven rules that he suggests can guide decision making under these conditions. I'll add my own comments to these in the context of drug research.
(1) Under some conditions, you'd do better to improve the payoff ratio than to try to increase your knowledge about what you're looking for. One way to do that is to lower the cost-per-experiment, so that a relatively fixed payoff then is larger in comparison. The drug industry has realized this, naturally: our payoffs are (in most cases) somewhat out of our control, although the marketing department tries as hard as possible. But our costs per experiment range from "not cheap" to "potentially catastrophic" as you go from early research to Phase III. Everyone's been trying to bring down the costs of later-stage R&D for just these reasons.
(2) A corollary is that you're better off with as many trials as possible. Research payoffs, as Taleb points out, are very nonlinear indeed, with occasional huge winners accounting for a disproportionate share of the pool. If we can't predict these - and we can't - we need to make our nets as wide as possible. This one, too, is appreciated in the drug business, but it's a constant struggle on some scales. In the wide view, this is why the startup culture here in the US is so important, because it means that a wider variety of ideas are being tried out. And it's also, in my view, why so much M&A activity has been harmful to the intellectual ecosystem of our business - different approaches have been swallowed up, and they then disappear as companies decide, internally, on the winners.
And inside an individual company, portfolio management of this kind is appreciated, but there's a limit to how many projects you can keep going. Spread yourself too thin, and nothing will really have a chance of working. Staying close to that line - enough projects to pick up something, but not so many as to starve them all - is a full-time job.
(3) You need to keep your "optionality" as strong as possible over as long a time as possible - that is, you need to be able to hit a reset button and try something else. Taleb says that plans ". . .need to stay flexible with frequent ways out, and counter to intuition, be very short term, in order to properly capture the long term. Mathematically, five sequential one-year options are vastly more valuable than a single five-year option." I might add, though, that they're usually priced accordingly (and as Taleb himself well knows, looking for those moments when they're not priced quite correctly is another full-time job).
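That five-versus-one claim is easy to check numerically. Here's a Monte Carlo sketch of my own (assuming a driftless random walk of annual steps for the project's value, which is not something the quoted passage specifies): five one-year options whose strike resets each year let you keep every year's up-move, while a single five-year option pays only on the end-to-end move:

```python
import random

def option_values(trials=200_000, sigma=1.0):
    """Five sequential one-year options (strike reset annually) vs. one
    five-year option, valued on a driftless random walk of annual steps."""
    seq_total = long_total = 0.0
    for _ in range(trials):
        steps = [random.gauss(0.0, sigma) for _ in range(5)]
        seq_total += sum(max(s, 0.0) for s in steps)   # walk away from every down year
        long_total += max(sum(steps), 0.0)             # only the net five-year move counts
    return seq_total / trials, long_total / trials

sequential, single = option_values()
# the sequential options come out worth roughly sqrt(5) times the single one
```

The ratio falls out of the fact that an at-the-money option's value grows only as the square root of its term, so five short resets beat one long commitment.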
(4) This one is called "Nonnarrative Research", which means the practice of investing with people who have a history of being able to do this sort of thing, regardless of their specific plans. And "this sort of thing" generally means a lot of that third recommendation above, being able to switch plans quickly and opportunistically. The history of many startup companies will show that their eventual success often didn't bear as much relation to their initial business plan as you might think, which means that "sticking to a plan", as a standalone virtue, is overrated.
At any rate, the recommendation here is not to buy into the story just because it's a good story. I might draw the connection here with target-based drug discovery, which is all about good stories.
(5) Theory comes out of practice, rather than practice coming out of theory. Ex post facto histories, Taleb says, often work the story around to something that looks more sensible, but his claim is that in many fields, "tinkering" has led to more breakthroughs than attempts to lay down new theory. His reference is to this book, which I haven't read, but is now on my list.
(6) There's no built-in payoff for complexity (or for making things complex). "In academia," though, he says, "there is". Don't, in other words, be afraid of what look like simple technologies or innovations. They may, in fact, be valuable, but have been ignored because of this bias towards the trickier-looking stuff. What this reminds me of is what Philip Larkin said he learned by reading Thomas Hardy: never be afraid of the obvious.
(7) Don't be afraid of negative results, or paying for them. The whole idea of optionality is finding out what doesn't work, and ideally finding that out in great big swaths, so we can narrow down to where the things that actually work might be hiding. Finding new ways to generate negative results quickly and more cheaply, which can mean new ways to recognize them earlier, is very valuable indeed.
Taleb finishes off by saying that people have criticized such proposals as the equivalent of buying lottery tickets. But lottery tickets, he notes, are terribly overpriced, because people are willing to overpay for a shot at a big payoff on long odds. Lotteries, though, have a fixed upper bound, whereas R&D's upper bound is completely unknown. And Taleb gets back to his financial-crisis background by pointing out that the history of banking and finance demonstrates the folly of betting against long shots ("What are the odds of this strategy suddenly going wrong?"), and that in this sense, research is a form of reverse banking.
Well, those of you out there who've heard the talk I've been giving in various venues (and in slightly different versions) the last few months may recognize that point, because I have a slide that basically says that drug research is the inverse of Wall Street. In finance, you try to lay off risk, hedge against it, amortize it, and go for the steady payoff strategies that (nonetheless) once in a while blow up spectacularly and terribly. Whereas in drug research, risk is the entire point of our business (a fact that makes some of the business-trained people very uncomfortable). We fail most of the time, but once in a while have a spectacular result in a good direction. Wall Street goes short risk; we have to go long.
I've been meaning to get my talk up on YouTube or the like; and this should force me to finally get that done. Perhaps this weekend, or over the Thanksgiving break, I can put it together. I think it fits in well with what Taleb has to say.
Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why
November 12, 2012
Here's a general organic chemistry question for the crowd, inspired by a recent discussion among colleagues. We were whiteboarding around some structures, and the statement was made that "By this time in the history of organic chemistry, unknown heterocycles are probably unknown for a very good reason". So, true or false? Are the rings that we haven't made yet mostly unmade because they're very hard (or impossible), or mostly because no one's ever cared about them (or realized that they'd made them at all)?
Note that this problem was the subject of some thorough theme-and-variations work a few years ago. That paper would suggest that as many as 90% of the unknown heterocycles are simply not feasible to make, but that still leaves you with three thousand or so that are. So the answer to the question above might turn out to be "Both at the same time. . ."
Category: Chemical News
The overhyped nature of stem cell therapies is a topic that's come up here several times. In the latest developments, Pluristem, Inc., is threatening to sue Bloomberg News for their recent report, titled "Girl Dies As Pluristem Sells On Gains With Miracle Cells". Gosh, it's hard to see why the company would take exception to a headline like that, but here's how the piece leads off, in case things weren't clear:
Pluristem Therapeutics Inc.’s (PSTI) stock doubled in Nasdaq trading from May through September, helped by three news releases announcing that patients’ lives had been saved by injections of the company’s experimental stem cells.
After the stock soared on the positive news, two top executives profited by selling shares at the highest price in more than four years as part of a pre-determined program. When the first of those patients, a 7-year-old girl with a bone-marrow disease, died four months after the company said her life had been saved, Pluristem was silent. The company raised $34 million selling shares a week later.
Not so good. But as that link in the first paragraph shows, Pluristem's response has not cleared things up very much. In the same press release in which they demanded a correction from Bloomberg, they revealed that another of their three initial patients had also died after four months, which likewise had not been announced before. The earlier press releases for all three patients are well-stocked with phrases like "medical miracle" and "life-saving". As long as this sort of thing is going on, the stem cell field will continue to have problems.
Update: interestingly, this post seems to have brought a lot of Pluristem's stock market fans flocking. And I mean this in the best possible way, but their appearance here does not inspire confidence.
Category: Clinical Trials | Press Coverage
November 9, 2012
Check out this graph from a recent ACS Webinar, as reprinted by Chemjobber. It shows PhDs awarded in the US over a forty-year period. And while chemistry degrees have been running a bit high for a few years, which surely hasn't helped the employment situation, they're still in the same rough 2000 to 2400 per year range that they've been in since I got my own PhD in 1988. The bigger employment problem for chemists is surely demand; that's slumped much harder than any supply increase.
But will you look at the "Biomedical PhD" line! It had a mighty climb in the late 1980s and early 1990s, then leveled off for a few years. But starting in 2004, it has been making another strong, powerful ascent, and into a vicious job market, too. . .what's driving this? Any thoughts?
Category: Business and Markets | Graduate School
There's been an interesting recent development in the biology of Alzheimer's disease. c-Jun N-terminal kinase 3 (JNK3, known to those in the field, semi-affectionately, as "Junk-Three") is expressed mostly in the CNS, and has been implicated as a player in Parkinson's and neurodegeneration in general. There's been evidence of its relevance to Alzheimer's (for example, here's a connection to tau protein), but it's hard to say what's actually going on.
A group at Ohio State has cranked up the interest level. They found that deleting JNK3 in a mouse model of amyloid deposition showed some rather dramatic effects, knocking the amyloid levels down by 90% and improving the cognitive function of the mice relative to controls. The hypothesis is that some unknown factor in Alzheimer's pathology leads to increased JNK3 activity, which sets off downstream effects in the mTOR and AMPK systems. (Given how central those proteins are, I can believe almost anything if you tell me that they're involved). These effects (on protein production and other systems) increase JNK3 activity even more, and a vicious cycle could be well underway.
Now, inhibitors of these enzymes have been the subject of research for quite a while now. Here's the most recent paper on such compounds, but there are quite a few others scattered through the literature - here's a 2010 review of them. Selectivity has been a problem, as has cell penetration - and if you're targeting the CNS, which you'd surely have to do for this approach to Alzheimer's, you have the blood-brain barrier to think about, too. (That link goes to a new paper that's worth a post of its own next week).
So no, there aren't any obvious JNK3 inhibitors ready to go into human Alzheimer's trials. On the other hand, a lot of companies have chemical matter in this area, and this new result makes it worthwhile to go back and see what there might be in the files. AD is one of the biggest "unmet medical need" areas out there, and plausible targets for it are always going to attract attention. Watch this area to see who goes for this one.
Category: Alzheimer's Disease
November 8, 2012
From the folks at FierceBiotech, here's a list of the "most frequently cited" takeover targets in the biotech sector. As John Carroll put it, the key seems to be: "targets that include either late-stage blockbuster candidates or some clearly defined new products on the market that can be had for $1 billion to $6 billion." I'll tell you that Onyx is number one on the list, but take a look at the rest and see if you agree. . .
Category: Business and Markets
We're getting closer to real-time X-ray structures of protein function, and I think I speak for a lot of chemists and biologists when I say that this has been a longstanding dream. X-ray structures, when they work well, can give you atomic-level structural data, but they've been limited to static time scales. In the old, old days, structures of small molecules were a lot of work, and the structure of a protein took years of hard labor and was obvious Nobel Prize material. As time went on, brighter X-ray sources and much better detectors sped things up (since a lot of the X-rays diffracted from a large compound are of very low intensity), and computing power came along to crunch through the piles of data thus generated. These days, X-ray structures are generated for systems of huge complexity and importance. Working at that level is no stroll through the garden, but more tractable protein structures are generated almost routinely (although growing good protein crystals is still something of a dark art, and is accomplished through what can accurately be called enlightened brute force).
But even with synchrotron X-ray sources blasting your crystals, you're still getting a static picture. And proteins are not static objects; the whole point of them is how they move (and for enzymes, how they get other molecules to move in their active sites). I've heard Barry Sharpless quoted to the effect that understanding an enzyme by studying its X-ray structures is like trying to get to know a person by visiting their corpse. I haven't heard him say that (although it sounds like him!), but whoever said it was correct.
Comes now this paper in PNAS, a multinational effort with the latest on the attempts to change that situation. The team is looking at photoactive yellow protein (PYP), a blue-light receptor protein from a purple sulfur bacterium. Those guys vigorously swim away from blue light, which they find harmful, and this seems to be the receptor that alerts them to its presence. And the inner workings of the protein are known, to some extent. There's a p-coumaric acid in there, bound to a Cys residue, and when blue light hits it, the double bond switches from trans to cis. The resulting conformational change is the signaling event.
But while knowing things at that level is fine (and took no small amount of work), there are still a lot of questions left unanswered. The actual isomerization is a single-photon event and happens in a picosecond or two. But the protein changes that happen after that, well, those are a mess. A lot of work has gone into trying to unravel what moves where, and when, and how that translates into a cellular signal. And although this is a mere purple sulfur bacterium (What's so mere? They've been on this planet a lot longer than we have), these questions are exactly the ones that get asked about protein conformational signaling all through living systems. The rods and cones in your eyes are doing something very similar as you read this blog post, as are the neurotransmitter receptors in your optic nerves, and so on.
This technique, variations of which have been coming on for some years now, uses multiple wavelengths of X-rays simultaneously, and scans them across large protein crystals. Adjusting the timing of the X-ray pulse compared to the light pulse that sets off the protein motion gives you time-resolved spectra - that is, if you have extremely good equipment, world-class technique, and vast amounts of patience. (For one thing, this has to be done over and over again from many different angles).
And here's what's happening: first off, the cis structure is quite weird. The carbonyl is 90 degrees out of the plane, making (among other things) a very transient hydrogen bond with a backbone nitrogen. Several dihedral angles have to be distorted to accommodate this, and it's a testament to the weirdness of protein active sites that it exists at all. It then twangs back to a planar conformation, but at the cost of breaking another hydrogen bond back at the phenolate end of things. That leaves another kind of strain in the system, which is relieved by a shift to yet another intermediate structure through a dihedral rotation, and that one in turn goes through a truly messy transition to a blue-shifted intermediate. That involves four hydrogen bonds and a 180-degree rotation in a dihedral angle, and seems to be the weak link in the whole process - about half the transitions fail and flop back to the ground state at that point. That also lets a crucial water molecule into the mix, which sets up the transition to the actual signaling state of the protein.
If you want more details, the paper is open-access, and includes movie files of these transitions and much more detail on what's going on. What we're seeing is light energy being converted (and channeled) into structural strain energy. I find this sort of thing fascinating, and I hope that the technique can be extended in the way the authors describe:
The time-resolved methodology developed for this study of PYP is, in principle, applicable to any other crystallizable protein whose function can be directly or indirectly triggered with a pulse of light. Indeed, it may prove possible to extend this capability to the study of enzymes, and literally watch an enzyme as it functions in real time with near-atomic spatial resolution. By capturing the structure and temporal evolution of key reaction intermediates, picosecond time-resolved Laue crystallography can provide an unprecedented view into the relations between protein structure, dynamics, and function. Such detailed information is crucial to properly assess the validity of theoretical and computational approaches in biophysics. By combining incisive experiments and theory, we move closer to resolving reaction pathways that are at the heart of biological functions.
Speed the day. That's the sort of thing we chemists need to really understand what's going on at the molecular level, and to start making our own enzymes to do things that Nature never dreamed of.
Category: Analytical Chemistry | Biological News | Chemical Biology | Chemical News
November 7, 2012
Now here's a subject that most medicinal chemists have thought of at one point or another: why don't I put a silicon into my compounds? Pretty much like carbon, at least when there's only one of them, right? I've done it myself - I was working on a series of compounds years ago that had a preferred t-butyl group coming off an aryl ring (not anyone's favorite, to be sure). So I made the trimethylsilyl variation, and it worked fine. We had some patent worries in that series, and I pointed out that a silicon would certainly take care of that little problem, but it was still a bit too "out there" for most people's comfort. (And to be fair, it didn't have any particular other advantages; if it had stood out more in activity, things might have been different).
I wrote a bit about this subject a few years ago here, and mentioned a company in the UK, Amedis, that was giving silicon-for-carbon a go. This idea did not, in the end, set much on fire. Amedis was bought by Paradigm Therapeutics in 2005, and a couple of years later, Paradigm was bought out by Takeda. I'm not sure if there's any remnant of the Amedis silicon era left in Takeda's innards by now; if there is, I haven't come across it.
There's a new paper on medicinally active silicon compounds, though, which might get people thinking about this whole idea once more. It's a roundup of what's known about the biological properties and behavior of these things, and will serve as a handy one-stop source for all the reported drug-like molecules in the class. As far as I can tell, the most advanced silane ever in humans has been Karenitecin, a camptothecin analogue that went into Phase III back in 2008 (and does not seem to have been heard from since).
All silicon needs is one success, and then people will take it more seriously. So far, the right combination of activity, interest, and need hasn't quite come together. If you're thinking of giving it a try, though, this new paper is the first place to start reading.
Category: Odd Elements in Drugs
November 6, 2012
How many retracted papers, would you say, are due to honest error rather than fraud and other misconduct? We now can put a number on that, thanks to this paper. The authors have looked over all 2,047 papers listed on PubMed from the life sciences as "retracted" (better them than me), with the earliest going back to 1977. They're careful to point out that this is absolutely an underestimate, though, with several examples of papers which are known to be fraudulent but have never been officially retracted. But they find that:
. . .only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%).
They blame incomplete and outright misleading retraction notices for confusing the issue about these numbers. (I've always liked, in a teeth-gritting way, the idea of a dubious retraction notice - it gives these things the full surround-sound sensory experience). Many published retractions that blame things like "flaws in the data analysis" turn out, on follow-up, to have been the subject of investigations that strongly suggested fraud.
Other trends: the US, Germany, Japan, and China accounted for the majority of papers pulled because of fraud, but China and India each stand out a bit in a crowded plagiarism field (China also stands out in the "duplicate publication" category). Higher-impact journals were significantly more likely to have papers retracted because of outright fraud rather than plagiarism (a result that makes sense, and squares with my own experience as a reader). And retractions have definitely been increasing over time, probably with several factors operating at once (greater incentives to fraud, coupled with increased detection). The paper sums up this way:
Given that most scientific work is publicly funded and that retractions because of misconduct undermine science and its impact on society, the surge of retractions suggests a need to reevaluate the incentives driving this phenomenon. We have previously argued that increased retractions and ethical breaches may result, at least in part, from the incentive system of science, which is based on a winner-takes-all economics that confers disproportionate rewards to winners in the form of grants, jobs, and prizes at a time of research funding scarcity.
Fixing this, though, will not be easy. There are recommendations for an increased focus on ethics training (which will do nothing at all, I think, to stop the sort of person who would do these sorts of things). But they also call for some standardization of retraction notices, with minimum standards of disclosure, which sounds like a good idea, and also for trying to find some way to reward scientists that doesn't involve publishing a ton of papers. I like that idea, too, although the implementation will be tricky. . .
Category: The Dark Side | The Scientific Literature
Well, every other web site in the US will be going on about the election today, and with good reason. So I'll put up a quick post of my own, because I'll be glued to the returns tonight myself. Along the way, I've been teaching my children how to interpret them, which makes them, I'd say, among the few local middle-schoolers who know that Florida has two different poll closing times (the panhandle's on CST), that you wait for Pennsylvania because Philadelphia's ballots always seem to drag in last, and that on a county-by-county basis, Ohio looks like something that Dr. Frankenstein assembled on his day off. My father was an election commissioner when I was growing up back in Arkansas, and it left a mark. I've found, though, that a background in the Arkansas politics of that era has served me well, not least in making me difficult to shock when it comes to the behavior of politicians during (and after) elections.
So what do I think is going to happen? Well, I'm used to seeing raw biological assay data, so when I see people giving probabilities of political victory with figures to the right of the decimal place, I just smile. All I'll say is that I think it's going to be a close call for whoever wins, and that anyone (on either side) who's confident it won't be needs to get out more. Were I a betting man (perish the thought), I'd put some money down on a Romney upset, because I think you could get some good odds, thanks to Nate Silver. But we'll see - tonight, or in the morning, or (God help us all) even later than that.
Category: Current Events
November 5, 2012
For those who have been complaining that Chemical and Engineering News has been minimizing the employment situation for chemists, try this article. Note before you read: it's about as worrisome and depressing as it can be, and will absolutely give you the shakes whether you're currently employed or not. But for its subjects (and the other people in such situations) it's reality.
Category: Business and Markets
The discussion here last week about exaggerated reaction yields has gotten me thinking. I actually seem to go for long periods without ever calculating the yields of my reactions (or caring much about them).
That's largely because of the sort of medicinal chemistry work that I do - very early stage stuff, about as far back as you can get. For that work, I like to say that there are really only two yields: enough, and not enough. And if you can get product into a vial, or intermediate sufficient to make more needed analogs, then you have enough. I'd prefer that reactions work well, of course, but "well" is defined in my mind as much (or more) by how clean the product is than how much of it gets produced. A lower-yielding reaction whose product falls out ready to use seems nicer than a higher-yielding one that needs careful chromatography to get the red stuff out of it.
That's the opposite of the way I used to think when I was doing my grad school work, of course. Twenty-seven steps in a row will get you thinking very hard indeed about yields, especially later on in the synthesis. It occurs to you pretty quickly that if you take a 50% yield on something that took you two months to make, you're pouring a month's effort into the red waste can. If you're going to take a nasty yield in a long sequence, it's much better to get it over with in step one. You'll see this effect at work in papers that just start off from a literature reference intermediate (the "readily available compound 3" syndrome), which can mean that compound 3 is a nasty prep that would besmirch the rest of the sequence were it included.
I'd certainly think differently were I in process chemistry, too, of course. And when I have to work downstream on a project, I do spare a thought for the ease of the chemistry, because that's closer to the point where my optimization colleagues will have to deal with what we produce. But back at the early stage, I have to admit, I really don't care all that much. The vast majority of the compounds that get made back there are not going to go anywhere, so whatever gets them made and tested quickly is a good thing. The elegant synthesis is the one that gets it out of the lab and down the hall, whatever the yield might be.
Category: Life in the Drug Labs
November 2, 2012
That title should bring in the hits. But don't get your hopes up! This is medicinal chemistry, after all.
"Can't you just put the group in your molecule that does such-and-such?" Medicinal chemists sometimes hear variations of that question from people outside of chemistry - hopeful sorts who believe that we might have some effective and instantly applicable techniques for fixing selectivity, brain penetration, toxicity, and all those other properties we're always trying to align.
Mostly, though, we just have general guidelines - not so big, not so greasy (maybe not so polar, either, depending on what you're after), and avoid a few of the weirder functional groups. After that, it's art and science and hard work. A recent J. Med. Chem. paper illustrates just that point - the authors are looking at the phenomenon of molecular promiscuity. That shows up sometimes when one compound is reasonably selective, but a seemingly closely related one hits several other targets. Is there any way to predict this sort of thing?
"Probably not", is the answer. The authors looked at a range of matched molecular pairs (MMPs), structures that were mostly identical but varied only in one region. Their data set is the list of compounds in this paper from the Broad Institute, which I blogged about here. There are over 15,000 compounds from three sources - vendors, natural product collections, and Schreiber-style diversity-oriented synthesis. The MMPs are things like chloro-for-methoxy on an aryl ring, or thiophene-for-pyridyl with other substituents the same. That is, they're just the sort of combinations that show up when medicinal chemists work out a series of analogs.
The Broad data set yielded 30954 matched pairs, involving over 8000 compounds and over seven thousand different transformations. Comparing these compounds and their reported selectivity over 100 different targets (also in the original paper) showed that most of these behaved "normally" - over half of them were active against the same targets that their partners were active against. But at the other end of the scale, 829 compounds showed different activity over at least ten targets, and 126 of those compounds differed in activity by fifty targets or more. 33 of them differed by over ninety targets! So there really are some sudden changes out there waiting to be tripped over; they're not frequent, but they're dramatic.
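The bookkeeping behind that kind of comparison is simple to sketch. This is not the authors' code - the compound IDs, target names, activity profiles, and the cliff cutoff below are all invented for illustration - but it shows the idea: score each matched pair by how many targets the two partners disagree on.

```python
# Sketch (not the paper's actual analysis): given each compound's set of
# active targets, score a matched molecular pair by the number of targets
# hit by one partner but not the other. All data here is invented.

def promiscuity_gap(targets_a, targets_b):
    """Count targets on which the two partners disagree."""
    return len(set(targets_a) ^ set(targets_b))  # symmetric difference

# Toy activity profiles: compound -> targets it hits in the screen
activity = {
    "cmpd_1": {"T1", "T2", "T3"},
    "cmpd_2": {"T1", "T2", "T3", "T4"},           # near-identical partner
    "cmpd_3": {"T%d" % i for i in range(1, 60)},  # suddenly promiscuous analog
}

# Matched pairs, assumed already found by some MMP algorithm
pairs = [("cmpd_1", "cmpd_2"), ("cmpd_1", "cmpd_3")]

for a, b in pairs:
    gap = promiscuity_gap(activity[a], activity[b])
    label = "promiscuity cliff" if gap >= 10 else "normal"
    print(f"{a} / {b}: gap = {gap} ({label})")
```

Running this flags the second pair as a cliff: one small structural change, dozens of new targets.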
How about correlations between these "promiscuity cliff" compounds and physical properties, such as molecular weight, logP, donor/acceptor count, and so on? I'd have guessed that a change to higher logP would have accompanied this sort of thing over a broad data set, but the matched pairs don't really show that (nor a shift in molecular weight). On the other hand, most of the highly promiscuous compounds are in the high cLogP range, which is reassuring from the standpoint of Received Med-Chem Wisdom. There are still plenty of selective high-logP compounds, but the ones that hit dozens of targets are almost invariably logP > 6.
Structurally, though, no particular substructure (or transformation of substructures) was found to be associated with sudden onset of promiscuity, so to this approximation, there's no actionable "avoid sticking this thing on" rule to be drawn. (Note that this does not, to me at least, say that there are no such things as frequent-hitting structures - we're talking about changes within some larger structure, not the hits you'd get when screening 500 small rhodanine phenols or the like). In fact, I don't think the Broad data set even included many functional groups of that sort to start with.
On the basis of the data available to us, it is not possible to conclude with certainty to what extent highly promiscuous compounds engage in specific and/or nonspecific interactions with targets. It is of course unlikely that a compound might form specific interactions with 90 or more diverse targets, even if the interactions were clearly detectable under the given experimental conditions. . .
. . .it has remained largely unclear from a medicinal chemistry perspective thus far whether certain molecular frameworks carry an intrinsic likelihood of promiscuity and/or might have frequent hitter character. After all, promiscuity is determined for compounds, not their frameworks. Importantly, the findings presented herein do not promote a framework-centric view of promiscuity. Thus, for the evaluation and prioritization of compound series for medicinal chemistry, frameworks should not primarily be considered as an intrinsic source of promiscuity and potential lack of compound specificity. Rather, we demonstrate that small chemical modifications can trigger large-magnitude promiscuity effects. Importantly, these effects depend on the specific structural environment in which these modifications occur. On the basis of our analysis, substitutions that induce promiscuity in any structural environment were not identified. Thus, in medicinal chemistry, it is important to evaluate promiscuity for individual compounds in series that are preferred from an SAR perspective; observed specificity of certain analogs within a series does not guarantee that others are not highly promiscuous.
Point taken. I continue to think, though, that some structures should trigger those evaluations with more urgency than others, although it's important never to take anything for granted with molecules you really care about.
Category: Chemical News | Drug Assays | Natural Products | Toxicology
November 1, 2012
When I mentioned the people working in the research animal facilities before Hurricane Sandy, I had no idea that this was going to happen: thousands of genetically engineered and/or specially bred rodents were lost from an NYU facility due to flooding. The Fishell lab appears to have lost its entire stock of 2,500 mice, representing 10 years of work. Very bad news indeed for the people whose careers were depending on these.
Category: Animal Testing | Current Events
That's the word-for-word title of a provocative article by Rolf Carlson and Tomas Hudlicky in Helvetica Chimica Acta. That journal's usually not quite this exciting, but it is proud of its reputation for compound characterization and experimental accuracy. That probably helped this manuscript find a home there, where it's part of a Festschrift issue in honor of Dieter Seebach's 75th birthday.
The authors don't hold back much (and Hudlicky has not been shy about these issues, either, as some readers will know). So, for the three categories of malfeasance described in the title, the first (hype) includes the overblown titling of many papers:
As long as the foolish use of various metrics continues there is little hope of return to integrity. Young scientists entering academia and competing for resources and recognition are easily infected with the mantra of importance of publishing in 'high-impact journals' and, therefore, strive to make their work as noticeable as possible by employing excess hype.
It is the reader, not the author, of papers describing synthetic method who should evaluate its merits. Therefore, self-promoting words like 'novel', 'new', 'efficient', 'simple', 'high-yielding', 'versatile', 'optimum' should not be used in the title of the paper if such qualities are not covered by the actual content of the paper.
It also includes the inflation of reaction yields (see that link in the second paragraph above for more on that topic). This is another one that's going to be hard to fix:
Unfortunately, the community has chosen and continues to choose the yield values in submitted manuscripts as a measure of overall quality and/or utility of the report. This, of course, encourages the 'adjustment' in the values in order to avoid critique. An additional problem in the reported values is the fact that synthesis is performed on small scales, thanks to advances in NMR and other techniques available for structure determination. On milligram scales it is extremely difficult to accurately determine weight and content of a sample, given the equipment available in typical academic laboratory.
The second category, malpractice, is sloppy work, but not outright fraud:
Malpractice, as explained above, is usually not deliberate and derives primarily from ignorance or professional incompetence. The most frequent cases involve improper experimental protocols, improper methods used in characterization of compounds, and the lack of correct citations to previous work.
For example, the authors point out that very, very rarely are any new synthetic methods given a proper optimization. One-variable-at-a-time changes are worthwhile, but they're not sufficient to explore a reaction manifold, not when these changes can interact with each other. As process chemists in industry know, the only way to explore such landscapes is with techniques such as Design of Experiments (DoE), which try to find out what factors in a multivariate system produce the greatest change in results. Here's an example; the process chemistry literature furnishes many more.
And finally, you have outright scientific misconduct - fraud, poaching of ideas from grant applications and the like, plagiarism in publications, etc. It's hard to get a handle on these - they seem to be increasing, but the techniques to find and expose them are also getting better. Over time, though, these techniques might just have the effect of making fraud more sophisticated; that would be in line with human behavior as I understand it, and with selection pressure as well. The motives for such acts are with us still, and do not seem to be abating much, so I tend to think that determined miscreants will find ways to do what they want to do.
Thoughts? Some of this paper's points could be put in the "grumblings about the good old days" category, but I think that a lot of it is accurate. I'm not sure how good the old days were, myself, since they were also filled with human beings, but the pressures found today do seem to be bringing on a lot of behaviors we could do without.
Category: Chemical News | The Dark Side | The Scientific Literature