About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
January 31, 2013
Courtesy of the vital site that is TOC ROFL, I wanted to highlight this graphic from this paper in MedChemComm. I always pictured the liver as sort of a sawmill or shredding machine for our drug candidates, with the hepatic portal vein being the conveyor belt hooked up on the front end. But I have to admit, this is a pretty vivid representation.
Update: See Arr Oh has a few issues - rightly so - with the molecule being munched on. . .
Category: Pharmacokinetics
So everyone watching the pharma business has been hearing about how AstraZeneca has all kinds of problems - drug failures, big patent expirations, too much spending on too little output, one damn thing after another. Well, here's the evidence today. Everyone knew that numbers like these were coming, and here they are.
Sales will fall by a “mid- to high-single digit percentage” at constant exchange rates in 2013, the London- based company said today in a statement. Analysts had estimated a decline of about 3 percent, according to data compiled by Bloomberg. The company also said earnings fell for a fourth straight quarter and left the annual dividend unchanged. The stock fell the most in nine months.
And things will continue to be. . .challenging:
AstraZeneca has ended nine drug development programs since June 30, including selumetinib for solid tumors, AZD4017 for glaucoma and AZD9773 for severe sepsis, which were in mid-stage trials. In December, the company said fostamatinib, its experimental drug for rheumatoid arthritis, failed to show a benefit against AbbVie Inc. (ABBV)’s Humira in a mid-stage trial.
On the one hand, you want to get rid of such programs before they chew up still more time and money. But on the other hand, you do need something to sell. All this makes a person think that if you're a small company with an asset to sell, that you're going to want to give AZ a call. I think that they'll be ready to deal.
Category: Business and Markets
So Isis and their partner Sanofi have received FDA approval for mipomersen (branded as Kynamro). Late last year, the European Medicines Agency turned them down, which has people wondering about the drug's future, but here they are, albeit with a warning on the label about liver toxicity.
Mipomersen is designed to lower the Apo-B lipoprotein in people with the most severe (homozygous) form of familial hypercholesterolemia. That's a small patient population, but they're definitely in need of help. The really significant thing about this approval, in my mind, is that it's a pure antisense therapy, and it comes about twenty years after there was supposed to be a world-changing flood of them. (Isis did get one through the process back in 1998, fomivirsen, but it's never had much of an impact). It was a standing joke back in the late 1980s/early 1990s that everyone had heard from a headhunter recruiting for one antisense company or another. (Sheesh, those were the days, eh? There still are search firms, right? When's the last time a headhunter rang your phone?)
I don't think that mipomersen will ever reach the heights that Isis thought it might a few years ago; the liver tox problems will see that it's only used in life-threatening situations. (I note that one time when I wrote about the drug, fans of ISIS showed up rolling their eyes at the mistaken notion that liver tox could ever be a problem). But I'm divided between congratulating them on finally getting something onto the market, and wondering about how difficult it's been to get there. As far as I know, the liver tox seen in this case is largely (completely?) thought to be due to the mechanism of action on lipid handling in the liver itself.
So how about the other antisense compounds in the clinic? As of that 2010 link above, we had trabedersen, for TGF-beta2, which is actively being tried against pancreatic cancer. Alicaforsen, for Crohn's et al., has shown disappointing efficacy in Crohn's, but is still alive for ulcerative colitis. Aganirsen, for various vascular conditions in the eye, is still in development, with more funding having arrived recently. Oblimersen has shown some effects in the clinic, but CLL is a crowded area, and its current status is unclear, at least to me. And custirsen is in Phase III, with mixed results in Phase II trials.
Actually, that lineup looks a lot like drug development in the rest of the industry, to be honest. Some stuff looks OK and is moving along, some not so OK, and some has wiped out. It's important to realize that even if liver tox is not some general feature of the mipomersen-generation antisense compounds, that we still have efficacy failures. Oh, that we do. The indications where we can really laser right in on a key target do not make a long list. Many of those are orphans, too. In contrast, the list of giant-unmet-medical-need indications where we can laser right in on a key target is, I think, waiting for something, anything, to be written on it.
Category: Clinical Trials
January 30, 2013
Here are some angry views that I don't necessarily endorse, but I can't say that they're completely wrong, either. A programmer bids an angry farewell to the bioinformatics world:
Bioinformatics is an attempt to make molecular biology relevant to reality. All the molecular biologists, devoid of skills beyond those of a laboratory technician, cried out for the mathematicians and programmers to magically extract science from their mountain of shitty results.
And so the programmers descended and built giant databases where huge numbers of shitty results could be searched quickly. They wrote algorithms to organize shitty results into trees and make pretty graphs of them, and the molecular biologists carefully avoided telling the programmers the actual quality of the results. When it became obvious to everyone involved that a class of results was worthless, such as microarray data, there was a rush of handwaving about “not really quantitative, but we can draw qualitative conclusions” followed by a hasty switch to a new technique that had not yet been proved worthless.
And the databases grew, and everyone annotated their data by searching the databases, then submitted in turn. No one seems to have pointed out that this makes your database a reflection of your database, not a reflection of reality. Pull out an annotation in GenBank today and it’s not very long odds that it’s completely wrong.
That's unfair to molecular biologists, but is it unfair to the state of bioinformatic databases? Comments welcome. . .
Update: more comments on this at Ycombinator.
Category: Biological News | In Silico
One more on quackery, and then back to science. You may have seen this story, which broke in Sports Illustrated, on a strange little outfit that called themselves Sports With Alternatives To Steroids, or S.W.A.T.S. They seem to have had a long list of professional and college athlete customers looking for some sort of (legal) performance edge. And who wouldn't sign up when there are cutting-edge therapies like this on offer?
(S.W.A.T.S.) prescribed a deluxe program, including holographic stickers on the right elbow; copious quantities of the powder additive; sleeping in front of a beam-ray light programmed with frequencies for tissue regeneration and pain relief; drinking negatively charged water; a 10-per-day regimen of the deer-antler pills that will "rebuild your brain via your small intestines" (and which Lewis said he hadn't been taking, then swallowed four during the conversation); and spritzes of deer-antler velvet extract (the Ultimate Spray) every two hours.
"Spray on my elbow every two hours?" Lewis asked.
"No," Ross said, "under your tongue."
We never do find out what's in the "powder additive". My guess is sugar-free drink mix, but perhaps I'm just small-minded. I don't think as big as the founders of S.W.A.T.S., that's for sure - these guys are way out in front of the rest of us:
The theoretical underpinning offered by Key is that radio waves can be stored in fluids (the spray) and in holograms (the chips), and that when an athlete consumes the fluid or wears the holograms, the radio waves are re-emitted and prompt his body to create specific nutrients and hormones -- from vitamin B to testosterone. Key says that it's not unlike the way particular wavelengths of sunlight cause the human body to produce vitamin D. In the musty storage room, the holographic stickers and bottles of deer-antler spray are irradiated for 24 straight hours or more in what Ross and Key say is an effort to program them with performance-enhancing frequencies
You know, that reminds me a lot of Nativis, the odd little biotech company I wrote about here, and who threatened me with legal action here. They went on about "photonic signals" stored in water, that were somehow stored and released later. The people at S.W.A.T.S. should look into this technology; it sounds like it would be a good fit. When last heard from, the Nativis folks were touting some sort of radio-frequency cancer zapper - slap some holographic stickers on at the same time, and who knows what might happen?
The SI article is well worth a read, just to show you that the process of separating the gullible from their money is timeless. There are gloomy thoughts to be had about the state of science education, that such things are believed, but education is a thin spray-painted layer on the surface of a brain that wants miracles and wants to believe. The proper response is the one that NBA owner Mark Cuban had to a very similar scam, the Power Bracelets that would, er, align your energies or something. Cuban found the right alignment for them, as far as I'm concerned - check the video clip at that link. I hope the trash can is big enough for all this stuff.
Category: Snake Oil
January 29, 2013
Red palm oil. Green coffee beans. Raspberry ketone. Some of you are wondering what the heck I'm making for dinner, but some of you will recognize the common characteristic: all of these have been promoted by Dr. Mehmet Oz, the most famous physician in the country.
I'm prompted to write about him by this New Yorker profile, which is excellent reading. Its author, Michael Specter, tries his best to figure out why a talented, well-trained cardiac surgeon is sitting down on his own television show with psychic healers, fad-diet pushers, and the likes of Joseph Mercola. (In case you haven't run across him, consider yourself fortunate. His eponymous web site, which I will certainly not link to, is a trackless fever swamp of craziness. If you want to hear about how vaccines are killing you, or how cancer is actually a fungus, or how to heal your ulcers with vinegar and your melanoma with baking soda, well, Mercola is your man).
When Oz says that Mercola is “challenging everything you think you know about traditional medicine and prescription drugs,” it’s hard to argue. “I’m usually earnestly honest and modest about what I think we’ve accomplished,” Oz told me when we discussed his choice of guests. “If I don’t have Mercola on my show, I have thrown away the biggest opportunity that I have been given.”
I had no idea what he meant. How was it Oz’s “biggest opportunity” to introduce a guest who explicitly rejects the tenets of science? “The fact that I am a professor—one of the youngest professors ever—at Columbia, and that I earned my stripes writing hundreds of papers in peer-reviewed journals,” Oz began. “I know the system. I’ve been on those panels. I’m one of those guys who could talk about Mercola and not lose everybody. And so if I don’t talk to him I have abdicated my responsibility, because the currency that I deal in is trust, and it is trust that has been given to me by Oprah and by Columbia University, and by an audience that has watched over six hundred shows.”
Well. . .I'm not sure that that's much of an answer. In fact, if the currency that Dr. Oz deals in is trust, then you'd think that he has a responsibility not to abuse that trust by giving his imprimatur to lunatics. To his credit, the New Yorker's Specter also finds this response lacking, so he tries again. What he doesn't realize is that he's traveling up the river to the heart of darkness:
I was still puzzled. “Either data works or it doesn’t,” I said. “Science is supposed to answer, or at least address, those questions. Surely you don’t think that all information is created equal?”
Oz sighed. “Medicine is a very religious experience,” he said. “I have my religion and you have yours. It becomes difficult for us to agree on what we think works, since so much of it is in the eye of the beholder. Data is rarely clean.” All facts come with a point of view. But his spin on it—that one can simply choose those which make sense, rather than data that happen to be true—was chilling. “You find the arguments that support your data,” he said, “and it’s my fact versus your fact.”
Chilling is right. The man's a nihilist. Here we have a massively famous doctor, the public face of medicine to millions of television viewers, and he apparently believes that well, it's hard to say what works, because everyone has their own facts, you know?
A word with you, Dr. Oz, if I may. I know that you're very busy, and that your TV show takes up a lot of your time, and that whatever time you have left is probably occupied with being famous and everything. This won't take long. I only wanted to remind you that you got to wear your scrubs and your stethoscope by virtue of an excellent medical education. But the people who provided it to you (and the people who provided the knowledge that they were passing on) did not get there by assuming that everyone had their own facts. If we'd stayed with that attitude, we'd still be waving bags of magic chicken bones over the groaning bodies of cancer patients. But then, you'll probably have that on your show next week. Why not?
I say all this as someone who has spent his career digging for facts and searching for insight. I'm a scientist, Dr. Oz, and I actually don't think that medicine, at least my end of it, is such a religious experience, at least, not the way you're defining one. My colleagues and I spend our days in the labs. Our facts had better be the same for everyone who looks at them, every time, and if they're not, well, we go back to work until they are.
We can't just go on TV right after we've dosed a few rats, you know. We'd go to jail. The FDA won't listen to anything we come up with unless it's been done under rigorously defined conditions, unless it's been repeated (over and over), and unless we tell them every detail of how we did it all. We can't come in waving our hands and telling everyone how great we are - we have to spend insane amounts of money, time, and effort to put together enough data to convince a lot of very skeptical people. Thank goodness you're not one of them. You're either the easiest person to convince that I've ever seen, or (more likely), you don't worry much about being convinced of anything. Why should you? It would limit your opportunities. That TV show isn't going to produce itself - if you stuck to people who could actually back up their assertions, what would your guest list look like?
But here's a suggestion: get someone on your show who actually knows where medicines come from, and what it takes to find one. Instead of telling people about magic beans, tell them the truth: discovering anything that will treat a sick patient is hard, expensive work. The reason we don't have a Cure For Cancer isn't because there's a conspiracy; it isn't because the Powers That Be are too stupid and greedy to recognize the wonderful healing powers of the latest miracle berry. It's because cancer is really hard to figure out. That would be a lot more of a public service than what you're becoming, which is this:
Most days, Oz mines what he refers to as his go-to subjects: obesity and cancer. . . Cancer, Oz told me, “is our Angelina Jolie. We could sell that show every day.”
I'm sure you could, Dr. Oz. But what you're really selling is yourself. How much is left?
Update: John LaMattina actually did get the Oz experience, as recounted here. And he certainly knows what drug discovery is like, but it doesn't seem to have had much effect on the show, or on Dr. Oz. . .
Category: Snake Oil
January 28, 2013
Well, it is a hard question, and I don't know the answer, either. On Twitter, See Arr Oh wonders:
Know that tangy smell that LAH / NaH give off? Is that oil volatiles, or trace H2 being formed from room moisture?
I'm not sure, but I'd be willing to bet that hydrogen has no smell at all - it would seem too small and too bereft of interactions to set off the nasal receptors. So my guess is mineral oil constituents in the case of sodium hydride, which I usually handle as the dispersion. Now, the lithium aluminum hydride is a dry powder, so in that case, I'd say that I'm smelling the real stuff, which can't be improving my nose very much. That lines up with Chemjobber's explanation: "It's the smell of your nose hairs being deprotonated." Any other guesses?
Category: Life in the Drug Labs
We medicinal chemists talk a good game when it comes to the hydrophobic effect. It's the way that non-water-soluble molecules (or parts of molecules) like to associate with each other, right? Sure thing. And it works because of. . .well, van der Waals forces. Or displacement of water molecules from protein surfaces. Or entropic effects. Or all of those, plus some other stuff that's, um, complicated to explain. Something like that.
Here's a paper in Angewandte Chemie that really bears down on the topic. The authors study the binding of simple ligands to thermolysin, a well-worked-out system for which very high-resolution X-ray structures are available. And what they find is, well, that things really are complicated to explain:
In summary, there are no universally valid reasons why the hydrophobic effect should be predominantly “entropic” or “enthalpic”; small structural changes in the binding features of water molecules on the molecular level determine whether hydrophobic binding is enthalpically or entropically driven.
Admittedly, this study reaches the limits of experimental accuracy accomplishable in contemporary protein–ligand structural work. . .Surprising pairwise systematic changes in the thermodynamic data are experienced for complexes of related ligands, and they are convincingly well reflected by the structural properties. The present study unravels small but important details. Computational methods simulate molecular properties at the atomic level, and are usually determined by the summation of many small details. However, details such as those observed here are usually not regarded by these computational methods as relevant, simply because we are not fully aware of their importance for protein–ligand binding, structure–activity relationships, and rational drug design in general. . .
I think that there are a lot of things in this area of which we're not fully aware. There are many others that we treat as unified phenomena, because we've given them names that make us imagine that they are. The hydrophobic effect is definitely one of these - George Whitesides is right when he says that there are many of them. But when all of these effects, on closer inspection, break down into tiny, shifting, tricky arrays of conflicting components, can you blame us for simplifying?
Category: "Me Too" Drugs | Chemical News | In Silico
The brand names of drugs are famously odd. But they seem to be getting odder. That's the conclusion of a longtime reader, who sent this along:
I was recently perusing through the recent drug approval list and was struck by how strange the trade names have become. Perhaps it is a request from the FDA so that there are fewer prescription errors, but some of these are really bizarre and don't quite roll off the tongue. USAN names I can understand, but trade names, to me anyway, used to be much more polished (Viagra, Lipitor etc). Could it have to do with the fact that most of these are for cancer? I have a list below comparing trade names from 2004 to those from the past year or so.
2004: Vidaza; Avastin; Sensipar; Cymbalta; Tarceva; Certican; Factive; Sinseron; Alimta; Lyrica; Exanta
2012: Fulyzaq; Bosulif; Xeljanz; Myrbetriq; Juxtapid; Iclusig; Fycompa; Zelboraf; Xalkori; Jakafi; Pixuvri
He's got a point; some of those look like someone rested an elbow on the keyboard when they were filling out the form. I'd be willing to bet that the oncology connection is a real one - those drugs don't get mass-market advertising at all, so they don't have to be catchy. This Reuters article also notes the trend in cancer drugs, and brings up the need for novelty. Not only is it good to have a name that stands out in the memory, it's a legal requirement to have one that can't be easily confused with another drug. That goes for handwriting as well:
"Regulators want a lot of pen strokes up and down that provide a much more unique-looking name. It is more readable or interpretable if it has a lot of (Zs and Xs)," said Brannon Cashion, Addison Whitney's president.
Whether anyone can actually pronounce the name is of less concern.
That's for sure, when you're talking about things like Xgeva (edit: fixed this name to eliminate the extra "r" I put into it. Can anyone blame me for getting it wrong?). But that one's a good case in point: the generic name is denosumab. That's a good ol' USAN name, with the "-mab" suffix telling you that it's a monoclonal antibody. It's sold in the oncology market as Xgeva for bone-related cancer complications, but it's also prescribed for postmenopausal women to halt loss of bone tissue. There, the same drug goes under the much more consumer-friendly name of Prolia. Now, that's a blandly uplifting name if I've ever heard one, whereas Xgeva sounds like the name of an alien race in a cheap science fiction epic ("An Xgeva ship has been detected in the quadrant, Captain!").
Or, like its recent peers, it also sounds like an excellent Scrabble word, were it to be allowed, which it wouldn't. Me, my proudest moment was playing "axolotl" one time for seven letters. Come to think of it, Axolotl would make a perfectly good drug name under the current conditions. . .
Update: I notice that the comments are filling up with alternative definitions of some of these names, many of which (not all!) sound more sensible.
Category: Business and Markets | Cancer
January 25, 2013
Here's the latest big picture, from Chemjobber. Note, though, that on Twitter he said that after writing this post he felt as if he could press KBr pellets with his jaws. That should give you some idea.
Category: Business and Markets
Have I mentioned recently what a pain in the rear the Ullmann reaction is? Copper, in general? Consider it done, then. I'm trying to make biaryl ethers, not something I'd usually do, and these reactions are the traditional answer. One of my laws of the lab, though, is that when there are fifty ways of doing some reaction in the literature, it means that there's no good way to do it, and the Ullmann is the big, hairy, sweaty example of just that phenomenon. Even when it works, there are worries. But you have to get it to work first. . .
Category: Life in the Drug Labs
CETP, now there's a drug target that has incinerated a lot of money over the years. Here's a roundup of compounds I posted on back last summer, with links to their brutal development histories. I wondered here about what's going to happen with this class of compounds: will one ever make it as a drug? If it does, will it just end up telling us that there are yet more complications in human lipid handling that we didn't anticipate?
Well, Merck and Lilly are continuing their hugely expensive, long-running attempts to answer these questions. Here's an interview with Merck's Ken Frazier in which he sounds realistic - that is, nervous:
Merck CEO Ken Frazier, speaking in Davos on the sidelines of the World Economic Forum, said the U.S. drugmaker would continue to press ahead with clinical research on HDL raising, even though the scientific case so far remained inconclusive.
"The Tredaptive failure is another piece of evidence on the side of the scale that says HDL raising hasn't yet been proven," he said.
"I don't think by any means, though, that the question of HDL raising as a positive factor in cardiovascular health has been settled."
Tredaptive, of course, hit the skids just last month. And while its mechanism is not directly relevant to CETP inhibition (I think), it does illustrate how little we know about this area. Merck's anacetrapib is one of the ugliest-looking drug candidates I've ever seen (ten fluorines, three aryl rings, no hydrogen bond donors in sight), and Lilly's compound is only slightly more appealing.
But Merck finds itself having to bet a large part of the company's future in this area. Lilly, for its part, is betting similarly, and most of the rest of their future is being plunked down on Alzheimer's. And these two therapeutic areas have a lot in common: they're both huge markets that require huge clinical trials and rest on tricky fundamental biology. The huge market part makes sense; that's the only way that you could justify the amount of development needed to get a compound through. But the rest of the setup is worth some thought.
Is this what Big Pharma has come to, then? Placing larger and larger bets in hopes of a payoff that will make it all work out? If this were roulette, I'd have no trouble diagnosing someone who was using a Martingale betting system. There are a few differences, although I'm not sure how (or if) they cancel out. For one thing, the Martingale gambler is putting down larger and larger amounts of money in an attempt to win the same small payout (the sum of the initial bet!). Pharma is at least chasing a larger jackpot. But the second difference is that the house advantage at roulette is a fixed 5.26% (at least in the US), which is ruinous, but is at least a known quantity.
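For anyone who hasn't run the numbers, the Martingale pattern is easy to sketch in a few lines of Python. This is a toy simulation - the bankroll, bet sizes, and round count are chosen purely for illustration - of an even-money bet on an American wheel (18 winning pockets out of 38, which is where that 5.26% edge comes from):

```python
import random

def martingale(bankroll, base_bet, max_rounds, p_win=18/38):
    """Bet on red, doubling the stake after every loss (classic Martingale)."""
    bet = base_bet
    for _ in range(max_rounds):
        if bet > bankroll:
            break  # busted: can't cover the next doubled bet
        if random.random() < p_win:
            bankroll += bet
            bet = base_bet  # a win recovers all losses plus one base bet
        else:
            bankroll -= bet
            bet *= 2
    return bankroll

random.seed(1)
results = [martingale(1000, 10, 200) for _ in range(10_000)]
avg = sum(results) / len(results)
# The average outcome lands below the starting bankroll: every dollar
# wagered loses 2/38 in expectation, and no betting schedule changes that.
```

The doubling schedule reshapes the distribution (lots of small wins, occasional wipeouts) but leaves the expected loss per dollar wagered untouched.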
But mentioning "known quantities" brings up a third difference. The rules of casino games don't change (unless an Ed Thorp shows up, which was a one-time situation). The odds of drug discovery are subject to continuous change as we acquire more knowledge; it's more like the Monty Hall Paradox. The question is, have the odds changed enough in CETP (or HDL-raising therapies in general) or Alzheimer's to make this a reasonable wager?
For the former, well, maybe. There are theories about what went wrong with torcetrapib (a slight raising of blood pressure being foremost, last I heard), and Merck's compound seems to be dodging those. Roche's failure with dalcetrapib is worrisome, though, since the official reason there was sheer lack of efficacy in the clinic. And it's clear that there's a lot about HDL and LDL that we don't understand, both their underlying biology and their effects on human health when they're altered. So (to put things in terms of the Monty Hall problem), a tiny door has been opened a crack, and we may have caught a glimpse of some goat hair. But it could have been a throw rug, or a gorilla; it's hard to say.
What about Alzheimer's? I'm not even sure if we've learned as much as we have with CETP. The immunological therapies have been hard to draw conclusions from, because hey, it's the immune system. Every antibody is different, and can do different things. But the mechanistic implications of what we've seen so far are not that encouraging, unless, of course, you're giving interviews as an executive of Eli Lilly. The small-molecule side of the business is a bit easier to interpret; it's an unrelieved string of failures, one crater after another. We've learned a lot about Alzheimer's therapies, but what we've mostly learned is that nothing we've tried has worked much. In Monty Hall terms, the door has stayed shut (or perhaps has opened every so often to provide a terrifying view of the Void). At any rate, the flow of actionable goat-delivered information has been sparse.
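Since the Monty Hall analogy keeps coming up, a quick simulation makes the underlying point concrete - the opened door really does change the odds. This is just an illustrative sketch of the standard puzzle, not anything specific to drug development:

```python
import random

def monty_hall(switch, trials=100_000, rng=random.Random(42)):
    """Estimate the win rate of sticking vs. switching in the Monty Hall problem."""
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        pick = rng.randrange(3)
        # Host opens a door that hides a goat and isn't the contestant's pick
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

stick = monty_hall(switch=False)
swap = monty_hall(switch=True)
# Sticking wins about 1/3 of the time; switching wins about 2/3.
```

New information, properly used, changes the wager - which is exactly what a clinical failure like torcetrapib's is supposed to do for everyone still in the game.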
Overall, then, I wonder if we really are at the go-for-the-biggest-markets-and-hope-for-the-best stage of research. The big companies are the ones with enough resources to tackle the big diseases; that's one reason we see them there. But the other reason is that the big diseases are the only things that the big companies think can rescue them.
Category: Alzheimer's Disease | Cardiovascular Disease | Clinical Trials | Drug Development | Drug Industry History
January 24, 2013
Chemistry World has really touched a lot of nerves with this editorial by economics professor Paula Stephan. It starts off with a look back to the beginnings of the NIH and NSF, Vannevar Bush's "Endless Frontier":
. . .a goal of government and, indirectly, universities and medical schools, was to build research capacity by training new researchers. It was also to conduct research. However, it was never Bush’s vision that training be married to research. . .
. . .It did not take long, however, for this to change. Faculty quickly learned to include graduate students and postdocs on grant proposals, and by the late 1960s PhD training, at least in certain fields, had become less about capacity building and more about the need to staff labs.
Staff them we have, and as Prof. Stephan points out, the resemblance to a pyramid scheme is uncomfortable. The whole thing can keep going as long as enough jobs exist, but if that ever tightens up, well. . .have a look around. Why do chemists-in-training (and other scientists) put up with the state of affairs?
Are students blind or ignorant to what awaits them? Several factors allow the system to continue. First, there has, at least until recently, been a ready supply of funds to support graduate students as research assistants. Second, factors other than money play a role in determining who chooses to become a scientist, and one factor in particular is a taste for science, an interest in finding things out. So dangle stipends and the prospect of a research career in front of star students who enjoy solving puzzles and it is not surprising that some keep right on coming, discounting the all-too-muted signals that all is not well on the job front. Overconfidence also plays a role: students in science persistently see themselves as better than the average student in their program – something that is statistically impossible.
I don't think the job signals are particularly muted, myself. What we do have are a lot of people who are interested in scientific research, would like to make careers of it, and find themselves having to go through the system as it is because there's no other one to go through.
Stephan's biggest recommendation is to try to decouple research from training: the best training is to do research, but you can do research without training new people all the time. This would require more permanent staff, as opposed to a steady stream of new students, and that's a proposal that's come up before. But even if we decide that this is what's needed, where are the incentives to do it? You'd have to go back to the source of the money, naturally, and fund people differently. Until something's done at that level, I don't see much change coming, in any direction.
+ TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | Graduate School
Here's a structure that caught my eye, in this paper from Georgia State and Purdue. That's a nice-looking group stuck on the side of their HIV protease inhibitor; I don't think I've ever seen three fused THF rings before, and if I have, it certainly wasn't in a drug candidate. From the X-ray structure, it seems to be making some beneficial interactions out in the P2 site.
These are analogs of darunavir, which has two THFs fused in similar fashion. That compound's behavior in vivo is well worked out - most of the metabolism is cleavage of the carbamate. Both with and without that, there's a bunch of scattered hydroxylation and glucuronidation; the bis-THF survives just fine. (That's worth thinking about. Most of us would be suspicious of that group, but it's pretty robust in this case). I'd be interested in seeing if this new structure behaves similarly, or if it's now more sensitive to gastric fluid and the like. No data of that sort is presented in this paper (it's an academic group, after all), but perhaps we'll find out eventually.
+ TrackBacks (0) | Category: Infectious Diseases
So Daniel Vasella, longtime chairman of Novartis, has announced that he's stepping down. (He'll be replaced by Joerg Reinhardt, ex-Bayer, who was at Novartis before that). Vasella's had a long run. People on the discovery side of the business will remember him especially for the decision to base the company's research in Cambridge, which has led to (or at the very least accelerated the process of) many of the other big companies putting up sites there as well. Novartis is one of the most successful large drug companies in the world, avoiding the ferocious patent expiration woes of Lilly and AstraZeneca, and avoiding the gigantic merger disruptions of many others.
That last part, though, is perhaps an accident. Novartis did buy a good-sized stake in Roche at one point, and has apparently made, in vain, several overtures over the years to the holders of Roche's voting shares (many of whom are named "Hoffmann-La Roche" and live in very nice parts of Switzerland). And Vasella did oversee the 1996 merger between Sandoz and Ciba-Geigy that created Novartis itself, and he wasn't averse to big acquisitions per se, as the 2006 deal to buy Chiron shows.
It's those very deals, though, that have some investors cheering his departure. Reading that article, which is written completely from the investment side of the universe, is quite interesting. Try this out:
“He’s associated with what we can safely say are pretty value-destructive acquisitions,” said Eleanor Taylor-Jolidon, who manages about 400 million Swiss francs at Union Bancaire Privee in Geneva, including Novartis shares. “Everybody’s hoping that there’s going to be a restructuring now. I hope there will be a restructuring.” . . .
. . .“The shares certainly reacted to the news,” Markus Manns, who manages a health-care fund that includes Novartis shares at Union Investment in Frankfurt, said in an interview. “People are hoping Novartis will sell the Roche stake or the vaccines unit and use the money for a share buyback.”
Oh yes indeed, that's what we're all hoping for, isn't it? A nice big share buyback? And a huge restructuring, one that will stir the pot from bottom to top and make everyone wonder if they'll have a job or where it might be? Speed the day!
No, don't. All this illustrates the different world views that people bring to this business. The investors are looking to maximize their returns - as they should - but those of us in research see the route to maximum returns as going through the labs. That's what you'd expect from us, of course, but are we wrong? A drug company is supposed to find and develop drugs, and how else are you to do that? The investment community might answer that differently: a public drug company, they'd say, is like any other public company. It is supposed to produce value for its shareholders. If it can do that by producing drugs, then great, everything's going according to plan - but if there are other more reliable ways to produce that value, then the company should (must, in fact) avail itself of them.
And there's the rub. Most methods of making a profit are more reliable than drug discovery. Our returns on invested capital for internal projects are worrisome. Even when things work, it's a very jumpy, jerky business, full of fits and starts, with everything new immediately turning into a ticking bomb of a wasting asset due to patent expiry. Some investors understand this and are willing to put up with it in the hopes of getting in on something big. Other investors just want the returns to be smoother and more predictable, and are impatient for the companies to do something to make that happen. And others just avoid us entirely.
+ TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History
January 23, 2013
Reader Andy Breuninger, from completely outside the biopharma business, sends along what I think is an interesting question, and one that bears on a number of issues:
A question has been bugging me that I hope you might answer.
My understanding is that a lot of your work comes down to taking a seed molecule and exploring a range of derived molecules using various metrics and tests to estimate how likely they are to be useful drugs.
My question is this: if you took a normal seed molecule and a standard set of modifications, generated a set of derived molecules at random, and ate a reasonable dose of each, what would happen? Would 99% be horribly toxic? Would 99% have no effect? Would their effects be roughly the same or would one give you the hives, another nausea, and a third make your big toe hurt?
His impression of drug discovery is pretty accurate. It very often is just that: taking one or more lead compounds and running variations on them, trying to optimize potency, specificity, blood levels/absorption/clearance, toxicology, and so on. So, what do most of these compounds do in vivo?
My first thought is "Depends on where you start". There are several issues: (1) We tend to have a defined target in mind when we pick a lead compound, or (if it's a phenotypic assay that got us there), we have a defined activity that we've already seen. So things are biased right from the start; we're already looking at a higher chance of biological activity than you'd have by randomly picking something out of a catalog or drawing something on a board.
And the sort of target can make a big difference. There are an awful lot of kinase enzymes, for example, and compounds tend to cross-react with them, at least in the nearby families, unless you take a lot of care to keep that from happening. Compounds for the G-protein coupled biogenic amines receptors tend to do that, too. On the other hand, you have enzymes like the cytochromes and binding sites like the aryl hydrocarbon receptor - these things are evolved to recognize all sorts of structurally disparate stuff. So against the right (or wrong!) sort of targets, you could expect to see a wide range of potential side activities, even before hitting the random ones.
(2) Some structural classes have a lot more biological activity than others. A lot of small-molecule drugs, for example, have some sort of basic amine in them. That's an important recognition element for naturally occurring substances, and we've found similar patterns in our own compounds. So something without nitrogens at all, I'd say, has a lower chance of being active in a living organism. (Barry Sharpless seems to agree with this). That's not to say that there aren't plenty of CHO compounds that can do you harm, just that there are proportionally more CHON ones that can.
Past that rough distinction, there are pharmacophores that tend to hit a lot, sometimes to the point that they're better avoided. Others are just the starting points for a lot of interesting and active compounds - piperazines and imidazoles are two cores that come to mind. I'd be willing to bet that a thousand random piperazines would hit more things than a thousand random morpholines (other things being roughly equal, like molecular weight and polarity), and either of them would hit a lot more than a thousand random cyclohexanes.
(3) Properties can make a big difference. The Lipinski Rule-of-Five criteria come in for a lot of bashing around here, but if I were forced to eat a thousand random compounds that fit those cutoffs, versus having the option to eat a thousand random ones that didn't, I sure know which ones I'd dig my spoon into.
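For readers outside the field, those Rule-of-Five cutoffs are simple enough to state in a few lines of code. This is just an illustrative sketch, with invented property values; in practice the numbers would come from chemistry software, not be typed in by hand:

```python
# Hedged sketch of the Lipinski Rule-of-Five cutoffs. The function name and
# the example property values are illustrative, not from any real workflow.

def passes_rule_of_five(mol_weight, logp, h_donors, h_acceptors):
    """Return True if a compound violates at most one Lipinski criterion."""
    violations = sum([
        mol_weight > 500,   # molecular weight over 500 Da
        logp > 5,           # calculated logP over 5
        h_donors > 5,       # more than 5 hydrogen-bond donors
        h_acceptors > 10,   # more than 10 hydrogen-bond acceptors
    ])
    return violations <= 1  # Lipinski allowed one violation

# Aspirin-like properties: MW ~180 Da, logP ~1.2, 1 donor, 4 acceptors
print(passes_rule_of_five(180.2, 1.2, 1, 4))  # → True
```

Note that the rule as originally stated tolerates a single violation, which is why the check is "at most one" rather than "none".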
And finally, (4): the dose makes the poison. If you go up enough in dose, it's safe to say that you're going to see an in vivo response to almost anything, including plenty of stuff at the supermarket. Similarly, I could almost certainly eat a microgram of any compound we have in our company's files with no ill effect, although I am not motivated to put that idea to the test. Same goes for the time that you're exposed. A lot of compounds are tolerated for single-dose tox but fail at two weeks. Compounds that make it through two weeks don't always make it to six months, and so on.
How closely you look makes the poison, too. We find that out all the time when we do animal studies - a compound that seems to cause no overt effects might be seen, on necropsy, to have affected some internal organs. And one that doesn't seem to have any visible signs on the tissues can still show effects in a full histopathology workup. The same goes for blood work and other analyses; the more you look, the more you'll see. If you get down to gene-chip analysis, looking at expression levels of thousands of proteins, then you'd find that most things at the supermarket would light up. Broccoli, horseradish, grapefruit, garlic and any number of other things would kick a full expression-profiling assay all over the place.
So, back to the question at hand. My thinking is that if you took a typical lead compound and dosed it at a reasonable level, along with a large set of analogs, then you'd probably find that if any of them had overt effects, they would probably have a similar profile (for good or bad) to whatever the most active compound was, just less of it. The others wouldn't be as potent at the target, or wouldn't reach the same blood levels. The chances of finding some noticeable but completely different activity would be lower, but very definitely non-zero, and would be wildly variable depending on the compound class. These effects might well cluster into the usual sorts of reactions that the body has to foreign substances - nausea, dizziness, headache, and the like. Overall, odds are that most of the compounds wouldn't show much, not being potent enough at any given target, or not getting high enough blood levels to show something, but that's also highly variable. And if you looked closely enough, you'd probably find that they all did something, at some level.
Just in my own experience, I've seen one compound out of a series of dopamine receptor ligands suddenly turn up as a vasodilator, noticeable because of the "Rudolph the Red-Nosed Rodent" effect (red ears and tail, too). I've also seen compound series where they started crossing the blood-brain barrier more effectively at some point, which led to a sharp demarcation in the tolerability studies. And I've seen many cases, when we've started looking at broader counterscreens, where the change of one particular functional group completely knocked a compound out of (or into) activity in some side assay. So you can never be sure. . .
+ TrackBacks (0) | Category: Drug Assays | Drug Development | Pharma 101 | Pharmacokinetics | Toxicology
Has anyone happened to read this paper, from 2009, or this one, from this year? Well, Shawn Burdette of WPI has, and he noticed that (to a significant extent) they're the same paper. Prof. Valerie Pierre of Minnesota, author of the first paper, is reportedly not too amused, and I don't blame her. But hey, the 2013 authors did at least cite her paper. . .in reference 14d. So at least there's that.
Update: but wait, there's more!
+ TrackBacks (0) | Category: The Dark Side | The Scientific Literature
January 22, 2013
OK, folks, time to choose: would you rather be downwind of an industrial-scale spill of butyl mercaptan (which started in Rouen and is already being smelled in London), or. . .would you rather deal with twenty-seven tons of burning goat cheese in Norway?
Tough call. I think, though, that I might go with the devil I know, which means the mercaptan. I've never encountered a Goat Cheese Inferno, and I live in fear of discovering even more revolting odors than I've already experienced. Good luck to the Norwegians, I say.
Update: for the curious, natural gas odorant mixes are usually t-butylthiol and isopropyl thiol, with perhaps some other lovelies (dimethyl sulfide) thrown in for that special je ne sais quoi. Although across northern France today, I'll bet they can tell you quoi for sure at the moment.
+ TrackBacks (0) | Category: Current Events
There's a new Viewpoint piece out in ACS Medicinal Chemistry Letters on academia and drug discovery. Donna Huryn of Pittsburgh is wondering about the wisdom of trying to reproduce a drug-company environment inside a university:
However, rather than asking how a university can mimic a drug discovery company, perhaps a better question is what unique features inherent in an academic setting can be taken advantage of, embellished, and fostered to promote drug discovery and encourage success? Rather than duplicating efforts already ongoing in commercial organizations, a university has an opportunity to offer unique, yet complementary, capabilities and an environment that fosters drug discovery that could generate innovative therapies, all the while adhering to its educational mission.
A corollary to this question is the converse—what aspects of drug discovery efforts within a university might be inconsistent with its primary goal of education and research, and can solutions be found to allow success in both?
Her take is that a university should take advantage of whatever special expertise its faculty have in particular areas of biology, pharmacology, etc., which could give it an advantage compared with the staff of a given pharma company. This isn't always easy, though, for cultural reasons:
While it seems that a university should have the tools to make significant contributions to drug discovery by taking advantage of the resident expertise, a cultural change might be required to foster an environment that values the teamwork required to make these efforts successful. Certainly funding agencies are moving in this direction with the establishment of multi-Principal Investigator designations that are designed to “maximize the potential of team science efforts”. Additionally, internal grants offered by academic institutions often insist that the proposed research involve multiple disciplines, departments, or even schools within the University. However, it seems that a concerted effort to “match-make” scientists with complementary expertise and an interest in drug discovery, finding ways to reward collaborative research efforts, and even, perhaps, establishing a project management-type infrastructure would facilitate a university-based drug discovery program.
She also makes the case that universities should use their ability to pursue higher-risk projects, given that they're not beholden to investors. I couldn't agree more - in fact, I think that's one of their biggest strengths. I'd define "high-risk" (by commercial standards) as any combination of (1) unusual mechanism of action, (2) little-understood disease area, (3) atypical chemical matter, and (4) a need for completely new assay technology. If you try to do all of those at once, you're going to land on your face, most likely. But some pharma companies don't even like to hear about one out of the four, and two out of four is going to be a hard sell.
And I think Huryn's broader point is well taken: we already have drug companies, so trying to make more of them inside universities seems like a waste of time and money. We need as many different approaches as we can get.
+ TrackBacks (0) | Category: Academia (vs. Industry)
So in my post the other day about halogen bonds, I mentioned my unease at sticking in things like bromine and iodine atoms, because of the molecular weight penalty involved. Now, it's only a penalty if you're thinking in terms of ligand efficiency - potency per size of the molecule. I think that it's a very useful concept - one that was unheard of when I started in the industry, but which has now made a wide impression. The idea is that you should try, as much as possible, to make every part of your molecule worth something. Don't hang a chain off unless you're getting binding energy for it, and don't hang a big group off unless you're getting enough binding energy to make it worthwhile.
But how does one measure "worthwhile", or measure ligand efficiency in general? There are several schools of thought. One uses potency divided by molecular weight - there are different ways to make this come out to some sort of standard number, but that's the key operation. Another way, though, is to use potency divided by number of heavy atoms. These two scales will give you answers that are quite close to each other if you're just working in the upper reaches of the periodic table - there's not much difference between carbon, nitrogen, and oxygen. Sulfur will start throwing things off, as will chlorine. But where the scales really give totally different answers, at least in common med-chem practice, is with bromine and iodine atoms. A single bromine (edit: fixed from earlier "iodine") weighs as much as a benzene ring, so the molecular-weight-based calculation takes a torpedo, while the heavy atom count just registers one more of the things.
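The divergence between the two scales is easy to see with a toy calculation. Here's a sketch using two common formulations - the heavy-atom version is usually written as 1.37 × pIC50 per heavy atom (an approximation of binding free energy in kcal/mol), and the weight-based version as potency per kilodalton. All the numbers below are invented for illustration:

```python
# Sketch contrasting two ligand-efficiency metrics; the compound values
# are hypothetical, chosen only to show how one bromine affects each scale.

def le_per_heavy_atom(pic50, heavy_atoms):
    # 1.37 converts pIC50 to an approximate binding free energy (kcal/mol)
    return 1.37 * pic50 / heavy_atoms

def bei(pic50, mol_weight):
    # "binding efficiency index": potency per kilodalton of molecule
    return pic50 / (mol_weight / 1000.0)

# Same hypothetical compound (pIC50 = 7.0) before and after adding one
# bromine: heavy-atom count rises by 1, molecular weight by ~80 Da.
le_before, le_after = le_per_heavy_atom(7.0, 25), le_per_heavy_atom(7.0, 26)
mw_before, mw_after = bei(7.0, 350.0), bei(7.0, 430.0)

print(f"heavy-atom LE: {le_before:.3f} -> {le_after:.3f}")  # ~4% drop
print(f"MW-based BEI:  {mw_before:.1f} -> {mw_after:.1f}")  # ~19% drop
```

The heavy-atom metric barely notices the bromine, while the weight-based one takes the torpedo described above - which is the whole disagreement in two print statements.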
For that very reason, I've been in the molecular-weight camp. But TeddyZ of Practical Fragments showed up in the comments to the halogen bond post, pointing to arguments for the other side. Now that I've checked those out, though, I'm afraid that I still don't find them very convincing.
That's because the post he's referring to makes the case against simple molecular weight cutoffs alone. I'm fine with that. There's no way that you can slice things up by a few mass units here and there in any meaningful way. But the issue here isn't just molecular weight, it's activity divided by weight, and in all the cases shown, the ligand efficiency for the targets of these compounds would have gone to pieces if the "smaller" analog were picked. From a ligand efficiency standpoint, these examples are straw men.
So I still worry about bromine and iodine. I think that they hurt a compound's properties, and that treating them as "one heavy atom", as if they were nitrogens, ignores that. Now, that halogen bond business can, in some cases, make up for that, but medicinal chemists should realize the tradeoffs they're making, in this case as in all the others. I wouldn't, for example, rule out an iodo compound as a drug candidate, just because it's an iodo compound. But that iodine had better be earning its keep (and probably would be doing so via a halogen bond). It has a lot to earn back, too, considering the possible effects on PK and compound stability. Those would be the first things I would check in detail if my iodo candidate led the list in the other factors, like potency and selectivity. Then I'd get it into tox as soon as possible - I have no feel whatsoever for how iodine-substituted compounds act in whole-animal tox studies, and I'd want to find out in short order. That, in fact, is my reaction to unusual structures of many kinds. Don't rule them out a priori; but get to the posteriori part, where you have data, as quickly as possible.
So, thoughts on heavy atoms? Are there other arguments to make in favor of ligand efficiency calculated that way, or do most people use molecular weight?
+ TrackBacks (0) | Category: Drug Assays | Drug Development
January 21, 2013
So PhRMA has a press release out on the state of drug research, but it's a little hard to believe. This part, especially:
The report, developed by the Analysis Group and supported by PhRMA, reveals that more than 5,000 new medicines are in the pipeline globally. Of these medicines in various phases of clinical development, 70 percent are potential first-in-class medicines, which could provide exciting new approaches to treating disease for patients.
This set off discussion on Twitter and elsewhere about how these numbers could have been arrived at. Here's the report itself (PDF), and looking through it provides a few more details. Using figures that show up in the body of the report, that looks like 2164 compounds in Phase I, 2329 in Phase II, and 833 in Phase III, which does indeed add up to more than 5,000. Of those, by far the greatest number are in oncology, where they have 1265, 1507, and 288 in Phase I, II, and III, respectively. Second is infectious disease (304/289/135), and third is neurology (256/273/74). It's worth noting that "Psychiatry" is a separate category all its own, by the way.
An accompanying report (PDF) gives a few more specific figures. It claims, among other things, 66 medicines currently in clinical trials for Hepatitis C, 61 projects for ALS, and 158 for ovarian cancer. Now, it's good to have the exact numbers broken down. But don't those seem rather high?
Here's the section on how these counts were obtained:
Except where otherwise noted, data were obtained from EvaluatePharma, a proprietary commercial database with coverage of over 4,500 companies and approximately 50,000 marketed and pipeline products (including those on-market, discontinued, and in development), and containing historical data from 1986 onward. Pipeline information is available for each stage of development, defined as: Research Project, Preclinical, Phase I, II, III, Filed, and Approved. EvaluatePharma collects and curates information from publicly available sources and contains drug-related information such as company sponsor and therapy area. The data were downloaded on December 12, 2011.
While our interest is in drugs in development that have the potential to become new treatment options for U.S. patients, it is difficult to identify ex ante which drugs in development may eventually be submitted for FDA approval – development activity is inherently global, although regulatory review, launch, and marketing are market-specific. Because most drugs are intended for marketing in the U.S., the largest drug market in the world, we have not excluded any drugs in clinical development (i.e., in Phases I, II, or III). However, in any counts of drugs currently in regulatory review, we have excluded drugs that were not filed with the FDA.
Unless otherwise noted, the analysis in this report is restricted to new drug applications for medicines that would be reviewed as new molecular entities (NMEs) and to new indications for already approved NMEs. . .
Products are defined as having a unique generic name, such that a single product is counted exactly once (regardless of the number of indications being pursued).
That gives some openings for the higher-than-expected numbers. For one, those databases of company activities always seem to run on the high side, because many companies keep things listed as development compounds when they've really ceased any work on them (or in extreme cases, never even really started work at all). Second, there may be some oddities from other countries in there, where the standards for press releases are even lower. But we can rule out a third possibility, that single compounds are being counted across multiple indications. I think that the first-in-class figures are surely pumped up by the cases where there are several compounds all in development for the same (as yet unrealized) target, though. Finally, I think that there's some shuffling between "compounds" and "projects" taking place, with the latter having even larger figures.
I'm going to see in another post if I can break down any of these numbers further - who knows, maybe there are a lot more compounds in development than I think. But my first impression is that these numbers are much higher than I would have guessed. It would be very helpful if someone at PhRMA would release a list of the compounds they've counted from one of these indications, just to give us an idea. Any chance of that?
+ TrackBacks (0) | Category: Clinical Trials | Drug Development
About this time last year, I mentioned Prof. Dipak Das at the University of Connecticut, who was at the center of a large research-fraud accusation. That second link has some quotes from a press release put out by Resveratrol Partners defending Prof. Das and his work, and I've just received another one. So if you're wondering how these things work here in these days of modern times, this is how:
Noted red wine molecule heart researcher Dipak Das PhD has filed a $35 million defamation claim against the University of Connecticut (U CONN) Health Center for wrongful termination, violation of the university’s by-laws, and lack of due process as protected by the 14th Amendment to the United States Constitution.
I don't see a copy of the press release out on the web yet (I'll put up a link when one shows up). One of its claims is that Das's work on resveratrol and heart attacks has not been shown to be invalid:
Specifically, U CONN Health Center authorities claimed Dr. Das had altered images showing the production of gene-derived proteins (called a western blot image). But alteration of these images would only change understanding of the underlying genetic mechanisms involved in Dr. Das’ experiments, not the conclusions of his studies which showed unequivocal ability of resveratrol to protect the heart prior to and during a heart attack.
I believe that this effort is going to be an uphill fight, because those alterations, if they occurred (which they most certainly seem to have), would be enough grounds for dismissal by themselves. The press release also makes much of the university's accusation that Das was the only person with a key to the office where the images were manipulated, saying that this was not the case, that students went in and out all the time. I don't care. Das was the lead author on all those papers, and if he couldn't keep up with his own lab's work enough to catch any of these things, he wasn't doing his job. Das was fired from U Conn last year, and (via Retraction Watch, which has him at 19 retractions so far), I see that the university's board of trustees unanimously affirmed that decision.
+ TrackBacks (0) | Category: The Dark Side
January 18, 2013
Here's another one to file under "What we don't know about brain chemistry". That's a roomy category for sure, which (to be optimistic about it) leaves a lot of room for discovery. In that category are the observations that ketamine seems to dramatically help some people with major depression. It's an old drug, of course, still used in some situations as an anesthetic, and also used (or abused) by people who wish to deliberately derange themselves in dance clubs. Chemists will note the chemical resemblance to phencyclidine (PCP), a compound whose reputation for causing derangement is thoroughly deserved. (Ketamine was, in fact, a "second-generation" version of PCP, many years on).
Both of these compounds are, among other things, NMDA receptor antagonists. That had not been considered a high-priority target for treating depression, but you certainly can't argue with results (not, at least, when you know as little about the mechanisms of depression as we do). There are better compounds around, fortunately:
AZD6765, an inhibitor of the N-methyl-D-aspartate (NMDA) receptor, a glutamate signaling protein involved in cellular mechanisms for learning and memory, was originally developed as a treatment for stroke. It was shelved in 2000 by the drug's manufacturer, AstraZeneca, after phase 2 trials failed to show signs of efficacy. In the decade that followed, however, small clinical reports started to emerge showing that ketamine, an analgesic that also blocks the NMDA receptor, produced rapid responses in people who didn't benefit from any other antidepressants. And unlike most therapies for major depression, which usually take weeks to kick in, ketamine's mood-lifting effects could be seen within two hours, with a therapeutic boost that often lasted for weeks following a single infusion. Ketamine treatment also came with a number of debilitating side effects, though, including psychosis and detachment from reality. Fortunately for AstraZeneca, the company had a cleaner drug on its shelves that could harness ketamine's benefits with fewer problems.
Note that AZD6765 (lanicemine) has a rather simple structure, further confirmation (if anyone needed any) that things this size can be very effective drugs. Here's the clinical study that the Nature Medicine news item refers to, and it makes clear that this was a pretty tough patient cohort:
This double-blind, placebo-controlled, proof-of-concept study found that a single intravenous infusion of a low-trapping nonselective NMDA channel blocker in patients with treatment-resistant MDD rapidly (within minutes) improved depressive symptoms without inducing psychotomimetic effects. However, this improvement was transitory. To our knowledge, this is the first report showing rapid antidepressant effects associated with a single infusion of a low-trapping nonselective NMDA channel blocker that did not induce psychotomimetic side effects in patients with treatment-resistant MDD.
More specifically, patient depression scores improved significantly more in patients receiving AZD6765 than in those receiving placebo, and this improvement occurred as early as 80 min. This difference was statistically significant for the MADRS, HDRS, BDI, and HAM-A. These findings are particularly noteworthy, because a large proportion of study participants had a substantial history of past treatment that was not efficacious. The mean number of past antidepressant trials was seven, and 45% of participants had failed to respond to electroconvulsive therapy.
The problem is the short duration. By one evaluation scale, the effects only lasted about two hours (by another less stringent test, some small effect could still be seen out to one or two days). Ketamine lasts longer, albeit at a cost of some severe side effects. This doesn't seem to be a problem with high clearance of AZD6765 (its PK had been well worked out when it was a candidate for stroke). Other factors might be operating:
These differences could be due to subunit selectivity and trapping blockade. It is also possible that the metabolites of ketamine might be involved in its relatively sustained antidepressant effects, perhaps acting on off-site targets; a recent report described active ketamine metabolites that last for up to 3 days. It is also important to note that, although trapping blockade or broadness of antagonist effects on the NMDA subunit receptors might be key to the robustness of antidepressant effects, these same properties might be involved in the dissociative and perceptual side effects of ketamine. Notably, these side effects were not apparent at the dose of AZD6765 tested.
If that last part is accurate, this is going to be a tricky target to work with. I doubt if AZD6765 itself has a future as an antidepressant, but if it can help to understand that mode of action, what the downstream effects might be, and which ones are important, it could lead to something very valuable indeed. The time and effort that will be needed for that is food for thought, particularly when you consider the patients in this study. What must it be like to feel the poison cloud of major depression lift briefly, only to descend again? The Nature Medicine piece has this testimony:
(David) Prietz, 48, a scheduling supervisor at a sheet-metal manufacturer in Rochester, New York, who has been on disability leave for several years, started to feel his head clear from the fog of depression within days of receiving AZD6765. After his second infusion, he vividly began noticing the fall foliage of the trees outside his doctor's office—something he hadn't previously appreciated in his depressed state. “The greens seemed a lot greener and the blue sky seemed a lot bluer,” he says. Although the lift lasted only a couple months after the three-week trial finished and the drug was taken away, the experience gave Prietz hope that he might one day get better. “I can't recall feeling as well as I did at the time,” he says.
Fall foliage for Algernon? I hope we can do something for these people, because as it is, a short-duration effect is scientifically fascinating but emotionally cruel.
+ TrackBacks (0) | Category: Clinical Trials | The Central Nervous System
January 17, 2013
Here's a recent paper in J. Med. Chem. on halogen bonding in medicinal chemistry. I find the topic interesting, because it's an effect that certainly appears to be real, but is rarely (if ever) exploited in any kind of systematic way.
Halogens, especially the lighter fluorine and chlorine, are widely used substituents in medicinal chemistry. Until recently, they were merely perceived as hydrophobic moieties and Lewis bases in accordance with their electronegativities. Much in contrast to this perception, compounds containing chlorine, bromine, or iodine can also form directed close contacts of the type R–X···Y–R′, where the halogen X acts as a Lewis acid and Y can be any electron donor moiety. . .
What seems to be happening is that the electron density around the halogen atom is not as smooth as most of us picture it. You'd imagine a solid cloud of electrons around the bromine atom of a bromoaromatic, but in reality, there seems to be a region of slight positive charge (the "sigma hole") out on the far end. (As a side effect, this gives you more of a circular stripe of negative charge as well). Both these effects have been observed experimentally.
Now, you're not going to see this with fluorine; that one is more like most of us picture it (and to be honest, fluorine's weird enough already). But as you get heavier, things become more pronounced. That gives me (and probably a lot of you) an uneasy feeling, because traditionally we've been leery of putting the heavier halogens into our molecules. "Too much weight and too much hydrophobicity for too little payback" has been the usual thinking, and often that's true. But it seems that these substituents can actually earn out their advance in some cases, and we should be ready to exploit those, because we need all the help we can get.
Interestingly, you can increase the effect by adding more fluorines to the haloaromatic, which emphasizes the sigma hole. So you have that option, or you can take a deep breath, close your eyes, and consider. . .iodos:
Interestingly, the introduction of two fluorines into a chlorobenzene scaffold makes the halogen bond strength comparable to that of unsubstituted bromobenzene, and 1,3-difluoro-5-bromobenzene and unsubstituted iodobenzene also have a comparable halogen bond strength. While bromo and chloro groups are widely employed substituents in current medicinal chemistry, iodo groups are often perceived as problematic. Substituting an iodoarene core by a substituted bromoarene scaffold might therefore be a feasible strategy to retain affinity by tuning the Br···LB (Lewis base) halogen bond to similar levels as the original I···LB halogen bond.
As someone who values ligand efficiency, the idea of putting in an iodine gives me the shivers. A fluoro-bromo combo doesn't seem much more attractive, although almost anything looks good compared to a single atom that adds 127 mass units at a single whack. But I might have to learn to love one someday.
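Since ligand efficiency came up: it's just the binding free energy spread over the heavy-atom count, which is why an iodine stings less by that metric than by molecular weight — it costs only one heavy atom despite the 127 mass units. A minimal sketch, with made-up potency numbers purely for illustration:

```python
def ligand_efficiency(pic50, heavy_atoms):
    """Ligand efficiency in kcal/mol per heavy atom:
    deltaG ~= -1.37 * pIC50 (kcal/mol, 298 K), divided by non-hydrogen atom count."""
    return 1.37 * pic50 / heavy_atoms

# Hypothetical lead: 20 heavy atoms, pIC50 of 7.0
base = ligand_efficiency(7.0, 20)   # ~0.48 kcal/mol per heavy atom
# Add one iodine (+1 heavy atom, +127 Da) that buys a 10x potency gain
iodo = ligand_efficiency(8.0, 21)   # ~0.52 -- LE actually improves
```

The point of the toy numbers: on a heavy-atom basis an iodine that earns its keep looks fine; it's the mass- and lipophilicity-based metrics that punish it.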
The paper includes a number of examples of groups that seem to be capable of interacting with halogens, and some specific success stories from recent literature. It's probably worth thinking about these things similarly to the way we think about hydrogen bonds - valuable, but hard to obtain on purpose. They're both directional, and trying to pick up either one can cause more harm than good if you miss. But keep an eye out for something in your binding site that might like a bit of positive charge poking at it. Because I can bet that you never thought to address it with a bromine atom!
Update: in the spirit of scientific inquiry, I've just sent in an iodo intermediate from my current work for testing in the primary assay. It's not something I would have considered doing otherwise, but if anyone gives me any grief, I'll tell them that it's 2013 already and I'm following the latest trends in medicinal chemistry.
+ TrackBacks (0) | Category: Chemical Biology | Chemical News | In Silico
Metformin: what a weird compound it is. Very small, very polar, the sort of thing you'd probably cross off your list of screening hits. But it's been taken by untold millions of diabetics (and made untold billions of dollars in the process), because it really does reduce glucose levels. It does so through mechanisms that are still the subject of vigorous debate and which (I might add) were completely unknown when the drug was approved. (I keep running into people who think that mechanism-of-action is some sort of FDA requirement, but it most certainly is not. Not saying that it wouldn't help, but what the regulatory agencies want are efficacy and safety. As they should).
And evidence has been piling up that the compound does many other things besides. The situation is murky. There was a report in 2009 suggesting that it might exacerbate the pathology of Alzheimer's. But last summer a rodent study showed (in obese mice) that the compound seemed to enhance neurogenesis in the hippocampus. (Whether this operates in animals or humans that are not metabolically impaired is an open question, although metformin is right in the middle of the whole "Type III diabetes" debate about Alzheimer's, which I'm going to cover in another post at some point soon). Meanwhile, human studies (in the large populations taking the drug) are not saying one way or the other just yet. This British analysis suggested that there might be an association, but it's not for sure.
Then there's oncology. In 2010 I wrote about the evidence linking metformin use with lower incidence of some types of cancer, and one proposal for the mechanism. Now another paper is out suggesting that the compound works in this regard through modifying the inflammatory cascade. (Note that James Watson also highlighted this lab's previous work in his recent paper, blogged about here). The summary:
. . .Taken together, our observations suggest that metformin inhibits the inflammatory pathway necessary for transformation and CSC formation. To link our results with previous work on metformin in the diabetic context, we speculate that metformin may block a metabolic stress response that stimulates the inflammatory pathway associated with a wide variety of cancers. . .
. . .We suspect that this glucose- and metabolism-mediated pathway operates in many different cell types, and hence might explain why metformin reduces incidence of different human cancers and why the combination of metformin and chemotherapy is effective on many cell types in the xenograft context. While this pathway is hypothetical and has not been described in molecular terms, our results suggest that components in this pathway might be potential targets for cancer therapy.
The pathway referred to is through Src and IkappaB (of the NF-kB pathway), among others; the paper goes into more detail for those who are interested. There's a lot of stuff going on in the clinic with metformin added to different chemotherapy regimens, and I very much look forward to seeing the results. On the molecular level, I'd agree with the statement above - there's a lot to dig into here. The whole intersection of metabolism and cancer is a very large, very complex (and very tricky) area, but you'd have to think that there's a lot of really useful stuff to be found in it.
+ TrackBacks (0) | Category: Cancer | Diabetes and Obesity
January 16, 2013
Here's something I've been following for the last couple of weeks in the chemical blogging world, and now it's up on its own site: "Blog Syn", an initiative of the well-known chemblogger See Arr Oh. The idea here is to take interesting new reactions that appear in the literature, and. . .well, see if they actually work. (A radical concept, I know, but stick with me here).
The first example is this recent paper in JACS, which shows an unusual iron-sulfur reaction that ends up generating a benzazole directly from an active methyl group in one pot. There are now three repeats of the reaction, and the verdict (so far) is that it works, but not quite as well as hoped for. You probably have to be careful to exclude oxygen (the paper itself just says "under argon"), and the yield of the test reaction is not as high as reported. As you'll see, there are spectral data, sources of reagents, photos of experimental setups - everything you'd need to see how this reaction is actually being (re)run.
I like this idea very much, and I look forward to seeing it applied to new reactions as they appear (and I hope to contribute the occasional run myself, when possible). They're accepting nominations for the next reaction to test, so if you have something you've seen that you're wondering about, put it into the hopper.
+ TrackBacks (0) | Category: Chemical News | The Scientific Literature
I got caught up this morning in a challenge based on this XKCD strip, the famous "Up-Goer Five". If that doesn't ring a bell, have a look - it's an attempt to describe a Saturn V rocket while using only the most common 1000 words in English. You find, when you do that, that some of the words you really want to be able to use are not on the list - in the case of the Saturn V, "rocket" is probably the first obstacle of that sort you run into, thus "Up-Goer".
So I noticed on Twitter that people are trying to describe their own work using the same vocabulary list, and I thought I'd give it a try. (Here's a handy text editor that will tell you when you've stepped off the path). I quickly found that "lab", "chemical", "test", and "medicine" are not on the list, so there was enough of a challenge to keep me entertained. Here, then, is drug discovery and development, using the simple word list:
I find things that people take to get better when they are sick. These are hard to make, and take a lot of time and money. When we have a new idea, most of them don't actually work, because we don't know everything we need to about how people get sick in the first place. It's like trying to fix something huge, in the dark, without a book to help.
So we have to try over and over, and we often get surprised by what happens. We build our new stuff by making its parts bigger or smaller, or we join a new piece to one end, or we change one part out for another to see if it works better. Some of our new things are not strong enough. Others break down too fast or stay in the body too long, and some would do too many other things to the people who take them (and maybe even make them more sick than they were). To try to fix all of these at the same time is not easy, of course. When we think we've found one, it has to get past all of those problems, and then we have to be able to make a lot of it exactly the same way every time so that we can go to the next part.
And that part is where most of the money and time come in. First, we try our best idea out on a small animal to make sure that it works like we think it will. Only after that we can ask people to take it. First people who are not sick try it, just to make sure, then a few sick ones, then a lot of sick ones of many types. Then, if it still works, we take all our numbers and ask if it is all right to let everyone who is sick buy our new stuff, and to let a doctor tell them to take it.
If they say yes, we have to do well with it as fast as we can, which doesn't always work out, either. That's because there can still be a problem even after all that work. Even if there isn't, after some time (more than a year or two) someone else can let these people buy it, too, and for less. While all that is going on, we are back trying to find another new one before this one runs out, and we had better.
Not everyone likes us. Our stuff can be a lot of money for people. It may not work as well as someone wants it to, or they may not like how we talk with their doctor (and they may have a point there). Even so, many people have no idea of what we do, how hard it is, or how long it can take. But no one has got any other way to do it, at least not yet!
There, that's fairly accurate, and it even manages to sound like me in some parts. Pity there's no Latin on the list, though.
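The checker tool mentioned above is simple to sketch for yourself: tokenize, lowercase, and flag anything off the list. A minimal version (the word set below is a tiny stand-in for the real thousand-word list, not the actual one):

```python
import re

def off_list_words(text, allowed):
    """Return the distinct words in text that aren't on the allowed word list."""
    words = re.findall(r"[a-z]+", text.lower())
    return sorted({w for w in words if w not in allowed})

# Tiny stand-in for the real top-1000 list (hypothetical subset)
ALLOWED = {"we", "find", "things", "that", "people", "take", "to",
           "get", "better", "when", "they", "are", "sick", "a", "the", "in"}

print(off_list_words("We test a new chemical in the lab", ALLOWED))
# flags "chemical", "lab", "new", and "test" -- exactly the words I got stuck on
```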
Update: here are some more people describing their work, in the comments over at Just Like Cooking. And I should note that someone has already remarked to me that "This is an explanation that even Marcia Angell could understand".
+ TrackBacks (0) | Category: Drug Development
January 15, 2013
And while we're talking big pharma shakeups, I note that AstraZeneca's new CEO has rearranged the executive furniture pretty vigorously, ditching Martin Mackay. He was the R&D head, ex-Pfizer, and seems to have lasted about two years at AZ, whose well-known problems are going to make the higher positions pretty perilous for some time to come. And the middle positions. And all the others.
+ TrackBacks (0) | Category: Business and Markets
What's going on with Pfizer? I have a few questions, and a rumor that I've heard and would like to float.
There's been all sorts of speculation about what Ian Read is going to do with the company. He's been dropping hints for months about splitting it up. And with Abbott recently doing just that, it's no surprise that there are people on Wall Street making the case for Pfizer following along (after all, think of the fees and commissions to be earned). As of this morning, there's fresh talk of all this, since Pfizer seems to be reorganizing its constituent parts, in a way that makes you think it could all break in two.
Now for the rumor, which more directly concerns the company's med-chem research. As everyone in the industry knows, Pfizer's moving towards an outsource-all-the-early-work model. "Drug designers" will occupy that new building I see going up here in Cambridge, and they will cogitate fiercely, pick up their phones, rattle their keyboards, turn on the video-conferencing software, and tell a bunch of chemists twelve time zones away what to make. Repeat as necessary.
But I've been hearing something else recently, even beyond this. Rumor has it that the company is contemplating getting out of all their in-house small molecule drug discovery, and putting most of the focus on biologics and the like. I have not verified this, but that's what I've heard. Now, I didn't know what to think when this came up, but perhaps it has something to do with the possibility of the company splitting up? The small-molecule stuff gets spun out on its own, as a different entity with a different name? Or would Pfizer split between pharma (in one company) and everything else (consumer, generics, etc.) in the other, in which case - if the story I've heard is true - then the small molecule stuff doesn't spin out, it just goes away.
I'd be interested in hearing thoughts on this - its plausibility, its likelihood, and whether anyone else has heard anything similar. I'm not sure I buy into the idea myself, but (as usual) crazier things have happened.
+ TrackBacks (0) | Category: Business and Markets
Like many people, I have a weakness for "We've had it all wrong!" explanations. Here's another one, or part of one: is obesity an infectious disease?
During our clinical studies, we found that Enterobacter, a genus of opportunistic, endotoxin-producing pathogens, made up 35% of the gut bacteria in a morbidly obese volunteer (weight 174.8 kg, body mass index 58.8 kg m−2) suffering from diabetes, hypertension and other serious metabolic deteriorations. . .
. . .After 9 weeks on (a special diet), this Enterobacter population in the volunteer's gut reduced to 1.8%, and became undetectable by the end of the 23-week trial, as shown in the clone library analysis. The serum–endotoxin load, measured as LPS-binding protein, dropped markedly during weight loss, along with substantial improvement of inflammation, decreased level of interleukin-6 and increased adiponectin. Metagenomic sequencing of the volunteer's fecal samples at 0, 9 and 23 weeks on the WTP diet confirmed that during weight loss, the Enterobacteriaceae family was the most significantly reduced population. . .
They went on to do the full Koch workup, by taking an isolated Enterobacter strain from the human patient and introducing it into gnotobiotic (germ-free) mice. These mice are usually somewhat resistant to becoming obese on a high-fat diet, but after being inoculated with the bacterial sample, they put on substantial weight, became insulin resistant, and showed numerous (consistent) alterations in their lipid and glucose handling pathways. Interestingly, the germ-free mice that were inoculated with bacteria and fed normal chow did not show these effects.
The hypothesis is that the endotoxin-producing bacteria are causing a low-grade chronic inflammation in the gut, which is exacerbated to a more systemic form by the handling of excess lipids and fatty acids. The endotoxin itself may be swept up in the chylomicrons and translocated through the gut wall. The summary:
. . .This work suggests that the overgrowth of an endotoxin-producing gut bacterium is a contributing factor to, rather than a consequence of, the metabolic deteriorations in its human host. In fact, this strain B29 is probably not the only contributor to human obesity in vivo, and its relative contribution needs to be assessed. Nevertheless, by following the protocol established in this study, we hope to identify more such obesity-inducing bacteria from various human populations, gain a better understanding of the molecular mechanisms of their interactions with other members of the gut microbiota, diet and host for obesity, and develop new strategies for reducing the devastating epidemic of metabolic diseases.
Considering the bacterial origin of ulcers, I think this is a theory that needs to be taken seriously, and I'm glad to see it getting checked out. We've been hearing a lot the last few years about the interaction between human physiology and our associated bacterial population, but the attention is deserved. The problem is, we're only beginning to understand what these ecosystems are like, how they can be disordered, and what the consequences are. Anyone telling you that they have it figured out at this point is probably trying to sell you something. It's worth the time to figure out, though. . .
+ TrackBacks (0) | Category: Biological News | Diabetes and Obesity | Infectious Diseases
January 14, 2013
Picking up on that reactive oxygen species (ROS) business from the other day (James Watson's paper suggesting that it could be a key anticancer pathway), I wanted to mention this new paper, called to my attention this morning by a reader. It's from a group at Manchester studying regeneration of tissue in Xenopus tadpoles, and they note high levels of intracellular hydrogen peroxide in the regenerating tissue. Moreover, antioxidant treatment impaired the regeneration, as did genetic manipulation of ROS generation.
Now, inflammatory cells are known to produce plenty of ROS, and they're also involved in tissue injury. But that doesn't seem to be quite the connection here, because the tissue ROS levels peaked before the recruitment of such cells did. (This is consistent with previous work in zebrafish, which also showed hydrogen peroxide as an essential signal in wound healing). The Manchester group was able to genetically impair ROS generation by knocking down a protein in the NOX enzyme complex, a major source of ROS production. This also impaired regeneration, an effect that could be reversed by a rescue competition experiment.
Further experiments implicated Wnt/beta-catenin signaling in this process, which is certainly plausible, given the position of that cascade in cellular processes. That also ties in with a 2006 report of hydrogen peroxide signaling through this pathway (via a protein called nucleoredoxin).
You can see where this work is going, and so can the authors:
. . .our work suggests that increased production of ROS plays a critical role in facilitating Wnt signalling following injury, and therefore allows the regeneration program to commence. Given the ubiquitous role of Wnt signalling in regenerative events, this finding is intriguing as it might provide a general mechanism for injury-induced Wnt signalling activation across all regeneration systems, and furthermore, manipulating ROS may provide a means to induce the activation of a regenerative program in those cases where regeneration is normally limited.
Most of us reading this site belong to one of those regeneration-limited species, but perhaps it doesn't always have to be this way? Taken together, it does indeed look like (1) ROS (hydrogen peroxide among others) are important intracellular signaling molecules (which conclusion has been clear for some time now), and (2) the pathways involved are crucial growth and regulatory ones, relating to apoptosis, wound healing, cancer, the effects of exercise, all very nontrivial things indeed, and (3) these pathways would appear to be very high-value ones for pharmaceutical intervention (stay tuned).
As a side note, Paracelsus has once again been reaffirmed: the dose does indeed make the poison, as does its timing and location. Water can drown you, oxygen can help burn you, but both of them keep you alive.
+ TrackBacks (0) | Category: Biological News
Virtual screening is what many people outside the field are thinking of when they talk about the use of computational models in drug discovery. There are many other places where modeling can pitch in, but one of the dreams has always been to take a given protein target and a long list of chemical structures, hit the button, and come back to a sorted list of which ones are going to bind well. That list could be as long as "every compound in our screening deck", or "every available compound in the commercial catalogs", or "everything that our chemists can think of and draw on a whiteboard, whether it's ever been made or not". So these virtual collections can get rather large, but that's what computer power is for, right?
Despite what some people might think, we're not exactly there yet. But we're not exactly not there, either, if you know what I mean. Like much of drug discovery, it's in that awkward age. Virtual screening is certainly real, and it can be useful, but it can also waste your time if you're not careful. And that's where this paper comes in - it's a fine overview of the issues that you need to think about if you're interested in trying this technique.
For one thing, you need to decide if you're going to be taking a drug target whose structure you know pretty well and modeling a bunch of small compounds into it, or if you're taking a bunch of small molecules whose activities you know pretty well and trying to find more compounds like them. These two approaches call for some different methods, and have different potential problems. The second one, especially in the older literature, often goes under the name of QSAR, for quantitative structure-activity relationship. But as the authors point out, "virtual screening" as a name has some advantages, because many people have been burned by things labeled "QSAR" over the years. They're also being used for different purposes, which is probably a good thing:
A fundamental assumption inherent in QSAR and pharmacophore-based VS is the “similar property principle”, that is, the general observation that molecules with similar structure are likely to have similar properties. While this assumption holds true in many cases, there are many counter-examples in the field of QSAR which lead to erroneous predictions and can shake the confidence of the experimental community in the prospective utility of QSAR modeling. Interestingly, this has not yet (or not to the same extent) been the case with VS. The difference is that QSAR is typically employed to evaluate a limited number of synthetic candidates, where errors are more noticeable and costly. However, when these techniques are applied on a massive scale to screen large chemical libraries, errors are much more easily tolerated as the objective is to increase the number and diversity of hits over what would have been otherwise a random selection.
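The "similar property principle" gets operationalized in practice with fingerprint similarity, most commonly the Tanimoto coefficient over molecular fingerprints. A bare-bones sketch (the fingerprints below are arbitrary bit indices for illustration, not derived from real molecules):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two fingerprints given as sets of on-bit indices:
    |intersection| / |union|, ranging from 0 (no shared bits) to 1 (identical)."""
    union = fp_a | fp_b
    return len(fp_a & fp_b) / len(union) if union else 0.0

# Arbitrary illustrative fingerprints
query = {3, 17, 42, 88, 191}
candidate = {3, 17, 42, 90, 191}
print(tanimoto(query, candidate))  # 4 shared bits / 6 total bits ~= 0.667
```

In a real workflow the bit sets would come from a cheminformatics toolkit's fingerprinting routine; the similarity arithmetic itself is this simple.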
The authors extensively cover the previous literature on computational screening - successful examples, warnings of trouble, theoretical predictions both optimistic and pessimistic. It would take you quite a while to assemble this list on your own, so that by itself recommends this paper to anyone interested in the area. But they go on to codify the various pitfalls to look out for.
"Such as expecting it to work", the cynics in the audience will remark. I say that sort of thing under my breath from time to time myself - or audibly, as the case may be. But this is the sort of paper that I can really endorse, because it's a completely realistic view of what you can expect with current technology. And that comes down to "Less than you want", but still "More than you might think". You're not going to be able to feed the software the complete pile of all the chemical supplier catalogs and come back to find the nanomolar leads printing out. But you can get pointers toward parts of chemical space that you wouldn't have thought about (or wouldn't have been able to physically screen).
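One standard way to put a number on "more than you might think" is the enrichment factor: how much better the top of your score-ranked list does than random picking would. A quick sketch (the ranked list below is synthetic, made up to illustrate the arithmetic):

```python
def enrichment_factor(ranked_actives, fraction=0.01):
    """Enrichment factor at a given fraction of a score-ranked library:
    (hit rate in the top slice) / (hit rate over the whole library).
    ranked_actives is 1 for an active, 0 for a decoy, in score order."""
    n = len(ranked_actives)
    top = max(1, int(n * fraction))
    hit_rate_top = sum(ranked_actives[:top]) / top
    hit_rate_all = sum(ranked_actives) / n
    return hit_rate_top / hit_rate_all

# Synthetic library: 1000 compounds, 10 actives, 5 of them ranked in the top 1%
ranked = [1] * 5 + [0] * 5 + [1] * 5 + [0] * 985
print(enrichment_factor(ranked, 0.01))  # (5/10) / (10/1000) = 50.0
```

An EF of 50 at 1% means the screen put actives in your pick list fifty times more often than a random grab from the deck would have - which is the realistic standard of success here, not "the nanomolar leads printing out".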
One tricky part is that when a virtual screening effort is successful (for whatever value you assign to "success"), it can be hard to tell why, and likewise for failures. There are so many places where things can disconnect - proteins are mobile, and small molecules even more so, and accounting for these conformational ensembles is not trivial. Binding interactions are not always well understood, or well modeled. Water molecules are pesky, but can be vitally important. You might have picked inappropriate controls (positive or negative), or be weighting the various computed factors in the wrong way. Either of those will send your calculation further and further off the rails.
And so on. The paper goes into detail on these possibilities and more; I highly recommend it for anyone getting into virtual screening (or for anyone already doing it, to keep the troubleshooting guide in one handy place).
+ TrackBacks (0) | Category: In Silico
January 11, 2013
The line under James Watson's name reads, of course, "Co-discoverer of DNA's structure. Nobel Prize". But it could also read "Provocateur", since he's been pretty good at that over the years. He seems to have the right personality for it - both The Double Helix (fancy new edition there) and its notorious follow-up volume Avoid Boring People illustrate the point. There are any number of people who've interacted with him over the years who can't stand the guy.
But it would be a simpler world if everyone that we found hard to take was wrong about everything, wouldn't it? I bring this up because Watson has published an article, again deliberately provocative, called "Oxidants, Antioxidants, and the Current Incurability of Metastatic Cancers". Here's the thesis:
The vast majority of all agents used to directly kill cancer cells (ionizing radiation, most chemotherapeutic agents and some targeted therapies) work through either directly or indirectly generating reactive oxygen species that block key steps in the cell cycle. As mesenchymal cancers evolve from their epithelial cell progenitors, they almost inevitably possess much-heightened amounts of antioxidants that effectively block otherwise highly effective oxidant therapies.
The article is interesting throughout, but can fairly be described as "rambling". He starts with details of the complexity of cancerous mutations, which is a topic that's come up around here several times (as it does wherever potential cancer therapies are discussed, at least by people with some idea of what they're talking about). Watson is paying particular attention here to mesenchymal tumors:
Resistance to gene-targeted anti-cancer drugs also comes about as a consequence of the radical changes in underlying patterns of gene expression that accompany the epithelial-to-mesenchymal cell transitions (EMTs) that cancer cells undergo when their surrounding environments become hypoxic. EMTs generate free-floating mesenchymal cells whose flexible shapes and still high ATP-generating potential give them the capacity for amoeboid cell-like movements that let them metastasize to other body locations (brain, liver, lungs). Only when they have so moved do most cancers become truly life-threatening. . .
. . .Unfortunately, the inherently very large number of proteins whose expression goes either up or down as the mesenchymal cancer cells move out of quiescent states into the cell cycle makes it still very tricky to know, beyond the cytokines, what other driver proteins to focus on for drug development.
That it does. He makes the case (as have others) that Myc could be one of the most important protein targets - and notes (as have others!) that drug discovery efforts against the Myc pathway have run into many difficulties. There's a good amount of discussion about BRD4 compounds as a way to target Myc. Then he gets down to the title of the paper and starts talking about reactive oxygen species (ROS). Links in the section below added by me:
That elesclomol promotes apoptosis through ROS generation raises the question whether much more, if not most, programmed cell death caused by anti-cancer therapies is also ROS-induced. Long puzzling has been why the highly oxygen sensitive ‘hypoxia-inducible transcription factor’ HIF1α is inactivated by both the, until now thought very differently acting, ‘microtubule binding’ anti-cancer taxanes such as paclitaxel and the anti-cancer DNA intercalating topoisomerases such as topotecan or doxorubicin, as well as by frame-shifting mutagens such as acriflavine. All these seemingly unrelated facts finally make sense by postulating that not only does ionizing radiation produce apoptosis through ROS but also today's most effective anti-cancer chemotherapeutic agents as well as the most efficient frame-shifting mutagens induce apoptosis through generating the synthesis of ROS. That the taxane paclitaxel generates ROS through its binding to DNA became known from experiments showing that its relative effectiveness against cancer cell lines of widely different sensitivity is inversely correlated with their respective antioxidant capacity. A common ROS-mediated way through which almost all anti-cancer agents induce apoptosis explains why cancers that become resistant to chemotherapeutic control become equally resistant to ionizing radiotherapy. . .
. . .The fact that cancer cells largely driven by RAS and Myc are among the most difficult to treat may thus often be due to their high levels of ROS-destroying antioxidants. Whether their high antioxidative level totally explains the effective incurability of pancreatic cancer remains to be shown. The fact that late-stage cancers frequently have multiple copies of RAS and MYC oncogenes strongly hints that their general incurability more than occasionally arises from high antioxidant levels.
He adduces a number of other pieces of supporting evidence for this line of thought, and then he gets to the take-home message:
For as long as I have been focused on the understanding and curing of cancer (I taught a course on Cancer at Harvard in the autumn of 1959), well-intentioned individuals have been consuming antioxidative nutritional supplements as cancer preventatives if not actual therapies. The past, most prominent scientific proponent of their value was the great Caltech chemist, Linus Pauling, who near the end of his illustrious career wrote a book with Ewan Cameron in 1979, Cancer and Vitamin C, about vitamin C's great potential as an anti-cancer agent. At the time of his death from prostate cancer in 1994, at the age of 93, Linus was taking 12 g of vitamin C every day. In light of the recent data strongly hinting that much of late-stage cancer's untreatability may arise from its possession of too many antioxidants, the time has come to seriously ask whether antioxidant use much more likely causes than prevents cancer.
All in all, the by now vast number of nutritional intervention trials using the antioxidants β-carotene, vitamin A, vitamin C, vitamin E and selenium have shown no obvious effectiveness in preventing gastrointestinal cancer nor in lengthening mortality. In fact, they seem to slightly shorten the lives of those who take them. Future data may, in fact, show that antioxidant use, particularly that of vitamin E, leads to a small number of cancers that would not have come into existence but for antioxidant supplementation. Blueberries best be eaten because they taste good, not because their consumption will lead to less cancer.
Now this is quite interesting. The first thing I thought of when I read this was the work on ROS in exercise, which showed that taking antioxidants appeared to cancel out the benefits of exercising, probably because reactive oxygen species are the intracellular signal that triggers those benefits. Taken together, I think we need to seriously consider whether efforts to control ROS are, in fact, completely misguided. They are, perhaps, "essential poisons", without which our cellular metabolism loses its way.
Update: I should also note the work of Joan Brugge's lab in this area, blogged about here. Taken together, you'd really have to advise against cancer patients taking antioxidants, wouldn't you?
Watson ends the article by suggesting, none too diplomatically, that much current cancer research is misguided:
The now much-touted genome-based personal cancer therapies may turn out to be much less important tools for future medicine than the newspapers of today lead us to hope. Sending more government cancer monies towards innovative, anti-metastatic drug development to appropriate high-quality academic institutions would better use National Cancer Institute's (NCI) monies than the large sums spent now testing drugs for which we have little hope of true breakthroughs. The biggest obstacle today to moving forward effectively towards a true war against cancer may, in fact, come from the inherently conservative nature of today's cancer research establishments. They still are too closely wedded to moving forward with cocktails of drugs targeted against the growth promoting molecules (such as HER2, RAS, RAF, MEK, ERK, PI3K, AKT and mTOR) of signal transduction pathways instead of against Myc molecules that specifically promote the cell cycle.
He singles out the Cancer Genome Atlas project as an example of this sort of thing, saying that while he initially supported it, he no longer does. It will, he maintains, tend to find mostly cancer cell "drivers" as opposed to "vulnerabilities". He's more optimistic about a big RNAi screening effort that's underway at his own Cold Spring Harbor, although he admits that this enthusiasm is "far from universally shared".
We'll find out which is the more productive approach - I'm glad that they're all running, personally, because I don't think I know enough to bet it all on one color. If Watson is right, Pfizer might be the biggest beneficiary in the drug industry - if, and it's a big if, the RNAi screening unearths druggable targets. This is going to be a long-running story - I'm sure that we'll be coming back to it again and again. . .
+ TrackBacks (0) | Category: Biological News | Cancer
January 10, 2013
There's a paper out in Nature with the provocative title of "Automated Design of Ligands to Polypharmacological Profiles". Admittedly, to someone outside my own field of medicinal chemistry, that probably sounds about as dry as the Atacama Desert, but it got my attention.
It's a large multi-center contribution, but the principal authors are Andrew Hopkins at Dundee and Bryan Roth at UNC-Chapel Hill. Using James Black's principle that the best place to find a new drug is to start with an old drug, what they're doing here is taking known ligands and running them through a machine-learning process to see if they can introduce new activities into them. Now, those of us who spend time trying to take out other activities might wonder what good this is, but there are some good reasons: for one thing, many CNS agents are polypharmacological to start with. And there certainly are situations where you want dual-acting compounds, CNS or not, which can be a major challenge. And read on - you can run things to get selectivity, too.
So how well does their technique work? The example they give starts with the cholinesterase inhibitor donepezil (sold as Aricept), which has a perfectly reasonable med-chem look to its structure. The group's prediction, using their current models, was that it had a reasonable chance of having D4 dopaminergic activity, but probably not D2 (predictions that were borne out by experiment, and might have something to do with whatever activity Aricept has in Alzheimer's). I'll let them describe the process:
We tested our method by evolving the structure of donepezil with the dual objectives of improving D2 activity and achieving blood–brain barrier penetration. In our approach the desired multi-objective profile is defined a priori and then expressed as a point in multi-dimensional space termed ‘the ideal achievement point’. In this first example the objectives were simply defined as two target properties and therefore the space has two dimensions. Each dimension is defined by a Bayesian score for the predicted activity and a combined score that describes the absorption, distribution, metabolism and excretion (ADME) properties suitable for blood–brain barrier penetration (D2 score = 100, ADME score = 50). We then generated alternative chemical structures by a set of structural transformations using donepezil as the starting structure. The population was subsequently enumerated by applying a set of transformations to the parent compound(s) of each generation. In contrast to rules-based or synthetic-reaction-based approaches for generating chemical structures, we used a knowledge-based approach by mining the medicinal chemistry literature. By deriving structural transformations from medicinal chemistry, we attempted to mimic the creative design process.
Hmm. They rank these compounds in multi-dimensional space, according to distance from the ideal end point, filter them for chemical novelty, Lipinski criteria, etc., and then use the best structures as starting points for another round. This continues until you get close enough to the desired point, or until improvement dead-ends. In this case, they ended up with fairly active D2 compounds, by going to a lactam in the five-membered ring, lengthening the chain a bit, and going to an arylpiperazine on the end. They also predicted, though, that these compounds would hit a number of other targets, which they indeed did on testing.
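The skeleton of that loop - enumerate transformations, score against the "ideal achievement point", rank by distance, keep the best, repeat - is easy to sketch in the abstract. Here's a minimal toy version; to be clear, the "molecules", scoring function, and transformations below are invented stand-ins of mine, not the paper's Bayesian activity models or literature-mined transforms:

```python
import math

# Hypothetical "ideal achievement point": (target activity score, ADME score).
# The numbers just mirror the D2 score = 100 / ADME score = 50 example quoted above.
IDEAL = (100.0, 50.0)

def distance_to_ideal(scores):
    """Euclidean distance of a compound's score vector from the ideal point."""
    return math.sqrt(sum((s - i) ** 2 for s, i in zip(scores, IDEAL)))

def evolve(start, score_fn, transforms, keep=3, max_gens=25):
    """Generational loop: apply every transformation to each parent, rank the
    whole population by distance to the ideal point, keep the best few as the
    next generation's parents, and stop when improvement dead-ends."""
    parents, best = [start], start
    best_d = distance_to_ideal(score_fn(start))
    for _ in range(max_gens):
        children = [t(p) for p in parents for t in transforms]
        ranked = sorted(set(children) | set(parents),
                        key=lambda m: distance_to_ideal(score_fn(m)))
        parents = ranked[:keep]
        d = distance_to_ideal(score_fn(parents[0]))
        if d >= best_d:
            break  # no generation-over-generation improvement: dead end
        best, best_d = parents[0], d
    return best, best_d

# Toy demonstration: "molecules" are just 2-D score vectors, and the
# "transformations" nudge one predicted score at a time.
transforms = [lambda m: (m[0] + 5, m[1]), lambda m: (m[0] - 5, m[1]),
              lambda m: (m[0], m[1] + 5), lambda m: (m[0], m[1] - 5)]
best, dist = evolve((40.0, 20.0), score_fn=lambda m: m, transforms=transforms)
print(best, dist)  # converges to (100.0, 50.0), distance 0.0
```

The real method slots its novelty and Lipinski filters in at the ranking step, and of course its scoring functions are trained models rather than lookups - but the greedy generational structure is the same.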
How about something a bit more. . .targeted? They tried taking these new compounds through another design loop, this time trying to get rid of all the alpha-adrenergic activity they'd picked up, while maintaining the 5-HT1A and dopamine receptor activity they now had. They tried it both ways - running the algorithms with filtration of the alpha-active compounds at each stage, and without. Interestingly, both optimizations came up with very similar compounds, differing only out on the arylpiperazine end. The alpha-active series wanted ortho-methoxyphenyl on the piperazine, while the alpha-inactive series wanted 2-pyridyl. These preferences were confirmed by experiment as well. Some of you who've worked on adrenergics might be saying "Well, yeah, that's what the receptors are already known to prefer, so what's the news here?" But keep in mind, what the receptors are known to prefer is what's been programmed into this process, so of course, that's what it's going to recapitulate. The idea is for the program to keep track of all the known activities - the huge potential SAR spreadsheet - so you don't have to try to do it yourself, with your own grey matter.
The last example asks whether, starting from donepezil, potent and selective D4 compounds could be evolved. I'm going to reproduce the figure from the paper here, to give an idea of the synthetic transformations involved:
So, donepezil (compound 1) is 614 nM against D4, and after a few rounds of optimization, you get structure 13, which is 9 nM. Not bad! Then if you take 13 as a starting point, and select for structural novelty along the way, you get 18 (five micromolar against D4), 20, 21, and (S)-27 (which is 90 nM at D4). All of these compounds picked up a great deal more selectivity for D4 compared to the earlier donepezil-derived scaffolds as well.
Well, then, are we all out of what jobs we have left? Not just yet. You'll note that the group picked GPCRs as a field to work in, partly because there's a tremendous amount known about their SAR preferences and cross-functional selectivities. And even so, of the 800 predictions made in the course of this work, the authors claim about a 75% success rate - pretty impressive, but not the All-Seeing Eye, quite yet. I'd be quite interested in seeing these algorithms tried out on kinase inhibitors, another area with a wealth of such data. But if you're dwelling among the untrodden ways, like Wordsworth's Lucy, then you're pretty much on your own, I'd say, unless you're looking to add in some activity in one of the more well-worked-out classes.
But knowledge piles up, doesn't it? This approach is the sort of thing that will not be going away, and should be getting more powerful and useful as time goes on. I have no trouble picturing an eventual future where such algorithms do a lot of the grunt work of drug discovery, but I don't foresee that happening for a while yet. Unless, of course, you do GPCR ligand drug discovery. In that case, I'd be contacting the authors of this paper as soon as possible, because this looks like something you need to be aware of.
+ TrackBacks (0) | Category: Drug Assays | In Silico | The Central Nervous System
January 9, 2013
If you haven't seen it yet, this video tour through the DayGlo company's facilities is quite a sight. For us chemists, be sure to check out things at about the 5:30 mark, where they head into the wet chemistry area. You'll see some of the most well-used batch reactors you can picture (their largest one was bought used in the early 1970s, to give you some idea). As the chemist giving the tour says, "This is not like the pharmaceutical industry. . ."
+ TrackBacks (0) | Category: Chemical News
When we last checked in with the Klapötke lab at Munich, it was to highlight their accomplishments in the field of nitrotetrazole oxides. Never forget, the biggest accomplishment in such work is not blowing out the lab windows. We're talking high-nitrogen compounds here (a specialty of Klapötke's group), and the question is not whether such things are going to be explosive hazards. (That's been settled by their empirical formulas, which generally look like typographical errors). The question is whether you're going to be able to get a long enough look at the material before it realizes its dream of turning into an expanding cloud of hot nitrogen gas.
It's time for another dispatch from the land of spiderweb-cracked blast shields and "Oh well, I never liked that fume hood, anyway". Today we have a fine compound from this line of work, part of a series derived from N-amino azidotetrazole. The reasonable response to that statement is "Now hold it right there", because most chemists will take one look at that name and start making get-it-away-from-me gestures. I'm one of them. To me, that structure is a flashing red warning sign on a dead-end road, but then, I suffer from a lack of vision in these matters.
But remember, N-amino azidotetrazole (I can't even type that name without wincing) is the starting material for the work I'm talking about today. It's a base camp, familiar territory, merely a jumping-off point in the quest for still more energetic compounds. The most alarming of them has two carbons, fourteen nitrogens, and no hydrogens at all, a formula that even Klapötke himself, who clearly has refined sensibilities when it comes to hellishly unstable chemicals, calls "exciting". Trust me, you don't want to be around when someone who works with azidotetrazoles comes across something "exciting".
It's a beast, all right. The compound is wildly, ridiculously endothermic, with a heat of formation of 357 kcal/mole, all of which energy is ready to come right back out at the first provocation (see below). To add to the fun, the X-ray crystal structure shows some rather strange bond distances, which indicate that there's a lot of charge separation - the azides are somewhat positive, and the tetrazole ring somewhat negative, which is a further sign that the whole thing is trembling on the verge of not existing at all.
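A quick back-of-the-envelope calculation (mine, not the paper's) puts that number in perspective. C2N14 works out to a molecular weight of about 220 g/mol, so merely decomposing back to graphite and nitrogen gas releases at least

```latex
\frac{357\ \text{kcal/mol}}{220\ \text{g/mol}} \approx 1.6\ \text{kcal/g} \approx 6.8\ \text{kJ/g}
```

per gram, all of it eager to leave at the slightest excuse.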
And if you are minded to make some yourself, then you are on the verge of not existing at all, either. Both the initial communication and the follow-up publication go out of their way to emphasize that the compound just cannot be handled:
Due to their behavior during the process of synthesis, it was obvious that the sensitivities (of these compounds) will be not less than extreme. . .
The sensitivity of C2N14 is beyond our capabilities of measurement. The smallest possible loadings in shock and friction tests led to explosive decomposition. . .
Yep, below the detection limits of a lab that specializes in the nastiest, most energetic stuff they can think up. When you read through both papers, you find that the group was lucky to get whatever data they could - the X-ray crystal structure, for example, must have come as a huge relief, because it meant that they didn't have to ever see a crystal again. The compound exploded in solution, it exploded on any attempts to touch or move the solid, and (most interestingly) it exploded when they were trying to get an infrared spectrum of it. The papers mention several detonations inside the Raman spectrometer as soon as the laser source was turned on, which must have helped the time pass more quickly. This shows a really commendable level of persistence, when you think about it - I don't know about you, but one exploding spectrometer is generally enough to make me recognize a motion to adjourn for the day. But these folks are a different breed. They ended up having to use a much weaker light source, and consequently got a rather ugly Raman spectrum even after a lot of scanning, but if you think you can get better data, then step right up.
No, only tiny amounts of this stuff have ever been made, or ever will be. If this is its last appearance in the chemical literature, I won't be surprised. There are no conceivable uses for it - well, other than blowing up Raman spectrometers, which is a small market - and the number of research groups who would even contemplate a resynthesis can probably be counted on one well-armored hand.
+ TrackBacks (0) | Category: Things I Won't Work With
January 8, 2013
If you'd like a look under the hood of a lot of research publications, go over to Twitter and check the #OverlyHonestMethods tag. You're sure to find your own sins on display, things like: "Mostly it goes 43%, but once it went 95%. We reported the 95%." And "We used [this program] because doesn't everybody else?". How about "We used a modified version of Dr. Ididitfirst's apparatus, because we couldn't figure out how to make an exact replica" or "For details see Supp. Mat. We put as much as possible in there because it doesn't have to be written as carefully".
There are dozens of them, and more coming all the time. I'm adding a few myself, not that I would ever do anything like these, though, you understand.
+ TrackBacks (0) | Category: General Scientific News
The next entry in the discussion on grad school and mental health is up here, at Not the Lab. It's a very realistic look at what the pressures are; I think that most organic chemists will nod in recognition.
And I particularly enjoyed the first comment on the post, from a reader outside the US: "Dear Americans: a lot of your professors appear to be totally f*ing mental.". There's a lot of empirical support for that position, I'm afraid.
+ TrackBacks (0) | Category: Graduate School
Here's a paper on a high-throughput screening issue that everyone in the field should be aware of: metal impurities in your compounds. The group (from Roche) describes a recent experience, and I think that many readers will shiver in recognition:
The hits were resynthesized, and close analogues were prepared for early structure−activity relationship (SAR) exploration. All three series lacked conclusive SAR. Most exemplifying are the activities of different batches of the very same compounds that exhibited very different activities from being low micromolar to inactive with IC50 values greater than one millimolar (Table 1). Additionally, the SAR of close analogues was either flat or very steep as indicated by compounds with minimal structural changes losing all activity (data not shown).
For these particular hits, we investigated these findings further. It was discovered that for one series, different routes of synthesis were used for the original preparation of the HTS library compound and its resynthesis. The historic synthesis made use of a zinc/titanium reduction step, whereas the new synthesis leading to inactive compounds did not. The schemes to prepare compounds of the other series also had steps involving zinc. Elemental analysis of the samples to determine the zinc content revealed that the active batches contained different amounts of zinc of up to 20% of total mass, whereas the inactive batches only had traces. . .
I think that many of us have been burned by this in the past, but it's something that should be out there in the literature so that it's easier to make the case to those who haven't heard about it. The Roche group suggests a counterscreen using a zinc chelator (TPEN) that will get rid of zinc-based effects. They pulled out 90 of their hits based on that work, and checking those against past assays showed that they had unusually high hit rates across the years. Some of them had, in fact, been considered hits for current Roche projects, and checking those assays showed that they were sensitive to zinc as well.
I can tell you from personal experience that the stuff can be a real problem. In that case, "impurity" was a relative term - the compound from the files turned out to be a 1:1 zinc complex, not that this little fact was noted anywhere in its (rather ancient) documentation from inside the company.
And I've seen copper do the same sort of thing. I would very much recommend checking out any active compound that looks to have been made by Ullmann chemistry or the like. I mean, I like the Ullmann reaction (and it looks like I may be setting some of them up soon), but there's a lot of copper in those things, and some assays are very sensitive to it. In extreme cases, I've seen compounds come in from custom synthesis houses that were colored green from the stuff, and that's just not going to work out. There are regrettably few lead-like compounds that come by a green tint honestly: you're looking at copper, maybe chromium, or perhaps even nickel, none of which will help you generate reliable assay numbers. Don't even let the green stuff into your collection, if you can - clean it up first, and complain to the people who sent it to you. (Note, by contrast, that zinc complexes tend to show no added colors).
Jonathan Beall speculated to me in an e-mail that maybe this is one way that frequent-hitter compounds can get on such lists, by coordinating metals. It's certainly possible. Ignore metals at your peril!
+ TrackBacks (0) | Category: Drug Assays
January 7, 2013
A couple of years ago, I referred to a journal article summarizing many recent examples of bioisosteres in medicinal chemistry. I've been meaning to mention a book that came out late last year, Bioisosteres in Medicinal Chemistry. It looks to be a compendium of all the latest information on functional group substitutions and their effects on solubility, pharmacokinetics, metabolism and the like. Worth a look if this is one of your interests - you can look over the table of contents at that Amazon link.
+ TrackBacks (0) | Category: Book Recommendations
ChemJobber is starting a series of posts today on grad school and its effects on the mental health of grad students. I have to say, the story he relates sounds very similar to some of my own experiences during my third year or so. I didn't break any household items, but I recall (for example) several instances of leaving the lab and getting back into my car late at night, but first pausing to shout a lot of foul language at the top of my lungs while beating on the steering wheel.
I really did have some moments where I wondered if I had made the mistake of my life, whether I was any good at all in my chosen field, and so on. Another big worry was that I was, from what I could see, losing my ability to enjoy what I was doing, and I had a great deal of worry about whether it would ever come back. (It did, by the way, but I had no way of being sure about that at the time). One of the biggest factors, I think, was the day-night-weekend-holiday nature of the work. My brain has a lot of things it enjoys doing, and being chained to the same wheel for an extended period doesn't help it any. Persistence out of my own motivation is one thing, but forced persistence is another thing entirely. I ended up (as do many grad students) worrying about every break I took from the lab. I'd go see a movie on Saturday night, and come out thinking "Well, there's another two hours added to my PhD", which isn't a recipe for fun.
There were other stress factors, and looking back, it's a good thing that I started being able to deal with things when I did. The push I made in my fourth year to get things finished up was not without its problems - there's one story that I was sure I had told here before, where I inadvertently destroyed the largest amount of starting material I'd ever made, but I can't seem to find it in the archives. If I'd done that during one of my lowest points, I'm not sure what I would have done. But by that time, I could see the finish line, and I was devoting all my effort to getting out as soon as possible, having decided (correctly, I've always thought since) that doing so was the single biggest thing I could do for my career and for my sanity.
Having that as a goal was important. I saw several examples of grad students who got trapped at some point in their work or their writing-up phase, and were having a lot of trouble actually moving on to something else. Staying where they were was causing them damage, but they seemed to feel even worse when they tried to do something about it. Some of these people eventually pulled themselves up, but not all of them, by any means. I think that everyone who's been in a graduate program in the sciences will have seen similar cases. I became determined not to end up as one of them.
+ TrackBacks (0) | Category: Graduate School
This looks like an interesting reaction; let's see what gets made of it. David Milstein's group at the Weizmann Institute in Israel reports a new catalytic system to oxidize alcohols to carboxylic acids, with water as the oxygen donor (as shown by labeling experiments). Hydrogen gas bubbles out of the mixture. The catalyst is a ruthenium complex, and although the reaction is not especially fast (18 hour timescale), the turnover numbers seem to be good (0.2% catalyst loading). Interestingly, oxygen actually seems to hurt the catalyst; the system runs better under argon. One possible drawback is that the ruthenium catalyst can serve as a hydrogenation catalyst - alkenes are reduced, what with all the hydrogen around.
Getting rid of (most of) the metals and the high-valent reagents will be worth the trouble industrially, as will getting rid of the need for pressurized oxygen. As it is now, many carboxylic acids are produced on scale from alkenes (hydroformylation followed by catalytic oxidation of the aldehydes, or carbonylation), from alkanes via nonselective oxidation in air, or from alcohols via carbonylation.
We're still a long way from ditching the current processes, but if this reaction is robust enough, it could open up some new industrial feedstock routes. One that I wonder about is replacing the current route to adipic acid, used in Nylon production. It's currently made through a rather foul nitric acid process, which this chemistry could bypass - if there's enough hexanediol in the world. (Not sure if that's feasible, though - it looks like most of the hexanediol is made instead by reducing adipic acid! Makes you wonder if there's a potential biological route, as there is for butanediol. Edit - fixed this part; I dropped some carbons between my brain and the keyboard this morning.) Someone may also find a nice use for the hydrogen that's given off, and get some sort of two-for-one process. At the very least, this is a reminder of just how much more metal-catalyzed chemistry there is to be discovered. . .
Update: one of the paper's authors has dropped by the comments section, with interesting further details. . .
+ TrackBacks (0) | Category: Chemical News
January 4, 2013
I wanted to take a moment to highlight this speech, given recently by environmentalist and anti-genetically modified organism activist Mark Lynas.
Let's make that former anti-GMO activist. As the speech makes clear, he's had a complete change of heart:
I want to start with some apologies. For the record, here and upfront, I apologise for having spent several years ripping up GM crops. I am also sorry that I helped to start the anti-GM movement back in the mid 1990s, and that I thereby assisted in demonising an important technological option which can be used to benefit the environment.
As an environmentalist, and someone who believes that everyone in this world has a right to a healthy and nutritious diet of their choosing, I could not have chosen a more counter-productive path. I now regret it completely.
. . .(This was) explicitly an anti-science movement. We employed a lot of imagery about scientists in their labs cackling demonically as they tinkered with the very building blocks of life. Hence the Frankenstein food tag – this absolutely was about deep-seated fears of scientific powers being used secretly for unnatural ends. What we didn’t realise at the time was that the real Frankenstein’s monster was not GM technology, but our reaction against it. . .
. . .desperately-needed agricultural innovation is being strangled by a suffocating avalanche of regulations which are not based on any rational scientific assessment of risk. The risk today is not that anyone will be harmed by GM food, but that millions will be harmed by not having enough food, because a vocal minority of people in rich countries want their meals to be what they consider natural.
As this post and this one make clear, I agree with this point of view wholeheartedly. I'm very glad to see this change of heart, and I hope that Lynas is able to get more people thinking about this issue. He should be ready for a rough ride, though. . .
Update: well, not quite just this week. Lynas' recent book The God Species, which is referred to in the speech, marks his public break with his former views. He's also recently come to the defense of nuclear power - a view I also support - and this interview gives some of the reactions he's had so far to these turnabouts.
+ TrackBacks (0) | Category: Current Events | General Scientific News
There have been occasional links here in the comments to Science-Fraud.org, although I'm not sure if I ever linked them directly or not. Note the use of the past tense: as detailed here at Retraction Watch, the site has suddenly gone (mostly) dark under threats of legal action. Nothing appears on the Wayback Machine at archive.org, either.
I'm not all that surprised. I've said unkind things about people and organizations on this blog, but Science Fraud seemed to have that pedal pushed down to the floor the entire time. And while I've had threats of legal action, I think that I've managed to stay just this side of defamation, although with some people that's hard to do. (I mean for that to be interpreted both ways - both that it's hard to avoid saying nasty things about some people, but also that in such cases, it's hard to think of things that are nasty enough to be defamatory). But which side of that line Paul Brookes, the now-public U. Rochester scientist behind the Science Fraud site, has landed on is still up for debate. More as this story develops. . .
+ TrackBacks (0) | Category: The Dark Side | The Scientific Literature
Here's a query that I received the other day that I thought I'd pass on to the readership: "What's the one journal article or book chapter that you'd assign to a class to show them what medicinal chemistry and drug discovery are really like?"
That's a tricky one, because (as in many fields) the "what it's really like" aspect doesn't always translate to the printed page. But I'd be interested in seeing some suggestions.
+ TrackBacks (0) | Category: Life in the Drug Labs | The Scientific Literature
January 3, 2013
The folks at InVivoBlog are taking votes for "Deal of the Year" in the biopharma space for 2012. There are three categories: M&A (featuring the likes of BMS/Inhibitex and DeCode/Amgen), alliances (such as U. Penn and Novartis), and exit/financing deals for early-stage companies (Warp Drive Bio, anyone?)
So, which of these were good ideas, and which were. . .well, the other kind of idea? I hope that the results show the entire range of voting, and not just the winners, so we can see what the crowd thinks.
+ TrackBacks (0) | Category: Business and Markets
You may have seen some "wonder drug" news stories over the holiday break about compounds targeting p53 - many outlets picked up this New York Times story. The first paragraph probably got them:
For the first time ever, three pharmaceutical companies are poised to test whether new drugs can work against a wide range of cancers independently of where they originated — breast, prostate, liver, lung. The drugs go after an aberration involving a cancer gene fundamental to tumor growth. Many scientists see this as the beginning of a new genetic age in cancer research.
Now, to read that, you might think we're talking mutated p53, which is indeed found in a wide variety of cancers. It's the absolute first thing you think of when you think of a defective protein that's strongly associated with cancer. And everyone has been trying to target it for years and years now, for just that reason, but without too much success. If you know drug development, you might have seen this article and done what I did - immediately read on wondering who the heck it was with a broad-based p53 therapy and how you missed it.
That's when you find, though, that this is p53 and MDM2. MDM2 is one of those Swiss-army-knife proteins that interacts with a list of other important regulatory proteins as long as your leg. (Take a look at the last paragraph of that Wikipedia link and you'll see what I mean). Its relationship with p53 has been the subject of intense research for many years now - it's a negative regulator, binding to p53 and keeping it from initiating its own transcriptional activity. Since a lot of that transcriptional activity is involved with telling a cell to kill itself, that's the sort of thing you'd normally want to have repressed, but the problem in some tumor lines is that MDM2 never gets around to leaving, allowing damaged cancerous cells to carry on regardless.
So, as that newspaper piece says, there have been several long-running efforts to find compounds that will block the p53/MDM2 interaction. The first big splashes in the area were the "Nutlin" compounds, from Roche - named after Nutley, New Jersey, much good did it do the research site in the end. The tangled history of Nutlin-3 in the clinic is worth considering when you think about this field. But for some kinds of cancer, notably many liposarcomas, this could be an excellent target. That link discusses some results with RG7112, which is one of the drugs that the Times is talking about. Note that the results are, on one level, quite good. This is a tumor type that isn't affected by much, and 14 out of the 20 patients showed stable disease on treatment. But then again, only one patient showed a response where the tumor actually became smaller, and some showed no effect at all. There were also twelve serious adverse events in eight patients. That's not the sort of thing that you might have expected, given the breathless tone of the press coverage. Now, these results are absolutely enough to go on to a larger trial, and if they replicate (safety profile permitting), I'd certainly expect the drug to be approved, and to save the lives of some liposarcoma patients who might otherwise have no options. That's good news.
But is it "the beginning of a new genetic age in cancer research", to quote Gina Kolata's article? I don't see how. The genetic age of cancer has been underway for some time now, and it's been underway in the popular press for even longer. As for this example, there are several types of cancer for which a p53/MDM2 compound could be useful, but liposarcoma is probably the first choice, which is why it's being concentrated on in the clinic. And as far as I know, cancer patients with mutated p53 proteins well outnumber the ones with intact p53 and overexpressed MDM2. These new compounds won't do anything for those people at all.
I sound like such a curmudgeon. But shouldn't there be some level of press coverage in between total silence and Dawn Of A Glorious New Era? I suppose that "Progress Being Made On Tough Drug Target" isn't the sort of hed that makes Page One. But that's the sort of headline that research programs generate.
+ TrackBacks (0) | Category: Cancer | Clinical Trials | Press Coverage
January 2, 2013
So, how many good screening compounds are there to be had? We can now start to argue about the definition of "good"; that's the traditional next step in this process. But there's a new paper from Australia's Jonathan Baell on this very question that's worth a look.
He and his co-workers have already called attention to the number of compounds with possibly problematic functional groups for high-throughput screening. In this paper, he also quantifies the way that commercial compound collections tend to go wild on certain scaffolds - giving you, say, three hundred of one series and one of another. It's not hard to diagnose synthetic ease as the reason for this. And it's not always bad - if you get a hit from the series, then you have an SAR collection ready to go in the follow-up. But you wouldn't necessarily want all of them in there for the first go-round.
But there are many other criteria, and as anyone who's done the exercise can appreciate, large lists of compounds tend to be cut down to size rapidly. The paper shows this in action with a commercial set of 400,000 compounds. Apply some not-too-stringent criteria (between 1 and 4 rings, molecular weights between 150 and 450, cLogP less than 6, no more than 5 hydrogen bond donors and no more than 8 acceptors, up to three chiral centers, and up to 12 rotatable bonds), and you're down to 250K compounds right there. Clear out some functional groups and the PAINS list, and you're down to 170K. Want to cut the molecular weight down to 400, and rotatable bonds down to 10? 130,000 compounds remain. cLogP only up to 5, donors down to 3 or fewer, acceptors down to 6 or fewer? 110,000.
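For illustration, that first round of property cuts can be sketched as a simple filter over precomputed descriptors. This is a hypothetical sketch, not the paper's actual workflow: the dictionary layout and the descriptor values are invented for the example, and in real use they'd come from a cheminformatics toolkit such as RDKit.

```python
# Hypothetical sketch of the first-pass property filters described above.
# Each criterion is a (lower, upper) bound; None means "no bound on that side".

CRITERIA = {
    "rings":     (1, 4),        # between 1 and 4 rings
    "mol_wt":    (150, 450),    # molecular weight 150-450
    "clogp":     (None, 6),     # cLogP no more than 6
    "hbd":       (None, 5),     # at most 5 hydrogen bond donors
    "hba":       (None, 8),     # at most 8 acceptors
    "chiral":    (None, 3),     # up to three chiral centers
    "rot_bonds": (None, 12),    # up to 12 rotatable bonds
}

def passes_filters(compound, criteria=CRITERIA):
    """Return True if every descriptor falls inside its allowed range."""
    for name, (lo, hi) in criteria.items():
        value = compound[name]
        if lo is not None and value < lo:
            return False
        if hi is not None and value > hi:
            return False
    return True

# Two invented example compounds:
library = [
    {"rings": 2, "mol_wt": 320.4, "clogp": 3.1, "hbd": 2,
     "hba": 5, "chiral": 1, "rot_bonds": 6},    # passes everything
    {"rings": 5, "mol_wt": 510.7, "clogp": 6.8, "hbd": 1,
     "hba": 9, "chiral": 0, "rot_bonds": 14},   # fails on several counts
]

survivors = [c for c in library if passes_filters(c)]
```

Tightening the cutoffs (molecular weight down to 400, rotatable bonds to 10, and so on) is just a matter of editing the bounds and re-running the same pass.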
At this point, the paper says, further inspection of the list led to the realization that there were still a lot of problematic functional groups present. (I had a similar experience myself recently, filtering down a less humungous list. Even after several rounds, I was surprised to find, on looking more closely, how many oximes, hydrazones, Schiff bases, hydrazines, and N-hydroxyls were left). In Baell's case, clearing out the not-so-great at this point cut things down to 50,000 compounds. Then a Tanimoto cutoff (to get rid of things that were at least 90% similar to the existing screening compounds) cleared out all but 10,000. Applying the same cutoff, but getting rid of compounds on the list that were more than 90% similar to each other, reduced it to 6,000. So, in other words, one could make a good case for getting rid of over 98% of the vendor's list for high-throughput screening purposes. Similar results were obtained for many other commercial sets of compounds; the paper has the exact numbers (although not, alas, the vendor names involved!)
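That Tanimoto pruning can be illustrated with a greedy pass: keep each compound only if it's less than 90% similar to everything already kept. The fingerprints below are toy sets of "on" bits invented for the example; real work would use something like Morgan/ECFP fingerprints from a toolkit, but the similarity arithmetic is the same.

```python
def tanimoto(a, b):
    """Tanimoto similarity of two fingerprints, given as sets of on-bits:
    (bits in common) / (bits in either)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def diverse_subset(fingerprints, cutoff=0.9):
    """Greedily keep compounds that are less than `cutoff` similar to
    everything already kept (i.e. drop anything at least 90% similar)."""
    kept = []
    for fp in fingerprints:
        if all(tanimoto(fp, k) < cutoff for k in kept):
            kept.append(fp)
    return kept

# Toy fingerprints: the second is a near-duplicate of the first
# (9 shared bits out of 10 total, Tanimoto = 0.9, so it's dropped);
# the third shares nothing with the others and survives.
fps = [set(range(1, 10)), set(range(1, 11)), {20, 21, 22}]
survivors = diverse_subset(fps)
```

Screening the candidate list against an existing collection is the same comparison, just with `kept` seeded from the compounds you already own instead of starting empty.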
There were other vendor considerations as well. By the time Baell and his group had gone through all this compound-crunching and placed orders, significant numbers of compounds turned out to be unavailable. (I'm willing to bet that quite a few of them would have turned out to be unavailable even if they'd placed their orders that afternoon, but I'm of a cynical bent). That catalog turnover also brings up the problem of being able to re-order compounds if they turn out to be hits:
. . .there were only two vendors whose resupply philosophy we considered to be sound, this philosophy being that around 40 mg stock was set aside and accessible exclusively to prior buyers of that compound for the purposes of resupply of ca. 1−2 mg for secondary assay of a confirmed screening hit. We believe this issue of resupply is in urgent need of attention by vendors and will provide a competitive edge to those vendors willing to better guarantee resupply.
By the time they'd surveyed the various large-scale compound vendors, the group had looked over the majority of commercially available screening compounds. Given the attrition rates, how many actual compounds would cover the world's purchasable chemical space? The best guess is about 340,000, of the many millions of potentially purchasable items.
Of course, all these numbers are subject to dispute - you may not agree with some of the functional group or property cutoffs, or you might want things cut down even more. The paper addresses this question, and the general one of why any particular compound should be in a screening collection at all. My own criterion is "Would I be willing to follow up on this compound if it were a hit?" But different chemists, as has been proven many times, will answer such questions in different ways.
A big part of the discussion concerns those Tanimoto similarity scores, and the paper has a good deal to say about that. You wouldn't want to cut everything down to just singleton compounds (most likely), but you also don't need to have dozens and dozens of para-chloro/para-fluoro methyl-ethyl analogs in each series, either. The best guess is that most vendor catalogs are still rather unbalanced: they have far too many analogs for some compound classes, but too few for many more. Singleton compounds represent most of the chemical diversity for many collections, but you could make the case that there shouldn't be any singletons, ideally. Even two or three representatives from each structural class would be a real improvement. A vendor collection of 400,000 compounds that consisted of 40,000 fairly distinct structures with ten members of each class would be something to see - but no one's ever seen such a thing.
This new paper, by the way, is full of references to the screening-collection literature, as well as discussing many of the issues itself. I recommend it to anyone thinking about these issues; there are a lot of things that you don't want to have to rediscover!
+ TrackBacks (0) | Category: Drug Assays
January 1, 2013
Here's the cornbread that I made to go along with the bean soup - in fact, I'm eating a piece now as I write this. This is adapted from the Cook's Illustrated people, and I've found this to be one of the better all-cornmeal recipes I've tried (a lot of recipes have half wheat flour and half cornmeal, but some of the other all-cornmeal preps come out with an odd soapy flavor, in my experience).
The quantities below are for an 8-inch (20 cm) cast-iron pan. An old black iron skillet is the traditional cornbread implement, and it's probably not possible to improve on it. I've doubled the recipe, though, and done it in a 9-inch round Calphalon frying pan, which worked fine. A Pyrex dish also works, but doesn't produce as good a crust. Using something that can be heated is key.
So what you want to do is heat an oven to 450F (230C). Take your pan, whatever its material, and put enough oil in it to cover the bottom plus a bit more. Bacon grease is traditional, and cooking a slice or two of bacon in the pan while it's heating up will provide just what you need. At any rate, you want to heat up the pan in the oven while you're getting the batter ready.
While things are heating, take 1/3 of a cup (45 grams) of corn meal and put it in a medium-sized bowl. Then take 2/3 of a cup (90 grams) of corn meal and mix it, in another bowl, with a bit over a teaspoon (5 grams) of granulated sugar, 1/2 teaspoon (3.2 grams) of salt, 1 teaspoon (5 grams) of baking powder, and 1/4 teaspoon (1.25 grams) of baking soda (sodium bicarbonate). Blend these dry ingredients together.
Now bring 1/3 cup of water (just under 80 mL) to a full boil, and add this to the plain corn meal in that first dish. Stir it around to make a homogeneous mush out of it, then add 3/4 cup (about 175 mL) buttermilk to that (regular milk can be substituted; the product will be a bit less assertive). Mix this until homogeneous, then mix in one beaten egg.
Add the dry ingredients from the other bowl and stir to form a batter. Now it's time to get that hot pan out of the oven. Quickly swirl the oil or bacon grease around in it to make sure everything's coated, then pour any excess oil into the batter and give it a fast stir, then pour the mixture into the hot pan before anything cools down. Back into the oven it goes for about 20 minutes. If you've doubled the recipe in a larger pan, that'll be 25 minutes, perhaps a bit more.
This should make cornbread that any Southerner would be glad to eat. It's not sweet, corn-colored cake, like a Northern corn muffin - those were quite a surprise to me when I first moved up to New Jersey. The hot pan will give it a thin brown crust, and you'll often see these served with that side up on a plate, the way that they fall out of the pan. It is, I can testify, excellent with the bean soup recipe posted earlier today, but will also stand up to almost any soup or stew that you care to throw at it.
Variations are legion; many of them are good. You can add creamed corn to the batter, in which case you'd cut down on the milk. Whole-kernel corn is another classic addition, as are chopped jalapeños. I've seen diced red onion go in there as well. Some shredded cheese will make the whole thing richer. Crumbled bacon (perhaps from the slices you used to grease the pan) is another fine addition, and if you have access to pork cracklings, then you'll be making a variation that I first had in Tennessee over 40 years ago. Enjoy!
+ TrackBacks (0) | Category: Blog Housekeeping
I haven't put up any recipes during this break, so I thought I'd get moving on that a bit. Today I'm making a simple one - it's over across the kitchen from me as I write. It's a bean soup that my father often made for New Year's Day, as sort of a counter to the richer, fancier stuff that preceded it during the holidays.
Sitting out in the back yard during the summer, I tried a thought experiment out on my kids. What, I asked, if we had to grow all our own food, on the land we have here in the yard? Could it be done? And if so, what crops would you pick? Some favorites, such as tomatoes and cucumbers (the very things we had growing over in the sunnier part) were eliminated early as not providing enough food value for the space and effort. I pointed out that our yard was not a very large plot of arable land, which meant that we'd have to go first for the maximum yield of calories per area planted, with aesthetic factors coming in way down the list, if at all. The life, that is, of a peasant. My first choice was potatoes, based on the survival of the Irish farmers (well, at least until the rot) and the gunpoint recommendation of Frederick the Great. Then corn and beans, based on New World agriculture. All three also rank high for their winter keeping qualities - as I mentioned to the kids, we'd have to pile up as much food as possible in the basement and garage to make it through a Massachusetts winter. They didn't find the prospect too appealing, which was one point of the whole exercise.
So here's the bean part of the equation. No doubt it's the sort of thing my own ancestors used to eat this time of the year:
Take 1 pound (or around 0.5 kilo) of dried white beans. I use Great Northern, but just about anything should work, I'd think. Soak them overnight at room temperature in four volumes of water or so - they can sit for longer, if you want to make them later the next day, but I'm sure there's an eventual limit imposed by incipient fermentation, which I would definitely not recommend testing.
Discard the soaking water. Put the beans in a pot and cover with water again, adding one or two bay leaves and salt and ground pepper to taste. You can adjust those later on. Some people like to add chopped onion at this stage; I prefer to put a little raw on the top of the beans when they're served. De gustibus non disputandum est.
Before bringing the beans to a low simmer, I also add some pieces of country ham, a specialty of my native part of the US. Different regions have different ideas about country ham (note that the Virginia/Smithfield ones are rather a different breed), but it's always salty, so if you're doing this, you'll probably want to add no extraneous salt at all until you've tasted the finished product. The amount of ham is also to taste - by the standards of my ancestors, some of them, anyway, this sort of thing was no doubt a luxury item, and they'd have put in a mostly bare bone, at most. I'm happy adding a half pound (0.25 kilo), in pieces. If you'd like to try the stuff, I can recommend Burger's (I'm about to go downstairs and get some myself). Tripp is also a reliable brand. I grew up on Mar-Tenn brand, but I'm not even sure if it exists any more. It's not just for bean soup, of course - my Southern roots call for the sliced ham to be gently pan-fried for a winter breakfast and served with biscuits, a fine meal which will have you drinking water at an increased rate for several hours.
So heat the beans gently for two to three hours, depending on how long the earlier soaking has gone (and of course, what sort of bean you might have started with). I like them to the point where the soup has thickened some, but not to where the beans themselves are breaking up. I don't recommend any strong boiling; that'll bring on the bean-mush stage for sure. You'll have to check every so often to make sure that things haven't gotten out of hand. Adding extra water, if needed, is no sin. I eat the resulting bean soup with homemade cornbread, for bonus exiled-Southerner points, and I'll put up a recipe for that, too.
You can start from the straight dried beans, too, if you're a real buckaroo, but you're going to have to get going in the morning to have them for dinner.
+ TrackBacks (0) | Category: Blog Housekeeping