Corante

About this Author
[Photo: College chemistry, 1983]

[Photo: Derek Lowe, the 2002 model]

[Photo: After 10 years of blogging. . .]

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

October 24, 2014

Francis Collins Knows Why We Don't Have An Ebola Vaccine

Posted by Derek

NIH Director Francis Collins has been saying that if only the agency's budget hadn't been cut, we would already have an Ebola vaccine. He tried this line out during recent Congressional testimony, and apparently liked it enough that he expanded on it in an article for the Huffington Post. Collins' statements have been fodder for election-season attack ads, naturally. Personally, I endorse the take on this from Michael Eisen at Berkeley (emphasis added):

. . .it’s time to call this for what it is: complete bullshit.

First, let’s deal with the most immediate assertion – that if there had been more funds there would be an Ebola vaccine today. Collins argues we’d be a few years ahead of where they are today, and that, instead of preparing to enter phase 1 trials today, they’d have done this two years ago. But last time I checked, there was a reason we do clinical trials, which is to determine if therapies are safe and effective. And, crucially, many of these fail (how many times have we heard about HIV vaccines that were effective in animals). Thus, even if you believe the only thing holding up development of the Ebola vaccine was funds, it’s still false to argue that with more money we’d have an Ebola vaccine. Vaccine and drug development just simply doesn’t work this way. There are long lists of projects, in both the public and private sector that have been very well-funded, and still failed.

It is a gross overtrivialization of even the directed scientific process involved in developing vaccines to suggest that simply by spending more money on something you are guaranteed a product. And, if I were in Congress, frankly I’d be sick of hearing this kind of baloney, and would respond with a long list of things I’d been promised by previous NIH Directors if only we’d spend more money on them.

Eisen would rather have the case made for basic research, emphasizing that without decades of funding for it, we wouldn't even be in the position to try making an Ebola vaccine at all. But that doesn't grab enough headlines. Better to pander to the Disease of the Week, and tell everyone that if only you had more cash, you most certainly would have done something about it by now.
[Image: chart of the NIH budget over the last twenty years, inflation-adjusted]
I'll go one further. Here's a graph of the NIH budget over the last twenty years (inflation adjusted). You will note the large rise during the last part of the Clinton years and during G. W. Bush's first term; the fact that this trajectory did not continue has been the source of a good deal of turmoil over the years. There was also a blast of funding in the stimulus-spending package in 2009, which has also not been repeated, and I believe that this has caused similar disruption. Neglecting that, the NIH budget has been in the $30-$35 billion range (inflation-adjusted) for many years, although it is down around 10% from its peak in 2004. (Note that 2013 is estimated in this chart - the real number dips just below the $30 billion line).

This, then, is what a "slashed budget" looks like, from above, anyway. The classic Washington way of referring to budget cuts, though, comes from smaller increases than were planned. So if a program was originally budgeted to get 10% more funding, but it only ends up getting 5% more, then everyone who advocates for it goes out and says that spending on it was cut in half. And if it was supposed to stay even and instead shrinks by 2%, well, you can imagine. That's when the so-called "Washington Monument" strategy kicks in - if they decide to freeze or cut the Park Service budget, the first thing you do is close the Washington Monument, so as to make a highly visible and annoying case for your agency to have the money, anyway.

Now, there is room to complain about the allocations inside the NIH itself - there are programs where less money really is being spent. But while the overall budget for the agency has declined, it hasn't done so in the massive clear-cutting fashion that you might imagine if you read, for example, editorials by Francis Collins. But he's just doing what every other agency head does - bang the drum for their vital, essential funding. It would be a better world if Michael Eisen's recommendation of pitching basic research funding were to be effective, but I'm not sure that we live there. Where we live, we get. . .well, bullshit instead.

Update: man, the comments are rolling in on this one. So to clarify things, let me say that I think that NIH is a good thing, and should be well funded. But I also think that trying to get it funded by saying that we surely would have had an Ebola vaccine by now is not a good thing to do - it's not very effective, for one thing, because this is the same sort of story that always gets used, and it's also not very accurate. I realize that good things are often accomplished by less exalted means, but you'd hope that there would be better means than this.

Comments (84) + TrackBacks (0) | Category: Academia (vs. Industry) | Infectious Diseases

October 23, 2014

Atmospheric Conditions

Posted by Derek

Well, I've been busy sciencing away all morning, and we're having the kind of weather outside that makes a person want to stay indoors and do chemistry: rainy, chilly, and windy. You wonder why more big chemical discoveries don't come out of the places that have these sorts of conditions all the time!

Air handling and climate control notwithstanding, this sort of weather naturally raises the humidity, and if I were having to worry about tiny moisture-sensitive reactions, these are not the conditions I would pick. But neither were the conditions back in the Southern US in the summertime - some afternoons down there, the humidity is like sticking your head into a dishwasher. How anyone managed to get chemistry done under those conditions in the days before air conditioning is a mystery to me.

My German lab, on my post-doc, was not air-conditioned (in keeping with much of the rest of the country) and had windows that opened, as very few other labs I've worked in ever have. But a German summer is not like an Arkansas one - in fact, every so often, a German summer can resemble an Arkansas winter. Even so, we did have to watch the air-sensitive reagents, because conditions certainly varied. My summer research in Arkansas, though, back when I was an undergrad, was conducted on the fourth floor of a building with no windows, so when the air conditioning went out there, we basically had to flee after a while. The plastic caps used on the old ether cans would come popping off in the heat, and that was a pretty good sign that it was time to pack it in for the day.

And my grad-school lab was also a windowless cave, thanks to the design of the building, but I really didn't get to experience un-air-conditioned chemistry in there. If the AC was down, it meant that the whole air handling system was messed up, which meant that the lab itself rapidly became uninhabitable. Decades of grad student-led contamination led to a pestilential funk that you could breast-stroke through; there was no way that I was going to hang around and experience it for any longer than I had to.

But all this is first-world complaining - I've had Indian colleagues, among others, describe really severe climate-influenced lab work. So feel free to add your worst examples in the comments, but expect the folks with experience in the tropics to win the competition!

Comments (53) + TrackBacks (0) | Category: Life in the Drug Labs

October 22, 2014

Green Coffee Beans Will Mostly Slim Your Wallet

Posted by Derek

Very few readers of this site are likely to have a good opinion of Dr. Oz (I certainly don't). And very few readers will be surprised to hear that one of his highly-touted miracle weight loss regimens - green coffee bean extract (GCA) - has turned out to be a load of faked-up nonsense. Retraction Watch has the details, and let's just say that the clinical trial results were. . .a little bit below the desired standard:

The FTC charges that the study’s lead investigator repeatedly altered the weights and other key measurements of the subjects, changed the length of the trial, and misstated which subjects were taking the placebo or GCA during the trial. When the lead investigator was unable to get the study published, the FTC says that AFS hired researchers Joe Vinson and Bryan Burnham at the University of Scranton to rewrite it. Despite receiving conflicting data, Vinson, Burnham, and AFS never verified the authenticity of the information used in the study, according to the complaint.

Other than that, the study was just fine, I guess. Sheesh. I have to admit, that's even worse than I had pictured, and that's saying a lot. Dr. Oz himself, though, will probably not even note this in passing. Too many other miracle cures to peddle, too many TV slots to fill. He's a busy man, you know.

Update: the show has released a rather bland statement about this whole affair, but has also apparently scrubbed the web site of any mention of green coffee beans, had videos taken down at YouTube, and so on. So that's all right, then!

Comments (20) + TrackBacks (0) | Category: Snake Oil

Roche Rebuilds

Posted by Derek

Roche has announced ambitious plans for new buildings at its home base in Basel:

They're building a new home for John Reed and Roche's pRED research group in Basel – and the pharma giant is thinking big. Roche said today that it is committing $1.8 billion to build a new research center in Switzerland which will encompass 4 new office/lab buildings that will house 1,900 R&D staffers.

The first step of the process will involve construction of an in vivo center for animal research, slated for completion in 2018. And Roche plans to clear away older buildings to make way for a new office tower as part of a wider building plan that will cost $3.2 billion.

I saw the current new Roche office tower when I was in Basel recently - it's hard to miss - and I can't say that it's much of an ornament to the skyline. (People told me that it had originally been planned as more of an architectural statement, but that all that had been scaled back because of the cost, which seemed quite believably Swiss).

Comments (30) + TrackBacks (0) | Category: Business and Markets

Improving the Old-Fashioned Reaction Workup

Posted by Derek

Here's something new: working up a reaction. The authors say that they have a porous polymer that adsorbs organic compounds from aqueous reaction mixtures, allowing you to just stir and filter rather than doing a liquid/liquid extraction. The adsorbed material can then be taken right to chromatography, as if you'd adsorbed your compound onto any other solid support, or just washed with solvent to liberate the crude product.

I have colleagues who will be trying this out soon, and I'll report on their experience with the stuff. If it really is widely applicable, it could be a nice addition to the parallel synthesis and flow chemistry worlds (pumping a crude reaction through a cartridge of adsorbent polymer could be a fast way to do workups and solvent switches).

Comments (14) + TrackBacks (0) | Category: Chemical News

Phenylalanine Crystals

Posted by Derek

No matter how long you've been doing chemistry, there are still things that you come across that surprise you. Did you know that plain old L-phenylalanine has been one of the most difficult subjects ever for small-molecule crystallography? I sure didn't. But people have tried for decades to grow good enough crystals of it to decide what space group it's in. One big problem has been the presence of several polymorphs (see blog posts here and here), but it looks like the paper linked above has finally straightened things out.

Comments (2) + TrackBacks (0) | Category: Analytical Chemistry | Chemical News

October 21, 2014

Oxygenated Nanobubbles. For Real?

Posted by Derek

A longtime reader sent along this article, just based on the headline. "This headline triggers instant skepticism in me", he said, and I agree. "Potential to treat Alzheimer's" is both a bold and a weaselly statement to make. The weasel part is that sure, anything has the "potential" to do that, but the boldness lies in the fact that so far, nothing ever has. There are a couple of very weak symptomatic treatments out there, but as far as actually addressing the disease, the clinical success rate is a flat zero. But that's not stopping these folks:

“The impact of RNS60 on Alzheimer’s disease as outlined in our studies presents new opportunities for hope and deeper research in treating a disease that currently cannot be prevented, cured or even successfully managed,” said Dr. Kalipada Pahan, professor of neurological sciences, biochemistry and pharmacology and the Floyd A. Davis, M.D., endowed chair of neurology at the Rush University Medical Center. “Our findings sparked tremendous excitement for RNS60, identifying an opportunity for advanced research to develop a novel treatment to help the rapidly increasing number of Alzheimer’s disease and dementia patients.”

Well, good luck to everyone. But what, exactly, is RNS60, and who is Revalesio, the company developing it? I started reading up on that, and got more puzzled the further I went. That press release described RNS60 as "a therapeutic saline containing highly potent charge-stabilized nanostructures (CSNs) that decrease inflammation and cell death." That didn't help much. Going to the company's web site, I found this:

Revalesio is developing a novel category of therapeutics for the treatment of inflammatory diseases using its proprietary charge-stabilized nanostructure (CSN) technology. Revalesio’s products are created using a patented device that generates rotational forces, cavitation and high-energy fluid dynamics to create unique, stable nanostructures in liquids. CSNs are less than 100 nanometers in size (for reference, the width of a single strand of hair is 100,000 nanometers) and are established through the combination of an ionic scaffold and a nano-sized oxygen bubble core.

RNS60 is Revalesio’s lead product candidate based upon CSN technology. RNS60 is normal, medical-grade, isotonic saline processed with Revalesio’s technology. RNS60 does not contain a traditional active pharmaceutical ingredient and offers a unique and groundbreaking approach to treating diseases . . .

OK, then. If I'm getting this right, this is saline solution with extremely small bubbles of oxygen in it. I'm not familiar with the "nanobubble" literature, so I can't say if these things exist or not. I'm unwilling to say that they don't, because a lot of odd things happen down at that small scale, and water is a notoriously weird substance. The size of the bubbles they're talking about would be what, a few hundred oxygen molecules across? Even proving that these structures exist and characterizing them would presumably be a major challenge, analytically, but I have some more reading to do on all that.
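As a rough check on that estimate (my own arithmetic, taking the company's sub-100 nm figure at its upper end and an O2 kinetic diameter of about 0.35 nm):

100 nm / 0.35 nm ≈ 300 molecules across the diameter

So "a few hundred oxygen molecules across" is about the right order of magnitude, at least for the largest of the proposed structures.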

My problem is that there have been many, many, odd water products reported over the years that involve some sort of nanostructure in the solution phase. And by "odd", I mean fraudulent. Just do a quick Google search for any combination of phrases in that area, and the stuff will come gushing out - all sorts of claims about how the water being sold is so, so, different, because it has different clusters and layers and what have you. My second problem is that there have been many, many odd products reported over the years that claim to be some sort of "oxygenated" water. Do a Google search for that, but stand back, because you're about to be assaulted by page after page of wild-eyed scam artists. Super-oxygen miracle water has been a staple of the health scam business for decades now.

So the Revalesio people have a real challenge on their hands to distinguish themselves from an absolute horde of crackpots and charlatans. The web site says that these oxygen nanobubbles "have a stabilizing effect on the cell membrane", which modulates signaling of the PI3K pathway. The thing is, there are a number of publications on this stuff, in real journals, which is not the sort of thing you find for your typical Internet Wonder Water. The president of the company is a longtime Eli Lilly executive as well, which is also rather atypical for the fringe. Here's one from J. Biol. Chem., and here's one on nanoparticle handling from the Journal of Physical Chemistry. The current neuronal protection work is in two papers in PLOS ONE, here and here.

I'm baffled. These papers talk about various cellular pathways being affected (PI3K, ATP production, phosphorylation of tau, NF-κB activation, and so on), which is a pretty broad range of effects. It's a bit hard to see how something with such effects could always be positive, but paper after paper talks about benefits for models of Parkinson's, multiple sclerosis, exercise physiology, and now Alzheimer's. A common thread could indeed be inflammation pathways, though, so I can't dismiss these mechanisms out of hand. But then there's this paper, which says that drinking this water after exercise improves muscle recovery, and I'm just having all kinds of trouble picturing how these nanostructured bubbles make it intact out of the gut and into the circulation. If they're sticking all over cell membranes, don't they do that to every cell they come in contact with? Are there noticeable effects in the gut wall or the vascular endothelium? What are the pharmacokinetics of nanobubbles of oxygen, and how the heck do you tell (other than maybe with a radiolabel)? I'm writing this blog entry on the train, where I don't have access to all these journal articles, but it'll be interesting to see how these things are addressed. (If I were running a program like this one, and assuming that my head didn't explode from all the cognitive dissonance, I'd be trying it out in Crohn's and IBD, I think - or do all the nanobubbles get absorbed before they make it to the colon?)

So I'm putting this out there to see if everyone else gets the same expressions on their faces as I do when I look this over. Anyone have any more details on this stuff?

Comments (91) + TrackBacks (0) | Category: Biological News

October 20, 2014

Compound Properties: Starting a Renunciation

Posted by Derek

I've been thinking a lot recently about compound properties, and what we use them for. My own opinions on this subject have been changing over the years, and I'm interested to see if I have any company on this.

First off, why do we measure things like cLogP, polar surface area, aromatic ring count, and all the others? A quick (and not totally inaccurate) answer is "because we can", but what are we trying to accomplish? Well, we're trying to read the future a bit and decrease the horrendous failure rates for drug candidates, of course. And the two aspects that compound properties are supposed to help with are PK and tox.

Of the two, pharmacokinetics is the one with the better shot at relevance. But how fine-grained can we be with our measurements? I don't think it's controversial to say that compounds with really high cLogP values are going to have, on average, more difficult PK, for various reasons. Compounds with lots of aromatic rings in them are, on average, going to have more difficult PK, too. But how much is "lots" or "really high"? That's the problem, because I don't think that you can draw a useful line and say that things on one side of it are mostly fine, and things on the other are mostly not. There's too much overlap, and too many exceptions. The best you can hope for, if you're into line-drawing, is to draw one up pretty far into the possible range and say that things below it may or may not be OK, but things above it have a greater chance of being bad. (This, to my mind, is all that we mean by all the "Rule of 5" stuff). But what good does that do? Everyone doing drug discovery already knows that much, or should. Where we get into trouble is when we treat these lines as if they were made of electrified barbed wire.
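For what it's worth, generating these numbers is the trivial part. Here's a minimal sketch in Python using RDKit (my own choice of toolkit; the cutoffs are just the familiar Lipinski ones, not a recommendation) of the sort of property profiling under discussion:

from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski, rdMolDescriptors

def property_profile(smiles):
    # Compute the usual suspects: MW, cLogP, TPSA, aromatic ring count, H-bond donors/acceptors
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    props = {
        "MW": Descriptors.MolWt(mol),
        "cLogP": Descriptors.MolLogP(mol),               # Crippen estimate
        "TPSA": Descriptors.TPSA(mol),
        "AromaticRings": rdMolDescriptors.CalcNumAromaticRings(mol),
        "HBD": Lipinski.NumHDonors(mol),
        "HBA": Lipinski.NumHAcceptors(mol),
    }
    # Count Rule-of-5 violations - a soft flag, not a verdict on the compound
    props["Ro5_violations"] = sum([
        props["MW"] > 500,
        props["cLogP"] > 5,
        props["HBD"] > 5,
        props["HBA"] > 10,
    ])
    return props

print(property_profile("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin, as a sanity check

The hard part, as the rest of this post argues, is deciding how much weight any of those numbers deserves once you have them.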

That's because of a larger problem with metrics aimed at PK: PK is relatively easy data to get. When in doubt, you should just dose the compound and find out. This makes predicting PK problems a lower-value proposition - the real killer application would be predicting toxicology problems. I fear that over the years many rule-of-five zealots have confused these two fields, out of a natural hope that something can be done about the latter (or perhaps out of thinking that the two are more related than they really are). That's unfortunate, because to my mind, this is where compound property metrics get even less useful. That recent AstraZeneca paper has had me thinking, the one where they state that they can't reproduce the trends reported by Pfizer's group on the influences of compound properties. If you really can take two reasonably-sized sets of drug discovery data and come to opposite conclusions about this issue, what hope does this approach have?

Toxicology is just too complicated, I think, for us to expect that any simple property metrics can tell us enough to be useful. That's really annoying, because we could all really use something like that. But increasingly, I think we're still on our own, where we've always been, and that we're just trying to make ourselves feel better when we think otherwise. That problem is particularly acute as you go up the management ladder. Avoiding painful tox-driven failures is such a desirable goal that people are tempted to reach for just about anything reasonable-sounding that holds out hope for it. And this one (compound property space policing) has many other tempting advantages - it's cheap to implement, easy to measure, and produces piles of numbers that make for data-rich presentations. Even the managers who don't really know much chemistry can grasp the ideas behind it. How can it not be a good thing?

Especially when the alternative is so, so. . .empirical. So case-by-case. So disappointingly back-to-where-we-started. I mean, getting up in front of the higher-ups and telling them that no, we're not doing ourselves much good by whacking people about aromatic ring counts and nitrogen atom counts and PSA counts, etc., that we're just going to have to take the compounds forward and wait and see like we always have. . .that doesn't sound like much fun, does it? This isn't what anyone is wanting to hear. You're going to do a lot better if you can tell people that you've Identified The Problem, and How to Address It, and that this strategy is being implemented right now, and here are the numbers to prove it. Saying, in effect, that we can't do anything about it runs the risk of finding yourself replaced by someone who will say that we can.

But all that said, I really am losing faith in property-space metrics as a way to address toxicology. The only thing I'm holding on to are some of the structure-based criteria. I really do, for example, think that quinones are bad news. I think if you advance a hundred quinones into the clinic, that a far higher percentage of them will fail due to tox and side effects than a hundred broadly similar non-quinones. Same goes for rhodanines, and a few other classes, those "aces in the PAINS deck" I referred to the other day. I'm still less doctrinaire about functional groups than I used to be, but I still have a few that I balk at.

And yes, I know that there are drugs with all these groups in them. But if you look at the quinones, for example, you find mostly cytotoxics and anti-infectives which are cytotoxins with some selectivity for non-mammalian cells. If you're aiming at a particularly nasty target (resistant malaria, pancreatic cancer), go ahead and pull out all the stops. But I don't think anyone should cheerfully plow ahead with such structures unless there are such mitigating circumstances, or at least not without realizing the risks that they're taking on.

But this doesn't do us much good, either - most medicinal chemists don't want to advance such compounds anyway. In fact, rather than being too permissive about things like quinones, most of us are probably too conservative about the sorts of structures we're willing to deal with. There are a lot of funny-looking drugs out there, as it never hurts to remind oneself. Peeling off the outer fringe of these (and quinones are indeed the outer fringe) isn't going to increase anyone's success rate much. So what to do?

I don't have a good answer for that one. I wish I did. It's a rare case when we can say, just by looking at its structure, that a particular compound just won't work. I've been hoping that the percentages would allow us to say more than that about more compounds. But I'm really not sure that they do, at least not to the extent that we need them to, and I worry that we're kidding ourselves when we pretend otherwise.

Comments (32) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico

October 17, 2014

More on "Metabolite Likeness" as a Predictor

Posted by Derek

A recent computational paper that suggested that similarity to known metabolites could help predict successful drug candidates brought in a lot of comments around here. Now the folks at Cambridge MedChem Consulting have another look at it here.

The big concern (as was expressed by some commenters here as well) is the Tanimoto similarity cutoff of 0.5. Does that make everything look too similar, or not? CMC has some numbers across different data sets, and suggests that this cutoff is, in fact, too permissive to allow for much discrimination. People with access to good comparison sets of compounds that made it and compounds that didn't - basically, computational chemists inside large industrial drug discovery organizations - will have a better chance to see how all this holds up.
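For readers who want to see what a 0.5 cutoff means in practice, here's a minimal sketch in Python using RDKit. The fingerprint type, radius, and bit length are my own choices for illustration - the paper and the CMC post may well have used different settings, which is part of why a bare cutoff is hard to interpret:

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles_a, smiles_b, radius=2, nbits=2048):
    # Morgan (ECFP-like) fingerprints, then Tanimoto similarity between them
    mol_a = Chem.MolFromSmiles(smiles_a)
    mol_b = Chem.MolFromSmiles(smiles_b)
    fp_a = AllChem.GetMorganFingerprintAsBitVect(mol_a, radius, nBits=nbits)
    fp_b = AllChem.GetMorganFingerprintAsBitVect(mol_b, radius, nBits=nbits)
    return DataStructs.TanimotoSimilarity(fp_a, fp_b)

# A compound counts as "metabolite-like" under the paper's criterion if its
# similarity to any known metabolite clears the cutoff
sim = tanimoto("NC(Cc1ccccc1)C(=O)O", "NCCc1ccccc1")   # phenylalanine vs. phenethylamine
print(sim, sim >= 0.5)

Note that the number you get depends strongly on the fingerprint you pick, which is one more reason to be careful about what a fixed cutoff really buys you.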

Comments (6) + TrackBacks (0) | Category: Drug Development | In Silico

Different Screening, Different Thermodynamics?

Posted by Derek

Chris Lipinski and the folks at Collaborative Drug Discovery send word of an interesting webinar that will take place this coming Wednesday (October 22nd) at 2 PM EST. It's on enthalpic and entropic trends in ligand binding, and how various screening and discovery techniques might bias these significantly.

Here's the registration page if you're interested. I'm curious about what they've turned up - my understanding is that it will explore, among other things, the differences in molecules selected by industry-trained medicinal chemists versus the sorts that are reported by more academic chemical biologists. As has come up here several times in the past, there certainly do seem to be some splits there, and the CDD people seem to have some numbers to back up those impressions.
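As background for that discussion (standard thermodynamics, not anything from the webinar itself), the quantities being compared come from the usual decomposition of the binding free energy:

\Delta G^{\circ}_{\mathrm{bind}} = \Delta H - T\Delta S = RT \ln K_d

Two compounds with identical potency (the same K_d; at 298 K, each factor of ten in K_d is worth about 1.4 kcal/mol) can get there with very different enthalpy/entropy mixes, and that's the sort of split between compound populations that the CDD data apparently address.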

Comments (7) + TrackBacks (0) | Category: Academia (vs. Industry) | Chemical Biology | Drug Assays

October 16, 2014

The Electromagnetic Field Stem Cell Authors Respond

Posted by Derek

The authors of the ACS Nano paper on using electromagnetic fields to produce stem cells have responded on PubPeer. They have a good deal to say on the issues around the images in their paper (see the link), and I don't think that argument is over yet. But here's what they have on criticisms of their paper in general:

Nowhere in our manuscript do we claim “iPSCs can be made using magnetic fields”. This would be highly suspect indeed. Rather, we demonstrate that in the context of highly reproducible and well-established reprogramming to pluripotency with the Yamanaka factors (Oct4, Sox2, Klf4, and cMyc/or Oct4 alone), EMF influences the efficiency of this process. Such a result is, to us, not surprising given that EMF has long been noted to have effects on biological system(Adey 1993, Del Vecchio et al. 2009, Juutilainen 2005)(There are a thousand of papers for biological effects of EMF on Pubmed) and given that numerous other environmental parameters are well-known to influence reprogramming by the Yamanaka factors, including Oxygen tension (Yoshida et al. 2009), the presence of Vitamin C (Esteban et al. 2010), among countless other examples.

For individuals such as Brookes and Lowe to immediately discount the validity of the findings without actually attempting to reproduce the central experimental finding is not only non-scientific, but borders on slanderous. We suggest that these individuals take their skepticism to the laboratory bench so that something productive can result from the time they invest prior to their criticizing the work of others.

That "borders on slanderous" part does not do the authors any favors, because it's a rather silly position to take. When you publish a paper, you have opened the floor to critical responses. I'm a medicinal chemist - no one is going to want to let me into their stem cell lab, and I don't blame them. But I'm also familiar with the scientific literature enough to wonder what a paper on this subject is doing in ACS Nano and whether its results are valid. I note that the paper itself states that ". . .this physical energy can affect cell fate changes and is essential for reprogramming to pluripotency."

If it makes the authors feel better, I'll rephrase: their paper claims that iPSCs can be made more efficiently by adding electromagnetic fields to the standard transforming-factor mixture. (And they also claim that canceling out the Earth's magnetic field greatly slows this process down). These are very interesting and surprising results, and my first impulse is to wonder if they're valid. That's my first impulse every time I read something interesting and surprising, by the way, so the authors shouldn't take this personally.

There are indeed many papers in PubMed on the effects of electromagnetic fields on cellular processes. But this area has also been very controversial, and (as an outside observer) my strong impression is that there have been many problems with irreproducibility. I have no doubt that people with expertise in stem cell biology will be taking a look at this report and trying to reproduce it as well, and I am eager to see what happens next.

Comments (27) + TrackBacks (0) | Category: Biological News | The Scientific Literature

What's The Going Rate These Days?

Posted by Derek

Time to break out the pseudonyms for the comments section. I've had a couple of people asking (on both sides of the process) what the starting salaries for medicinal chemists are running in the Boston/Cambridge area. It's been a while since this was much of a topic, sad to say, but there is some hiring going on these days, and people are trying to get a feel for what the going rates are. Companies want to make sure that they're making competitive-but-not-too-generous offers, and applicants want to make sure that they're getting a reasonable one, too, naturally.

So anyone with actual data is invited to leave it in the comments section, under whatever name you like. Reports from outside the Boston/Cambridge area (and at other experience levels) are certainly welcome, too, because the same issues apply in other places as well.

Comments (126) + TrackBacks (0) | Category: Business and Markets | How To Get a Pharma Job

No More Varian

Posted by Derek

This week has brought news that Agilent is getting out of the NMR business, which brings an end to the Varian line of machines, one of the oldest in the business. (Agilent bought Varian in 2010). The first NMR I ever used was a Varian EM-360, which was the workhorse teaching instrument back then. A full 60 MHz of continuous wave for your resolving pleasure - Fourier transform? Superconducting magnets? Luxury! Why, we used to dream of. . .

I used many others in the years to come. But over time, the number of players in the NMR hardware market has contracted. You used to be able to walk into a good-sized NMR room and see machines from Varian, Bruker, JEOL, Oxford, GE (edit - added them) and once in a while an oddity like the 80-MHz IBM-brand machine that I used to use at Duke thirty years ago. No more - Bruker is now the major player. Their machines are good ones (and they've been in the business a while, too), but I do wish that they had some competition to keep them on their toes.

How come there isn't any? It's not that NMR spectroscopy is a dying art. It's as useful as ever, if not even more so. But I think that the market for equipment is pretty saturated. Every big company and university has plenty of capacity, and will buy a new machine only once in a while. The smaller companies are usually fixed pretty well, too, thanks to the used equipment market. And most of those colleges that used to have something less than a standard 300 MHz magnet have worked their way up to one.

There's not much room for a new company to come in and say that their high-field magnets are so much better than the existing ones, either, because the hardware has also reached something of a plateau. You can go out and buy a 700 MHz instrument (and Bruker no doubt wishes that you would), and that's enough to do pretty much any NMR experiment that you can think of. 1000 MHz instruments exist, but I'm not sure how many times you run into a situation where one of those would do the job for you, but a 700 wouldn't. I'm pretty sure that no one even knows how to build a 2000 MHz NMR, but if they did, the number sold would probably be countable on the fingers of one hand. Someone would have to invent a great reason for such a machine to exist - this isn't supercomputing, where the known applications can soak up all the power you can throw at them.
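To put those frequencies in hardware terms (standard proton NMR relations, my own arithmetic): the quoted number is the 1H Larmor frequency, so the field required scales directly with it,

\nu = \frac{\gamma}{2\pi} B_0, \qquad \frac{\gamma}{2\pi}(^{1}\mathrm{H}) \approx 42.58\ \mathrm{MHz/T}

which works out to roughly 16.4 T for a 700 MHz magnet, 23.5 T for 1000 MHz, and about 47 T for a hypothetical 2000 MHz instrument - the last of which is beyond anything a persistent superconducting magnet has yet managed, which is a big part of why no one knows how to build one.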

So farewell to the line of Varian NMR machines. Generations of chemists have used their equipment, but Bruker is the one left standing.

Comments (42) + TrackBacks (0) | Category: Analytical Chemistry

October 15, 2014

Not The Sort Of Thing You'd Work With, Given a Choice

Posted by Derek

Here's a paper that illustrates a different way of looking at the world than many medicinal chemists would have. It discusses inhibitors of SETD8, an unusual epigenetic enzyme that is the only known methyltransferase to target lysine 20 of histone H4. Inhibitors of it would help to unravel just what functions that modification has, presumably several that no other pathway is quite handling. But finding decent methyltransferase inhibitors has not been easy.
[Image: structures of the quinone-containing SETD8 inhibitors discussed in this post]
When you search for them, actually, you find compounds like the ones in this paper. Most medicinal chemists will look at these, say the word "quinone", perhaps take a moment to spit on the floor or add a rude adjective, and move on to see if there's anything better to look at. Quinones are that unpopular, and with good reason. They're redox-active, can pick up nucleophiles as Michael acceptors, react with amines - they have a whole suite of unattractive behaviors. And that explains their profile in cells and whole animals, with a range of toxic, carcinogenic, and immunologic liabilities. A lot of very active natural products have a quinone in them - it's a real warhead. No medicinal chemist with any experience would feel good about trying to advance one as a lead compound, and (for the same reasons) they tend to make poor tool compounds as well. You just don't know what else they're hitting, and the chances of them hitting something else are too high.

The authors of this paper, though, have a higher tolerance:

In the present work, we characterize these compounds and demonstrate that NSC663284, BVT948, and ryuvidine (3 out of the 4 HTS hits) inhibit SETD8 via different modes. NSC663284 (SPS8I1), ryuvidine (SPS8I2), and BVT948 (SPS8I3) efficiently and selectively suppress cellular H4K20me1 at doses lower than 5 μM within 24 h. . . The cells treated with SPS8I1−3 (Small-molecule Pool of SETD8 Inhibitor) recapitulate cell-cycle-arrest phenotypes similar to what were reported for knocking down SETD8 by RNAi. Given that the three compounds have distinct structures and inhibit SETD8 in different manners, they can be employed collectively as chemical genetic tools to interrogate SETD8-involved methylation.

I would be very careful about doing that, myself. I don't find those structures as distinct as all that (quinone, quinone, quinone), and I'm not surprised to find that they arrest the cell cycle. But do they do it via SETD8? To be fair, they do show selectivity over the other enzymes used in the screening panel (SETD7, SETD2, and GLP). They went on to profile them against several lysine methyltransferases and several arginine methyltransferases. The most selective of the bunch was 2.5x more active against SETD8 compared to the next most active target, which is honestly not a whole lot. (And I note that the authors spent some time a few paragraphs before talking about how their activity measurements are necessarily uncertain).

They do address the quinone problem, but in a somewhat otherworldly manner:

Given that SPS8I1−3 are structurally distinct except for their quinonic moiety (Figure 1a, highlighted in red), we reasoned that they may act on SETD8 differently (e.g., dependence on cofactor or substrate). . .

This, to many medicinal chemists, is a bit like saying that several species of poisonous snake are distinct except for their venom-filled fangs. The paper does seem to find differences in how the three inhibitors respond to varying substrate concentrations, but they also find (unsurprisingly) that all three work by covalent inhibition. Studies on mutant forms of the enzyme suggest strongly that two of the compounds are hitting a particular Cys residue (270), while the third "may target Cys residues in a more general manner". To their credit, they did try three quinone-containing compounds from commercial sources and found them inactive, but that just shows that not every quinone inhibits their enzyme.

This, too, is what you'd expect: if you did a full proteome analysis of what a given quinone compound hits, I'm sure that you'd find varying fingerprints for each one. But even though I have no objection to covalent inhibitors per se, I'm nervous about ones that have so many potential mechanisms. The size and shape of the three compounds shown will surely keep them from doing all the damage that a smaller quinone is capable of doing, but I fear that there's still plenty of damage in them.

Indeed, when they do cell assays, they find that each of the compounds has a somewhat different profile of cell cycle arrest, and say that this is probably due to their off-target effects. But they go on to wind things up like this:

Structurally distinct SPS8I1−3 also display different modes of SETD8 inhibition. Such differences also make SPS8I1−3 less likely to act on other common cellular targets besides SETD8. As a result, the shared phenotypes of the 3 compounds are expected to be associated with SETD8 inhibition. . .Such robust inhibition of SETD8 by SPS8I1−3, together with their different off-target effects, argues that these compounds can be used collectively as SETD8 inhibitors to offset off-target effects of individual reagents. At this stage, we envision using all three compounds to examine SETD8 inhibition and then focusing on the phenotypes shared by all of them.

I have to disagree there. I would be quite worried about how many other cellular processes are being disrupted by these compounds. In fact, the authors already point to some of these. Their SPS8I1, they note, has already been reported as a CDC25 inhibitor. SPS8I2 has been shown to be a CDK2/4 inhibitor, and SPS8I3 has been reported as an inhibitor of a whole list of protein tyrosine phosphatases. None of these enzymes, I would guess, has any particularly great structural homology with SETD8, and those activities are surely only the beginning. How is all this to be untangled? Using all three of them to study the same system is likely to just confuse things more rather than throwing light on common mechanisms. Consider the background: even a wonderful, perfectly selective SETD8 inhibitor would be expected to induce a complex set of phenotypes, varying with the cell type and the conditions.

And these are not wonderful inhibitors. They are quinones, aces in the deck of PAINS. No matter what, they need a great deal more characterization before any conclusions can be drawn from their activity. A charitable view of them would be that such characterization, along with a good deal of chemistry effort, might result in derivatives that have a decent chance of hitting SETD8 in a useful manner. An uncharitable view would be that they should be poured into the red waste can before they use up any more time and money.

Comments (42) + TrackBacks (0) | Category: Drug Assays

October 14, 2014

Combichem Into Drugs: How Many?

Posted by Derek

So here's a question I got from a reader the other day that I thought I'd put up on the site. How many drugs have there been whose origins were in combichem? I realize that this could be tricky to answer, because compound origins are sometimes forgotten or mysterious. But did the combichem boom of the 1990s produce any individual compound success stories?

Comments (49) + TrackBacks (0) | Category: Drug Assays | Drug Industry History

Electromagnetic Production of Stem Cells? Really?

Posted by Derek

Now this is an odd paper: its subject matter is unusual, where it's published is unusual, and it's also unusual that no one seems to have noticed it. I hadn't, either. A reader sent it along to me: "Electromagnetic Fields Mediate Efficient Cell Reprogramming into a Pluripotent State".

Yep, this paper says that stem cells can be produced from ordinary somatic cells by exposure to electromagnetic fields. Everyone will recall the furor that attended the reports that cells could be reprogrammed by exposure to weak acid baths (and the eventual tragic collapse of the whole business). So why isn't there more noise around this publication?

One answer might be that not many people who care about stem cell biology read ACS Nano, and there's probably something to that. But that immediately makes you wonder why the paper is appearing there to start with, because it's also hard to see how it relates to nanotechnology per se. An uncharitable guess would be that the manuscript made the rounds of several higher profile and/or more appropriate journals, and finally ended up where it is (I have no evidence for this, naturally, but I wouldn't be surprised to hear that this was the case).

So what does the paper itself have to say? It claims that "extremely low frequency electromagnetic fields" can cause somatic cells to transform into pluripotent cells, and that this process is mediated by EMF effects on a particular enzyme, the histone methyltransferase Mll2. That's an H3K4 methyltransferase, and it has been found to be potentially important in germline stem cells and spermatogenesis. Otherwise, I haven't seen anyone suggesting it as a master regulator of stem cell generation, but then, there's a lot that we don't know about epigenetics and stem cells.

There is, however, a lot that we do know about electromagnetism. Over the years, there have been uncountable reports of biological activity for electromagnetic fields. You can go back to the controversy over the effects of power lines in residential areas and the later disputes about the effects of cell phones, just to pick two that have had vast amounts of coverage. The problem is, no one seems to have been able to demonstrate anything definite in any of these cases. As far as I know, studies have either shown no real effects, or (when something has turned up), no one's been able to reproduce it. That goes both for laboratory studies and for attempts at observational or epidemiological studies, too: nothing definite, over and over.

There's probably a reason for that. What I have trouble with is the mechanism by which an enzyme gets induced by low-frequency electromagnetic fields, and that's always been the basic argument against such things. You almost have to assume new physics to make a strong connection, because nothing seems to fit: the energies involved are too weak, the absorptions don't match up, and so on. Or at least that's what I thought, but this paper has a whole string of references about how extremely low-frequency electromagnetic fields do all sorts of things to all sorts of cell types. But it's worth noting that the authors also reference papers showing that they're linked to cancer epidemiology, too. It's true, though, that if you do a Pubmed search for "low frequency electromagnetic field" you get a vast pile of references, although I'm really not sure about some of them.

The authors say that the maximum effect in their study was seen at 50 Hz, 1 mT. That is indeed really, really low frequency - the wavelength for a radio signal down there is about 6000 kilometers. Just getting antennas to work in that range is a major challenge, and it's hard for me to picture how subcellular structures could respond to these wavelengths at all. There seem to be all sorts of theories in the literature about how enzyme-level and transcription-level effects might be achieved, but no consensus (from what I can see). Most of the mechanistic discussions I've seen avoid the question entirely - they talk about what enzyme system or signaling pathway might be the "mechanism" for the reported effects, but skip over the big question of how these effects might arise in the first place.
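To put some numbers on the "energies are too weak" argument (standard physics, my own arithmetic, not anything from the paper):

\lambda = \frac{c}{f} = \frac{3\times 10^{8}\ \mathrm{m/s}}{50\ \mathrm{Hz}} = 6\times 10^{6}\ \mathrm{m} \approx 6000\ \mathrm{km}

E = hf = (6.63\times 10^{-34}\ \mathrm{J\,s})(50\ \mathrm{Hz}) \approx 3\times 10^{-32}\ \mathrm{J} \approx 2\times 10^{-13}\ \mathrm{eV}

Compare that with thermal energy at 37 °C, kT ≈ 0.027 eV: a 50 Hz photon carries about eleven orders of magnitude less energy than the random thermal motion every enzyme already experiences, which is why a resonant or photochemical mechanism is so hard to credit. (Bulk field effects on moving charges are a separate argument, but 1 mT is only about twenty times the Earth's own field.)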

An even odder effect reported in this paper is that the authors also tried these experiments in a setup (a Helmholtz coil) that canceled out the usual environment of the Earth's magnetic field. They found that this worked much less efficiently, and suggest that the natural magnetic field must have epigenetic effects. I don't know what to make of that one, either. Normal cells grown under these conditions showed no effects, so the paper hypothesizes that some part of the pluripotency reprogramming process is exceptionally sensitive. Here, I'll let the authors summarize:

As one of the fundamental forces of nature, the EMF is a physical energy produced by electrically charged objects that can affect the movement of other charged objects in the field. Here we show that this physical energy can affect cell fate changes and is essential for reprogramming to pluripotency. Exposure of cell cultures to EMFs significantly improves reprogramming efficiency in somatic cells. Interestingly, EL-EMF exposure combined with only one Yamanaka factor, Oct4, can generate iPSCs, demonstrating that EL-EMF exposure can replace Sox2, Klf4, and c-Myc during reprogramming. These results open a new possibility for a novel method for efficient generation of iPSCs. Although many chemical factors or additional genes have been reported for the generation of iPSCs, limitations such as integration of foreign genetic elements or efficiency remain a challenge. Thus, EMF-induced cell fate changes may eventually provide a solution for efficient, noninvasive cell reprogramming strategies in regenerative medicine.

Interestingly, our results show that ES cells and fibroblasts themselves are not significantly affected by EMF exposure; rather, cells undergoing dramatic epigenetic changes such as reprogramming seem to be uniquely susceptible to the effects of EMFs. . .

I don't know what to make of this paper, or the whole field of research. Does anyone?

Update: PubPeer is now reporting some problems with images in the paper. Stay, uh, tuned. . .

Comments (36) + TrackBacks (0) | Category: Biological News

October 13, 2014

Alzheimer's in Cell Culture?

Posted by Derek

While we're talking about cell culture, there's some potentially significant news in Alzheimer's. The Tanzi lab at Mass General is reporting in Nature that they've been able to grow 3D neuronal cultures that actually reproduce the plaque-and-tangle symptoms of Alzheimer's. That's quite a surprise - neurons are notoriously badly behaved in vitro, and Alzheimer's has been a beast to model in any system at all. You can't even get neurons from human Alzheimer's patients to behave like that when you culture them (at least, I've never heard of it being done).

These new cultures apparently respond to secretase inhibitors, which on one level is good news - since you'd expect those compounds to have an effect on them. On the other hand, such compounds have been quite ineffective in human trials, so there's a disconnect here. Is there more to Alzheimer's that these cell cultures don't pick up, or are the compounds much less well-behaved in vivo (or both)?

This new system, if validated, would seem to open up a whole new avenue for phenotypic screening, which until now has been a lost cause where Alzheimer's is concerned. It's going to be quite interesting to see how this develops, and to see what it can teach us about the real disease. Nothing in this area has come easy, and a break would be welcome. The tricky part will be whether compounds that come out of such a screen will be telling us something about Alzheimer's, or just telling us something about the model. That's always the tricky part.

Update: FierceBiotech notes that Tanzi's "previous insights about Alzheimer's have run into some serious setbacks."

Comments (29) + TrackBacks (0) | Category: Alzheimer's Disease

Diabetes Progress

Posted by Derek

There have recently been some welcome developments in diabetes therapy, both Type I and Type II. For the latter, there's an interesting report of a metabolic uncoupling therapy in Nature Medicine. Weirdly, it uses a known tapeworm medication, niclosamide (specifically, the ethanolamine salt). It's toxic to worms by that same mechanism. If you uncouple oxidative phosphorylation and the electron-transport system in the mitochondria, you end up just chewing up lipids through respiration while not generating any ATP. That's what happens in brown fat (through the action of uncoupling proteins), and that's what's used in mammals for generating extra body heat. Many schemes for cranking this up in humans have been looked at over the years, but a full-scale mitochondrial uncoupling drug would be a nasty proposition (see, for example, dinitrophenol). DNP will indeed make you lose weight, while at the same time you ravenously try to eat your daily supply of ATP, but this is done at a significant risk of sudden death. (And anything that does a better job than DNP will just skip straight to the "sudden death" part). But niclosamide seems to be less efficacious, which in this case is a good thing.

This mechanism diminishes the fat content in liver and muscle tissue, which should improve insulin sensitivity and glucose uptake, and seems to do so very well in mouse models. The authors (Shengkan Jin and colleagues at Rutgers) have formed a company to try to take something in this area into humans. I wish them luck with that - this really could be a good thing for type II and metabolic-syndrome patients, but the idea has proven very difficult over the years. The tox profile is going to be key, naturally, and taking it into the clinic is really the only way to find out if it'll be acceptable.

The Type I news is even more dramatic: a group at Harvard (led by Doug Melton) report in Cell that they've been able to produce large quantities of glucose-sensitive beta-cells from stem cell precursors. People have been working towards this goal for years, and it hasn't been easy (you can get cells that secrete insulin, but don't sense glucose, for example, but you really don't want that in your body). Transplantation of these new cells into diabetic mice seems to roll back the disease state, so this is another one to try in humans. The tricky part is to keep the immune system from rejecting them (the problem with cell transplants for diabetes in general), but they've managed to protect them in the mouse models, and there's a lot of work going into this part of the idea as well for human trials. This could be very promising indeed, and could, if things go right, be a flat-out cure for many Type I patients. Now that would be an advance.

Comments (12) + TrackBacks (0) | Category: Diabetes and Obesity

October 10, 2014

More on Fluorescent Microscopy Chemistry Prizes

Posted by Derek

I wanted to note (with surprise!) that one of this year's Nobel laureates actually showed up in the comments section of the post I wrote about him. You'd think his schedule would be busier at the moment (!), but here's what he had to say:

A friend pointed this site/thread out to me. I apologize if I was unclear in the interview. #3 and #32 have it right -- I have too much respect for you guys, and don't deserve to be considered a chemist. My field is entirely dependent upon your good works, and I suspect I'll be personally more dependent upon your work as I age.

Cheers, Eric Betzig

And it's for sure that most of the readers around here are not physicists or optical engineers, either! I think science is too important for food fights about whose part of it is where - we're all working on Francis Bacon's program of "the effecting of all things possible", and there's plenty for everyone to do. Thanks very much to Betzig for taking the time to leave the clarification.

[Image: structure of the photoswitchable rhodamine probe]
[Image: super-resolution image of a labeled Caulobacter crescentus cell]
With that in mind, I was looking this morning at the various tabs I have open on my browser for blogging subjects, and noticed that one of them (from a week or so back) was a paper on super-resolution fluorescent probes. And it's from one of the other chemistry Nobel winners this year, William Moerner at Stanford! Shown is the rhodamine structure that they're using, which can switch from a nonfluorescent state to a highly fluorescent one. Moerner and his collaborators at Kent State investigated a series of substituted variants of this scaffold, and found one that seems to be nontoxic, very capable of surface labeling of bacterial cells, and is photoswitchable at a convenient wavelength. (Many other photoswitchable probes need UV wavelengths to work, which bacteria understandably don't care for very much).

Shown below the structure drawing is an example of the resolution this probe can provide, using Moerner's double-helix point-spread-function, which despite its name is not an elaborate football betting scheme. That's a single cell of Caulobacter crescentus, and you can see that the dye is almost entirely localized on the cell surface, and that ridiculously high resolutions can be obtained. Being able to resolve features inside and around bacterial cells is going to be very interesting in antibiotic development, and this is the kind of work that's making it possible.

Oh, and just a note: this is a JACS paper. A chemistry Nobel laureate's most recent paper shows up in a chemistry journal - that should make people happy!

Comments (8) + TrackBacks (0) | Category: General Scientific News

You'd Think That This Can't Be Correct

Email This Entry

Posted by Derek

Well, here's something to think about over the weekend. I last wrote here in 2011 about the "E-cat", a supposed alternative energy source being touted/developed by Italian inventor Andrea Rossi. Odd and not all that plausible claims of low-energy fusion reactions of nickel isotopes have been made for the device (see the comments section to that post above for more on this), and the whole thing definitely has been staying in my "Probably not real" file. Just to add one complication, Rossi's own past does not appear to be above reproach. And his conduct (and that of his coworker Sergio Focardi) would seem to be a bit strange during this whole affair.

But today there is a preprint (PDF) of another outside-opinion test of the device (thanks to Alex Tabarrok of Marginal Revolution on Twitter for the heads-up). It has several Swedish co-authors (three from Uppsala and one from the Royal Institute of Technology in Stockholm), and the language is mostly pretty measured. But what it has to say is quite unusual - if it's true.

The device itself is no longer surrounded by lead shielding, for one thing. No radiation of any kind appears to be emitted. The test went on for 32 days of continuous operation, and here's the take-home:

The quantity of heat emitted constantly by the reactor and the length of time during which the reactor was operating rule out, beyond any reasonable doubt, a chemical reaction as underlying its operation. This is emphasized by the fact that we stand considerably more than two order of magnitudes from the region of the Ragone plot occupied by conventional energy sources.

The fuel generating the excessive heat was analyzed with several methods before and after the experimental run. It was found that the Lithium and Nickel content in the fuel had the natural isotopic composition before the run, but after the 32 days run the isotopic composition has changed dramatically both for Lithium and Nickel. Such a change can only take place via nuclear reactions. It is thus clear that nuclear reactions have taken place in the burning process. This is also what can be suspected from the excessive heat being generated in the process.

Although we have good knowledge of the composition of the fuel we presently lack detailed information on the internal components of the reactor, and of the methods by which the reaction is primed. Since we are presently not in possession of this information, we think that any attempt to explain the E-Cat heating process would be too much hampered by the lack of this information, and thus we refrain from such discussions.

In summary, the performance of the E-Cat reactor is remarkable. We have a device giving heat energy compatible with nuclear transformations, but it operates at low energy and gives neither nuclear radioactive waste nor emits radiation. From basic general knowledge in nuclear physics this should not be possible. . .

Told you it was interesting. But I'm waiting for more independent verification. As long as Rossi et al. are so secretive about this device, the smell of fraud will continue to cling to it. I truly am wondering just what's going on here, though.

Update: Elforsk, the R&D arm of Sweden's power utility, has said that they want to investigate this further. Several professors from Uppsala reply that the whole thing is likely a scam, and that Elforsk shouldn't be taken in. Thanks to reader HL in the comments section, who notes that Google Translate does pretty well with Swedish-English.

Comments (38) + TrackBacks (0) | Category: General Scientific News

Things I Won't Work With: Peroxide Peroxides

Email This Entry

Posted by Derek

Everyone knows hydrogen peroxide, HOOH. And if you know it, you also know that it's well-behaved in dilute solution, and progressively less so as it gets concentrated. The 30% solution will go to work immediately bleaching you out if you are so careless as to spill some on you, and the 70% solution, which I haven't seen in years, provides an occasion to break out the chain-mail gloves.

Chemists who've been around that one know that I'm not using a figure of speech - the lab down the hall from me that used to use the stuff had a pair of spiffy woven-metal gloves for just that purpose. Part of the purpose, I believe, was to make you think very carefully about what you were doing as you put them on. Concentrated peroxide has a long history in rocketry, going back to the deeply alarming Me-163 fighter of World War II. (Being a test pilot for that must have taken some mighty nerves). Me, I have limits. I've used 30% peroxide many times, and would pick up a container of 70%, if I were properly garbed (think Tony Stark). But I'm not working with the higher grades under any circumstances whatsoever.

The reason for this trickiness is the weakness of the oxygen-oxygen bond. Oxygen already has a lot of electron density on it; it's quite electronegative. So it would much rather be involved with something from the other end of the scale, or at least the middle, rather than make a single bond to another pile of electrons like itself. Even double-bonded oxygen, the form that we breathe, is pretty reactive. And when those peroxides decompose, they turn into oxygen gas and fly off into entropic heaven, which is one of the same problems involved in having too many nitrogens in your molecule. There are a lot of things, unfortunately, that can lead to peroxide decomposition - all sorts of metal contaminants, light, spitting at them (most likely), and it doesn't take much. There are apparently hobbyists, though, who have taken the most concentrated peroxide available to them and distilled it to higher strengths. Given the impurities that might be present, and the friskiness of the stuff even when it's clean, this sounds like an extremely poor way to spend an afternoon, but there's no stopping some folks.

Any peroxide (O-O) bond is suspect, if you know what's good for you. Now, if it's part of a much larger molecule, then it's much less likely to go all ka-pow on you (thus the antimalarial drugs artemisinin and arterolane), but honestly, I would still politely turn down an offer to bang on a bunch of pure artemisinin with a hammer. It just seems wrong.

But I have to admit, I'd never thought much about the next analog of hydrogen peroxide. Instead of having two oxygens in there, why not three: HOOOH? Indeed, why not? This is a general principle that can be extended to many other similar situations. Instead of being locked in a self-storage unit with two rabid wolverines, why not three? Instead of having two liters of pyridine poured down your trousers, why not three? And so on - it's a liberating thought. It's true that adding more oxygen-oxygen bonds to a compound will eventually liberate the tiles from your floor and your windows from their frames, but that comes with the territory.

These thoughts were prompted by a recent paper in JACS that describes a new route to "dihydrogen trioxide", which I suppose is a more systematic name than "hydrogen perperoxide", my own choice. Colloquially, I would imagine that the compound is known as "Oh, @#&!", substituted with the most heartfelt word available when you realize that you've actually made the stuff. The current paper has a nice elimination route to it via a platinum complex, one that might be used to make a number of other unlikely molecules (if it can make HOOOH in 20% yield, it'll make a lot of other things, too, you'd figure). It's instantly recognizable in the NMR, with a chemical shift of 13.4 for those barely-attached-to-earth hydrogens.

But this route is actually pretty sane: it can be done on a small scale, in the cold, and the authors report no safety problems at all. And in general, most people working with these intermediates have been careful to keep things cold and dilute. Dihydrogen trioxide was first characterized in 1993 (rather late for such a simple molecule), but there had been some evidence for it in the 1960s (and it had been proposed in some reactions as far back as the 1880s). Here's a recent review of work on it. Needless to say, no one has ever been so foolhardy as to try to purify it to any sort of high concentration. I'm not sure how you'd do that, but I'm very sure that it's a bad, bad idea. This stuff is going to be much jumpier than plain old hydrogen peroxide (that oxygen in the middle of the molecule probably doesn't know what to do with itself), and I don't know how far you could get before everything goes through the ceiling.

But there are wilder poly-peroxides out there. If you want to really oxidize the crap out of things with this compound, you will turn to the "peroxone process". This is a combination of ozone and hydrogen peroxide, for those times when a single explosive oxidizing agent just won't do. I'm already on record as not wanting to isolate any ozone products, so as you can imagine, I really don't want to mess around with that and hydrogen peroxide at the same time. This brew generates substantial amounts of HOOOH, ozonide radicals, hydroxy radicals and all kinds of other hideous thingies, and the current thinking is that one of the intermediates is the HOOOOO- anion. Yep, five oxygens in a row - I did not type that with my elbows. You'll want the peroxone process if you're treating highly contaminated wastewater or the like: here's a look at using it for industrial remediation. One of the problems they had was that as they pumped ozone and peroxide into the contaminated site, the ozone kept seeping back up into the equipment trailer and setting off alarms as if the system were suddenly leaking, which must have been a lot of fun.

What I haven't seen anyone try is using this brew in organic synthesis. It's probably going to be a bit. . .uncontrolled, and lead to some peroxide products that will also have strong ideas of their own. But if you keep things dilute, you should be able to make it through. Anyone ever seen it used for a transformation?

Comments (53) + TrackBacks (0) | Category: Things I Won't Work With

October 9, 2014

The Most Common Heterocycles in Drugs

Email This Entry

Posted by Derek

What sorts of heterocycles show up the most in approved drugs? This question has been asked several times before in the literature, but it's always nice to see an update. This one is from the Njardarson group at Arizona, producers of the "Top 200 Drugs" posters.

84% of all unique small-molecule drugs approved by the FDA have at least one nitrogen atom in them, and 59% have some sort of nitrogen heterocycle. Leaving out the cephems and penems, which are sort of a special case and not really general-purpose structures, the most popular ones are piperidine, pyridine, pyrrolidine, thiazole, imidazole, indole, and tetrazole, in that order. Some other interesting bits:

All the four-membered nitrogen heterocycles are beta-lactams; no azetidine-containing structure has yet made it to approval.

The thiazoles rank so highly because so many of them are in the beta-lactam antibiotics as well. Every single approved thiazole is substituted in the 2 position, and no monosubstituted thiazole has ever made it into the pharmacopeia, either.

Almost all the indole-containing drugs are substituted at C3 and/or C5 - pindolol is an outlier.

The tetrazoles are all either antibiotics or cardiovascular drugs (the sartans).

92% of all pyrrolidine-substructure compounds have a substituent on the nitrogen.

Morpholine looks more appealing as a heterocycle than it really is - piperidine and piperazine both are found far more frequently. And I'll bet that many of those morpholines are just there for solubility, and that otherwise a piperidine would have served for SAR purposes. Ethers don't always seem to do that much for you.

Piperidines rule. There's a huge variety of them out there, the great majority substituted on the nitrogen. Azepanes, though, one methylene larger, have only three representatives.

83% of piperazine-containing drugs are substituted at both nitrogens.

There are a lot of other interesting bits in the paper, which goes on to examine fused and bicyclic heterocycles. But I think this afternoon I'll go make some piperidines and increase my chances.
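Out of curiosity, here's a rough sketch of how one might run this kind of heterocycle census over a set of structures with RDKit. The SMARTS patterns and the tiny three-drug "database" are my own stand-ins, purely for illustration - the paper's analysis, of course, covered the whole set of approved small-molecule drugs.

```python
# Toy heterocycle census: count how many structures in a SMILES collection
# contain each ring system. The SMARTS and the mini drug list below are
# illustrative stand-ins, not the paper's data set.
from rdkit import Chem

ring_smarts = {
    "piperidine":  "C1CCNCC1",
    "pyridine":    "c1ccncc1",
    "pyrrolidine": "C1CCNC1",
    "imidazole":   "c1cncn1",
}

drugs = {
    "haloperidol":   "OC1(CCN(CCCC(=O)c2ccc(F)cc2)CC1)c1ccc(Cl)cc1",
    "nicotine":      "CN1CCCC1c1cccnc1",
    "metronidazole": "Cc1ncc([N+](=O)[O-])n1CCO",
}

mols = {name: Chem.MolFromSmiles(smi) for name, smi in drugs.items()}
for ring, smarts in ring_smarts.items():
    patt = Chem.MolFromSmarts(smarts)
    hits = [name for name, mol in mols.items() if mol.HasSubstructMatch(patt)]
    print(f"{ring:12s}: {len(hits)} of {len(mols)} ({', '.join(hits) or 'none'})")
```

Scaled up to a real structure file, that's essentially the whole exercise: substructure counts, then arguing about what they mean.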

Comments (25) + TrackBacks (0) | Category: Chemical News | Drug Industry History

Eric Betzig Is Not a Chemist, And I Don't Much Care

Email This Entry

Posted by Derek

Update: Betzig himself has shown up in the comments to this post, which just makes my day.

Yesterday's Nobel in chemistry set off the traditional "But it's not chemistry!" arguments, which I largely try to stay out of. For one thing, I don't think that the borders between the sciences are too clear - you can certainly distinguish the home territories of each, but not the stuff out on the edge. And I'm also not that worked up about it, partly because it's nowhere near a new phenomenon. Ernest Rutherford got his Nobel in chemistry, and he was an experimental physicist's experimental physicist. I'm just glad that so much cutting-edge work in important fields (nanotechnology, energy, medicine, materials science) has to have a lot of chemistry in it.

With this in mind, I thought this telephone interview with Eric Betzig, one of the three laureates in yesterday's award, was quite interesting:

This is a chemistry prize, do you consider yourself a chemist, a physicist, what?

[EB] Ha! I already said to my son, you know, chemistry, I know no chemistry. [Laughs] Chemistry was always my weakest subject in high school and college. I mean, you know, it's ironic in a way because, you know, trained as a physicist, when I was a young man I would look down on chemists. And then as I started to get into super-resolution and, which is really all about the probes, I came to realise that it was my karma because instead I was on my knees begging the chemists to come up with better probes for me all the time. So, it's just poetic justice but I'm happy to get it wherever it is. But I would be embarrassed to call myself a chemist.

Some people are going to be upset by that, but you know, if you do good enough work to be recognized with a Nobel, it doesn't really matter much what it says on the top of the page. "OK, that's fine for the recipients", comes one answer, "but what about the committee? Shouldn't the chemistry prize recognize people who call themselves chemists?" One way to think about that is that it's not the Nobel Chemist prize, earmarked for whatever chemists have done the best work that can be recognized. (The baseball Hall of Fame, similarly, has no requirement that one-ninth of its members be shortstops). It's for chemistry, the subject, and chemistry can be pretty broadly defined. "But not that broadly!" is the usual cry.

That always worries me. It seems dangerous, in a way - "Oh no, we're not such a broad science as that. We're much smaller - none of those big discoveries have anything to do with us. Won't the Nobel committee come over to our little slice of science and recognize someone who's right in the middle of it, for once?" The usual reply to that is that there are, too, worthy discoveries that are pure chemistry, and they're getting crowded out by all this biology and physics. But the pattern of awards suggests that a crowd of intelligent, knowledgeable, careful observers can disagree with that. I think that the science Nobels should be taken as a whole, and that there's almost always going to be some blending and crossover. It's true that this year's physics and chemistry awards could have been reversed, and no one would have complained (or at least, not any more than people are complaining now). But that's a feature, not a bug.

Comments (38) + TrackBacks (0) | Category: Chemical News | General Scientific News

October 8, 2014

XKCD on Protein Folding

Email This Entry

Posted by Derek

I've been meaning to mention this recent XKCD comic, which is right on target:
"Someone may someday find a harder one", indeed. . .

[XKCD comic: "Protein folding"]

Comments (25) + TrackBacks (0) | Category: Biological News

The 2014 Chemistry Nobel: Beating the Diffraction Limit

Email This Entry

Posted by Derek

This year's Nobel prize in Chemistry goes to Eric Betzig, Stefan Hell, and William Moerner for super-resolution fluorescence microscopy. This was on the list of possible prizes, and has been for several years now (see this comment, which got 2 out of the 3 winners, to my 2009 Nobel predictions post). And it's a worthy prize, since it provides a technique that (1) is useful across a wide variety of fields, from cell biology on through chemistry and into physics, and (2) does so by doing something that many people would, at one time, have said was impossible.

The impossible part is beating the diffraction limit. That was first worked out by Abbe in 1873, and it set what looked like a physically impassable limit to the resolution of optical microscopy. Half the wavelength of the light you're using is as far as you can go, and (unfortunately) that means that you can't optically resolve viruses, many structures inside the cell, and especially nothing as small as a protein molecule. (As an amateur astronomer, I can tell you that the same limits naturally apply to telescope optics, too: even under perfect conditions, there's a limit to how much you can resolve at a given wavelength, which is why even the Hubble telescope can't show you Neil Armstrong's footprint on the moon). In any optical system, you're doing very well if the diffraction limit is the last thing holding you back, but hold you back it will.
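To put a number on it (these are generic textbook values, nothing specific to the prize-winning work), the Abbe limit on the smallest resolvable separation is roughly

$$ d \;\approx\; \frac{\lambda}{2\,\mathrm{NA}} $$

so with green light at about 550 nm and a good oil-immersion objective (numerical aperture around 1.4), d works out to roughly 200 nm - bigger than many viruses, let alone a ribosome or a single protein.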
[Image: STED illustration]
There are several ways to try to sneak around this problem, but the techniques that won this morning are particularly good ones. Stefan Hell worked out an ingenious method called stimulated emission depletion (STED) microscopy. If you have some sort of fluorescent label on a small region of a sample, you get it to glow, as usual, by shining a particular wavelength of light on it. The key for STED is that if another particular wavelength of light is used at the same time, you can cause the resulting fluorescence to shift. Physically, fluorescence results when electrons get excited by light, and then relax back to where they were by emitting a different (longer) wavelength. If you stimulate those electrons by catching them once they're already excited by the first light, they fall back into a higher vibrational state than they would otherwise, which means less of an energy gap, which means less energetic light is emitted - it's red-shifted compared to the usual fluorescence. Pour enough of that second stimulating light into the system after the first excitation, and you can totally wipe out the normal fluorescence.

And that's what STED does. It uses the narrowest possible dot of "normal" excitation in the middle, and surrounds that with a doughnut shape of the second suppressing light. Scanning this bulls-eye across the sample gives you better-than-diffraction-limit imaging for your fluorescent label. Hell's initial work took several years just to realize the first images, but the microscopists have jumped on the idea over the last fifteen years or so, and it's widely used, with many variations (multiple wavelength systems at the same time, high frames-per-second rigs for recording video, and so on). Shown below is a STED image of a labeled neurofilament compared to the previous state of the art. You'd think that this would be an obvious and stunning breakthrough that would speak for itself, but Hell himself is glad to point out that his original paper was rejected by both Nature and Science.
[Image: STED image of a labeled neurofilament, compared with conventional resolution]
You can, in principle, make the excitation spot as small as you wish (more on this in the Nobel Foundation's scientific background on the prize here). In practice, the intensity of the light needed as you push to higher and higher resolution tends to lead to photobleaching of the fluorescent tags and to damage in the sample itself, but getting around these limits is also an active field of research. As it stands, STED already provides excellent and extremely useful images of all sorts of samples - many of those impressive fluorescence microscopy shots of glowing cells are produced this way.

The other two winners of the prize worked on a different, but related technique: single-molecule microscopy. Back in 1989, Moerner's lab was the first to be able to spectroscopically distinguish single molecules outside the gas phase - pentacene, embedded in crystals of another aromatic hydrocarbon (terphenyl), down around liquid helium temperatures. Over the next few years, a variety of other groups reported single-molecule studies in all sorts of media, which meant that something that would have been thought crazy or impossible when someone like me was in college was now popping up all over the literature.

But as the Nobel background material rightly states, there are some real difficulties with doing single-molecule spectroscopy and trying to get imaging resolution out of it. The data you get from a single fluorescent molecule is smeared out in a Gaussian (or pretty much Gaussian) blob, but you can (in theory) work back from that to where the single point must have been to give you that data. But to do that, the fluorescent molecules have to be scattered farther apart than that diffraction limit. Fine, you can do that - but that's too far apart to reconstruct a useful image (Shannon and Nyquist's sampling theorem in information theory sets that limit).
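Here's a minimal numerical sketch (numpy, with made-up numbers) of why the working-back step can beat the blob: each photon lands somewhere within the diffraction-limited spot, but the center of that spot gets pinned down better and better as photons accumulate. This is just the statistics of the idea, not anyone's actual PALM reconstruction code.

```python
# Back-of-the-envelope localization of one emitter from its diffraction-limited
# blob. Numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

psf_sigma = 100.0                        # nm; width of the (roughly Gaussian) spot
true_center = np.array([250.0, 310.0])   # nm; where the molecule actually sits

def localize(n_photons):
    # Each detected photon arrives at a position drawn from the PSF around the emitter.
    photons = rng.normal(loc=true_center, scale=psf_sigma, size=(n_photons, 2))
    # Simplest estimator: the centroid. Real software fits a 2D Gaussian,
    # which behaves much the same way for this purpose.
    return photons.mean(axis=0)

for n in (10, 100, 1000, 10000):
    trials = np.array([localize(n) for _ in range(500)])
    err = np.sqrt(((trials - true_center) ** 2).sum(axis=1)).mean()
    print(f"{n:6d} photons -> mean localization error ~ {err:5.1f} nm (spot width {psf_sigma:.0f} nm)")
```

The precision improves roughly as the spot width divided by the square root of the photon count, which is a big part of why bright, photostable labels matter so much for these techniques.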

Betzig himself took a pretty unusual route to his discovery that gets around this problem. He'd been a pioneer in another high-resolution imaging technique, near-field microscopy, but that one was such an impractical beast to realize that it drove him out of the field for a while. (Plenty of work continues in that area, though, and perhaps it'll eventually spin out a Nobel of its own). As this C&E News article from 2006 mentions, he. . .took some time off:

After a several-year stint in Michigan working for his father's machine tool business, Betzig started getting itchy again a few years ago to make a mark in super-resolution microscopy. The trick, he says, was to find a way to get only those molecules of interest within a minuscule field of view to send out enough photons in such a way that would enable an observer to precisely locate the molecules. He also hoped to figure out how to watch those molecules behave and interact with other proteins. After all, says Betzig, "protein interactions are what make life."

Betzig, who at the time was a scientist without a research home, knew also that interactions with other researchers almost always are what it takes these days to make significant scientific or technological contributions. Yet he was a scientist-at-large spending lots of time on a lakefront property in Michigan, often in a bathing suit. Through a series of both deliberate and accidental interactions in the past two years with scientists at Columbia University, Florida State University, and the National Institutes of Health, Betzig was able to assemble a collaborative team and identify the technological pieces that he and Hess needed to realize what would become known as PALM.

He and Hess actually built the first instrument in Hess's living room, according to the article. The key was to have a relatively dense field of fluorescent molecules, but to only have a sparse array of them emitting at any one time. That way you can build up enough information for a detailed picture through multiple rounds of detection, and satisfy both limits at the same time. Even someone totally outside the field can realize that this was a really, really good plan. Betzig describes very accurately the feeling that a scientist gets when an idea like this hits: it seems so simple, and so obvious, that you're sure that everyone else in the field must have been hit by it at the same time, or will be in the next five minutes or so. In this case, he wasn't far off: several other groups were working on similar schemes while he and Hess were commandeering space in that living room. (Here's a video of Hess and Betzig talking about their collaboration).
[Image: PALM comparison figure from the 2006 Science paper]
Shown here is what the technique can accomplish - this is from the 2006 paper in Science that introduced it to the world. Panel A is a section of a lysosome, with a labeled lysosomal protein. You can say that yep, the protein is in the outer walls of that structure (and not so many years ago, that was a lot to be able to say right there). But panel B is the same image done through Betzig's technique, and holy cow. Take a look at that small box near the bottom of the panel - that's shown at higher magnification in panel D, and the classic diffraction limit isn't much smaller than that scale bar. As I said earlier, if you'd tried to sell people on an image like this back in the early 1990s, they'd probably have called you a fraud. It wasn't thought possible.

The Betzig technique is called PALM, and the others that came along at nearly the same time are STORM, fPALM, and PAINT. These are still being modified all over the place, and other techniques like total internal reflection fluorescence (TIRF) are providing high resolution as well. As was widely mentioned when green fluorescent protein was the subject of the 2008 Nobel, we are currently in a golden (and green, and red, and blue) age of cellular and molecular imaging. (Here's some of Betzig's recent work, for illustration). It's wildly useful, and today's prize was well deserved.

Comments (42) + TrackBacks (0) | Category: Biological News | Chemical Biology | Chemical News

October 7, 2014

German Pharma, Or What's Left of It

Email This Entry

Posted by Derek

Busy day around here on the frontiers of science, so I haven't had a chance to get a post up. A reader did send along this article from the Frankfurter Allgemeine Zeitung, the heavyweight German newspaper known as the "Fahts" (FAZ). (The Chrome browser will run Google's auto-translate past it if you ask, and it comes out sort of coherent).

What they're asking is: how and why did the German pharmaceutical industry decline so much? Parts have been sold off (as with Hoechst and BASF), and some remaining players have merged (as with Bayer and Schering AG). There's still Boehringer and Merck (Darmstadt), but they're fairly far down the rankings in size and drug R&D expenditure. And you don't have to compare things just to the US: all this has taken place while the folks just up the river (Novartis and Roche) have looked much stronger. The article is blaming "Wankelmut" (vacillation, fickleness) at the strategic level for much of this, especially regarding the role of an industrial chemicals division versus a pharma one.

There's something to that. Bayer was urged for years and years by analysts to break up the company, and resisted. Until recently - but now they're going to do it. Meanwhile, the other big German chemistry conglomerates did just that, but divested their pharma ends off to other companies (and countries) rather than spinning them out on their own. And there's not much of a German startup/biotech sector backstopping any of this, either. The successes of Amgen, Biogen, Genentech et al. have not happened in Germany - for the most part, players there stay where they are. The big firms stay the big firms, and no one joins their ranks.

And that's what strikes me about many economies in general, as compared to the US. We have more turmoil. It's not always a good thing, but we've also had a lot of science and technology-based companies come out of nowhere to become world leaders. And you can't do that without shaking things around. Is it partly an aversion to that sort of disruption that's led to the current state of affairs, or is this mistaking symptoms for causes? (I mean, the Swiss are hardly known for wild swings in their business sectors, but Swiss pharma has done fine). Thoughts?

Comments (30) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

October 6, 2014

Sunesis Fails with Vosaroxin

Email This Entry

Posted by Derek

When last heard from, Sunesis was trying to get some last compounds through clinical trials, having cut everything else possible along the way (and having sold more shares to raise cash).

Their lead molecule has been vosaroxin (also known as voreloxin and SNS-595), a quinolone which has been in trials for leukemia. Unfortunately, the company said today that the Phase III trial failed to meet its primary endpoint (and the stock's behavior reflects that, thoroughly). The company's trying to make what it can out of secondary endpoints and possible effects in older patients, but the market doesn't seem to be buying it.

Comments (13) + TrackBacks (0) | Category: Business and Markets | Clinical Trials

A New Way to Estimate a Compound's Chances?

Email This Entry

Posted by Derek

Just a few days ago we were talking about whether anything could be predicted about a molecule's toxicity by looking over its biophysical properties. Some have said yes, this is possible (that less polar compounds tend to be more toxic), but a recent paper has said no, that no such correlation exists. This is part of the larger "Rule of 5" discussion, about whether clinical success in general can be (partially) predicted by such measurements (lack of unexpected toxicity is a big factor in that success). And that discussion shows no sign of resolving any time soon, either.

Now comes a new paper that lands right in the middle of this argument. Douglas Kell's group at Manchester has analyzed a large data set of known human metabolites (the Recon2 database, more here) and looked at how similar marketed drugs are to the structures in it. Using MACCS structural fingerprints, they find that 90% of marketed drugs have a Tanimoto similarity of more than 0.5 to at least one compound in the database, and suggest that this could be a useful forecasting tool for new structures.

Now, that's an interesting idea, and not an implausible one, either. But the next things to ask are "Is it valid?" and "What could be wrong with it?" That's the way we learn how to approach pretty much anything new that gets reported in science, of course, although people do tend to take it the wrong way around the dinner table. Applying that in this case, here's what I can think of that could be off:

1. Maybe the reason that everything looks like one of the metabolites in the database is that the database contains a bunch of drug metabolites to start with, perhaps even the exact ones from the drugs under discussion? This isn't the case, though: Recon2 contains endogenous metabolites only, and the Manchester group went through the list removing compounds that are listed as drugs but are also known metabolites (nutritional supplements, for the most part).

2. Maybe Tanimoto similarities aren't the best measurement to use, and overestimate things? Molecular similarity can be a slippery concept, and different people often mean different things by it. The Tanimoto coefficient is the ratio of the features that two molecules share to the total features found in either of them (the formula is spelled out just after this list), so a Tanimoto of 1 means that the two fingerprints are identical. What does a coefficient of 0.5 tell us? That depends on how those "features" are counted, as one could well imagine, and the various counting schemes are usually referred to as compound "fingerprints". The Manchester group tried several of these, and settled on the 166 descriptors of the MACCS set. And that brings up the next potential problem. . .

3. Maybe MACCS descriptors aren't the best ones to use? I'm not enough of an informatics person to say, although this point did occur to the authors. They don't seem to know the answer, either, however:

However, the cumulative plots of the (nearest metabolite Tanimoto similarity) for each drug using different fingerprints do differ quite significantly depending on which fingerprint is used, and clearly the well-established MACCS fingerprints lead to a substantially greater degree of ‘metabolite-likeness’ than do almost all the other encodings (we do not pursue this here).

So this one is an open question - it's not clear whether there's something genuinely useful about the MACCS fingerprint set here, or whether something about that encoding just makes it appear to be useful. The authors do note in the paper that they tried to establish that the patterns they saw were ". . .not a strange artefact of the MACCS encoding itself." And there's another possibility. . .

4. Maybe the universe of things that make this cutoff is too large to be informative? That's another way of asking "What does a Tanimoto coefficient of 0.5 or greater tell you?" The authors reference a paper (Baldi and Nasr) on that very topic, which says:

Examples of fundamental questions one would like to address include: What threshold should one use to assess significance in a typical search? For instance, is a Tanimoto score of 0.5 significant or not? And how many molecules with a similarity score above 0.5 should one expect to find? How do the answers to these questions depend on the size of the database being queried, or the type of queries used? Clear answers to these questions are important for developing better standards in chemoinformatics and unifying existing search methods for assessing the significance of a similarity score, and ultimately for better understanding the nature of chemical space.

The Manchester authors say that applying the methods of that paper to their values shows that they're highly significant. I'll take their word for that, since I'm not in a position to run the numbers, but I do note that the earlier paper emphasizes that a particular Tanimoto score's significance is highly dependent on the size of the database, the variety of molecules in it, and the representations used. The current paper doesn't (as far as I can see) go into the details of applying the Baldi and Nasr calculations to their own data set, though.
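For reference (and as flagged back in point 2), here's the standard bit-vector definition that all of these similarity numbers rest on - the textbook formula, not anything particular to the Manchester paper:

$$ T(A,B) \;=\; \frac{c}{a + b - c} $$

where a is the number of fingerprint bits set for molecule A, b the number set for molecule B, and c the number set in both. Identical fingerprints give T = 1, and molecules with no bits in common give T = 0.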

The authors have done a number of other checks, to make sure that they're not being biased by molecular weights, etc. They looked for trends that could be ascribed to molecular properties like cLogP, but found none. And they tested their hypothesis by running 2000 random compounds from Maybridge through, which did indeed generate much different-looking numbers than the marketed drugs.

As for whether their overall method is useful, here's the Manchester paper's case:

. . .we have shown that when encoded using the public MDL MACCS keys, more than 90 % of individual marketed drugs obey a ‘rule of 0.5’ mnemonic, elaborated here, to the effect that a successful drug is likely to lie within a Tanimoto distance of 0.5 of a known human metabolite. While this does not mean, of course, that a molecule obeying the rule is likely to become a marketed drug for humans, it does mean that a molecule that fails to obey the rule is statistically most unlikely to do so.

That would indeed be a useful thing to know. I would guess that people inside various large drug organizations are going to run this method over their own internal database of compounds to see how it performs on their own failures and successes - and that is going to be the real test. How well it performs, though, we may not hear for a while. But I'll keep my ears open, and report on anything useful.
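For anyone who wants to kick the tires locally, here's a minimal sketch of what that nearest-metabolite check looks like with RDKit. The handful of metabolite SMILES below is a toy stand-in for Recon2, and the example compounds are just illustrations - so the numbers it prints show the mechanics and nothing more.

```python
# Sketch of a "rule of 0.5"-style check: how close is a structure to its nearest
# neighbor in a metabolite set, by MACCS keys and Tanimoto similarity?
# The metabolite list here is a tiny illustrative stand-in, not Recon2.
from rdkit import Chem
from rdkit import DataStructs
from rdkit.Chem import MACCSkeys

metabolite_smiles = [
    "OC(=O)CCC(=O)C(=O)O",         # 2-oxoglutarate
    "NC(CC(=O)O)C(=O)O",           # aspartate
    "OCC1OC(O)C(O)C(O)C1O",        # a hexose (stereochemistry omitted)
    "OC(=O)CC(O)(CC(=O)O)C(=O)O",  # citrate
]
metabolite_fps = [MACCSkeys.GenMACCSKeys(Chem.MolFromSmiles(s))
                  for s in metabolite_smiles]

def nearest_metabolite_similarity(smiles):
    fp = MACCSkeys.GenMACCSKeys(Chem.MolFromSmiles(smiles))
    return max(DataStructs.TanimotoSimilarity(fp, m) for m in metabolite_fps)

for name, smi in [("caffeine",  "CN1C=NC2=C1C(=O)N(C)C(=O)N2C"),
                  ("ibuprofen", "CC(C)Cc1ccc(cc1)C(C)C(=O)O")]:
    sim = nearest_metabolite_similarity(smi)
    verdict = "passes" if sim >= 0.5 else "fails"
    print(f"{name}: nearest-metabolite Tanimoto = {sim:.2f} ({verdict} the 0.5 cutoff)")
```

Swap in a real metabolite collection and your own compound file, and that's essentially the experiment the internal groups will be running.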

Comments (35) + TrackBacks (0) | Category: Drug Assays | In Silico

October 3, 2014

Meinwald Honored

Email This Entry

Posted by Derek

It's been announced today that Jerry Meinwald (emeritus at Cornell) has won the National Medal of Science in chemistry. That's well deserved - his work on natural product pheromone and signaling systems has had an impact all through chemistry, biology, agricultural science, ecology, and beyond. He and Thomas Eisner totally changed the way that we look at insect behavior, among other things. Their work is the foundation of the whole field of chemical ecology.

So congratulations to Prof. Meinwald for a well-deserved honor, one of many he's received in his long career. But there's one that's escaped him: I note that (the late) Prof. Eisner has a Wikipedia entry, but Meinwald doesn't, which seems to be a bizarre oversight. If I had the time, I'd write one myself - won't someone?

Update: there's a page up now, and it's being expanded.

Comments (11) + TrackBacks (0) | Category: Chemical News

Molecular Biology Turns Into Chemistry

Email This Entry

Posted by Derek

This is the sort of work that is gradually turning molecular biology into chemistry - and it's a good thing. The authors are studying the movement of a transcription factor protein along a stretch of DNA having two of its recognition sites, and using NMR to figure out how it transfers from one to the other. Does it leave one DNA molecule, dissociate into solution, and land on another? Does it slide along the DNA strand to the next site? Or does it do a sort of hop or backflip while still staying with the same DNA piece?

Now, to a first approximation, you may well not care. And I have no particular interest in the HoxD9 transcription factor myself. But I do find transcription factors in general of interest as drug targets - very hard drug targets that need all the help that they can get. The molecular-biology level of understanding starts out like this: there are protein transcription factors that bind to DNA. The protein sequences are (long list), and the DNA sequences that they recognize are (long list), and the selectivities that they show are (even longer list, and getting longer all the time). Under (huge set of specific conditions), a given transcription factor has been found to (facilitate and/or repress) expression of (huge list of specific genes). (Monstrously complex list of) associated proteins have been found to be involved in these processes.

I'm not making fun of this stuff - that's a lot of information, and people have worked very hard to get it. But as you go further into the subject, you have to stop treating these things as modules, boxy shapes on a whiteboard, or database entries, and start treating them as molecules, and that's where things have been going for some years now. When a transcription factor recognizes a stretch of DNA, how exactly does it do that? What's the structure of the protein in the DNA-recognition element, and what interactions does it make with the DNA itself? What hydrogen bonds are formed, what conformational changes occur, what makes the whole process thermodynamically favorable? If you really want to know that in detail, you're going to be accounting for every atom's position and how it got there. That is a chemistry-level understanding of the situation: it's not just HoxD9, it's a big molecule, interacting with another big molecule, and that interaction needs to be understood in all its detail.

We are not there yet. I say that without fear of contradiction, but it's clearly where we need to go, and where a lot of research groups are going. This paper is just one example; there are plenty of others. I think that all of molecular biology is getting pulled this way - studying proteins, for example, inevitably leads you to studying their structure (and to the protein folding problem), and you can't understand that without treating it at an atom-by-atom scale. Individual hydrogen bonds, water molecule interactions, pi-stacks and other such phenomena are essential parts of the process.

So down with the whiteboard shapes and the this-goes-to-this arrows. Those are OK for shorthand, but it's time we figured out what they're shorthand for.

Comments (19) + TrackBacks (0) | Category: Chemical Biology

October 2, 2014

Speaking at Northeastern

Email This Entry

Posted by Derek

For anyone at Northeastern University here in Boston who would be interested in attending, I'll be speaking this evening at 6 to the ACS student affiliates there, in Hurtig Hall (Room 115). If you'd like to attend, let them know at acs.neu@gmail.com. And don't eat my portion of pizza before I get there - I'll need it to remain coherent. Turn down too much free pizza and they'll kick you out of almost any scientific society there is, you know.

Comments (12) + TrackBacks (0) | Category: Blog Housekeeping

Catalyst Pharmaceuticals and Their Disgusting Business Strategy

Email This Entry

Posted by Derek

OK, this seems to be a new business model, damn it all. I wrote here recently about the huge price increase of Thiola (tiopronin) by a small company called Retrophin.

Now, as I wrote about here last year, another small company called Catalyst Pharmaceuticals is preparing to jack up the price of Firdapse (3,4-diaminopyridine) for the rare disorder Lambert-Eaton Myasthenic Syndrome (LEMS).

This disease is so rare, and the drug is so easily available, that it is currently being given away for free. But Catalyst is going to make sure that it won't stay free for long. Not at all:

There was never any doubt about Firdapse's ability to treat LEMS symptoms effectively because it's the same active drug as 3,4-Dap. With that perspective, Catalyst's triumphant press release Monday is all the more galling. The company took no risks with Firdapse. The company did no development work, made no effort to improve the drug's efficacy, safety or convenience for patients. The only thing Catalyst did was write a check to Biomarin and take over supervision of a Firdapse clinical trial already well underway.

For the zero work done by Catalyst, LEMS patients and their insurance companies will be paying as much as $80,000 for the exact same drug they use now for a fraction of the cost, if not gratis.

To just add a rancid cherry on top, that piece by Adam Feuerstein also details the way the company is apparently intimidating LEMS patients by telling them that they'll need to be deposed in a shareholder lawsuit. Now this is what regulatory failure looks like. I can think of no possible reason, no public good that comes from taking a drug that was easily available and working exactly as it should and have someone suddenly be able to charge $80,000 a year for it. This is not a reward for innovation or risk-taking - this is exploitation of a regulatory loophole, a blatant shakedown, or so it seems to me.

Why does the FDA let this happen? It brings the agency into disrepute, and the whole drug industry as well, and for no benefit at all. Well, unless you're the sort of person who executes one of these business plans, in which case you should get out of my sight. Too many people already think that all drug companies do is grab someone else's inexpensive compound and then raise the price as high as they possibly can. Watching people like Catalyst and Retrophin actually live the stereotype is infuriating.

Update: The previous licensee for this drug, Biomarin (in Europe), was harshly criticized for just this sort of business plan. Here is an open letter from 2010 from a group of British physicians to Prime Minister David Cameron, and its opening paragraph succinctly describes the problem here:

". . .The original purpose of this (orphan drug) legislation, passed in 1999, was to encourage drug companies to conduct research into rare diseases and develop novel treatments. However, as the rules are currently enacted, many drug companies merely address their efforts to licensing drugs that are already available rather than developing new treatments. Once a company has obtained a licence, the legislation then gives the company sole rights to supply the drug. This in turn allows the company to set an exorbitant price for this supply and effectively to bar previous suppliers of the unlicensed preparation from further production and distribution.

Similar regulatory loopholes have been used to raise the price of colchicine and hydroxyprogesterone, among others, and we can expect this to be done over and over, with every single drug that it can be done to, because the supply of people who think that this is a good idea is apparently endless. And the public backlash and the regulatory (and legislative) scrutiny that this brings down will then be distributed not just to the rent-seeking generic companies involved, but to every drug company of any type, because whatever hits the fan is never evenly spread. Do we really want this?

Postscript: In an interesting sequel to the Retrophin story, the company's board this week replaced CEO Martin Shkreli (whose sudden appearance on Reddit in response to this issue probably didn't help his position).

Comments (48) + TrackBacks (0) | Category: Drug Prices | Regulatory Affairs | The Dark Side

We Can't Calculate Our Way Out of This One

Email This Entry

Posted by Derek

Clinical trial failure rates are killing us in this industry. I don't think there's much disagreement on that - between the drugs that just didn't work (wrong target, wrong idea) and the ones that turn out to have unexpected safety problems, we incinerate a lot of money. An earlier, cheaper read on either of those would transform drug research, and people are willing to try all sorts of things to those ends.

One theory on drug safety is that there are particular molecular properties that are more likely to lead to trouble. There have been several correlations proposed between high logP (greasiness) and tox liabilities, multiple aromatic rings and tox, and so on. One rule proposed in 2008 by a group at Pfizer takes a clogP of 3 and a total polar surface area of 75 square angstroms as the cutoffs: compounds that land on the wrong side of both (clogP above 3 and TPSA below 75) are about 2.5 times more likely to run into toxicity trouble. But here's a paper in MedChemComm that asks if any of this has any validity:

What is the likelihood of real success in avoiding attrition due to toxicity/safety from using such simple metrics? As mentioned in the beginning, toxicity can arise from a wide variety of reasons and through a plethora of complex mechanisms similar to some of the DMPK endpoints that we are still struggling to avoid. In addition to the issue of understanding and predicting actual toxicity, there are other hurdles to overcome when doing this type of historical analysis that are seldom discussed.

The first of these is making sure that you're looking at the right set of failed projects - that is, ones that really did fail because of unexpected compound-associated tox, and not some other reason (such as unexpected mechanism-based toxicity, which is another issue). Or perhaps a compound could have been good enough to make it on its own under other circumstances, but the competitive situation made it untenable (something else came up with a cleaner profile at about the same time). Then there's the problem of different safety cutoffs for different therapeutic areas - acceptable tox for a pancreatic cancer drug will not cut it for type II diabetes, for example.

The authors did a thorough study of 130 AstraZeneca development compounds, with enough data to work out all these complications. (This is the sort of thing that can only be done from inside a company's research effort - you're never going to have enough information, working from outside). What they found, right off, was that for this set of compounds the Pfizer rule was completely inverted. The compounds on the too-greasy side actually had shown fewer problems (!) The authors looked at the data sets from several different angles, and conclude that the most likely explanation is that the rule is just not universally valid, and depends on the dataset you start with.

The same thing happens when you look at the fraction of sp3 carbons, which is a characteristic (the "Escape From Flatland" paper) that's also been proposed to correlate with tox liabilities. The AZ set shows no such correlation at all. Their best hypothesis is that this is a likely correlation with pharmacokinetics that has gotten mixed in with a spurious correlation with toxicity (and indeed, the first paper on this trend was only talking about PK). And finally, they go back to an earlier properties-based model published by other workers at AstraZeneca, and find that it, too, doesn't seem to hold up on the larger, more curated data set. Their take-home message: ". . .it is unlikely that a model of simple physico-chemical descriptors would be predictive in a practical setting."

Even more worrisome is what happens when you take a look at the last few years of approved drugs and apply such filters to them (emphasis added):

To investigate the potential impact of following simple metric guidelines, a set of recently approved drugs was classified using the 3/75 rule (Table 3). The set included all small molecule drugs approved during 2009–2012 as listed on the ChEMBL website. No significant biases in the distribution of these compounds can be seen from the data presented in Table 3. This pattern was unaffected if we considered only oral drugs (45) or all of the drugs (63). The highest number of drugs ends up in the high ClogP/high TPSA class and the class with the lowest number of drugs is the low ClogP/low TPSA. One could draw the conclusion that using these simplistic approaches as rules will discard the development of many interesting and relevant drugs.

One could indeed. I hadn't seen this paper myself until the other day - a colleague down the hall brought it to my attention - and I think it deserves wider attention. A lot of drug discovery organizations, particularly the larger ones, use (or are tempted to use) such criteria to rank compounds and candidates, and many of us are personally carrying such things around in our heads. But if these rules aren't valid - and this work certainly makes it look as if they aren't - then we should stop pretending as if they are. That throws us back into a world where we have trouble distinguishing troublesome compounds from the good ones, but that, it seems, is the world we've been living in all along. We'd be better off if we just admitted it.
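If you're curious where your own compounds land under these metrics - if only to see how arbitrary the cutoffs feel in practice - here's a minimal RDKit sketch of the 3/75 classification, with fraction-sp3 thrown in for good measure. The example structures are illustrative, and, as the AZ paper argues, none of this should be mistaken for an actual toxicity predictor.

```python
# Sketch of the Pfizer "3/75" classification (cLogP 3, TPSA 75 A^2) plus Fsp3,
# using RDKit's calculated properties. Illustrative only.
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors

def classify(smiles):
    mol = Chem.MolFromSmiles(smiles)
    clogp = Crippen.MolLogP(mol)        # RDKit's Crippen estimate of cLogP
    tpsa = Descriptors.TPSA(mol)        # topological polar surface area
    fsp3 = Descriptors.FractionCSP3(mol)
    flagged = clogp > 3 and tpsa < 75   # the "higher-risk" quadrant from the 2008 paper
    return clogp, tpsa, fsp3, flagged

for name, smi in [("hexylbenzene", "CCCCCCc1ccccc1"),
                  ("metformin",    "CN(C)C(=N)NC(=N)N")]:
    clogp, tpsa, fsp3, flagged = classify(smi)
    print(f"{name}: cLogP={clogp:.1f}, TPSA={tpsa:.0f}, Fsp3={fsp3:.2f} -> "
          f"{'flagged by 3/75' if flagged else 'not flagged'}")
```

The calculation itself is trivial, which is rather the point: the hard part is deciding whether the sorting it produces actually tells you anything.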

Comments (25) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico | Toxicology

October 1, 2014

No More Prearranged Editors at PNAS

Email This Entry

Posted by Derek

While we're on the topic of the literature, I see that PNAS has made some changes to their system:

Although the largest number of submissions to PNAS are through the Direct Submission route, there continues to linger a general perception that to publish a paper in PNAS, an author requires sponsorship of an NAS member. In July 2010, the member Communicated route was eliminated, but to ensure and promote submission of exceptional papers that either were interdisciplinary and hence needed special attention, or were unique and perhaps ahead of their time, authors were able to use the Prearranged Editor (PE) process for Direct Submissions. The PE process was intended to be used on rare occasions but, since we formalized the PE process, more than 11,000 papers have been submitted by authors with a PE designation. Although we are certain that some of these papers truly needed special attention, the vast majority likely did not, and therefore we are discontinuing the PE process as of October 1, 2014. We will continue to honor the current PE submissions.

They're setting up a new submission process, which (from what I can see) will make the journal very much like the rest of the field. Are there any odd routes to a PNAS publication left? As for the whole literature landscape, I'm sticking with my characterizations and there are plenty more in the comments, for now.

Comments (6) + TrackBacks (0) | Category: The Scientific Literature

No, They Really Aren't Reproducible

Email This Entry

Posted by Derek

Here's an interview with Nobel winner Randy Schekman, outspoken (as usual) on the subject of the scientific literature. This part caught my attention:

We have a problem. Some people claim that important papers cannot be replicated. I think this is an argument that has been made by the drug companies. They claim that they take observations in the literature and they can't reproduce them, but what I wonder is whether they're really reproducing the experiments or simply trying to develop a drug in an animal model and not exactly repeating the experiments in the publication. But I think it is unknown what fraction of the literature is wrong, so we're conducting an experiment. We've been approached by an organization called the Reproducibility Project, where a private foundation has agreed to provide funds for experiments in the fifty most highly cited papers in cancer biology to be reproduced, and the work will be contracted out to laboratories for replication. And we've agreed to handle this and eventually to publish the replication studies, so we'll see, you know, at least with these fifty papers. How many of them really have reproducibility. The reproducibility studies will be published in eLife. We're just getting going in that, so it may be a couple of years, but that's what we'd like to do.

OK, then. As one of those drug-company people, I can tell you that we actually do try to run the exact experiment in these papers. We may run that second, after we've tried to reproduce things in our own assays and failed, but we never write things off unless we've tried to do exactly what the paper said to do. And many times, it still doesn't work. Or it gives a readout in that system, but we have strong reason to believe that the assay in the original work was flawed or misinterpreted. We are indeed trying to develop drugs, but (and I speak from personal experience here, and more than once), when we call something irreproducible, that's because we can't actually reproduce it.

And the problem with trying to reproduce the 50 most-cited papers (see Update below!) is that most of those are probably going to work pretty well. That's why they're so highly cited. The stuff that doesn't work is the splashy new press-released papers in Nature, Cell, Science, or PNAS, the ones that say that Protein X turns out to be a key player in Disease Y, or that Screening Compound Z turns out to be a great ligand for it. Some of those are right, but too many of them are wrong. They haven't been in the literature long enough to pick up a mound of citations, but when we see these things, we get right to work on them to see if we believe them.

And there really are at least two layers of trouble, as mentioned. Reproducibility is one: can you get the exact thing to happen that the paper reports? That's the first step, and it fails more than it should. But if things do work as advertised, that still doesn't mean that the paper's conclusions are correct. People miss things, overinterpret, skip a key control, and so on. If someone reports a polyphenolic rhodanine as a wonderful ligand for The Hot Protein of the Moment, odds are that you can indeed reproduce the results in their assay. But that doesn't mean that the paper is much good to anyone at all, because said rhodanine is overwhelmingly unlikely to be of any use. Run it through an assay panel, and it lights up half the board - try interpreting cell data from that, and good luck. So you have Non-Reproducible, and Reproducible, For All the Good That Does.

But if Schekman and the Reproducibility Project are looking for tires to kick, I recommend picking the fifty papers from the last two or three years that caused the most excitement, press releases, and press coverage. New cancer target! Stem cell breakthrough! Lead structure for previously undruggable pathway! Try that stuff out, and see how much of it stands up.

Update: this interview turns out not to be quite correct about the papers that will be reproduced. More details here, and thanks to Tim in the comments for this. It's actually the "50 most impactful" papers from 2010 through 2012, which sounds a lot more like what I have in mind above. Here's the list. This will be quite interesting. . .

Comments (29) + TrackBacks (0) | Category: The Scientific Literature

September 30, 2014

A New Reductive Amination

Email This Entry

Posted by Derek

A colleague brought this new JACS paper to my attention the other day. It's a complementary method to the classic reductive amination reaction. Instead of an aldehyde and amine (giving you a new alkylated amine), in this case, you use a carboxylic acid and an amine to give you the same product, knocking things down another oxidation state along the way.

This reaction, from Matthias Beller and co-workers at Rostock, uses Karstedt's catalyst (a platinum species) with phenylsilane as reducing agent. Double bonds don't get reduced, Cbz and Boc groups survive, as do aliphatic esters. Most of the examples in the paper are on substituted anilines, but there are several aliphatic amines as well. A wide variety of carboxylic acids seem to work, including fluorinated ones. I like it - as a medicinal chemist, I'm always looking for another way to make amines, and there sure are a lot of carboxylic acids out there in the world.

Comments (17) + TrackBacks (0) | Category: Chemical News