Corante

About this Author
[Photo: College chemistry, 1983]

Derek Lowe The 2002 Model

[Photo: After 10 years of blogging. . .]

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Monthly Archives

February 28, 2013

IBM's Watson Does Drug Discovery?


Posted by Derek

I saw this story this morning, about IBM looking for more markets for its Watson information-sifting system (the one that performed so publicly on "Jeopardy"). And this caught my eye for sure:

John Baldoni, senior vice president for technology and science at GlaxoSmithKline, got in touch with I.B.M. shortly after watching Watson’s “Jeopardy” triumph. He was struck that Watson frequently had the right answer, he said, “but what really impressed me was that it so quickly sifted out so many wrong answers.”

That is a huge challenge in drug discovery, which amounts to making a high-stakes bet, over years of testing, on the success of a chemical compound. The failure rate is high. Improving the odds, Mr. Baldoni said, could have a huge payoff economically and medically.

Glaxo and I.B.M. researchers put Watson through a test run. They fed it all the literature on malaria, known anti-malarial drugs and other chemical compounds. Watson correctly identified known anti-malarial drugs, and suggested 15 other compounds as potential drugs to combat malaria. The two companies are now discussing other projects.

“It doesn’t just answer questions, it encourages you to think more widely,” said Catherine E. Peishoff, vice president for computational and structural chemistry at Glaxo. “It essentially says, ‘Look over here, think about this.’ That’s one of the exciting things about this technology.”

Now, without seeing some structures and naming some names, it's completely impossible to say how valuable the Watson suggestions were. But I would very much like to know on what basis these other compounds were suggested: structural similarity? Mechanisms in common? Mechanisms that are in the same pathway, but hadn't been specifically looked at for malaria? Something else entirely? Unfortunately, we're probably not going to be able to find out, unless GSK is forthcoming with more details.
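(If plain structural similarity is all it was, that part is routine cheminformatics rather than artificial intelligence. Here's a minimal sketch - RDKit, a known antimalarial as the reference, and a few arbitrary stand-in compounds - of the fingerprint/Tanimoto ranking that phrase usually implies. Whether Watson did anything remotely like this, I have no idea.)

```python
# Illustrative sketch only: rank a few arbitrary compounds against a known
# antimalarial (chloroquine) by Morgan-fingerprint Tanimoto similarity.
# The compound list is invented; this is one guess at what "structural
# similarity" could mean here, not a description of what Watson/GSK did.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

chloroquine = Chem.MolFromSmiles("CCN(CC)CCCC(C)Nc1ccnc2cc(Cl)ccc12")
ref_fp = AllChem.GetMorganFingerprintAsBitVect(chloroquine, 2, nBits=2048)

library = {  # name -> SMILES (stand-ins for a screening collection)
    "quinine":     "COc1ccc2nccc(C(O)C3CC4CCN3CC4C=C)c2c1",
    "amodiaquine": "CCN(CC)Cc1cc(Nc2ccnc3cc(Cl)ccc23)ccc1O",
    "aspirin":     "CC(=O)Oc1ccccc1C(=O)O",
}

scores = []
for name, smiles in library.items():
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    scores.append((DataStructs.TanimotoSimilarity(ref_fp, fp), name))

for score, name in sorted(scores, reverse=True):
    print(f"{name:12s} Tanimoto vs chloroquine: {score:.2f}")
```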

Eventually, there's going to be another, somewhat more disturbing answer to that "what basis?" question. As this Slate article says, we could well get to the point where such systems make discoveries or correlations that are correct, but beyond our ability to figure out. Watson is most certainly not there yet. I don't think anything is, or is really all that close. But that doesn't mean it won't happen.

For a look at what this might be like, see Ted Chiang's story "Catching Crumbs From the Table", which appeared first in Nature and was reprinted as "The Evolution of Human Science" in his collection Stories of Your Life and Others, which I highly recommend.

Comments (32) + TrackBacks (0) | Category: In Silico | Infectious Diseases

The Industrial Diels Alder, Revisited in Detail


Posted by Derek

Last year I mentioned the "good ol' Diels-Alder reaction", and talked about how it doesn't get used as much in drug discovery and industrial chemistry as one might think.

Now Stefan Abele from Actelion (in Switzerland) sends along this new paper, which will tell you pretty much all you need to know about the reaction's industrial side. The scarcity of D-A chemistry on scale that I'd noticed was no illusion (links below added by me):

According to a survey by Dugger et al. in 2005 of the type of reaction scaled in a research facility at Pfizer, and an analysis of the reactions used for the preparation of drug candidate molecules by Carey et al. in 2006, the DA reaction falls into the “miscellaneous” category that accounts for only 5 to 11 % of C-C bond-forming reactions performed under Good Manufacturing Practice. This observation mirrors the finding that C-C bond-forming reactions account for 11.5% of the entire reaction repertoire used by medicinal chemists in the pursuit of drug candidates. In this group, palladium-catalyzed reactions represent about 60% of the occurrences, while the “other” category, into which the DA reaction falls, represents only 1.8% of the total number of reactions. Careful examination of the top 200 pharmaceutical products by US retail sales in 2010 revealed that only one marketed drug, namely Buprenorphine, is produced industrially by using the DA reaction. Two other drugs were identified in the top 200 generic drugs of US retail sales in 2008: Calcitriol and its precursor Calciferol. Since 2002, Liu and co-workers have been compiling the new drugs introduced each year to the market. From 2002 to 2010, 174 new chemical entities were reported. Among them, two examples (Varenicline from Pfizer in 2006 and Peramivir by Shionogi in 2010) have been explicitly manufactured through a DA reaction. Similarly, and not surprisingly, our consultation with a large corpus of peers, colleagues, and experts in industry and academia worldwide revealed that the knowledge of such examples of the DA reaction run on a large scale is scarce, except perhaps in the field of fragrance chemistry.

But pretty much every reaction that has been run on large scale is in this review, so if you're leaning that way, this is the place to go. It doesn't shy away from the potential problems (chief among them being potential polymerization of one or both of the starting materials, which would really ruin your afternoon). But it's a powerful enough reaction that it really would seem to have more use than it gets.

Comments (11) + TrackBacks (0) | Category: Chemical News

February 27, 2013

A Nobel Follow-Up


Posted by Derek

Those of you who remember the Green Fluorescent Protein Nobel story will likely recall Douglas Prasher. He did the original cloning of the GFP gene, and Roger Tsien has said that he has no idea why he didn't get the Nobel as well. But Prasher, after a series of career and personal reverses, ended up driving a shuttle bus in Huntsville by the time the prize was announced.

Well, he's back in science again - and working in the Tsien lab. Here's the story, which I was very glad to read. Prasher's clearly smart and talented, and I hope that he can put all that to good use. A happy ending?

Comments (15) + TrackBacks (0) | Category: General Scientific News

Not What It Says On the Label, Though


Posted by Derek

The topic of compound purity has come up here before, as well it should. Every experienced medicinal chemist knows that when you have an interesting new hit compound, one of the first things to do is go back and make sure that it really is what it says on the label. Re-order it from the archive (in both powder and DMSO stock), re-order it if it's from a commercial source, and run it through the LC/MS and the NMR. (And as one of those links above says, if you have any thought that metal reagents were used to make the compound, check for those, too - they can be transparent to LC and NMR).

So when you do this, how many compounds flunk? Here are some interesting statistics from the folks at Emerald:

Recently, we selected a random set of commercial fragment compounds for analysis, and closely examined those that failed to better understand the reasons behind it. The most common reason for QC failure was insolubility (47%), followed by degradation or impurities (39%), and then spectral mismatch (17%) [Note: Compounds can acquire multiple QC designations, hence total incidences > 100% ]. Less than 4% of all compounds assayed failed due to solvent peak overlap or lack of non-exchangeable protons, both requirements for NMR screening. Failure rates were as high as 33% per individual vendor, with an overall average of 16%. . .

I very much wish that they'd identified the vendor with that 33% failure rate. But overall, they're suggesting that 10 to 15% of compounds will wipe out, regardless of source. Now, you may not feel that solubility is a key criterion for your work, because you're not doing NMR assays. (That's one that will only get worse as you move out of fragment-sized space, too). But that "degradation or impurities" category is still pretty significant. What are your estimates for commercial-crap-in-a-vial rates?
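(An aside on those percentages: a compound can pick up more than one failure flag, so the per-category numbers are taken over the failed set and can add up past 100%, while each bad compound still counts only once toward the overall rate. A toy illustration, with invented data:)

```python
# Invented numbers, not Emerald's data: a compound can fail QC for more than
# one reason, so per-reason percentages (taken over the failed set) can sum
# past 100% even though each failed compound counts only once overall.
from collections import Counter

qc_flags = {                      # compound id -> set of failure flags
    "frag-001": {"insoluble"},
    "frag-002": {"insoluble", "degraded"},
    "frag-003": set(),            # passed
    "frag-004": {"spectral mismatch"},
    "frag-005": {"degraded", "spectral mismatch"},
    "frag-006": set(),            # passed
}

failed = {cid: flags for cid, flags in qc_flags.items() if flags}
by_reason = Counter(flag for flags in failed.values() for flag in flags)

print(f"overall failure rate: {len(failed) / len(qc_flags):.0%}")
for reason, count in by_reason.most_common():
    print(f"  {reason}: {count / len(failed):.0%} of failed compounds")
```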

Comments (11) + TrackBacks (0) | Category: Chemical News | Drug Assays

Selective Inhibitor, The Catalog Says


Posted by Derek

There's an interesting addendum to yesterday's post about natural product fragments. [Structure: bAP15] Dan Erlanson was pointing out that many of the proposed fragments were PAINS, and that prompted Jonathan Baell (author of the original PAINS paper) to leave a comment there mentioning this compound. Yep, you can buy that beast from Millipore, and it's being sold as a selective inhibitor of two particular enzymes. (Here's the original paper describing it). If it's really that selective, I will join one of those Greek monasteries where they eat raw onions and dry bread, and spend my time in atonement for ever thinking that a double nitrophenyl Schiff base enone with an acrylamide on it might be trouble.

Honestly, guys. Do a Ben Cravatt-style experiment across a proteome with that thing, and see what you get. I'm not saying that it's going to absolutely label everything it comes across, but it's surely going to stick to more than two things, and have more effects than you can ascribe to those "selective" actions.

Comments (20) + TrackBacks (0) | Category: Chemical Biology | Drug Assays

February 26, 2013

Standard of Care? Not So Fast, Not in the United Kingdom


Posted by Derek

Did you know that in the UK, patent law says that using a competitor's compound as a comparison in a clinical trial is an infringement? I sure didn't. The government has realized that this rule is much stricter than most other countries, and is moving to change it in a bid to keep more clinical research in the country. Thanks to FierceBiotech for the heads-up on this.

Comments (11) + TrackBacks (0) | Category: Patents and IP

Natural Product Fragments: Get Rid of the Ugly Ones Now


Posted by Derek

Here's a paper at the intersection of two useful areas: natural products and fragments. Dan Erlanson over at Practical Fragments has a good, detailed look at the work. What the authors have done is to break down known natural product structures into fragment-sized pieces, and cluster those together for guidance in assembling new screening libraries.

I'm sympathetic to that goal. I like fragment-based techniques, and I think that too many fragment libraries tend to be top-heavy with aromatic and heteroaromatic groups. Something with more polarity, more hydrogen-bonding character, and more three-dimensional structures would be useful, and natural products certainly fit that space. (Some of you may be familiar with a similar approach, the deCODE/Emerald "Fragments of Life", which Dan blogged about here). Synthetically, these fragments turn out to be a mixed bag, which is either a bug or a feature depending on your point of view (and what you have funding for or a mandate to pursue):

The natural-product-derived fragments are often far less complex structurally than the guiding natural products themselves. However, their synthesis will often still require considerable synthetic effort, and for widespread access to the full set of natural-product-derived fragments, the development of novel, efficient synthesis methodologies is required. However, the syntheses of natural-product-derived fragments will by no means have to meet the level of difficulty encountered in the multi-step synthesis of genuine natural products.

But take a look at Dan's post for the real downside:

Looking at the structures of some of the phosphatase inhibitors, however, I started to worry. One strong point of the paper is that it is very complete: the chemical structures of all 193 tested fragments are provided in the supplementary information. Unfortunately, the list contains some truly dreadful members; 17 of the worst are shown here, with the nasty bits shown in red. All of these are PAINS that will nonspecifically interfere with many different assays.

Boy, is he right about that, as you'll see when you take a look at the structures. They remind me of this beast, blogged about here back last fall. These structures should not be allowed into a fragment screening library; there are a lot of other things one could use instead, and their chances of leading only to heartbreak are just too high.
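(Checking for this sort of thing up front is, at least, easy to automate. Here's a minimal sketch using the PAINS substructure filters that ship with RDKit - the example structures are invented stand-ins, not fragments from the paper:)

```python
# Minimal sketch: run candidate fragments through the PAINS substructure
# filters that ship with RDKit. The two SMILES below are invented stand-ins,
# not structures taken from the paper under discussion.
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)  # PAINS A, B and C
pains = FilterCatalog(params)

candidates = {
    "quinone-like fragment": "O=C1C=CC(=O)C=C1",   # classic PAINS-type motif
    "plain anilide":         "CC(=O)Nc1ccccc1",
}

for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    match = pains.GetFirstMatch(mol)
    if match is not None:
        print(f"{name}: flagged ({match.GetDescription()})")
    else:
        print(f"{name}: no PAINS alerts")
```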

Comments (9) + TrackBacks (0) | Category: Chemical News | Drug Assays | Natural Products

Phil Baran at Blog Syn


Posted by Derek

I linked recently to the latest reaction check at Blog Syn, benzylic oxidation by IBX. Now Prof. Baran (a co-author on the original paper, from his Nicolaou days) has written to See Arr Oh with a detailed repeat of the experiment. He gets it to work, so I think it's fair to say that (1) the reaction is doable, but (2) it's not as easy to reproduce right out of the box as it might be.

I'd like to congratulate him for responding like this. The whole idea of publicly rechecking literature reactions is still fairly new, and (as the comments here have shown), there's a wide range of opinion on it. Getting a detailed, prompt, and civil response from the Baran lab is the best outcome, I think. After all, the point of a published procedure - the point of science - is reproducibility. The IBX reaction is now better known than it was, the details that could make it hard to run are now there for people who want to try it, and Prof. Baran's already high reputation as a scientist actually goes up a bit among the people who've been following this story.

Public reproducibility is an idea whose time, I think, has come, and Blog Syn is only one part of it. When you think about the increasingly well-known problems with reproducing big new biological discoveries, things that could lead to tens and hundreds of millions being spent on clinical research, reproducing organic chemistry reactions shouldn't be controversial at all. As they say to novelists, if you're afraid of bad reviews, there's only one solution: don't show anyone your book.

Comments (59) + TrackBacks (0) | Category: Chemical News | The Scientific Literature

February 25, 2013

An Interview


Posted by Derek

For those of you with subscriptions to Trends in Pharmacological Sciences, they have an interview with me up at the journal's site. It's in the "Scientific Life" section, and I was very happy to be asked to do it.

Comments (8) + TrackBacks (0) | Category: Blog Housekeeping

ENCODE: The Nastiest Dissent I've Seen in Quite Some Time


Posted by Derek

Last fall we had the landslide of data from the ENCODE project, along with a similar landslide of headlines proclaiming that 80% of the human genome was functional. That link shows that many people (myself included) were skeptical of this conclusion at the time, and since then others have weighed in with their own doubts.

A new paper, from Dan Graur at Houston (and co-authors from Houston and Johns Hopkins) is really stirring things up. And whether you agree with its authors or not, it's well worth reading - you just don't see thunderous dissents like this one in the scientific literature very often. Here, try this out:

Thus, according to the ENCODE Consortium, a biological function can be maintained indefinitely without selection, which implies that (at least 70%) of the genome is perfectly invulnerable to deleterious mutations, either because no mutation can ever occur in these “functional” regions, or because no mutation in these regions can ever be deleterious. This absurd conclusion was reached through various means, chiefly (1) by employing the seldom used “causal role” definition of biological function and then applying it inconsistently to different biochemical properties, (2) by committing a logical fallacy known as “affirming the consequent,” (3) by failing to appreciate the crucial difference between “junk DNA” and “garbage DNA,” (4) by using analytical methods that yield biased errors and inflate estimates of functionality, (5) by favoring statistical sensitivity over specificity, and (6) by emphasizing statistical significance rather than the magnitude of the effect.

Other than that, things are fine. The paper goes on to detailed objections in each of those categories, and the tone does not moderate. One of the biggest objections is around the use of the word "function". The authors are at pains to distinguish selected effect functions from causal role functions, and claim that one of the biggest shortcomings of the ENCODE claims is that they blur this boundary. "Selected effects" are what most of us think about as well-proven functions: a TATAAA sequence in the genome binds a transcription factor, with effects on the gene(s) downstream of it. If there is a mutation in this sequence, there will almost certainly be functional consequences (and these will almost certainly be bad). Imagine, however, a random sequence of nucleotides that's close enough to TATAAA to bind a transcription factor. In this case, there are no functional consequences - genes aren't transcribed differently, and nothing really happens other than the transcription factor parking there once in a while. That's a "causal role" function, and the whopping majority of the ENCODE functions appear to be in this class. "It looks sort of like something that has a function, therefore it has one". And while this can lead to discoveries, you have to be careful:

The causal role concept of function can lead to bizarre outcomes in the biological sciences. For example, while the selected effect function of the heart can be stated unambiguously to be the pumping of blood, the heart may be assigned many additional causal role functions, such as adding 300 grams to body weight, producing sounds, and preventing the pericardium from deflating onto itself. As a result, most biologists use the selected effect concept of function. . .

A mutation in that random TATAAA-like sequence would be expected to be silent compared to what would happen in a real binding motif. So one would want to know what percent of the genome is under selection pressure - that is, what part of it cannot be mutated without something happening. Those studies are where we get the figures of perhaps 10% of the DNA sequence being functional. Almost all of what ENCODE has declared to be functional, though, can show mutations with relative impunity:

From an evolutionary viewpoint, a function can be assigned to a DNA sequence if and only if it is possible to destroy it. All functional entities in the universe can be rendered nonfunctional by the ravages of time, entropy, mutation, and what have you. Unless a genomic functionality is actively protected by selection, it will accumulate deleterious mutations and will cease to be functional. The absurd alternative, which unfortunately was adopted by ENCODE, is to assume that no deleterious mutations can ever occur in the regions they have deemed to be functional. Such an assumption is akin to claiming that a television set left on and unattended will still be in working condition after a million years because no natural events, such as rust, erosion, static electricity, and earthquakes can affect it. The convoluted rationale for the decision to discard evolutionary conservation and constraint as the arbiters of functionality put forward by a lead ENCODE author (Stamatoyannopoulos 2012) is groundless and self-serving.

Basically, if you can't destroy a function by mutation, then there is no function to destroy. Even the most liberal definitions take this principle to apply to about 15% of the genome at most, so the 80%-or-more figure really does stand out. But this paper has more than philosophical objections to the ENCODE work. They point out that the consortium used tumor cell lines for its work, and that these are notoriously permissive in their transcription. One of the principles behind the 80% figure is that "if it gets transcribed, it must have a function", but you can't say that about HeLa cells and the like, which read off all sorts of pseudogenes and such (introns, mobile DNA elements, etc.)

One of the other criteria the ENCODE studies used for assigning function was histone modification. Now, this bears on a lot of hot topics in drug discovery these days, because an awful lot of time and effort is going into such epigenetic mechanisms. But (as this paper notes), this recent study illustrated that all histone modifications are not equal - there may, in fact, be a large number of silent ones. Another ENCODE criterion had to do with open (accessible) regions of chromatin, but there's a potential problem here, too:

They also found that more than 80% of the transcription start sites were contained within open chromatin regions. In yet another breathtaking example of affirming the consequent, ENCODE makes the reverse claim, and adds all open chromatin regions to the “functional” pile, turning the mostly true statement “most transcription start sites are found within open chromatin regions” into the entirely false statement “most open chromatin regions are functional transcription start sites.”
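(To put some invented numbers on that logical slip: the two conditional statements are nowhere near interchangeable, simply because open chromatin covers vastly more sequence than transcription start sites do. A toy calculation:)

```python
# Invented numbers, purely to illustrate the logical point: a high value of
# P(open chromatin | transcription start site) says nothing about
# P(functional TSS | open chromatin), because open chromatin covers far more
# of the genome than TSSs do.
tss_bases   = 2_000_000     # hypothetical bases inside real TSSs
open_bases  = 400_000_000   # hypothetical bases of open chromatin
tss_in_open = 1_800_000     # TSS bases that also sit in open chromatin

p_open_given_tss = tss_in_open / tss_bases    # "most TSSs are in open chromatin"
p_tss_given_open = tss_in_open / open_bases   # ...but open chromatin is mostly not TSS

print(f"P(open chromatin | TSS) = {p_open_given_tss:.2f}")   # 0.90
print(f"P(TSS | open chromatin) = {p_tss_given_open:.4f}")   # 0.0045
```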

Similar arguments apply to the 8.5% of the genome that ENCODE assigns to transcription factor binding sites. When you actually try to experimentally verify function for such things, the huge majority of them fall out. (It's also noted that there are some oddities in ENCODE's definitions here - for example, they seem to be annotating 500-base stretches as transcription factor binding sites, when most of the verified ones are below 15 bases in length).

Now, it's true that the ENCODE studies did try to address the idea of selection on all these functional sequences. But this new paper has a lot of very caustic things to say about the way this was done, and I'll refer you to it for the full picture. To give you some idea, though:

By choosing primate specific regions only, ENCODE effectively removed everything that is of interest functionally (e.g., protein coding and RNA-specifying genes as well as evolutionarily conserved regulatory regions). What was left consisted among others of dead transposable and retrotransposable elements. . .

. . .Because polymorphic sites were defined by using all three human samples, the removal of two samples had the unfortunate effect of turning some polymorphic sites into monomorphic ones. As a consequence, the ENCODE data includes 2,136 alleles each with a frequency of exactly 0. In a miraculous feat of “next generation” science, the ENCODE authors were able to determine the frequencies of nonexistent derived alleles.

That last part brings up one of the objections that many people may have to this paper - it does take on a rather bitter tone. I actually don't mind it - who am I to object, given some of the things I've said on this blog? But it could be counterproductive, leading to arguments over the insults rather than arguments over the things being insulted (and over whether they're worthy of the scorn). People could end up waving their hands and running around shouting in all the smoke, rather than figuring out how much fire there is and where it's burning. The last paragraph of the paper is a good illustration:

The ENCODE results were predicted by one of its authors to necessitate the rewriting of textbooks. We agree, many textbooks dealing with marketing, mass-media hype, and public relations may well have to be rewritten.

Well, maybe that was necessary. The amount of media hype was huge, and the only way to counter it might be to try to generate a similar amount of noise. It might be working, or starting to work - normally, a paper like this would get no popular press coverage at all. But will it make CNN? The Science section of the New York Times? ENCODE's results certainly did.

But what the general public thinks about this controversy is secondary. The real fight is going to be here in the sciences, and some of it is going to spill out of academia and into the drug industry. As mentioned above, a lot of companies are looking at epigenetic targets, and a lot of companies would (in general) very much like to hear that there are a lot more potential drug targets than we know about. That was what drove the genomics frenzy back in 1999-2000, an era that was not without its consequences. The coming of the ENCODE data was (for some people) the long-delayed vindication of the idea that gene sequencing was going to lead to a vast landscape of new disease targets. There was already a comment on my entry at the time suggesting that some industrial researchers were jumping on the ENCODE work as a new area to work in, and it wouldn't surprise me to see many others thinking similarly.

But we're going to have to be careful. Transcription factors and epigenetic mechanisms are hard enough to work on, even when they're carefully validated. Chasing after ephemeral ones would truly be a waste of time. . .

More reactions around the science blogging world: Wavefunction, Pharyngula, SciLogs, Openhelix. And there are (and will be) many more.

Comments (24) + TrackBacks (0) | Category: Biological News

February 22, 2013

What If the Journal Disappears?


Posted by Derek

Hmm, here's a question I hadn't considered. What happens when an online-only journal quits publishing and (apparently) deletes its archives? That's what seems to have happened with the "Journal of Advances in Developmental Research".

Now, to a first approximation, the loss of many of the papers in this journal will not, in all likelihood, be much of a setback. Here is (was?) its stated focus:

The Journal of Advances in Developmental Research is a peer-reviewed multidisciplinary journal that publishes research articles, general articles, research communications, review article and abstracts of theses from the fields of science, social sciences, sports science, humanities, medical, education, engineering, technology, biotechnology, home science, computer, history, arts and other fields which participates in overall development of society.

It provides a platform to discuss current and future trends of research and their role in development of society.

Now, that doesn't sound like anything anyone would want to read. But as long as your check cleared, you could publish in it - it was one of those bottom-of-the-barrel predatory publishing venues. What happens now, though? If there was something worthwhile in any of those papers, we'll never have any way of knowing, because they're all gone. Can (or should) the authors resubmit the papers somewhere else where they can be seen?

Here, for reference, are Jeffrey Beall's current criteria for a predatory publisher. One of them is that they "(Have) no policies or practices for digital preservation". Although these guys seem to have had a policy, if you count "wipe the hard drive" as a policy.

Tip via Ivan Oransky and Jeffrey Beall on Twitter.

Comments (14) + TrackBacks (0) | Category: The Scientific Literature

Nativis Returns


Posted by Derek

Well, since it's Friday, I thought I'd quickly revisit one of the favorite companies I've written about here: Nativis. You'll recall that this is the outfit that claimed "photonic signatures" of drugs were as effective as the physical molecules themselves. My comments (and those of the readership here) led to some public exchanges with the company's chief financial officer, but last I heard of them they had moved out of San Diego and back to Seattle. Readers mentioned that the company was developing some sort of cancer-treatment device based on their ideas.

A couple of alert readers have now sent along links to the latest news. Nativis has produced a device they're calling the "Voyager", which is being tested in veterinary applications. Here is a YouTube video from a clinic that's trying it out. I have no reason to think that the doctor being interviewed is anything but sincere, but I also tend to think that he may not realize just what the opinion of many observers is about the Nativis technology. The veterinarian says things in the clip about how "the healing energy is then emitted to the tumor from this coil" and "The radiofrequency signal is stored on this device and then played, if you will, through this coil, to the tumor itself".

He does not appear to be misrepresenting Nativis' claims. I believe that this is the relevant patent application. The first claim reads:

"1. An aqueous anti-tumor composition produced by treating an aqueous medium free of paclitaxel, a paclitaxel analog, or other cancer-cell inhibitory compound with a low-frequency, time-domain signal derived from paclitaxel or an analog thereof, until the aqueous medium acquires a detectable paclitaxel activity, as evidenced by the ability of the composition (i) to inhibit growth of human glioblastoma cells when the composition is added to the cells in culture, over a 24 hour culture period, under standard culture conditions, and/or (ii), to inhibit growth of a paclitaxel-responsive tumor when administered to a subject having such a tumor."

So yes, we're apparently still talking about turning a sample of water into a drug by playing some sort of radio frequency into it. And no, I still have no idea how this is physically possible, and to the extent that I understand the company's explanations, I do not find them convincing. Here's some more language out of the patent application:

[0151] In one exemplary method, paclitaxel time-domain signals were obtained by recording low-frequency signals from a sample of paclitaxel suspended in Cremophor EL™ 529 ml and anhydrous ethanol 69.74 ml to a final concentration of 8 mg/ml. The signals were recorded with injected DC offset, at noise level settings between 10 and 241 mV and in increments of 1 mV. A total of 241 time-domain signals over this injected-noise level range were obtained, and these were analyzed by an enhanced autocorrelation algorithm detailed above, yielding 8 time-domain paclitaxel-derived signals for further in vitro testing. One of these, designated signal M2(3), was selected as an exemplary paclitaxel signal effective in producing taxol-specific effects in biological response systems (described below), and when used for producing paclitaxel-specific aqueous compositions in accordance with the invention, also as described below.

[0152] Figs. 9A-9C show frequency-domain spectra of two paclitaxel signals with noise removed by Fourier subtraction (Figs. 9A and 9B), and a cross-correlation of the two signals (Fig. 9C), showing agent-specific spectral features over a portion of the frequency spectrum from 3510 to 3650 Hz. As can be seen from Fig. 9C, when a noise threshold corresponding to an ordinate value of about 3 is imposed, the paclitaxel signal in this region is characterized by 7 peaks. The spectra shown in Figs. 9A-9C, but expanded to show spectral features over the entire region between 0-20 kHz, illustrate how optimal time-domain signals can be selected, by examining the frequency spectrum of the signal for unique, agent-specific peaks, and selecting a time-domain signal that contains a number of such peaks.

[0153] The time-domain signals recorded, processed, and selected as above may be stored on a compact disc or any other suitable storage media for analog or digital signals and supplied to the transduction system during a signal transduction operation. The signal carried on the compact disc is representative, more generally, of a tangible data storage medium having stored thereon, a low-frequency time domain signal effective to produce a magnetic field capable of transducing a chemical or biological system, or in producing an agent-specific aqueous composition in accordance with the invention, when the signal is supplied to electromagnetic transduction coil(s) at a signal current calculated to produce a magnetic field strength in the range between 1 G and 10^-8 G. Although the specific signal tested was derived from a paclitaxel sample, it will be appreciated that any taxane-like compound should generate a signal having the same mechanism of action in transduced form.

I just fail to see how recording "signals" from a drug preparation can then be used to turn water (or water/bubble mixtures, etc., as the patent goes on to claim) into something that acts like the original drug. All the objections I raised in my first post on this company are still in force as far as I'm concerned, and my suggestions for more convincing experimental data are still out there waiting to be fulfilled. Despite various mentions of publications and IND filings when I interacted with Nativis back in 2010, I am unaware of any evidence that has been divulged past their patent filings.

And no, I do not regard patent filings as sufficient evidence that anything actually works - here's one for a process of reincarnation leading to immortality, for example. Even issued patents have proven insufficient in the past: here's one for a faster-than-light radio antenna. If Nativis wants to end up in a different bin than those people, they are, in my opinion, taking an odd path to doing so.

Comments (96) + TrackBacks (0) | Category: Snake Oil

February 21, 2013

An Incentive For Hype


Posted by Derek

Here's an article illustrating what goes into high-profile journal publications, and why you should always read past the title and the abstract. Björn Brembs noticed this paper coming out in Current Biology on fruit fly receptors and behavior, whose abstract claims that "blocking synaptic output from octopamine neurons inverts the valence assigned to CO2 and elicits an aversive response in flight". As Brembs puts it:

We currently have a few projects in our lab that target these octopamine neurons, so this was a potentially very important finding. It was my postdoc, Julien Colomb, who spotted the problem with this statement first. In fact, if it wasn't for Julien, I might have never looked at the data myself, as I know the technique and I know and trust the lab the paper was from. I probably would just have laid the contents of the abstract to my memory and cited the paper where appropriate, as the results confirmed our data and those in the literature (a clear case of confirmation bias on my part).

When you look harder, you find that yes, the genetically manipulated flies do seem averse to carbon dioxide plumes. But when you check the control experiments, you find that the two transgenes added to the flies (independent of the change to the octopamine system that's the subject of the paper) both decrease the tropism for CO2. So there's really no way of knowing what the effect of both of them might be, octopamine signaling or not, and you might well suspect that the two of them together could hose up the carbon dioxide response without invoking the receptor pathways at all.

As Brembs says, though, the authors aren't trying to hide this. It's in the body of their paper. Abstract be damned, the paper itself states:

"We note that the Tdc2-GAL4/+ driver line does not spend a significantly greater amount of time in the CO2 plume by comparison to air, but this line, as well as the UAS-TNT/+ parent line, spends significantly more time in the CO2 plume in comparison to their progeny. Therefore, this experimental result cannot be fully attributable to the genetic background."

No, not fully attributable at all, especially if the progeny show some sort of additive effect of the two transgenes. Of course, if you water down your conclusions too much, you might not get the paper into as good a journal as you'd like. I'll let Brembs sum up:

To make this unambiguously clear: I can't find any misconduct whatsoever in this paper, only clever marketing of the sort that occurs in almost every 'top-journal' paper these days and is definitely common practice. On the contrary, this is exactly the behavior incentivized by the current system, it's what the system demands, so this is what we get. It's precisely this kind of marketing we refer to in our manuscript, that is selected for in the current evolution of the scientific community. If you don't do it, you'll end up unemployed. It's what we do to stay alive.

If there's anyone out there who thinks that this doesn't go on in the chemistry literature, my advice is to please look around you a bit. This sort of thing goes on all the time, and I'd guess that most of us automatically dial down the statements in paper titles and abstracts as we read them, without even realizing any more that we're doing so. But in a case like this (and there are many others), even that process will still let erroneous conclusions into your head. And we all have enough of those already.

Comments (6) + TrackBacks (0) | Category: The Scientific Literature

The Hard Targets: How Far Along Are We?


Posted by Derek

I wrote here about whole classes of potential drug targets that we really don't know how to deal with. It's been several years since then, and I don't think that the situation has improved all that much. (In 2011 I reviewed a book that advocated attacking these as a way forward for drug discovery).

Protein-protein interactions are still the biggest of these "undruggable targets", and there has been some progress made there. But I think we still don't have much in the way of general knowledge in this area. Every PPI target is its own beast, and you get your leads where you can, if you can. Transcription factors are the bridge between these and the protein-nucleic acid targets, which have been even harder to get a handle on (accounting for their appearance on lists like this one).

There are several chicken-and-egg questions in these areas. Getting chemical matter seems to be hard (that's something we can all agree on). Is that because we don't have compound collections that are biased the right way? If so, what the heck would the right way look like? Is it because we have trouble coming up with good screening techniques for some of these targets? (And if so, what are we lacking?) How much of the slower progress in these areas has been because of their intrinsic difficulty, and how much has been because people tend to avoid them (because of their, well, intrinsic difficulty)?

I ask these questions because for years now, a lot of people in the industry have been saying that we need to get more of a handle on these things, because the good ol' small-molecule binding sites are getting scarcer. Am I right to think that we're still at the stage of telling each other this, or are there advances that I haven't kept up with?

Comments (14) + TrackBacks (0) | Category: Drug Assays | Drug Industry History

An Anniversary


Posted by Derek

I wanted to repost an old entry of mine, from back in 2002 (!). It's appropriate this week, and just as I was in 2002, I'm a couple of days late with the commemoration:

I missed a chance yesterday to note an anniversary. Giordano Bruno was something of a crank, not normally the sort of person I'd be commemorating. But in his time, it didn't take very much to be considered either of those, or worse, and we have to make allowances.

He was headstrong. We can see now that he was sometimes eerily right, other times totally wrong. Either way, many of these strongly held positions were sure sources of trouble for anyone who advocated them. All living things were made up of matter, and that matter was the same across the universe - that one was not going to go over well in the late 16th century.

There was more. The stars, he said, were nothing more than other suns, and our sun was nothing more than a nearby star. He saw no reason why these other suns should not have planets around them, and no reason why those planets should not have life: "Innumerable suns exist; innumerable earths revolve around these suns in a manner similar to the way the seven planets revolve around our sun. Living beings inhabit these worlds."

He went on at length. And as I said, much of it was, by scientific standards, mystical rot. His personality was no help whatsoever in getting his points across. He appears to have eventually gotten on the nerves of everyone he dealt with. But no one deserves to pay what he did for it all.

Bruno was excommunicated and hauled off in chains. He spent the next several years in prison, and was given chances to recant up until the very end. He refused. On February 19th, 1600, he was led into the Campo dei Fiori plaza in Rome, tied to a post, and burned to death in front of a crowd.

Mystic, fool, pain in the neck. I went out tonight to see Saturn disappear behind the dark edge of the moon, putting the telescope out on the driveway and calling my wife out to see. Then I came inside, sat down at my computer, wrote exactly what I thought, and put it out for anyone who wanted to read it around the world. While I did all that, I remembered that things haven't always been this way, haven't been this way for long at all, actually. And resolved to remember to enjoy it all as much as I can, and to remember those who never got to see it.

Comments (6) + TrackBacks (0) | Category: Who Discovers and Why

February 20, 2013

A New Old Diabetes and Obesity Drug Candidate


Posted by Derek

Obesity is a therapeutic area that has broken a lot of hearts (and wallets) over the years. A scroll back through this category will show some of the wreckage, and there's plenty more out there. But hope does that springing-eternal thing that it does, and there's an intriguing new possibility for a target in this area. Alan Saltiel of Michigan (whose group has had a long presence in this sort of research), along with a number of other well-known collaborators, reports work on the inflammation connection between diabetes and obesity:

Although the molecular events underlying the relationship between obesity and insulin resistance remain uncertain, numerous studies have implicated an inflammatory link. Obesity produces a state of chronic, low-grade inflammation in liver and fat accompanied by the local secretion of cytokines and chemokines that attenuate insulin action. Knockout or pharmacological inhibition of inflammatory pathways can disrupt the link between genetic- or diet-induced obesity and insulin resistance, suggesting that local inflammation is a key step in the generation of cellular resistance to important hormones that regulate metabolism.

Saltiel's lab had already implicated IKK-epsilon as a kinase involved in this pathway in obese mouse models, and they've been searching for small-molecule inhibitors of it. As it turns out, a known compound (amlexanox) with an uncertain mechanism of action is such an inhibitor. It's best-known, if it's known at all, as a topical canker sore treatment, and has been around since at least the early 1990s.

Administration of this selective TBK1 and IKK-ε inhibitor to obese mice produces reversible weight loss and improved insulin sensitivity, reduced inflammation and attenuated hepatic steatosis without affecting food intake. These data suggest that IKK-ε and TBK1 are part of a counterinflammatory process that sustains energy storage in the context of insulin resistance. Disruption of this process by amlexanox thus increases adaptive energy expenditure and restores insulin sensitivity. Because of the apparent safety of this drug in patients, we propose that it undergo study for the treatment of obesity, type 2 diabetes and nonalcoholic fatty liver disease in patients.

I don't see why not. The compound does seem to be absorbed after oral dosing (most of the topical paste ends up going down into the stomach and intestines), and about 17% is excreted unchanged in the urine. You'd think some sort of oral formulation could be worked out, given those numbers. It looks like a low-micromolar inhibitor, and is selective against a kinase panel, which is good news. And treatment of mice on a high fat diet prevented weight gain, while not altering food intake. Their insulin sensitivity improved, as did the amount of fat in the liver tissue. Giving the compound to already-obese mice (whether made obese through diet or genetically predisposed (ob/ob) animals) caused the same effect. Metabolic cage studies showed that increased energy expenditure seemed to be the mechanism (as you'd think - thermodynamics will only give you so many ways of losing weight while eating the same amount of food, and the obvious alternative mechanism might not be very popular).

Just how the compound does all this is somewhat mysterious:

The precise mechanisms by which amlexanox produces these beneficial effects in obese rodents have not yet been completely elucidated. Although amlexanox is known to be a mast cell stabilizer of unknown mechanism [20], and depletion of mast cells may have beneficial metabolic effects [59], most of the in vivo and in vitro evidence points to a role for the drug in increasing expenditure of energy while reducing its storage in adipocytes and hepatocytes. Furthermore, the lack of a phenotype in wild-type mice reconstituted with Ikbke knockout bone marrow indicates that the role of IKK-ε in bone marrow-derived cells such as mast cells and macrophages is less important than its role in other cell types such as adipocytes and hepatocytes. Although IKK-ε and TBK1 expression is elevated as part of the inflammatory program downstream of NF-κB, the kinase targets of the drug do not seem to be direct participants in the increased inflammatory program. In fact, the reduced inflammation observed in vivo with amlexanox treatment may be an indirect effect of improved metabolic disease or, perhaps, of elimination of a feedback pathway that maintains inflammation at low levels such that inflammation is permitted to resolve. Moreover, despite the fact that administration of amlexanox to obese mice restores insulin sensitivity, these compounds are not direct insulin sensitizers in vitro.

This level of unworkedoutness will surely interest some companies in taking a look at this, and if proof-of-concept can be found with amlexanox itself, a more potent inhibitor would also be something to search for. I have just one worry, though (he said, in his Peter Falk voice).

We were just talking around here about how mouse models of inflammation are probably useless, were we not? So it would be good news if, as speculated above, the inflammation component of this mechanism were to be an effect, not a cause. A direct attack on metabolic syndrome inflammation in mouse models is something that I'd be quite wary of, given the recent reports. But this might well escape the curse. Worth keeping an eye on!

Comments (12) + TrackBacks (0) | Category: Diabetes and Obesity

February 19, 2013

The Wages of Copy-Pasting


Posted by Derek

A few weeks ago I mentioned this situation regarding work by Prof. Xi Yan. Two recent papers seem to have been substantially copy-pasted from earlier work published by completely different groups. Now See Arr Oh has some details on what happens to you when you have the nerve to do that in a journal of the Royal Society of Chemistry: why, you have to publish a note regretting that you didn't cite the paper you copied from, that's what. "The authors apologize for this oversight."

There, that should square things up. Right? See Arr Oh is not very happy about this response, and I don't blame him for a minute. The RSC editors seem to be ignoring the word-for-word aspect of a substantial part of the new paper; it really is a paste job, and you're not supposed to do that. And the only problem they have is that the paper being stolen from wasn't cited? Oversight, my various body parts.

Comments (20) + TrackBacks (0) | Category: The Dark Side | The Scientific Literature

More From Blog Syn


Posted by Derek

I wanted to mention that there are two more entries up on Blog Syn: one of them covering this paper on alkenylation of pyridines. (It's sort of like a Heck reaction, only you don't have to have an iodo or triflate on the pyridine; it just goes right into the CH bond). The short answer: the reaction works, but there are variables that seem crucial for its success that were under-reported in the original paper (and have been supplied, in part, by responses from the original author to the Blog Syn post). Anyone thinking about running this reaction definitely needs to be aware of this information.

The latest is a re-evaluation of an older paper on the use of IBX to (among many other things) oxidize arylmethane centers. It's notable for a couple of reasons: it's been claimed that this particular reaction completely fails across multiple substrates, and the reaction itself is from the Nicolaou lab (with Phil Baran as a co-author). Here's the current literature situation:

A day in the library can save you a week in the lab, so let’s examine this paper’s impact using SciFinder: it's been cited 179 times from 2002-2013. Using the “Get Reactions" tool, coupled with SciFinder’s convenient new “Group by Transformation” feature, we identified 54 reactions from the citing articles that can be classified as “Oxidations of Arylmethanes to Aldehydes/Ketones" (the original reaction's designation). Of these 54 reactions, only four (4) use the conditions reported in this paper, and all four of those come from one article: Binder, J. T.; Kirsch, S. F. Org. Lett. 2006, 8, 2151–2153, which describes IBX as “an excellent reagent for the selective oxidation to generate synthetically useful 5-formylpyrroles.” Kirsch's yields range from 53-79% for relatively complex substrates, not too shabby.

I'll send you over to Blog Syn for the further details, but let's just say that not many NMR peaks are being observed around 10 ppm. Phil Baran himself makes an appearance with more details about his recollection of the work (to his credit). Several issues remain, well, unresolved. (If any readers here have ever tried the reaction, or have experience with IBX in general, I'm sure comments would be very welcome over there as well).

Comments (73) + TrackBacks (0) | Category: Chemical News | The Scientific Literature

February 18, 2013

What Would Go Into the Chemistry Museum Displays, Anyway?


Posted by Derek

Well, Cambridge is quiet today, as are many workplaces across the US. My plan is to go out for some good Chinese food and then spend the afternoon at the museum with my family; my kids haven't been there for at least a couple of years now.

And that brings up a thought that I know many chemists have had: how ill-served chemistry is by museums, science centers, and so on. Physics has a better time of it, or at least some parts of it. You can demo Newtonian mechanics with a lot of hands-on stuff, and there's plenty to do with light, electricity and magnetism and so on. (Quantum mechanics and particle physics, well, not so much). Biology at least can have some live creatures (large and small), and natural-history type exhibits, but its problems for public display really kick in when it shades over to biochemistry.

Chemistry, though, is a tough sell. Displays of the elements aren't bad, but many of them are silvery metals that can't be told apart by the naked eye. Crystals are always good, so perhaps we can claim some of the mineral displays for our own. But physical chemistry, organic chemistry, and analytical chemistry are difficult to show off. The time scales tend to be either too fast or too slow for human perception, or the changes aren't noticeable except with the help of instrumentation. There are still some good demonstrations, but many of these have to be run with freshly prepared materials, and by a single trained person. You can't just turn everyone loose with the stuff, and it's hard to come up with an automated, foolproof display that can run behind glass (and still attract anyone's interest). An interactive "add potassium to water to see what happens" display would be very popular, but rather hard to stage, both practically and from an insurance standpoint. You'd also run through a lot of potassium, come to think of it.

Another problem is that chemistry tends to deal with topics that people either don't see, or don't notice. Cooking food, for example, is sheer chemistry, but no one thinks of it like that - well, except Harold McGee and now the molecular gastronomy people. (Speaking of which, if any of you are crazy enough to order this from Amazon, I'll be very impressed indeed). Washing with soap or detergent, starting a fire, using paint or dye - there are plenty of everyday processes that illustrate chemistry, but they're so familiar that it's hard to use them as demonstrations. Products as various as distilled liquor, plastic containers, gasoline, and (of course) drugs of all sorts are pure examples of all sorts of chemical ideas, but again, it's hard to show them as such. They're either too well-known (think of Dustin Hoffman being advised to go into plastics), or too esoteric (medicinal chemistry, for most people).

So I started asking myself, what would I do if I had to put up some chemistry exhibits in a museum? How would I make them interesting? For med-chem, I'm imagining some big video display that starts out with a molecule and lets people choose from some changes they can make to it (oxidation, adding a fluorine, changing a carbon to nitrogen, etc.). The parts of the molecule where these changes are allowed could glow or something when an option is chosen, then when you make the change, the structure snazzily shifts and the display tells you if you made a better drug, a worse one, something inactive, or a flat-out poison. You'd have to choose your options and structures carefully, but you might be able to come up with something.
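(The software side of such a display wouldn't even be the hard part. Here's a rough sketch - RDKit, a made-up starting structure, and an answer key left for the medicinal chemists to argue over - of the carbon-to-nitrogen swap idea. Picking structures and verdicts that are both honest and fun to poke at is another matter.)

```python
# Rough sketch of a back end for the imagined exhibit: take a made-up starting
# structure, let the visitor swap one aromatic carbon for nitrogen, and look
# the result up in a canned answer key. Structure and verdicts are invented.
from rdkit import Chem

start = Chem.MolFromSmiles("c1ccc2[nH]ccc2c1CC(N)=O")   # made-up indole acetamide
answer_key = {
    # canonical SMILES -> what the display would say; in a real exhibit this
    # would be filled in (and argued over) by medicinal chemists
}

def swap_c_for_n(mol, idx):
    """Swap aromatic carbon idx for nitrogen; return None if the result is nonsense."""
    edit = Chem.RWMol(mol)
    atom = edit.GetAtomWithIdx(idx)
    if atom.GetAtomicNum() != 6 or not atom.GetIsAromatic():
        return None
    atom.SetAtomicNum(7)
    try:
        Chem.SanitizeMol(edit)
    except Chem.rdchem.MolSanitizeException:
        return None            # chemically invalid edit: the display says "try again"
    return edit.GetMol()

for idx in range(start.GetNumAtoms()):
    product = swap_c_for_n(start, idx)
    if product is None:
        continue
    smi = Chem.MolToSmiles(product)
    verdict = answer_key.get(smi, "nobody knows - that's what assays are for")
    print(f"atom {idx}: {smi} -> {verdict}")
```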

But other things would just have to be seen and demonstrated, which is tricky. Seen on a screen, the Belousov-Zhabotinskii reaction just looks like a special effect, and a rather cheap one at that. Seeing it done by mixing up real chemicals and solutions right in front of you is much more impressive, but it's hard for me to think of a way to do that which would be done often enough (and on large enough scale) for people to see it, and wouldn't cost too much to do (supplies, staff, flippin' insurance, etc.)

If you had to build out the chemistry hallway at the museum, then, what would you fill it with? Suggestions welcome.

Comments (70) + TrackBacks (0) | Category: Chemical News

February 15, 2013

Pfizer's CovX Closing?


Posted by Derek

A Friday night blog entry is a rare event around here, but I've had a report that Pfizer has been closing down their CovX unit in San Diego today. It is (or was) the peptide therapeutics part of the company. This makes this part of the Pfizer web site a bit. . .inoperative:

CovX and Rinat are two biotechnology companies acquired by Pfizer that are currently operating as independent units within Worldwide R&D. This operating model allows CovX and Rinat to maintain their unique cultures and scientific approaches while having full access to Pfizer's world-class capabilities and resources.

So much for that. Can anyone confirm the report?

Comments (35) + TrackBacks (0) | Category: Business and Markets

The Finest Blue in the Lab


Posted by Derek

CuSO4.jpg
For Friday afternoon, a bit of chem-geekery. I recently had occasion to use some copper sulfate, and the bottle I had was marked "large crystals" of the pentahydrate. I have loved the color of that stuff since I was a kid, and still do. Powdered, you lose a lot of the effect, but the chunks of crystalline stuff are the very definition of blue. (Photo from egeorge96 on Flickr).

Does anyone know a better one? That's my candidate for the solid phase. In solution, the complex of copper(II) and pyridine is a good one, a bit more towards royal blue/purple. You can definitely see the change when the pyridine hits it. I can't find a photo of that one on the web; if anyone has one, I'll be glad to post it. More colors to come on other slow Friday afternoons.

Update: a rare gas-phase blue (!) from the comments. Never seen that before!

And another one from the comments: here's someone who really, really, really likes copper sulfate. Here's how it was done.

Comments (42) + TrackBacks (0) | Category: Life in the Drug Labs

ABT-199 Clinical Trial Suspended (Updated)


Posted by Derek

Abbott - whoops, pardon me, I mean AbbVie, damn that name - has been developing ABT-199, a selective Bcl-2-targeted oncology compound for CLL (chronic lymphocytic leukemia). Unlike some earlier shots in this area (ABT-263, navitoclax), it appeared to spare platelet function, and was considered a promising drug candidate in the mid-stage clinical pipeline.

Not any more, perhaps. Clinical work has been suspended after a patient death due to tumor lysis syndrome. This is a group of effects caused by sudden breakdown of the excess cells associated with leukemia. You get too much potassium, too much phosphate, too much uric acid, all sorts of things at once, which lead to many nasty downstream events, among them irreversible kidney damage and death. So yes, this can be caused by a drug candidate working too well and too suddenly.

The problem is, as the Biotech Strategy Blog says in that link above, that this would be more understandable in some sort of acute leukemia, as opposed to CLL, which is the form that ABT-199 is being tested against. So there's going to be some difficulty figuring out how to proceed. My guess is that they'll be able to restart testing, but that they'll be creeping up on the dosages, with a lot of blood monitoring along the way, until they get a better handle on this problem - if a better handle is available, that is. ABT-199 looks too promising to abandon, and after all, we're talking about a fatal disease. But this is going to slow things down, for sure.

Update: I've had email from the company, clarifying things a bit: "While AbbVie has voluntarily suspended enrollment in Phase 1 trials evaluating ABT-199 as a single agent and in combination with other agents such as rituximab, dosing of active patients in ABT-199 trials is continuing. Previous and current trials have shown that dose escalation methods can control tumor lysis syndrome and we have every expectation that the trials will come off of clinical hold and that we will be able to initiate Phase 3 trials in 2013, as planned."

Comments (18) + TrackBacks (0) | Category: Cancer | Clinical Trials | Toxicology

Merck Finally Settles Over Vytorin


Posted by Derek

You may remember that Merck and Schering-Plough took a lot of fire for the way that they released the clinical data for one of the key Vytorin trials (ENHANCE). The numbers were delayed for months, and when they were finally released, they were. . .problematic for the drug. And for the companies' stocks.

The institutional shareholders did not take that one well, and a number of them filed suit. This week it was announced that Merck has settled for $688 million, while admitting no wrongdoing. This settles the suit, but it isn't going to settle anyone's nerves, as Matthew Herper rightly observes:

Merck admitted no liability or wrongdoing in the decision, and continues to believe its handling of the study was proper. But the settlement could make investors nervous anyway. One of the reasons Vytorin has never recovered (sales of the pill are $1.5 billion, $1 billion less than before the results were released, but that partly reflects a price increase) is that Merck’s other clinical trials, so far, have never again compared Vytorin to Zocor to look for differences in real cardiovascular problems like heart attack and stroke. Instead, the other big trial of Vytorin compared it to placebo in patients who had a heart valve that did not close fully.

But Merck is doing that big Vytorin versus Zocor study, a giant clinical trial called IMPROVE-IT. Results have been delayed several times, and probably won’t come until next year. But the company has said that the independent board that is monitoring the results of the trial will meet in March. They could decide to stop the trial if it has already proved more effective, if Vytorin appears more dangerous than Zocor, or if there is no hope that Vytorin will prove more effective.

I doubt that the trial will be stopped, but at this point I'll be surprised if it yields strong enough data to vindicate Vytorin, either. The delays seen in the trial so far make that look like a very outside chance. My guess is "beneficial effect, but not as much as you'd want", which won't satisfy anyone.

Comments (3) + TrackBacks (0) | Category: Cardiovascular Disease

February 14, 2013

How Can There Be a Shortage of Scientists And An Excess At The Same Time?


Posted by Derek

I wanted to come back to the topic of whether we have (1) too many unemployed (or underemployed) scientists and technology people in the US, or (2) a critical shortage of qualified people that's leading companies to complain that they can't fill positions. Can we really have both at the same time? All this bears on (3): should we revise the visa rules to let in more technically qualified immigrants?

The other day I wrote about a PricewaterhouseCoopers (PwC, as they would have it) report on this very issue. I'll pick up where that post left off. One thing to notice about the PwC report is that it's aimed at HR departments, and it tells them some of the things they want to hear - that they're important, that they're unappreciated, and that they have a crucial role to play in today's hiring environment. This is not just flattery; this is advertising - aspirational advertising, to be more accurate. That's the technique (used since forever) of pitching an ad to a slightly more elevated group (socioeconomically) than the one it's actually aimed at. Think of mail-order catalogues and credit-card offers; that's where you see this in the crudest form. The idea is to make the recipients think "Wow, they must think I'm one of those people", or (even better) "Wow, I must really be one of those people". That is, the sort of people who shop for this pricey merchandise, or who think nothing of paying the annual fee for a MatteBlackAnodizedPlatinum Card, what have you, because that's the high-end life they lead.

What's PwC selling, then? Why, consulting services to all these HR departments, to help them navigate their extremely important, critical-like-never-before jobs in this extraordinary environment. The HR people have their morale improved, PwC gets some new accounts, and everyone's happy. But the report is still a pure example of the "critical lack of good candidates" idea, being put to more immediate use by a company that sees an opportunity to trade on what's saturating the air right now.

But how can there be a shortage and an excess at the same time? Part of the answer might be found in the work of Peter Cappelli of the Wharton School at Penn. A reader sent that link along to me the other day, and it's well worth a look. Cappelli is the author of Why Good People Can't Get Jobs, and his take is that employers are largely to blame for this situation:

. . .Today’s CEOs regularly blame schools and colleges for their difficulties in finding adequately prepared employees. The complaint shows up in survey after survey, as Cappelli shows in his book, and it is substantially more common among American employers than their peers in most other developed and developing economies.

But do these surveys “show that the United States is among the world leaders in skills gaps,” Cappelli asks, “or simply in employer whining and easy media acceptance of employer complaints?”

He thinks a body of lesser-reported studies contains the answer. “If you look at the studies of hiring managers and what they want, they’re not complaining about academic skills,” Cappelli says. “You hear the business spokespeople saying this, but the actual hiring managers are not saying this now. And in fact they’ve never, in modern times, said that.”

And Cappelli also has pointed out that this view of the world is appealing to several constituencies at the same time, among them, people who advocate school reform and changes in research funding, social reformers of several different kinds, and employers who would rather place the blame for some of their problems on outside factors. There's a reason this idea keeps circulating around - there are a lot of extraneous reasons to keep believing it.

He goes on to decry what he calls the "Home Depot" approach to hiring:

In a 2011 op-ed article for The Wall Street Journal, Cappelli remarked on a telling statistic from the Silicon Valley tech boom of the 1990s: only 10 percent of the people in IT jobs had IT-related degrees. But a lot of the same people would probably have a hard time landing similar jobs today, because employers have increasingly adopted what Cappelli calls “a Home Depot view of the hiring process, in which filling a job vacancy is seen as akin to replacing a part in a washing machine.

“We go down to the store to get that part,” he explains, “and once we find it, we put it in place and get the machine going again. Like a replacement part, job requirements have very precise specifications. Job candidates must fit them perfectly or the job won’t be filled and business can’t operate.”

He lays some of the blame for this on software-based hiring practices, the CV-scanning programs that look for the keywords that supposedly have to be present for a candidate to be considered. (Many readers here may have run into this problem; chemistry and its associated disciplines are an unfortunately good fit for this approach). And here's where some sympathy for the HR people might be appropriate: these sorts of "solutions" are often used when there aren't enough people (or enough time, or money) to do a good job of screening applicants. That's not to say that there aren't some HR people who truly believe that this is the best way to do things, but some of them also have their backs to their own walls.
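To see why the keyword-matching approach draws so much criticism, here's a toy sketch of how such a filter behaves - the job requirements, the candidate text, and the pass threshold are all invented for illustration, not taken from any real system:

# A toy CV-keyword filter of the kind criticized above. Everything here is made up.
REQUIRED_KEYWORDS = {"suzuki coupling", "hplc", "scale-up", "glp"}

def keyword_screen(resume_text, required=REQUIRED_KEYWORDS, min_hits=4):
    """Pass a resume only if it contains at least min_hits of the required keywords verbatim."""
    text = resume_text.lower()
    hits = sum(1 for kw in required if kw in text)
    return hits >= min_hits

resume = ("Ten years of medicinal chemistry: palladium-catalyzed cross-couplings, "
          "prep and analytical HPLC, multi-kilogram process work under GMP.")

# The candidate plainly has the skills, but "suzuki coupling", "scale-up", and "glp"
# never appear verbatim, so the replacement-part filter turns them away.
print(keyword_screen(resume))   # False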

There's another part of that article on Cappelli that takes us to the H1B visa issue:

When there are three or four job-seekers for every vacancy—and some postings draw applicants by the hundred—firms have an understandable incentive to wait for a dream candidate to show up. And ideally, a dream candidate who expresses a low salary requirement.

In (a recent) Manpower survey, 11 percent of the employers reporting skill shortages chalked it up to applicants unwilling to accept job offers at the wages companies were willing to pay.

I have the impression that much of the push to open up the technical-worker visas is coming from Silicon Valley and the IT world in general. (Someone correct me if I'm wrong). And it's also my impression that there are already a lot of people in that job market looking for work - again, if I'm mistaken about this, I'll revise this post. So one (not very charitable) explanation for a drive to bring in more job candidates from abroad is that they will be cheaper to hire, and that employers will have more leverage over them because of their visa situation. Plausible, or not? Update: apparently all too plausible - see this New York Times piece.

Now, it pains me to write that sort of thing, because we could head right off into the whole immigration-reform swamp, which is concerned with a lot of issues that are peripheral to this discussion. (Undocumented workers from Central America, for example, are not a big factor in IT or chemistry hiring). And I think that the US should indeed admit immigrants, that doing so has been one of the big factors in making us the nation we are (the good parts, I mean), and that if we're going to let people in, we should strongly, strongly bias the process towards smart, entrepreneurial, hard-working ones. So I have a natural sympathy towards the idea of bringing in technically and scientifically trained people.

But not to use them as a source of cheap labor that can be leaned on because of their immigrant status. I don't like that idea much at all, not for what it does to the people who are already here, and not for what it does to the ones who would come here looking for something better, either. And this illustrates the tangle of mixed motives, declared and otherwise, that this whole issue is stuck in. The real reasons people advocate the positions they do in this area can be hard to work out, and that has the nasty side effect of giving everyone plenty of opportunities to accuse others of acting in bad faith, etc. It's a mess.

So, in the same way that I tried to dig into the motives of PhRMA the other day, one can try to look at motivations here. Employers, in fact, could well have an interest in keeping the whole "We can't find good people" line of thinking alive, which is something I mentioned when I brought up the PricewaterhouseCoopers report. It gives upper management someone else to blame, and in some cases it can be used to keep wages down. As I've said here before, the idea that companies here in the US will hire workers here if they're forced to is, I think, a fantasy. They'll keep those positions open and complain about it instead.

And although this is a particular problem for Silicon Valley and that industry, biopharma is not immune. Not at all.

Comments (60) + TrackBacks (0) | Category: Business and Markets

February 13, 2013

Mouse Models of Inflammation Are Basically Worthless. Now We Know.


Posted by Derek

We go through a lot of mice in this business. They're generally the first animal that a potential drug runs up against: in almost every case, you dose mice to check pharmacokinetics (blood levels and duration), and many areas have key disease models that run in mice as well. That's because we know a lot about mouse genetics (compared to other animals), and we have a wide range of natural mutants, engineered gene-knockout animals (difficult or impossible to do with most other species), and chimeric strains with all sorts of human proteins substituted back in. I would not wish to hazard a guess as to how many types of mice have been developed in biomedical labs over the years; it is a large number representing a huge amount of effort.

But are mice always telling us the right thing? I've written about this problem before, and it certainly hasn't gone away. The key things to remember about any animal model are that (1) it's a model, and (2) it's in an animal. Not a human. But it can be surprisingly hard to keep these in mind, because there's no way for a compound to become a drug other than going through the mice, rats, etc. No regulatory agency on Earth (OK, with the possible exception of North Korea) will let a compound through unless it's been through numerous well-controlled animal studies, for short- and long-term toxicity at the very least.

These thoughts are prompted by an interesting and alarming paper that's come out in PNAS: "Genomic responses in mouse models poorly mimic human inflammatory diseases". And that's the take-away right there, which is demonstrated comprehensively and with attention to detail.

Murine models have been extensively used in recent decades to identify and test drug candidates for subsequent human trials. However, few of these human trials have shown success. The success rate is even worse for those trials in the field of inflammation, a condition present in many human diseases. To date, there have been nearly 150 clinical trials testing candidate agents intended to block the inflammatory response in critically ill patients, and every one of these trials failed. Despite commentaries that question the merit of an overreliance of animal systems to model human immunology, in the absence of systematic evidence, investigators and public regulators assume that results from animal research reflect human disease. To date, there have been no studies to systematically evaluate, on a molecular basis, how well the murine clinical models mimic human inflammatory diseases in patients.

What this large multicenter team has found is that while various inflammation stresses (trauma, burns, endotoxins) in humans tend to go through pretty much the same pathways, the same is not true for mice. Not only do they show very different responses from humans (as measured by gene up- and down-regulation, among other things), they show different responses to each sort of stress. Humans and mice differ in what genes are called on, in their timing and duration of expression, and in what general pathways these gene products are found. Mice are completely inappropriate models for any study of human inflammation.

And there are a lot of potential reasons why this turns out to be so:

There are multiple considerations to our finding that transcriptional response in mouse models reflects human diseases so poorly, including the evolutional distance between mice and humans, the complexity of the human disease, the inbred nature of the mouse model, and often, the use of single mechanistic models. In addition, differences in cellular composition between mouse and human tissues can contribute to the differences seen in the molecular response. Additionally, the different temporal spans of recovery from disease between patients and mouse models are an inherent problem in the use of mouse models. Late events related to the clinical care of the patients (such as fluids, drugs, surgery, and life support) likely alter genomic responses that are not captured in murine models.

But even with all the variables inherent in the human data, our inflammation response seems to be remarkably coherent. It's just not what you see in mice. Mice have had different evolutionary pressures over the years than we have; their heterogeneous response to various sorts of stress is what's served them well, for whatever reasons.

There are several very large and ugly questions raised by this work. All of us who do biomedical research know that mice are not humans (nor are rats, nor are dogs, etc.) But, as mentioned above, it's easy to take this as a truism - sure, sure, knew that - because all our paths to human go through mice and the like. The New York Times article on this paper illustrates the sort of habits that you get into (emphasis below added):

The new study, which took 10 years and involved 39 researchers from across the country, began by studying white blood cells from hundreds of patients with severe burns, trauma or sepsis to see what genes are being used by white blood cells when responding to these danger signals.

The researchers found some interesting patterns and accumulated a large, rigorously collected data set that should help move the field forward, said Ronald W. Davis, a genomics expert at Stanford University and a lead author of the new paper. Some patterns seemed to predict who would survive and who would end up in intensive care, clinging to life and, often, dying.

The group had tried to publish its findings in several papers. One objection, Dr. Davis said, was that the researchers had not shown the same gene response had happened in mice.

“They were so used to doing mouse studies that they thought that was how you validate things,” he said. “They are so ingrained in trying to cure mice that they forget we are trying to cure humans.”

“That started us thinking,” he continued. “Is it the same in the mouse or not?”

What's more, the article says that this paper was rejected from Science and Nature, among other venues. And one of the lead authors says that the reviewers mostly seemed to be saying that the paper had to be wrong. They weren't sure where things had gone wrong, but a paper saying that murine models were just totally inappropriate had to be wrong somehow.

We need to stop being afraid of the obvious, if we can. "Mice aren't humans" is about as obvious a statement as you can get, but the limitations of animal models are taken so much for granted that we actually dislike being told that they're even worse than we thought. We aren't trying to cure mice. We aren't trying to make perfect disease models and beautiful screening cascades. We aren't trying to perfectly match molecular targets with diseases, and targets with compounds. Not all the time, we aren't. We're trying to find therapies that work, and that goal doesn't always line up with those others. As painful as it is to admit.

Comments (50) + TrackBacks (0) | Category: Animal Testing | Biological News | Drug Assays | Infectious Diseases

February 12, 2013

Do We Really Know the Cause for Over 4500 Diseases?


Posted by Derek

Since I mentioned the NIH in the context of the Molecular Libraries business, I wanted to bring up something else that a reader sent along to me. There's a persistent figure that's floated whenever the agency talks about translational medicine: 4500 diseases. Here's an example:

Therapeutic development is a costly, complex and time-consuming process. In recent years, researchers have succeeded in identifying the causes of more than 4,500 diseases. But it has proven difficult to turn such knowledge into new therapies; effective treatments exist for only about 250 of these conditions.

It shows up again in this paper, just out, and elsewhere. But is it true?

Do we really know the causes of 4,500 diseases? Outside of different cancer cellular types and various infectious agents, are there even 4,500 diseases, total? And if not, how many are there, anyway, then? I ask because that figure seems rather high. There are a lot of single-point-mutation genetic disorders to which we can pretty confidently assign a cause, but some of them (cystic fibrosis, for example) are considered one disease even though they can be arrived at through a variety of mutations. Beyond that, do we really know the absolute molecular-level cause of, say, type II diabetes? (We know a lot of very strong candidates, but the interplay between them, now, there's the rub). Alzheimer's? Arthritis? Osteoporosis? Even in the cases where we have a good knowledge of what the proximate cause of the trouble is (thyroid insufficiency, say, or Type I diabetes), do we really know what brought on that state, or how to prevent it? Sometimes, but not very often, is my impression. So where does this figure come from?

The best guess is here, GeneMap. But read the fine print: "Phenotypes include single-gene mendelian disorders, traits, some susceptibilities to complex disease . . . and some somatic cell genetic disease. . ." My guess is that a lot of what's under that banner does not rise to "knowing the cause", but I'd welcome being corrected on that point.

Comments (22) + TrackBacks (0) | Category: Biological News

Pfizer Slowly Shrinks in Groton


Posted by Derek

Here's the story, from Lee Howard of The Day, who's covered the company for years.

Pfizer had 4,500 employees - mostly scientists - at its Groton and New London campuses two years ago, when the New York-based company announced a major downsizing that would cut the local workforce to slightly less than 3,400. By June of last year, Pfizer reported that reductions were well under way, with about 3,700 employees remaining on the Groton campus.

Pfizer's response to a request last week for an update on the local jobs number initially indicated there were now slightly fewer than 3,150 Pfizer employees at the company's consolidated site in Groton - 250 fewer than had been anticipated when the local downsizing was announced. The company later amended the number, however, saying the initial report had neglected to count some personnel, and Pfizer gave a new census of about 3,300 employees, only a hundred less than what had been projected.

There were as many as 6,000 employees at one point, but it's been a long and bouncy ride since those days. The article says that Pfizer has been trying to find buyers for a number of vacant buildings (with, in this market and in that region, little success). Part of the Groton reduction is the move of the drug discovery people up to Cambridge. I go past the new building, still under construction, fairly often - it's right down the street from a gigantic hole in the ground that will be an expansion of the Novartis site. All of this construction recalls Levi Strauss getting rich during the California gold rush - not by doing anything so chancy as panning for gold, but by selling trousers to those who did. I've been in Cambridge for over five years now, and I have never yet traveled across it without going past some sort of academic/scientific construction site.

Comments (38) + TrackBacks (0) | Category: Business and Markets

The European Lead Factory


Posted by Derek

That's what they're calling this new initiative, so points for bravery. I wrote about this proposal here last year, and now more details have emerged. The good news is that the former Merck facilities (well, Organon via Schering-Plough) in Newhouse (Scotland) and Oss (the Netherlands) will be part of the effort, so those will be put to some good use.

". . . the consortium consists of 30 academic and corporate partners, and aims to fill company pipelines with promising drug candidates. . .the initiative will build and curate a collection of 500,000 molecules for screening, 300,000 of which will come from the seven large pharmaceutical partners. The rest (are) intended to cover classes of biologically active molecule that are poorly represented in current libraries. . .Starting this July or August, the pharmaceutical partners will be able to use the library — including molecules from their competitors — in their own drug screens. Any academic group or company can also propose assays to test molecules in the library for biological activity."

Interestingly, both the compounds and the assay results will be proprietary (first refusal) for the people/organizations requesting them. The plan is for the whole thing to pay for itself through milestone payments and screening-for-hire for groups that are not part of the consortium. That's a worthy goal, but it's going to be complicated. One thing you can bet on is that the compounds in the collection themselves will not be the eventual ones that head to the clinic, so you get a "ship of Theseus" problem when you try to decide what belongs to whom (and at what point it started belonging). Note that the NIH's Molecular Libraries Program, by contrast, is made up of nonproprietary compounds, which were always assumed to be just starting points. (It's been having its own problems, too - many of the starting-point compounds, it seems, were - at least at first - probably never going to be starting points for anyone, so there still won't be a clean head-to-head comparison between these two models).

And that brings up another difficulty with figuring out how things are going. I can certainly see why the results from the ELF will be proprietary, but that means that it may be some time before we can figure out whether it's providing anything worthwhile. The people running it will presumably have a better view.

Comments (8) + TrackBacks (0) | Category: Drug Assays

February 11, 2013

How Not to Do It: Chromium Trioxide


Posted by Derek

Note: this was a post on my old blog site, and never made the migration over to the current "In the Pipeline". I was reminded of it this morning, and thought I'd bring it more out into the light.

There are reports (updated here - DBL) that Mars may have hexavalent chromium compounds in its surface dust, which is already being brought up as a concern for future human exploration. I agree with comments I've seen that this is putting the cart in front of the horse a bit, but it also means that I probably wouldn't be a good candidate for the expedition. I've already had my lifetime's exposure to Cr(VI).

Back in grad school, I had an undergraduate assistant one summer, a guy who was pretty green. I'll refer to him by an altered form of his nickname, henceforth as Toxic Jim. I shouldn't be too hard on him, I guess: I was a summer undergrad in my time, too, and I wasn't a lot of help to anyone, either. But TJ did manage to furnish me with some of my more vivid lab stories in his brief time in my fume hood.

One morning I showed him how to make PCC. That's pyridinium chlorochromate for the non-organic chemists out there, an oxidizing agent that doesn't seem to be used as much as it was 15 or 20 years ago. Even in '85, you could buy it, but the freshly-made stuff was often better. It certainly looked nicer. Like all the Cr(VI) salts, it has a vivid color, in this case a flaming orange. I shouldn't say "flaming;" that's getting ahead of the story. . .

It's not hard to make. You take chromium trioxide, a vicious oxidant in itself which comes as clumpy fine purple crystals, and dissolve it in 6N hydrochloric acid. That's an easy solution to whip up, since it's just concentrated HCl out of the jug cut 1:1 with water. I had Toxic Jim do all this - weighing out the chromium compound, making the HCl. During that part I couldn't resist quoting the ancient adage, which works well in the East Arkansas accent of my youth: "Do like you oughter, add acid to water." Most chemists either remember that one, or they remember the syrupy conc. acids splattering all over their arm when they did it (once!) the other way around.

We set up a three-neck flask with an overhead stirrer to run this in. That's just a motor mounted above the flask, turning a shaft with a paddle on the end of it. Works well for really thick mixtures, which this was supposed to turn into. As things turned out, it was even thicker than planned, for a brief exciting interlude.

In went the HCl, out of a big Erlenmeyer flask, and in went the chromium trioxide. Here's where the wheels began to come off. Instead of a vivid red-orange solution, the stuff got dark and began to thicken. I could tell it was getting hot, too, since you could see the clear wavery solvent vapors coming out of the open necks of the flask. And that was wrong, too - you don't get that so much with water vapor. It's the mark of organic solvent fumes, with their different density and refractive index.

And so it was. TJ had indeed grabbed the wrong Erlenmeyer. Not the one he'd just mixed up the HCl in, but one from another part of the bench that contained ethyl acetate from a big chromatography run the night before. Ethyl acetate is a pretty poor substitute for hydrochloric acid, most of the time, when you stop to think about it.

Then the overhead stirrer began to bog down, which takes a mighty thick mixture to achieve. I hadn't added up what had happened at this point, but I knew that things were going wrong in all directions at once. I pulled the glass hood sash down some more, saying "I think you better stand back -" WHOOOOMPH!

And there it went! The whole reaction went up in a big fireball, which filled a good part of the hood and came roaring out of the gap in the front sash. I felt the heat roll over me, yelled something incoherent, and bolted for the safety shower. I didn't have to run up Toxic Jim's back, either: he was making for the door in championship time. Pulling the chain of the shower dumped a hundred gallons of ice water on me immediately, not that I needed any more waking up.

When I opened my eyes and took inventory, things weren't as bad as I thought. Limbs and appendages all present, head and facial hair still attached - though lightly singed and frizzed - skin not even sunburnt, although it (along with my lab coat) was generously splattered with green. That was what remained of the chromium trioxide. It was now the Cr(III) oxide, having given up three oxidation levels by turning the ethyl acetate into carbon dioxide, most likely. There were a few orange-brown spots of the Cr(VI) stuff, but those were mostly confined to the front of the lab coat, in a vivid line that showed where the hood sash had gotten pulled down to.

My hood wasn't looking its best. There was smoke hanging in the air, although that was getting pulled out. There was a huge stain of the green and brown chromium mixture all over the inside, thickest in the directions of the three open necks of the flask. Which was still intact - if I'd been foolish enough to set this up in a closed system, the whole thing would have gone up as Pyrex shrapnel. Even the ceiling had a line of gunk on it, from the thin gap in the hood sash assembly.

While I was taking this in, wondering what the hell had gone wrong, and wondering what I could possibly do to TJ that was worse than what he'd just gone through, the emergency crews arrived. It was a Saturday morning, but Bob across the hall saw the explosion and immediately dialed 911. In came the fire crews, trying to talk through their breathing apparatus: "Mumph heff deff umphh cafulteff. . " "What?" "We hear there's a casualty up here"

I put my hands on my hips, and gave them the full effect of my green spots, frizzed hair, and soaking wet lab coat: "That would be me."

Comments (30) + TrackBacks (0) | Category: How Not to Do It

2012's New Drugs


Posted by Derek

Thanks to Lisa Jarvis at C&E News, here's a chart (PDF) of the 39 drugs approved last year by the FDA. Last year was a good year, by almost any measure. The question as we go on will be whether this was a one-time spike, or the start of a long-awaited turnaround. If the latter, I think it will be as much of a regulatory phenomenon as a scientific one (faster reviews, etc.)

Comments (3) + TrackBacks (0) | Category: Regulatory Affairs

PhRMA And Why People Dislike the Drug Industry


Posted by Derek

John LaMattina takes off after PhRMA's effectiveness here at Forbes. His two points are release of clinical trial data and openness about consultant payments to physicians. And I agree with him on both of those - as I've said here many times, we're not going to regain anyone's trust until we stop giving people reasons to think that we're trying to hide things from them.

The problem, though, as LaMattina shows, is that PhRMA (the biggest industry association) doesn't seem especially interested in taking on these issues. Are they stupid, or short-sighted? Possibly. But when I see something like this going on, my assumption is that the people involved are rational actors who have made an informed decision. And that means that I'm somehow not looking at things the same way that they are.

My guess is that PhRMA sees the public perception of the drug industry as a comparatively minor problem. The thing is, even if everyone liked the drug industry just fine, sales of prescription drugs would be about the same. They're not purchased because people have good feelings about the companies; they're not all that discretionary. People take medicines grudgingly, for the most part, because they're trying to correct something that's gone wrong.

So what's PhRMA's major concern? Regulatory and legislative affairs. Our industry is absolutely, crucially dependent on government's attitude towards it. We are regulated heavily at every point once we start to close in on an actual drug. So if you're trying to spend your time and effort in the most cost-effective way, you will go to Congress, to the regulatory agencies, to anyone at any level in government who can make and modify the rules that you have to live under. And you will spend your time and money making sure that the rules you like stay in force, that ones that you like even better are on the table, and that ones you don't like get slowed or watered down.

It's true that doing this would be somewhat easier if everyone had a better opinion of us. That's especially true for avoiding the regulations and laws that you don't like; that would be helped if you could go to the people involved with a big groundswell of public support behind you. But trying to influence the public to the point where that would reliably affect legislation is a very large undertaking. The same amount of effort (and money) will have far more impact if applied directly to the legislators and regulators themselves, rather than trying to use public opinion as a lever on them. It's just not cost-effective. This is especially true if you've already worked yourself into a situation where your industry is unpopular; trying to reverse that becomes a bigger and bigger proposition, which makes the alternatives look even more effective. And this is, after all, the way that every other interest group (well, every effective one) works in a highly regulated environment. What else would one expect?

That's my answer, then, to the question of why PhRMA doesn't do more to improve the industry's image. It's not a priority. Thoughts?

Notes: LaMattina's post is also partly a response to Ben Goldacre's book "Bad Pharma". I have been meaning to take that one on, but it's also a large undertaking. Book-length arguments are often best addressed at book length, unfortunately. But I do plan to do a big roundup on the subject.

Comments (8) + TrackBacks (0) | Category: Regulatory Affairs | Why Everyone Loves Us

February 8, 2013

The Name of a Cure


Posted by Derek

Here's an excellent article at Slate on "natural" medicines versus pharmaceuticals. You won't see too many mainstream articles that suddenly break out into chemical structures, but this one does, and to excellent effect.

Comments (27) + TrackBacks (0) | Category: Snake Oil

Snow Versus Scientific Progress


Posted by Derek

In case anyone's wondering, I'm not even at work today - no one at my company is; they announced last night that they were closing down for the day, which was welcome news. For those of you not living in the Northeast US, it's been snowing merrily along since earlier this morning, and by the time it finishes tomorrow, we look to have about two feet of the stuff.

Research will be slow today in this part of the country. I'm sure to see that in the traffic figures for the site - there's always a spike at lunchtime EST, a slight dip, and then another spike at lunchtime on the west coast. I think that the Central and Pacific zones will win out this time!

The largest single snowfall I've ever experienced was in January 1996, back in New Jersey, where we had 39 inches (one solid meter) in a single storm. I remember opening one of those doors at the bottom of the apartment-complex building and staring in amazement at a drifted wall of snow that came up past the middle of my chest. This was one of those hunt-for-the-cars kind of storms. And I well remember the winter of 1977-78, which is still the standard in many parts of the country. I experienced that one in high school back in Arkansas, so I didn't get the apocalyptic snow mountains, but it was certainly impressive enough by the standards of the area (complete with a record amount of missed school!)

So for those of you not getting snowed on, well, you have to make up for the rest of us today. I think I'll get everyone to start on the Elements Jigsaw Puzzle, myself. (Note: corrected this from the earlier "crossword". If anyone has a periodic table crossword puzzle, though, I'd be glad to hear about it).

Comments (19) + TrackBacks (0) | Category: Blog Housekeeping

All Those Drug-Likeness Papers: A Bit Too Neat to be True?


Posted by Derek

There's a fascinating paper out on the concept of "drug-likeness" that I think every medicinal chemist should have a look at. It would be hard to count the number of publications on this topic over the last ten years or so, but what if we've been kidding ourselves about some of the main points?

The big concept in this area is, of course, the Lipinski criteria, or Rule of Five. Here's what the authors, Peter Kenny and Carlos Montanari of the University of São Paulo, have to say:

No discussion of drug-likeness would be complete without reference to the influential Rule of 5 (Ro5) which is essentially a statement of property distributions for compounds taken into Phase II clinical trials. The focus of Ro5 is oral absorption and the rule neither quantifies the risks of failure associated with non-compliance nor provides guidance as to how sub-optimal characteristics of compliant compounds might be improved. It also raises a number of questions. What is the physicochemical basis of Ro5's asymmetry with respect to hydrogen bond donors and acceptors? Why is calculated octanol/water partition coefficient (ClogP) used to specify Ro5's low polarity limit when the high polarity cut off is defined in terms of numbers of hydrogen bond donors and acceptors? It is possible that these characteristics reflect the relative inability of the octanol/water partitioning system to ‘see’ donors (Fig. 1) and the likelihood that acceptors (especially as defined for Ro5) are more common than donors in pharmaceutically-relevant compounds. The importance of Ro5 is that it raised awareness across the pharmaceutical industry about the relevance of physicochemical properties. The wide acceptance of Ro5 provided other researchers with an incentive to publish analyses of their own data and those who have followed the drug discovery literature over the last decade or so will have become aware of a publication genre that can be described as ‘retrospective data analysis of large proprietary data sets’ or, more succinctly, as ‘Ro5 envy’.

There, fellow med-chemists, doesn't this already sound like something you want to read? Thought so. Here, have some more:

Despite widespread belief that control of fundamental physicochemical properties is important in pharmaceutical design, the correlations between these and ADMET properties may not actually be as strong as is often assumed. The mere existence of a trend is of no interest in drug discovery and strengths of trends must be known if decisions are to be accurately described as data-driven. Although data analysts frequently tout the statistical significance of the trends that their analysis has revealed, weak trends can be statistically significant without being remotely interesting. We might be confident that the coin that lands heads up for 51 % of a billion throws is biased but this knowledge provides little comfort for the person charged with predicting the result of the next throw. Weak trends can be beaten and when powered by enough data, even the feeblest of trends acquires statistical significance.
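That coin-flip point is easy to put numbers on. Here's a minimal back-of-the-envelope sketch (my own arithmetic, using the standard normal approximation to the binomial - nothing here comes from the paper itself):

# A 51% coin over a billion throws: astronomically "significant", still nearly useless.
import math

n = 1_000_000_000          # a billion throws
heads = int(0.51 * n)      # 51% of them land heads

# Normal approximation to the binomial: z = (x - n/2) / sqrt(n/4)
z = (heads - n / 2) / math.sqrt(n / 4)
print(f"z-score against 'the coin is fair': about {z:.0f} standard deviations")

# ...and yet the best you can do predicting the next throw is still only 51%.
print(f"accuracy of always calling heads: {heads / n:.0%}")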

So, where are the authors going with all this entertaining invective? (Not that there's anything wrong with that; I'm the last person to complain). They're worried that the transformations that primary drug property data have undergone in the literature have tended to exaggerate the correlations between these properties and the endpoints that we care about. The end result is pernicious:

Correlation inflation becomes an issue when the results of data analysis are used to make real decisions. To restrict values of properties such as lipophilicity more stringently than is justified by trends in the data is to deny one’s own drug-hunting teams room to maneuver while yielding the initiative to hungrier, more agile competitors.

They illustrate this by reference to synthetic data sets, showing how one can get rather different impressions depending on how the numbers are handled along the way. Representing sets of empirical points by their average values, for example, can cause the final correlations to appear more robust than they really are. That, the authors say, is just what happened in this study from 2006 ("Can we rationally design promiscuous drugs?") and in this one from 2007 ("The influence of drug-like concepts on decision-making in medicinal chemistry"). The complaint is that showing a correlation between cLogP and median compound promiscuity does not imply that there is one between cLogP and compound promiscuity per se. And the authors note that the two papers manage to come to opposite conclusions about the effect of molecular weight, which does make one wonder. The "Escape from flatland" paper from 2009 and the "ADMET rules of thumb" paper from 2008 (mentioned here) also come in for criticism on this point - binning data from a large continuous set, averaging within the bins, and then treating those averages as real objects for statistical analysis. One's conclusions depend strongly on how many bins one uses. Here's a specific take on that last paper:

The end point of the G2008 analysis is ‘‘a set of simple interpretable ADMET rules of thumb’’ and it is instructive to examine these more closely. Two classifications (ClogP<4 and MW<400 Da; ClogP>4 or MW>400 Da) were created and these were combined with the four ionization state classifications to define eight classes of compound. Each combination of ADMET property and compound class was labeled according to whether the mean value of the ADMET property was lower than, higher than or not significantly different from the average for all compounds. Although the rules of thumb are indeed simple, it is not clear how useful they are in drug discovery. Firstly, the rules only say whether or not differences are significant and not how large they are. Secondly, the rules are irrelevant if the compounds of interest are all in the same class. Thirdly, the rules predict abrupt changes in ADMET properties going from one class to another. For example, the rules predict significantly different aqueous solubility for two neutral compounds with MW of 399 and 401 Da, provided that their ClogP values do not exceed 4. It is instructive to consider how the rules might have differed had values of logP and MW of 5 and 500 Da (or 3 and 300 Da) had been used to define them instead of 4 and 400 Da.
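To make the binning-and-averaging problem concrete, here's a small synthetic simulation - my own toy numbers, not data from any of the papers under discussion. A weak underlying trend looks far stronger once the points are binned and only the bin means are correlated:

# Correlation inflation by binning: a weak trend in the raw data, a near-perfect one in the bin means.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(0, 5, n)                # stand-in for a property such as ClogP
y = 0.3 * x + rng.normal(0, 2.0, n)     # a weak trend buried in noise

r_raw = np.corrcoef(x, y)[0, 1]

# Bin x, average y within each bin, then correlate the bin means instead of the points.
bins = np.linspace(0, 5, 11)
idx = np.digitize(x, bins)
centers = np.array([x[idx == i].mean() for i in range(1, len(bins)) if np.any(idx == i)])
means = np.array([y[idx == i].mean() for i in range(1, len(bins)) if np.any(idx == i)])
r_binned = np.corrcoef(centers, means)[0, 1]

print(f"correlation on the raw points: r = {r_raw:.2f}")     # modest, roughly 0.2
print(f"correlation on the bin means:  r = {r_binned:.2f}")  # close to 1 - the same weak trend, inflated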

These problems also occur in graphical representations of all these data, as you'd imagine, and the authors show several of these that they object to. A particular example is this paper from 2010 ("Getting physical in drug discovery"). Three data sets, whose correlations in their primary data do not vary significantly, generate very different looking bar charts. And that leads to this comment:

Both the MR2009 and HY2010 studies note the simplicity of the relationships that the analysis has revealed. Given that drug discovery would appear to be anything but simple, the simplicity of a drug-likeness model could actually be taken as evidence for its irrelevance to drug discovery. The number of aromatic rings in a molecule can be reduced by eliminating rings or by eliminating aromaticity and the two cases appear to be treated as equivalent in both the MR2009 and HY2010 studies. Using the mnemonic suggested in MR2009 one might expect to make a compound more developable by replacing a benzene ring with cyclohexadiene or benzoquinone.

The authors wind up by emphasizing that they're not saying that things like lipophilicity, aromaticity, molecular weight and so on are unimportant - far from it. What they're saying, though, is that we need to be aware of how strong these correlations really are so that we don't fool ourselves into thinking that we're addressing our problems, when we really aren't. We might want to stop looking for huge, universally applicable sets of rules and take what we can get in smaller, local data sets within a given series of compounds. The paper ends with a set of recommendations for authors and editors - among them, always making primary data sets part of the supplementary material, not relying on purely graphical representations to make statistical points, and a number of more stringent criteria for evaluating data that have been partitioned into bins. They say that they hope that their paper "stimulates debate", and I think it should do just that. It's certainly given me a lot of things to think about!

Comments (13) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico | The Scientific Literature

February 7, 2013

Addex Cuts Back: An Old Story, Told Again


Posted by Derek

Addex Therapeutics has been trying to develop allosteric modulators as drugs. That's a worthy goal (albeit a tough one) - "allosteric" is a term that covers an awful lot of ground. The basic definition is a site that affects the activity of its protein, but is separate from the active or ligand-binding site itself. All sorts of regulatory sites, cofactors, protein-protein interaction motifs, and who knows what else can fit into that definition. It's safe to say that allosteric mechanisms account for a significant number of head-scratching assay results, but unraveling them can be quite a challenge.

It's proving to be one for Addex. They've announced that they're going to focus on a few clinical programs, targeting orphan diseases in the major markets, and to do that, well. . .:

In executing this strategy and to maximize potential clinical success in at least two programs over the next 12 months, the company will reduce its overall cost structure, particularly around its early-stage discovery efforts, while maintaining its core competency and expertise in allosteric modulation. The result will be a development-focused company with a year cash runway. In addition, the company will seek to increase its cash position through non-dilutive partnerships by monetizing its platform capability as well as current discovery programs via licensing and strategic transactions.

That is the sound of the hatches being battened down. And that noise can be heard pretty often in the small-company part of the drug business. Too often, it comes down to "We can advance this compound in the clinic, enough to try to get more money from someone, or we can continue to do discovery research. But not both. Not now." Some companies have gone through this cycle several times, laying off scientists and then eventually hiring people back (sometimes some of the same people) when the money starts flowing again. But in the majority of these cases, I'd say that this turns out to be the beginning of the end. The failure rates in the clinic see to that - if you have to have your compounds work there, the very next ones you have, the only things you have on hand in order to survive, then the odds are not with you.

But that's what every small biopharma company faces: something has to work, or the money will run out. A lot of the managing of such an outfit consists of working out strategies to keep things going long enough. You can start from a better position than usual, if that's an option. You can pursue deals with larger companies early on, if you actually have something that someone might want (but you won't get as good a deal as you would have later, if what you're partnering actually works out). You can beat all sorts of bushes to raise cash, and try all sorts of techniques to keep it from being spent so quickly, or on the wrong things (as much as you can tell what those are).

But eventually, something has to work, or the music stops. Ditching everything except the clinical candidates is one of the last resorts, so I wish Addex good luck, which they (and all of us) will need.

Comments (14) + TrackBacks (0) | Category: Business and Markets | Drug Development

DUCTS: Down with Useless Clinical Trial acronymS


Posted by Derek

I'm not the first person to complain about these things, of course. Even by 2003, there were sixteen different clinical trials in the literature with the acronym HEART. It appears that the cardiovascular field picked up the acronym bug early, probably due to the size and length of their clinical programs. It may also have been the first field to think up the jazzy clinical trial name first, and find something half-sensible to match it afterwards. But who can doubt that this is what goes on most of the time now? For those who still want to run the algorithm the other way, there's the Acronym Generator, which, wouldn't you know it, is run out of a cardiac hospital unit in Liverpool.

I wonder if the FDA would ever consider requiring drug companies and other research organizations to tone all this down, in the interest of sanity. If you're studying a drug called, say, kevorkirol (a generic name I invented a few years back, and hereby give freely to the scientific community), couldn't the clinical studies just be named "Kevorkirol Efficacy Trial #1", and "Kevorkirol Expanded Efficacy Trial #2" and so on? That would actually help people to keep them straight, instead of having to make a chart of bizarre trial names and their actual purpose. Anyone up for this idea?

Comments (25) + TrackBacks (0) | Category: Clinical Trials | Regulatory Affairs

How To Enhance Your Online Reputation. Sure.


Posted by Derek

We will file this one under N, for Nerve, Lots Of. Readers will probably remember the cancer research scandal at Duke a couple of years ago, where Anil Potti turned out to have faked a wide range of results in the clinic. This led to his leaving Duke rather abruptly, with a trail of retracted papers, all sorts of unpleasant complications with the funding agencies and so on. Retraction Watch covered this business extensively, as well they might have, since it's just the sort of thing that site helps to spotlight.

The campus newspaper (the Duke Chronicle) noted at the time that Potti had hired some sort of online reputation management firm. (I should mention in passing that I owe a debt to that newspaper, whose crossword puzzle got me through an electron spin resonance course while I was a grad student in the 1980s. Without it I would have been forced to listen to the lecture material, and who knows what would have become of me then?) It looks like these reputation-polishers are still in business. That's why that link to Retraction Watch goes to their front page instead of one of their posts on the scandal itself.

Those posts have been taken down, you see. Oh, yes. Copyright problems, don't you know - why, one of the most famous news sites in the world, one "Newsbulet.in" turns out to have published all that stuff on its own, and has filed a DMCA takedown notice with Wordpress to have the posts removed.

It must be bovine waste products week around here at In the Pipeline, because that's another big steaming load of the stuff. Here, take a look at the request itself:

Myself Narendra Chatwal Senior editor in NewsBulet.In, a famous news firm in India. All the news we publish are individually researched by our reporters from all over India and then we publish them on our site and our news channel. Recently we found that some one had copied our material from the category Medical Reviews and published them on their site. So we request you to help us in protecting our content and copy right.

Ah, but if you take a look at that domain, you find that it didn't even exist until October 2012, well after all but one of the posts that they're complaining about. And as the commenters to the Ars Technica post on this noticed, the address given in the WhoIs records corresponds to a nightclub in London. Peachy. So not only is this a spurious copyright complaint, it's a stupid, incompetent spurious copyright complaint. Whoever is providing this sort of service to Anil Potti is ripping him off - not that that bothers me much after reading the facts in the Duke case.

And the thing is, this sort of effort is futile. It's the very definition of futile, because getting the internet off of you is impossible. That Duke Chronicle story says (at the time of its writing) that the first page of Google results about Potti contained no mention of the scandal, just social media sites and glowing statements. That sure didn't last long, though - now the front page contains lots of details about the Duke imbroglio, and (as of this morning) several discussions of this current ridiculous DMCA effort.

After reviewing the facts of the earlier case, and these new attempts at reputation-burnishing being done on his behalf, I'm sticking with my earlier statements about Dr. Potti: I would not hire him to mow my lawn. Has Newsbulet.in published that before?

Comments (16) + TrackBacks (0) | Category: The Dark Side

February 6, 2013

Anyone Still Swimming in the Chiral Pool?

Email This Entry

Posted by Derek

My grad school work was chiral-pool synthesis: trying to make a complex natural product from carbohydrate starting materials. There was quite a bit of that around in those days, but you have to wonder about its place in the world by now. It's true that everyone likes to be able to buy their chiral centers, especially if they're from the naturally-occurring series (nobody's keen to use L-glucose as their starting material if they can avoid it!). We certainly love to do that in the drug industry, and we can often get away with such syntheses, since our compounds generally don't have too many chiral centers.

But how developed are the multicenter methods? I certainly did not enjoy manipulating the multiple chiral centers on a sugar molecule, although you can (with care and attention) do some interesting gymnastics on that framework. But I think that asymmetric synthesis, especially catalytic variations, is more widely used today to build things up, rather than starting with a lot of chirality and working it around to what you want. The synthetic difficulties of that latter method often seem to get out of hand, and the methods aren't as general as the build-up-your-chirality ones.

Is my impression correct? And if so, is this the way things should be? My tendency is to say "yes" to both questions, but I'd like to see what the general opinions are.

Comments (17) + TrackBacks (0) | Category: Chemical News | Natural Products

Trouble Hiring Whom, Exactly?

Email This Entry

Posted by Derek

Here's a report on employment in the biopharma industry that will cause some pretty strong emotions in those of us who (still) work there. PricewaterhouseCoopers (PwC), in its annual CEO survey, finds (here's the good news) that:

Nearly three-quarters (72 percent) of executives said their organizations are looking to increase R&D capacity over the next 12 months, and six in 10 intend to increase investments over the next three years to create a more skilled workforce.

So far, so good. But would you like to know what the executives said was one of the biggest problems in doing all this? Honestly, you'll never guess:

The knowledge-intensive pharmaceutical industry had the highest reported difficulty in hiring top talent of the 19 industries featured in PwC's 2012 Global CEO Survey. CEOs identified talent gaps as one of the biggest threats to future growth prospects.

Research conducted by HRI, including a survey of human resource and R&D executives at U.S. biopharmaceutical companies found (that) fifty-one percent of industry executives report that hiring has become increasingly difficult and only 28 percent feel very confident they will have access to top talent.

Well, now. One's first impulse is to refer, with deep feeling, to bovine waste products, but one mustn't jump to conclusions about whether the industry might just possibly have heaved too many people over the side over the last ten years or so. As Pharmalot points out, the people that are allegedly being sought are not always the ones that have already been ditched:

Of course, the workplace is not stagnant and the demand for certain skills is always evolving. Seen this way, the data suggest that pharma execs may want the sort of talent that is not on the sidelines or simply clamoring for a different opportunity. For instance, 34 percent say that developing and managing outside partnerships is the most important skill being sought among scientists. . .

Now, that one I can believe. An uncharitable summary of many of those outside-partnership managerial positions would be "Keep track of what all the cheap overseas contract workers are doing". And there is indeed a demand for that relatively thankless task. Another skill that appears to be strongly in demand is the ability to deal with regulatory affairs. Fine. But what about actual research, done somewhere other than China or points beyond? There are possibilities, but things still don't look so good if you're a chemist. Pharmalot again:

As for job growth among scientists, not surprisingly there is only a 4 percent increase forecast for chemists, who were thrown overboard in large masses in recent years, and 13 percent for microbiologists. Conversely, a 62 percent boost is predicted for biomedical engineers and 36 percent for medical scientists. Biochemists and biophysicists trail at 31 percent.

PwC seems to be taking a broad view of biopharma if "biomedical engineers" are the top category. That's a flexible-sounding category, but I'd guess medical devices, at the very least. "Medical scientists" is also the label on a rather large bin, and this gives only a fuzzy picture of where the hiring will supposedly be taking place.

Looking through the PwC material, you can tell that it's addressed largely to HR folks, trying to gear them up for all this talent-searching and position-filling. It spends, for example, some time sharing sympathy for the HR departments who don't, somehow, feel as if they're key parts of the organization on the front lines of discovery. (Which they aren't, usually, but that's another story). But there's some useful advice for them in there, too - see what you make of this:

Scientists want career paths that recognize and reward their passion and commitment to research, not just additional responsibilities. Too often, scientists are pushed out of what they do best – research -- and saddled with management chores that distract them.

Finally, senior executives must act as a powerful motivating force for their people. Companies with decades-long legacies have lost their edge due to repeated layoffs, wearing down the morale of scientific staff.

Ain't that the truth. But how many senior executives are in a position to act as a "powerful motivating force"? Well, OK, some of them have been, but with a negative sign in front, which isn't the idea. In many organizations, the sorts of behavior that the scientists would find motivating on the part of a top-level manager are often not the sorts of behavior that lead people into top managerial positions. So you get people who are, at the very least, rusty on those skills (if they ever had them in the first place). And that leads to things like (in my own experience, some years ago) hearing a high-level guy exhort various research teams while mispronouncing the names of some of their projects. Which neither bred confidence, nor raised morale.

Overall, I find this PwC report irritating, perhaps because of its HR-centric worldview. And the message of "Shortage of top talent!" is rather hard to take, no matter how you spin it. It also brings thoughts of the perennial "America's critical lack of scientists" headlines, which have only slightly abated. I'm waiting for someone to tie those two together into one annoying headline. . .

Note: I'll get back to that out-of-the-science-and-into-the-management topic again; it's come up here before, but it's an important one.

Comments (44) + TrackBacks (0) | Category: Business and Markets

February 5, 2013

Not Working Out So Well at Merck?

Email This Entry

Posted by Derek

Here's a rather grim analysis from the AP of Merck's current status. The company's stock was recently downgraded by two analysts after last Friday's earnings call didn't go very well (links added by me below):

Future sales of Vytorin, a controversial combination drug on sale since 2004 that includes Zocor, and prospects for a crucial experimental osteoporosis drug called odanacatib were thrown into question Friday as Merck announced its fourth-quarter results. Company executives made some cryptic comments, suggesting significant problems with both drugs. . .

Merck said Friday that it won't apply for approval of odanacatib, a new type of osteoporosis drug, until 2014 instead of by this June. Management said it was reviewing safety and efficacy data from one study and now won't apply for approval until they have longer-term data from an extension study.

Executives also said a committee monitoring its 18,000-patient study of Vytorin, called IMPROVE-IT, had requested a new interim analysis of patient data in March. The study is meant to determine whether Vytorin reduces risk of heart attack, stroke and death in heart disease patients — the ultimate purpose of cholesterol drugs — but Merck executives, grilled by analysts on a conference call, wouldn't say that they're confident the study will show that benefit.

I wouldn't, either, if I were in their shoes. The Vytorin story has been long and complex, and that complexity comes from two sources: the drug's unique mechanism of action (at least the ezetimibe part), and the uncertainties of human lipid handling and its relationship to cardiovascular outcomes. Honestly, these things could go any way at all, and the same goes for Merck's high-profile push in CETP. A lot of the company is riding on some very uncertain science.

But I wonder, as I was speculating on in that last link, if that isn't where the whole industry is these days. By now, we've attacked all the things that we believe we really know something solid about. What's left is often big, important, potentially very profitable. . .and risky enough to make you leave fingernail marks in the armrests of your chair. The higher up you sit, and the nicer the material that chair is made of, the more damage is being done to it.

Comments (14) + TrackBacks (0) | Category: Business and Markets | Clinical Trials

Vibrational Scent Wins A Round?

Email This Entry

Posted by Derek

I wrote here and here about Luca Turin's theory that our perception of smell is partly formed by sensing vibrational modes. (Turin is the author of an entertaining book on the subject of olfaction, The Secret of Scent, and also co-author of Perfumes: The A-Z Guide). His theory is still controversial, to say the least, but Turin and co-workers have a new paper out trying to shore it up.

A previous report from Vosshall and Keller at Rockefeller University had shown that human subjects were unable to distinguish acetophenone from its deuterated analog, which is not what you'd expect if we were sensing bond vibrations. Interestingly, the new paper confirms this result. (References to all these studies are in the original paper, which is open-access, being in PLoS ONE):

In principle, odorant isotopomers provide a possible test of shape vs. vibration mechanisms: replacing, for example, hydrogen with deuterium in an odorant leaves the ground-state conformation of the molecule unaltered while doubling atomic mass and so altering the frequency of all its vibrational modes to a greater or lesser extent. To first order, deuteration should therefore have little or no effect on the smell character of an odorant recognized by shape, whereas deuterated isotopomers should smell different if a vibrational mechanism is involved.

The experimental evidence on this question to date is contradictory. Drosophila appears able to recognize the presence of deuterium in odorant isotopomers by a vibrational mechanism. Partial deuteration of insect pheromones reduces electroantennogram response amplitudes. Fish have been reported to be able to distinguish isotopomers of glycine by smell. However, human trials using commercially available deuterated odorants [benzaldehyde and acetophenone] have yielded conflicting results, both positive and negative. Here, using GC-pure samples and a different experimental technique, we fully confirm Keller and Vosshall’s finding that humans, both naive and trained subjects, are unable to discriminate between acetophenone isotopomers.
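To put a rough number on the frequency shift the paper is talking about, here's a quick back-of-the-envelope calculation (mine, not the paper's) using the simple harmonic-oscillator picture: a stretching frequency scales as the inverse square root of the reduced mass of the two atoms involved, so swapping H for D takes a typical C-H stretch from around 2900 cm-1 down to roughly 2100 cm-1 while leaving the shape of the molecule alone.

from math import sqrt

# Harmonic-oscillator approximation: frequency is proportional to sqrt(k / mu),
# where mu is the reduced mass of the two vibrating atoms. Deuteration leaves
# the force constant k essentially unchanged, so the C-D / C-H frequency
# ratio is just sqrt(mu_CH / mu_CD).

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

m_C, m_H, m_D = 12.000, 1.008, 2.014   # atomic masses, amu

ratio = sqrt(reduced_mass(m_C, m_H) / reduced_mass(m_C, m_D))

nu_CH = 2900.0   # a typical C-H stretch, cm^-1 (illustrative value)
print("nu(C-D)/nu(C-H) = %.3f" % ratio)
print("A %.0f cm^-1 C-H stretch moves to about %.0f cm^-1 on deuteration" % (nu_CH, nu_CH * ratio))

That shift of a quarter or so is exactly the sort of change a shape-based receptor shouldn't notice, but a vibration-sensing one should.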

But the paper goes on to show that humans apparently are able to discriminate deuterated musk compounds from their H-analogs. Cyclopentadecanone, for example, was deuterated to >95% next to the carbonyl, and to 90% at the other methylenes. It and three other commercial musks were purified and checked versus their native forms:

After silica gel purification, aliquots of the deuterated musks were diluted in ethanol and their odor character assessed on smelling strips. The parent compounds have classic powerful musk odor characters, with secondary perfumer descriptors as follows: animalic [Exaltone], sweet [Exaltolide], oily [Astrotone] and sweet [Tonalid]. In all the deuterated musks, the musk character, though still present was much reduced, and a new character appeared, variously described by the trained evaluators [NR, DG, LT and Christina Koutsoudaki, Vioryl SA] as “burnt,” “roasted,” “toasted,” or “nutty.” Naive subjects most commonly described the additional common character as “burnt.”

They found, by stopping the deuterium exchange early, that this smell showed up even at around 50% D-exchange or less. For more rigorous tests, they went to a "smelling GC", and double-blinded the tests. This gave clean compound peaks, and they were able to diminish the need to keep a memory of the previous smell in mind by capturing the eluted peak vapors in Eppendorf tubes for side-by-side comparison.

This protocol showed that people are indeed unable to discriminate deuterated acetophenone from undeuterated - the Keller and Vosshall paper stands up, which will come as a relief to the author of the unusually celebratory editorial in Nature Neuroscience that accompanied it. To be sure, it also makes moot Turin's own objections to their work at the time, which questioned its experimental design and rigor.

But the deuterated musk experiments done this way are quite interesting. I'm going to just quote the entire section here:

All trials were performed with GC-pure catalytically deuterated [D fraction >90%] cyclopentadecanone [See Methods]. Each trial consisted of the assessment of 4 pairs of odorants, one deuterated and one sham-deuterated. The subjects were presented with a deuterated sample and their attention was drawn to the “burnt, nutty, roasted” character of the deuterated compound. Several other sample pairs were presented until the subjects were sure they could tell the difference between the two sample types.

The Eppendorf tubes were heated in a solid heating block to 50C. The samples were arranged in rows according to their type. The experimenter randomized the order of the tubes within the rows by means of two flips of a coin (first flip: first or second two positions, second flip: first or second spot within those). The rows were then mixed randomly by a further coin flip per d/H pair (heads: swap positions, tails leave in situ).

Subjects were first given a training pair and told which was deuterated and which sham-deuterated. The experimenter then left to watch the experiment through a window. Subjects were then presented with the unlabeled, position-randomized pairs of deuterated and sham-deuterated GC-pure samples and asked to say which was which.

The subject, wearing nitrile gloves to avoid contamination, smelled first one and then the other sample. Multiple sniffs at each sample were allowed. The subject was asked to identify the deuterated sample and to place it to one side. After four trials the tubes were placed under the UV light source and identified. The subject was not informed of the outcome. To avoid habituation, the subject then rested for 15 minutes before attempting the next trial.

The results are shown in table 2. Eleven subjects were used. Two subjects tired before reaching the desired number of 12 trials. Two were able to go beyond to 13 and 17 trials respectively. The binomial p values range between 0.109 [6/8 correct] to 7.62×10−6 [17/17 trials]. These are independent trials, and an aggregate probability for all trials [119/132 correct] can be calculated: it is equal to 5.9×10−23.
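For anyone who wants to check the statistics, those figures are one-sided binomial tails against coin-flip guessing. Here's a minimal sketch of that arithmetic in Python (mine, not the paper's; the weakest case comes out slightly different from the 0.109 quoted above, so the authors may have used a somewhat different convention for their test):

from math import comb

def binomial_tail(correct, trials, p=0.5):
    # One-sided probability of getting at least `correct` right by chance alone
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

print(binomial_tail(6, 8))      # the weakest individual subject
print(binomial_tail(17, 17))    # the strongest: 0.5**17, about 7.6e-6
print(binomial_tail(119, 132))  # the aggregate over all trials

However you run the numbers, that aggregate result is far out in this-is-not-chance territory.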

As it happens, musks are at nearly the top of the molecular weight range for odorant compounds. The paper mentions a rule of thumb among fragrance chemists that compounds with more than 18 carbons rarely have any perceptible odor, even when heated (and different people's noses can top out even before that). Musks tend to smell quite similar even with rather different structures, which suggests that a small number of receptors are involved in their perception. Here's Turin's unified theory of musk:

We suggest therefore that a musk odor is achieved when three conditions are simultaneously fulfilled: First, the molecule is so large that only one or a very few receptors are activated. Second, one or more of these receptors detects vibrations in the 1380–1550 cm-1 range. Third, the molecule has intense bands in that region, caused either by a few nitro groups or, equivalently, many CH2 groups. A properly quantitative account of musk odor will require better understanding of the shape selectivity of the receptors at the upper end of the molecular weight scale, and of the selection rules of a biological IETS spectrometer to calculate the intensity of vibrational modes.

It's safe to say that this controversy is very much alive, no matter what the explanation might be. Leslie Vosshall of Rockefeller has already commented on this latest paper, wondering if compounds might be enzymatically altered in the nose (which would also be expected to show a large difference with deuterated compounds). I'll await the next round with interest!

Comments (27) + TrackBacks (0) | Category: Chemical News

February 4, 2013

Single-Cell NMR? How About Single-Protein NMR?

Email This Entry

Posted by Derek

Two research teams have reported a completely different way to run NMR experiments, one that looks like it could take the resolution down to the cellular (or even large-protein) level. These two papers in Science have the details (and there's an overall commentary here, and more at Nature News).

This is not, as you've probably guessed, just a matter of shrinking down the probe and its detector coil. Our usual method of running NMR spectra doesn't scale down that far; there are severe signal/noise problems, among other things. This new method uses crystal defects just under the surface of diamond crystals - if a nitrogen atom replaces a carbon next to a vacant lattice site, you're left with a negatively charged nitrogen-vacancy (NV) center with a very useful spin state. It's capable of extraordinarily sensitive detection of magnetic fields; in effect, you have a single-atom magnetometer.

And that's been used to detect NMR signals in volumes of a few cubic nanometers. For comparison, erythrocytes (among the smallest of human cells) have a volume of around 100 cubic micrometers, while a 50 kDa protein has a minimal radius of about 2.4 nm, giving it a volume of 58 cubic nanometers at the absolute minimum (there's a quick sketch of that arithmetic after the quote below). This is all being done at room temperature, I might add. If this technique can be made more robust, we are potentially looking at MRI-style imaging of individual proteins, and surely at a detailed intracellular level, which is a bizarre thought. And there's room for improvement:

By implementing different advanced noise suppression techniques, Mamin et al. and Staudacher et al. have succeeded in using near-surface NVs to detect small volumes of proton spins outside of the diamond crystal. Both authors conclude that their observed signals are consistent with a detection volume on the order of (5 cubic nanometers) or less. This sensitivity is comparable to that of the cryogenic MRFM technique and should be adequate for detecting large individual protein molecules. Both groups also project much smaller detection volumes in the future by using NVs closer to the diamond surface. Staudacher et al. expect to improve sensitivity by using the NV to spin-polarize the nuclei. Mamin et al. project that sensitivity may eventually approach the level of single protons, provided that the NV coherence time can be kept long enough.
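To get a feel for how a protein stacks up against that detection volume, here's the back-of-the-envelope arithmetic behind the 58 cubic nanometer figure above - my own sketch, assuming a typical protein packing density of about 1.4 g/cm3 and a perfectly spherical molecule:

from math import pi

N_A = 6.022e23        # Avogadro's number, molecules per mole
DENSITY = 1.41        # assumed protein packing density, g/cm^3

def minimal_protein_size(mw_daltons):
    # Volume (nm^3) and radius (nm) of a protein modeled as a solid sphere
    volume_cm3 = (mw_daltons / N_A) / DENSITY   # mass per molecule over density
    volume_nm3 = volume_cm3 * 1e21              # 1 cm^3 = 1e21 nm^3
    radius_nm = (3 * volume_nm3 / (4 * pi)) ** (1.0 / 3.0)
    return volume_nm3, radius_nm

v, r = minimal_protein_size(50000)
print("50 kDa protein: about %.0f nm^3, minimal radius about %.1f nm" % (v, r))

Which is why a detection volume of a few cubic nanometers is already enough to be talking about single large protein molecules.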

I love this sort of thing, and I don't mind admitting it. Imagine detecting a ligand binding event by NMR on an individual protein molecule, or following the distribution of a fluorinated drug candidate inside a single cell. I can't wait to see it in action.

Comments (10) + TrackBacks (0) | Category: Analytical Chemistry

A Word We Didn't Know We Needed

Email This Entry

Posted by Derek

I've written here about the strange-sounding conferences that keep sending out invitations to all and sundry. They tend to be in provincial Chinese cities, have grandiose names like the "First International Summit Meeting of Advanced Medical Science", and feature so many sections, sessions, tracks, and breakouts that you wonder if anyone attends who isn't giving a talk. And you get invitations to submit invited talks in fields you've hardly even touched on in your career; that's another reliable sign.

Well, I've had one this morning whose title really rises to the top of the list. I present to you the "1st Annual Symposium of Drug Designology". No, I did not make that up - if I had, I wouldn't tell anyone, believe me. And I'm not about to provide a link to the conference site. If you want more, I'm willing to bet that a search for "drug designology" will yield only highly relevant hits, since I'm not aware of that phrase ever appearing in English until this morning. Here's hoping it submerges again.

Comments (31) + TrackBacks (0) | Category: The Scientific Literature

February 1, 2013

So How Does One Grow Beta-Cells?

Email This Entry

Posted by Derek

The short answer is "by looking for compounds that grow beta cells". That's the subject of this paper, a collaboration between Peter Schultz's group and the Novartis GNF (the Genomics Institute of the Novartis Research Foundation). Schultz's group has already published on cell-based phenotypic screens in this area, where they're looking for compounds that could be useful in restoring islet function in patients with Type I diabetes.

These studies have used a rat beta-cell line (R7T1) that can be cultured, and they do good ol' phenotypic screening to look for compounds that induce proliferation (while not inducing it across the board in other cell types, of course). I'm a big fan of such approaches, but this is a good time to mention their limitations. You'll notice a couple of key words in that first sentence, namely "rat" and "cultured". Rat cells are not human cells, and cell lines that can be grown in vitro are not like primary cells from a living organism, either. If you base your entire approach this way, you run the risk of finding compounds that will, well, only work on rat cells in a dish. The key is to shift to the real thing as quickly as possible, to validate the whole idea.

That's what this paper does. The team has also developed an assay with primary human beta cells (which must be rather difficult to obtain), which are dispersed and plated. The tricky part seems to be keeping the plates from filling up with fibroblast cells, which are rather like the weeds of the cell culture world. In this case, their new lead compound (a rather leggy beast called WS6) induced proliferation of both rat and human cells.

They took it on to an even more real-world system: mice that had been engineered to have a switchable defect in their own beta cells. Turning these animals diabetic, followed by treatment with the identified molecule (5 mpk, every other day), showed that it significantly lowered glucose levels compared to controls. And biopsies showed significantly increased beta-cell mass in the treated animals - altogether, about as stringent a test as you can come up with in Type I studies.

So how does WS6 accomplish this? The paper goes further into affinity experiments with a biotinylated version of the molecule, which pulled down both the kinase IKK-epsilon and another target, Erb3 binding protein-1 (EBP1). An IKK inhibitor had no effect in the cell assay, interestingly, while siRNA experiments for EBP1 showed that knocking it down could induce proliferation. Doing both at the same time, though, had the most robust effect of all. The connection looks pretty solid.

Now, is WS6 a drug? Not at all - here's the conclusion of the paper:

In summary, we have identified a novel small molecule capable of inducing proliferation of pancreatic β cells. WS6 is among a few agents reported to cause proliferation of β cells in vitro or in vivo. While the extensive medicinal chemistry that would be required to improve the selectivity, efficacy, and tolerability of WS6 is beyond the scope of this work, further optimization of WS6 may lead to an agent capable of promoting β cell regeneration that could ultimately be a key component of combinatorial therapy for this complex disease.

Exactly so. This is excellent, high-quality academic research, and just the sort of thing I love to see. It tells us useful, actionable things that we didn't know about an important disease area, and it opens the door for a real drug discovery effort. You can't ask for more than that.

Comments (18) + TrackBacks (0) | Category: Chemical Biology | Diabetes and Obesity | Drug Assays

A Traffic Record

Email This Entry

Posted by Derek

I wanted to thank everyone who comes here for making January the biggest traffic month ever on the site: just over 480,000 page views. That seems like a lot for a site that features no cat videos (not that there's anything wrong with cat videos), no partially clad women (not that there's anything wrong with them, either), and no continually raging flamewars in the comments section. Actually, the comments section here has one of the highest signal-to-noise ratios in the whole blogging world. Other blog owners have asked me many times how I do that last part, and I just have to tell them honestly that I don't - the people who read the site are responsible. So thanks again to everyone who visits!

Thanks are also due to those who have hit the various Amazon links that I put up from time to time. The affiliate payments those bring in get spent on, among other things, swelling the book collections around here to even more alarming levels.

Comments (33) + TrackBacks (0) | Category: Blog Housekeeping