About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly at derekb.lowe@gmail.com. Twitter: Dereklowe

In the Pipeline

Monthly Archives

May 31, 2013

A Total Synthesis Archive

Posted by Derek

For those who are into total synthesis of natural products, Arash Soheili has a Twitter account (Total_Synthesis) that keeps track of all the reports in the major journals. He's emailed me with a link to a searchable database of all these, which brings a lot of not-so-easily-collated information together into one place. Have a look! (Mostly, when I see these, I'm very glad that I'm not still doing them, but that's just me).

Comments (2) + TrackBacks (0) | Category: Chemical News | Natural Products

Check Out These Molecules

Posted by Derek

It's molecular imaging week! See Arr Oh and others have sent along this paper from Science, a really wonderful example of atomic-level work. (For those without journal access, Wired and PhysOrg have good summaries).
[Image: Arylenes.jpg]

As that image shows, what this team has done is take a starting (poly) phenylacetylene compound and let it cyclize to a variety of products. And they can distinguish the resulting frameworks by direct imaging with an atomic force microscope (using a carbon monoxide molecule as the tip, as in this work), in what is surely the most dramatic example yet of this technique's application to small-molecule structure determination. (The first use I know of, from 2010, is here). The two main products are shown, but they pick up several others, including exotica like stable diradicals (compound 10 in the paper).

There are some important things to keep in mind here. For one, the only way to get a decent structure by this technique is if your molecules can lie flat. These are all sitting on the face of a silver crystal, but if a structure starts poking up, the contrast in the AFM data can be very hard to interpret. The authors of this study had this happen with their compound 9, which curls up from the surface and whose structure is unclear. Another thing to note is that the product distribution is surely altered by the AFM conditions: a molecule in solution will probably find different things to do with itself than one stuck face-on to a metal surface.

But these considerations aside, I find this to be a remarkable piece of work. I hope that some enterprising nanotechnologists will eventually make some sort of array version of the AFM, with multiple tips splayed out from each other, with each CO molecule feeding to a different channel. Such an AFM "hand" might be able to deconvolute more three-dimensional structures (and perhaps sense chirality directly?). Easy for me to propose - I don't have to get it to work!

Comments (21) + TrackBacks (0) | Category: Analytical Chemistry | Chemical News

May 30, 2013

Making the Non-Flat, Non-Aromatic Compounds

Posted by Derek

Here's a question for the organic chemists in the crowd, and not just those in the drug industry, either. Over the last few years, there's been a lot of discussion about how drug company compound libraries have too many compounds with too many aromatic rings in them. Here are some examples of just the sort of thing I have in mind. As mentioned here recently, when you look at real day-to-day reactions from the drug labs, you sure do see an awful lot of metal-catalyzed couplings of aryl rings (and the rest of the time seems to be occupied with making amides to link more of them together).

Now, it's worth remembering that some of the studies on this sort of thing have been criticized for stacking the deck. But at the same time, it's undeniable that the proportion of "flat stuff" has been increasing over the years, to the point that several companies seem to be openly worried about the state of their screening collections.

So here's the question: if you're trying to break out of this, and go to more three-dimensional structures with more saturated rings, what are the best ways to do that? The Diels-Alder reaction has come up here as an example of the kind of transformation that doesn't get run so often in drug research, and it has to be noted that it provides you with instant 3-D character in the products. What we could really use are reactions that somehow annulate pyrrolidines or tetrahydropyrans onto other systems in one swoop, or reliably graft on spiro systems where there was a carbonyl, say.

I know that there are some reactions like these out there, but it would be worthwhile, I think, to hear what people think of when they think of making saturated heterocyclic ring systems. Forget the indoles, the quinolines, the pyrazines and the biphenyls: how do you break into the tetrahydropyrans, the homopiperazines, and the saturated 5,5 systems? Embrace the stereochemistry! (This impinges on the topic of natural-product-like scaffolds, too).

My own nomination, for what it's worth, is to use D-glucal as a starting material. If you hydrogenate that double bond, you now have a chiral tetrahydropyran triol, with differential reactivity, ready to be functionalized. Alternatively, you can go after that double bond to make new fused rings, without falling back into making sugars. My carbohydrate-based synthesis PhD work is showing here, but I'm not talking about embarking on a 27-step route to a natural product (one of those per lifetime is enough, thanks). But I think the potential for library synthesis in this area is underappreciated.

Comments (32) + TrackBacks (0) | Category: Chemical News | Life in the Drug Labs

Update on Bexarotene for Alzheimer's

Posted by Derek

Here's a follow-up on the news that bexarotene might be useful for Alzheimer's. Unfortunately, what seems to be happening is what happens almost every time that the word "Alzheimer's" is mentioned along with a small molecule. As Nature reports here, further studies are delivering puzzling results.

The original work, from the Landreth lab at Case Western, reported lower concentrations of soluble amyloid, memory improvements in impaired rodents, and (quite strikingly), clearance of large amounts of existing amyloid plaque in their brain tissue. Now four separate studies (1, 2, 3, 4) are out in the May 24th issue of Science, and the waters are well muddied. No one has seen the plaque clearance, for one thing. Two groups have noted a lowering of soluble amyloid, though, and one study does report some effects on memory in a mouse model.

So where are we? Here's Landreth himself on the results:

“It was our expectation other people would be able to repeat this,” says Landreth about the results of the studies. “Turns out that wasn’t the case, and we fundamentally don’t understand that.” He suggests that the other groups might have used different drug preparations that altered the concentration of bexarotene in the brain or even changed its biological activity.

In a response published alongside the comment articles, Landreth emphasizes that some of the studies affirm two key conclusions of the original paper: the lowering of soluble β-amyloid levels and the reversal of cognitive deficits. He says that the interest in plaques may even be irrelevant to Alzheimer’s disease.

That last line of thought is a bit dangerous. It was, after all, the plaque clearance that got this work all the attention in the first place, so to claim that it might not be that big a deal once it failed to repeat looks like an exercise in goalpost-shifting. There might be something here, don't get me wrong. But chasing it down is going to be a long-term effort. It helps, of course, that bexarotene has already been out in clinical practice for a good while, so we already know a lot about it (and the barriers to its use are lower). But there's no guarantee that it's the optimum compound for whatever this effect is. We're in for a long haul. With Alzheimer's, we're always in for a long haul, it seems. I wish it weren't so.

Comments (15) + TrackBacks (0) | Category: Alzheimer's Disease

May 29, 2013

Sulfa Side Effects, Decades Later

Posted by Derek

You'd think that by now we'd know all there is to know about the side effects of sulfa drugs, wouldn't you? These were the top-flight antibiotics about 80 years ago, remember, and they've been in use (in one form or another) ever since. But some people have had pronounced CNS side effects from their use, and it's never been clear why.

Until now, that is. Here's a new paper in Science that shows that this class of drugs inhibits the synthesis of tetrahydrobiopterin, an essential cofactor for a number of hydroxylase and reductase enzymes. And that in turn interferes with neurotransmitter levels, specifically dopamine and serotonin. The specific culprit here seems to be sepiapterin reductase (SPR). Here's a summary at C&E News.

This just goes to show you how much there is to know, even about things that have been around forever (by drug industry standards). And every time something like this comes up, I wonder what else there is that we haven't uncovered yet. . .

Comments (17) + TrackBacks (0) | Category: Infectious Diseases | Toxicology

The Hydrogen Wave Function, Imaged

Posted by Derek

Here's another one of those images that gives you a bit of a chill down the spine. You're looking at a hydrogen atom, and those spherical bands are the orbitals in which you can find its electron. Here, people, is the wave function. Yikes. Update: true, what you're seeing are the probability distributions as defined by the wave function. But still. . .
[Image: H atom.jpg]
This is from a new paper in Physical Review Letters (here's a commentary at the APS site on it). Technically, what we're seeing here are Stark states, which you get when the atom is exposed to an electric field. Here's more on how the experiment was done:

In their elegant experiment, Stodolna et al. observe the orbital density of the hydrogen atom by measuring a single interference pattern on a 2D detector. This avoids the complex reconstructions of indirect methods. The team starts with a beam of hydrogen atoms that they expose to a transverse laser pulse, which moves the population of atoms from the ground state to the 2s and 2p orbitals via two-photon excitation. A second tunable pulse moves the electron into a highly excited Rydberg state, in which the orbital is typically far from the central nucleus. By tuning the wavelength of the exciting pulse, the authors control the exact quantum numbers of the state they populate, thereby manipulating the number of nodes in the wave function. The laser pulses are tuned to excite those states with principal quantum number n equal to 30.

The presence of the dc field places the Rydberg electron above the classical ionization threshold but below the field-free ionization energy. The electron cannot exit against the dc field, but it is a free particle in many other directions. The outgoing electron wave accumulates a different phase, depending on the direction of its initial velocity. The portion of the electron wave initially directed toward the 2D detector (direct trajectories) interferes with the portion initially directed away from the detector (indirect trajectories). This produces an interference pattern on the detector. Stodolna et al. show convincing evidence that the number of nodes in the detected interference pattern exactly reproduces the nodal structure of the orbital populated by their excitation pulse. Thus the photoionization microscope provides the ability to directly visualize quantum orbital features using a macroscopic imaging device.

n=30 is a pretty excited atom, way off the ground state, so it's not like we're seeing a garden-variety hydrogen atom here. But the wave function for a hydrogen atom can be calculated for whatever state you want, and this is what it should look like. The closest thing I know of to this is the work with field emission electron microscopes, which measure the ease of moving electrons from a sample, and whose resolution has been taken down to alarming levels.
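
For the curious, the textbook hydrogen wave functions are straightforward to compute yourself. Below is a minimal sketch of my own (not code from the paper) that evaluates the radial probability density of a hydrogen state in atomic units, using SciPy's generalized Laguerre polynomials; the choice of n = 30, l = 29 is just an illustrative high Rydberg state, not the specific Stark states imaged in the experiment.

    import numpy as np
    from math import factorial
    from scipy.special import genlaguerre

    def radial_wavefunction(n, l, r):
        # Standard textbook R_nl(r) for hydrogen, in units of the Bohr radius.
        rho = 2.0 * r / n
        norm = np.sqrt((2.0 / n) ** 3 * factorial(n - l - 1) / (2.0 * n * factorial(n + l)))
        return norm * np.exp(-rho / 2.0) * rho ** l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

    # Radial probability density P(r) = r^2 |R_nl(r)|^2 for an n = 30 Rydberg state.
    r = np.linspace(0.0, 2500.0, 5000)  # these states are enormous: hundreds of Bohr radii across
    P = r ** 2 * radial_wavefunction(30, 29, r) ** 2
    print("Most probable radius (Bohr radii):", r[np.argmax(P)])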

So here we are - one thing after another that we've had to assume is really there, because the theory works out so well, turns out to be observable by direct physical means. And they are really there. Schoolchildren will eventually grow up with this sort of thing, but the rest of us are free to be weirded out. I am!

Comments (17) + TrackBacks (0) | Category: General Scientific News

May 28, 2013

Valeant Versus Genentech: Two Different Worlds

Posted by Derek

Readers may recall the bracing worldview of Valeant CEO Mike Pearson. Here's another dose of it, courtesy of the Globe and Mail. Pearson, when he was brought in from McKinsey, knew just what he wanted to do:

Pearson’s next suggestion was even more daring: Cut research and development spending, the heart of most drug firms, to the bone. “We had a premise that most R&D didn’t give good return to shareholders,” says Pearson. Instead, the company should favour M&A over R&D, buying established treatments that made enough money to matter, but not enough to attract the interest of Big Pharma or generic drug makers. A drug that sold between $10 million and $200 million a year was ideal, and there were a lot of companies working in that range that Valeant could buy, slashing costs with every purchase. As for those promising drugs it had in development, Pearson said, Valeant should strike partnerships with major drug companies that would take them to market, paying Valeant royalties and fees.

It's not a bad strategy for a company that size, and it sure has worked out well for Valeant. But what if everyone tried to do the same thing? Who would actually discover those drugs for in-licensing? That's what David Shaywitz is wondering at Forbes. He contrasts the Valeant approach with what Art Levinson cultivated at Genentech:

While the industry has moved in this direction, it’s generally been slower and less dramatic than some had expected. In part, many companies may harbor unrealistic faith in their internal R&D programs. At the same time, I’ve heard some consultants cynically suggest that to the extent Big Pharma has any good will left, it’s due to its positioning as a science-driven enterprise. If research was slashed as dramatically as at Valeant, the industry’s optics would look even worse. (There’s also the non-trivial concern that if Valeant’s acquisition strategy were widely adopted, who would build the companies everyone intends to acquire?)

The contrasts between Levinson’s research nirvana and Pearson’s consultant nirvana (and scientific dystopia) could hardly be more striking, and frame two very different routes the industry could take. . .

I can't imagine the industry going all one way or all the other. There will always be people who hope that their great new ideas will make them (and their investors) rich. And as I mentioned in that link in the first paragraph, there's been talk for years about bigger companies going "virtual", and just handling the sales and regulatory parts, while licensing in all the rest. I've never been able to quite see that, either, because if one or more big outfits tried it, the cost of such deals would go straight up - wouldn't they? And as they did, the number would stop adding up. If everyone knows that you have to make deals or die, well, the price of deals has to increase.

But the case of Valeant is an interesting and disturbing one. Just think over that phrase, ". . .most R&D didn't give good return to shareholders". You know, it probably hasn't. Some years ago, the Wall Street Journal estimated that the entire biotech industry, taken top to bottom across its history, had yet to show an actual profit. The Genentechs and Amgens were cancelled out, and more, by all the money that had flowed in never to be seen again. I would not be surprised if that were still the case.

So, to steal a line from Oscar Wilde (who was no stranger to that technique), is an R&D-driven startup the triumph of hope over experience? Small startups are the very definition of trying to live off returns of R&D, and most startups fail. The problem is, of course, that any Valeants out there need someone to do the risky research for there to be something for them to buy. An industry full of Mike Pearsons would be a room full of people all staring at each other in mounting perplexity and dismay.

Comments (32) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

May 24, 2013

A New Way to Determine Chirality

Posted by Derek

There's a new paper out today in Nature on a very unusual way to determine the chirality of organic molecules. It uses an exotic effect of microwave spectroscopy, and I will immediately confess that the physics is (as of this morning, anyway) outside my range.

This is going to be one of those posts that comes across as gibberish to the non-chemists in the audience. Chirality seems to be a concept that confuses people pretty rapidly, even though the examples of right and left shoes or gloves (or right and left-handed screw threads) are familiar from everyday objects, and exactly the same principles apply to molecules. But the further you dig into the concept, the trickier it gets, and when you start dragging the physics of it in, you start shedding your audience quickly. Get a dozen chemists together and ask them how, exactly, chiral compounds rotate plane-polarized light and see how that goes. (I wouldn't distinguish myself by the clarity of my explanation, either).

But this paper is something else again. Here, see how you do:

Here we extend this class of approaches by carrying out nonlinear resonant phase-sensitive microwave spectroscopy of gas phase samples in the presence of an adiabatically switched non-resonant orthogonal electric field; we use this technique to map the enantiomer-dependent sign of an electric dipole Rabi frequency onto the phase of emitted microwave radiation.

The best I can do with this is that the two enantiomers have the same dipole moment, but that the electric field interacts with them in a manner that gives different signs. This shows up in the phase of the emitted microwaves, and (as long as the sample is cooled down, to cut back on the possible rotational states), it seems to give a very clear signal. This is a completely different way to determine chirality from the existing polarized-light ones, or the use of anomalous dispersion in X-ray data (although that one can be tricky).

Here's a rundown on this new paper from Chemistry World. My guess is that this is going to be one of those techniques that will be used rarely, but when it comes up it'll be because nothing else will work at all. I also wonder if, possibly, the effect might be noticed on molecules in interstellar space under the right conditions, giving us a read on chirality from a distance?

Comments (12) + TrackBacks (0) | Category: Chemical News

May 23, 2013

Alkynes and Nitriles

Posted by Derek

Put this one in the category of "reactions you probably wouldn't have thought of". There's a new paper in Organic Letters on cleaving a carbon-carbon triple bond, yielding the two halves as their own separate nitriles.
[Image: Nitriles.png]
It seems to be a reasonable reaction, and someone may well find a use for it. I just enjoyed it because it was totally outside the way that I think about breaking and forming bonds. And it makes me wonder about the reverse: will someone find a way to take two nitriles and turn them into a linked alkyne? Formally, that gives off nitrogen, so you'd think that there would be some way to make it happen. . .

Comments (10) + TrackBacks (0) | Category: Chemical News

Another Look At Marketing Vs. R&D In Pharma

Posted by Derek

FiercePharma has some good figures to back up my posts the other day on R&D spending versus marketing. I mentioned how many people, when they argue that drug companies spend more on marketing than they do on research, are taking the entire SG&A number, and how companies tend to not even break out their marketing numbers at all.

Well, the folks at Fierce had a recent article on marketing budgets in the business, and they take Pfizer's numbers as a test case. That's actually a really good example: Pfizer is known as a mighty marketing machine, and for a long time they had what must have been the biggest sales force in the industry. They also have a lower R&D spend than many of their peers, as a percentage of sales. So if you're looking for the sort of skewed priorities that critics are always complaining about, here's where you'd look.

Pfizer spent $622 million on advertising last year. Man, that's a lot of money. It's so much that it's not even one-tenth of their R&D budget. Ah, you say, but ads are only part of the story, and so they are. But while we don't have a good estimate on that for Pfizer, we do have one for the industry as a whole:

DTC spending is only part of the overall sales-and-marketing budget, of course. Detailing to doctors costs a pretty penny, and that's where drugmakers spend much of their sales budget. Consumer advertising spending dropped by 11.5% in 2012 to $3.47 billion. Marketing to physicians, according to a Johns Hopkins Bloomberg School of Public Health study, amounted to $27.7 billion in 2010; that same year, DTC spending was just over $4 billion.

That's a total for 2010 of more than $31 billion, the best guess-timate we can come up with on short notice. According to FierceBiotech's 2010 R&D spending report, the industry shelled out $67 billion on research that year--more than twice our quick-and-dirty marketing estimate.

So let's try for a Pfizer estimate then. If they stayed at roughly that ratio, then they would have spent seven times as much marketing to physicians as they did on advertising per se. That gives a rough number of $4.3 billion, plus that $622 million, for a nice round five billion dollars of marketing. That's still less than their R&D budget of $7.9 billion, folks, no small sum. (And as for that figure from a couple of years ago about how it only costs $43 million to find a new drug, spare me. Spare everyone. Pfizer is not allocating $7.9 billion for fun, nor are they planning on producing 184 new drugs with that money at $43 million per, more's the pity.)
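
To make that back-of-the-envelope arithmetic explicit, here's the same estimate as a few lines of Python. The one assumption (mine, not FiercePharma's) is that the industry-wide ratio of physician marketing to consumer advertising from the 2010 figures above carries over to Pfizer.

    # Rough Pfizer marketing estimate, using the ~7:1 industry ratio of
    # physician marketing ($27.7B) to DTC advertising (~$4B) in 2010.
    pfizer_dtc_ads = 0.622e9               # Pfizer consumer advertising, per the post
    industry_ratio = 27.7 / 4.0            # physician marketing vs. DTC advertising
    pfizer_physician = pfizer_dtc_ads * industry_ratio
    pfizer_marketing_total = pfizer_dtc_ads + pfizer_physician
    pfizer_rd = 7.9e9                      # Pfizer R&D budget
    print(f"Estimated marketing: ${pfizer_marketing_total / 1e9:.1f}B "
          f"vs. R&D: ${pfizer_rd / 1e9:.1f}B")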

So let me take a stronger line: Big Pharma does not spend more on marketing than it does on R&D. This is a canard; it's not supported by the data. And let me reiterate a point that's been made here several times: no matter what the amount spent on marketing, it's supposed to bring in more money than is spent. That's the whole point of marketing. Even if the marketing budget was the same as the R&D, even if it were more, it still wouldn't get rid of that point: the money that's being spent in the labs is money that came in because of marketing. Companies aren't just hosing away billions of dollars on marketing because they enjoy it; they're doing it to bring in a profit (you know, that more-money-than-you-spend thing), and if some marketing strategy doesn't look like it's performing, it gets ditched. The response-time loop over there is a lot tighter than it is in research.

There. Now the next time this comes up, I'll have a post to point to, with the numbers, and with the links. It will do no good at all.

Note: I am not saying that every kind of drug company marketing is therefore good. Nor am I saying that I do not cringe and roll my eyes at some of it. And yes indeed, companies can and do cross lines that shouldn't be crossed when they get to selling their products too hard. Direct-to-consumer advertising, although it has brought in the money, has surely damaged the industry from other directions. All this is true. But the popular picture of big drug companies as huge advertising shops with little vestigial labs stuck to them: that isn't.

Comments (29) + TrackBacks (0) | Category: Business and Markets | Why Everyone Loves Us

May 22, 2013

Underappreciated Analytical Techniques

Posted by Derek

A conversation the other day about 2-D NMR brought this thought to mind. What do you think are the most underused analytical methods in organic chemistry? Maybe I should qualify that, to the most underused (but potentially useful) ones.

I know, for example, that hardly anyone takes IR spectra any more. I've taken maybe one or two in the last ten years, and that was to confirm the presence of things like alkynes or azides, which show up immediately and oddly in the infrared. Otherwise, IR has just been overtaken by other methods for many of its applications in organic chemistry, and it's no surprise that it's fallen off so much since its glory days. But I think that carbon-13 NMR is probably underused, as are a lot of 2D NMR techniques. Any other nominations?

Comments (62) + TrackBacks (0) | Category: Analytical Chemistry | Life in the Drug Labs

How Many Binding Pockets Are There?

Posted by Derek

Just how many different small-molecule binding sites are there? That's the subject of this new paper in PNAS, from Jeffrey Skolnick and Mu Gao at Georgia Tech, which several people have sent along to me in the last couple of days.

This question has a lot of bearing on the study of protein evolution. The paper's intro brings up two competing hypotheses of how protein function evolved. One, the "inherent functionality model", assumes that primitive binding pockets are a necessary consequence of protein folding, and that the effects of small molecules on these (probably quite nonspecific) motifs have been honed by evolutionary pressures since then. (The wellspring of this idea is this paper from 1976, by Jensen, and this paper will give you an overview of the field). The other way it might have worked, the "acquired functionality model", would be the case if proteins tend, in their "unevolved" states, to be more spherical, in which case binding events must have been much more rare, but also much more significant. In that system, the very existence of binding pockets themselves is what's under the most evolutionary pressure.

The Skolnick paper references this work from the Hecht group at Princeton, which already provides evidence for the first model. In that paper, a set of near-random 4-helical-bundle proteins was produced in E. coli - the only patterning was a rough polar/nonpolar alternation in amino acid residues. Nonetheless, many members of this unplanned family showed real levels of binding to things like heme, and many even showed above-background levels of several types of enzymatic activity.

In this new work, Skolnick and Gao produce a computational set of artificial proteins (called the ART library in the text), made up of nothing but poly-leucine. These were modeled to the secondary structure of known proteins in the PDB, to produce natural-ish proteins (from a broad structural point of view) that have no functional side chain residues themselves. Nonetheless, they found that the small-molecule-sized pockets of the ART set actually match up quite well with those found in real proteins. But here's where my technical competence begins to run out, because I'm not sure that I understand what "match up quite well" really means here. (If you can read through this earlier paper of theirs at speed, you're doing better than I can). The current work says that "Given two input pockets, a template and a target, (our algorithm) evaluates their PS-score, which measures the similarity in their backbone geometries, side-chain orientations, and the chemical similarities between the aligned pocket-lining residues." And that's fine, but what I don't know is how well it does that. I can see poly-Leu giving you pretty standard backbone geometries and side-chain orientations (although isn't leucine a little more likely than average to form alpha-helices?), but when we start talking chemical similarities between the pocket-lining residues, well, how can that be?

But I'm even willing to go along with the main point of the paper, which is that there are not-so-many types of small-molecule binding pockets, even if I'm not so sure about their estimate of how many there are. For the record, they're guessing not many more than about 500. And while that seems low to me, it all depends on what we mean by "similar". I'm a medicinal chemist, someone who's used to seeing "magic methyl effects" where very small changes in ligand structure can make big differences in binding to a protein. And that makes me think that I could probably take a set of binding pockets that Skolnick's people would call so similar as to be basically identical, and still find small molecules that would differentiate them. In fact, that's a big part of my job.

In general, I see the point they're making, but it's one that I've already internalized. There are a finite number of proteins in the human body. Fifty thousand? A couple of hundred thousand? Probably not a million. Not all of these have small-molecule binding sites, for sure, so there's a smaller set to deal with right there. Even if those binding sites were completely different from one another, we'd be looking at a set of binding pockets in the thousands/tens of thousands range, most likely. But they're not completely different, as any medicinal chemist knows: try to make a selective muscarinic agonist, or a really targeted serine hydrolase inhibitor, and you'll learn that lesson quickly. And anyone who's run their drug lead through a big selectivity panel has seen the sorts of off-target activities that come up: you hit some of the other members of your target's family to a greater or lesser degree. You hit the flippin' sigma receptor, not that anyone knows what that means. You hit the hERG channel, and good luck to you then. Your compound is a substrate for one of the CYP enzymes, or it binds tightly to serum albumin. Who has even seen a compound that binds only to its putative target? And this is only with the counterscreens we have, which are a small subset of the things that are really out there in cells.

And that takes me to my main objection to this paper. As I say, I'm willing to stipulate, gladly, that there are only so many types of binding pockets in this world (although I think that it's more than 500). But this sort of thing is what I have a problem with:

". . .we conclude that ligand-binding promiscuity is likely an inherent feature resulting from the geometric and physical–chemical properties of proteins. This promiscuity implies that the notion of one molecule–one protein target that underlies many aspects of drug discovery is likely incorrect, a conclusion consistent with recent studies. Moreover, within a cell, a given endogenous ligand likely interacts at low levels with multiple proteins that may have different global structures."

"Many aspects of drug discovery" assume that we're only hitting one target? Come on down and try that line out in a drug company, and be prepared for rude comments. Believe me, we all know that our compounds hit other things, and we all know that we don't even know the tenth of it. This is a straw man; I don't know of anyone doing drug discovery that has ever believed anything else. Besides, there are whole fields (CNS) where polypharmacy is assumed, and even encouraged. But even when we're targeting single proteins, believe me, no one is naive enough to think that we're hitting those alone.

Other aspects of this paper, though, are fine by me. As the authors point out, this sort of thing has implications for drawing evolutionary family trees of proteins - we should not assume too much when we see similar binding pockets, since these may well have a better chance of being coincidence than we think. And there are also implications for origin-of-life studies: this work (and the other work in the field, cited above) imply that a random collection of proteins could still display a variety of functions. Whether these are good enough to start assembling a primitive living system is another question, but it may be that proteinaceous life has an easier time bootstrapping itself than we might imagine.

Comments (16) + TrackBacks (0) | Category: Biological News | In Silico | Life As We (Don't) Know It

May 21, 2013

Promoting STEM Education, Foolishly

Posted by Derek

Here's a man who says what he thinks about getting students into STEM careers:

The United States spent more than US$3 billion last year across 209 federal programmes intended to lure young people into careers in science, technology, engineering and mathematics (STEM). The money goes on a plethora of schemes at school, undergraduate and postgraduate levels, all aimed at promoting science and technology, and raising standards of science education.

In a report published on 10 April, Congress’s Government Accountability Office (GAO) asked a few pointed questions about why so many potentially overlapping programmes coexist. The same day, the 2014 budget proposal of President Barack Obama’s administration suggested consolidating the programmes, but increasing funding.

What no one asked was whether these many activities actually benefit science and engineering, or society as a whole. My answer to both questions is an emphatic ‘no’.

And I think he's right about that. Whipping and driving people into science careers doesn't seem like a very good way to produce good scientists. In fact, it seems like an excellent way to produce a larger cohort of indifferent ones, which is exactly what we don't need. Or does that depend on the definition of "we"?

The dynamic at work here isn’t complicated. By cajoling more children to enter science and engineering — as the United Kingdom also does by rigging university-funding rules to provide more support for STEM than other subjects — the state increases STEM student numbers, floods the market with STEM graduates, reduces competition for their services and cuts their wages. And that suits the keenest proponents of STEM education programmes — industrial employers and their legion of lobbyists — absolutely fine.

And that takes us back to the subject of these two posts, on the oft-heard complaints of employers that they just can't seem to find qualified people any more. To which add, all too often, ". . .not at the salaries we'd prefer to pay them, anyway". Colin Macilwain, the author of this Nature piece I'm quoting from, seems to agree:

But the main backing for government intervention in STEM education has come from the business lobby. If I had a dollar for every time I’ve heard a businessman stand up and bemoan the alleged failure of the education system to produce the science and technology ‘skills’ that his company requires, I’d be a very rich man.

I have always struggled to recognize the picture these detractors paint. I find most recent science graduates to be positively bursting with both technical knowledge and enthusiasm.

If business people want to harness that enthusiasm, all they have to do is put their hands in their pockets and pay and train newly graduated scientists and engineers properly. It is much easier, of course, for the US National Association of Manufacturers and the Confederation of British Industry to keep bleating that the state-run school- and university-education systems are ‘failing’.

This position, which was not my original one on this issue, is not universally loved. (The standard take on this issue, by contrast, has the advantage of both flattering and advancing the interests of employers and educators alike, and it's thus very politically attractive). I don't even have much affection for my own position on this, even though I've come to think it's accurate. As I've said before, it does feel odd for me, as a scientist, as someone who values education greatly, and as someone who's broadly pro-immigration, to be making these points. But there they are.

Update: be sure to check the comments section if this topic interests you - there are a number of good ones coming in, from several sides of this issue.

Comments (76) + TrackBacks (0) | Category: Business and Markets | Current Events

May 20, 2013

But Don't Drug Companies Spend More on Marketing?

Posted by Derek

So drug companies may spend a lot on R&D, but they spend even more on marketing, right? I see the comments are already coming in to that effect on this morning's post on R&D expenditures as a percentage of revenues. Let's take a look at those other numbers, then.

We're talking SG&A, "sales, general, and administrative". That's the accounting category where all advertising, promotion and marketing ends up. Executive salaries go there, too, in case you're wondering. Interestingly, R&D expenses technically go there as well, but companies almost always break that out as a separate subcategory, with the rest as "Other SG&A". What most companies don't do is break out the S part separately: just how much they spend on marketing (and how, and where) is considered more information than they're willing to share with the world, and with their competition.

That means that when you see people talking about how Big Pharma spends X zillion dollars on marketing, you're almost certainly seeing an argument based on the whole SG&A number. Anything past that is a guess - and would turn out to be a lower number than the SG&A, anyway, which has some other stuff rolled into it. Most of the people who talk about Pharma's marketing expenditures are not interested in lower numbers, anyway, from what I can see.

So we'll use SG&A, because that's what we've got. Now, one of the things you find out quickly when you look at such figures is that they vary a lot, from industry to industry, and from company to company inside any given group. This is fertile ground for consultants, who go around telling companies that if they'll just hire them, they can show them how to get their expenses down to the levels that some of their competitors manage, which is an appealing prospect.
[Image: SG&A.png]
Here you see an illustration of that, taken from the web site of this consulting firm. Unfortunately, this sample doesn't include the "Pharmaceuticals" category, but "Biotechnology" is there, and you can see that SG&A as a percent of revenues runs from about 20% to about 35%. That's definitely not one of the low SG&A industries (look at the airlines, for example), but there are a lot of other companies, in a lot of other industries, in that same range.

So, what do the SG&A expenditures look like for some big drug companies? By looking at 2012 financials, we find that Merck's are at 27% of revenues, Pfizer is at 33%, AstraZeneca is just over 31%, Bristol-Myers Squibb is at 28%, and Novartis is at 34%, high enough that they're making special efforts to talk about bringing it down. Biogen's SG&A expenditures are 23% of revenues, Vertex's are 29%, Celgene's are 27%, and so on. I think that's a reasonable sample, and it's right in line with that chart's depiction of biotech.

What about other high-tech companies? I spent some time in the earlier post talking about their R&D spending, so here are some SG&A figures. Microsoft spends 25%, Google just under 20%, and IBM spends 21.5%. Amazon's expenditures are about 23%, and have been climbing. But many other tech companies come in lower: Hewlett-Packard's SG&A layouts are 11% of revenues, Intel's are 15%, Broadcom's are 9%, and Apple's are only 6.5%.

Now that's more like it, I can hear some people saying. "Why can't the drug companies get their marketing and administrative costs down? And besides, they spend more on that than they do on research!" If I had a dollar for every time that last phrase pops up, I could take the rest of the year off. So let's get down to what people are really interested in: sales/administrative costs versus R&D. Here comes a list (and note that some of the figures may be slightly off from this morning's post - different financial sites break things down slightly differently):

Merck: SG&A 27%, R&D 17.3%
Pfizer: SG&A 33%, R&D 14.2%
AstraZeneca: SG&A 31.4%, R&D 15.1%
BMS: SG&A 28%, R&D 22%
Biogen: SG&A 23%, R&D 24%
Johnson & Johnson: SG&A 31%, R&D 12.5%

Well, now, isn't that enough? As you go to smaller companies, it looks better (and in fact, the categories flip around), but when you get too small, there aren't any revenues to measure against. But just look at these people - almost all of them are spending more on sales and administration than they are on research, sometimes even a bit more than twice as much! Could any research-based company hold its head up with such figures to show?

Sure they could. Sit back and enjoy these numbers, by comparison:

Hewlett-Packard: SG&A 11%, R&D 2.6%.
IBM: SG&A 21.5%, R&D 5.7%.
Microsoft: SG&A 25%, R&D 13.3%.
3M: SG&A 20.4%, R&D 5.5%
Apple: SG&A 6.5%, R&D 2.2%.
GE: SG&A 25%, R&D 3.2%

Note that these companies, all of which appear regularly on "Most Innovative" lists, spend anywhere from two to eight times their R&D budgets on sales and administration. I have yet to hear complaints about how this makes all their research into some sort of lie, or about how much more they could be doing if they weren't spending all that money on those non-research activities. You cannot find a drug company with a split between SG&A and research spending like there is for IBM, or GE, or 3M. I've tried. No research-driven drug company could survive if it tried to spend five or six times its R&D on things like sales and administration. It can't be done. So enough, already.

Note: the semiconductor companies, which were the only ones I could find with comparable R&D spending percentages to the drug industry, are also outliers in SG&A spending. Even Intel, the big dog of the sector, manages to spend slightly less on that category than it does on R&D, which is quite an accomplishment. The chipmakers really are off on their own planet, financially. But the closest things to them are the biopharma companies, in both departments.

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

How Much Do Drug Companies Spend on R&D, Anyway?

Posted by Derek

How much does Big Pharma spend on R&D, compared to what it takes in? This topic came up during a discussion here last week, when a recent article at The Atlantic referred to these expenditures as "only" 16 cents on the dollar, and I wanted to return to it.

One good source for such numbers is Booz, the huge consulting outfit, and their annual "Global Innovation 1000" survey. This is meant to be a comparison of companies that are actually trying to discover new products and bring them to market (as opposed to department stores, manufacturers of house-brand cat food, and other businesses whose operations consist of doing pretty much the same thing without much of an R&D budget). Even among these 1000 companies, the average R&D budget, as a per cent of sales, is between 1 and 1.5%, and has stayed in that range for years.

Different industries naturally have different averages. The "chemicals and energy" category in the Booz survey spends between 1 and 3% of its sales on R&D. Aerospace and defense companies tend to spend between 3 and 6 per cent. The big auto makers tend to spend between 3 and 7% of their sales on research, but those sales figures are so large that they still account for a reasonable hunk (16%) of all R&D expenditures. That pie, though, has two very large slices representing electronics/computers/semiconductors and biopharma/medical devices/diagnostics. Those two groups account for half of all the industrial R&D spending in the world.

And there are a lot of variations inside those industries as well. Apple, for example, spends only 2.2% of its sales on R&D, while Samsung and IBM come in around 6%. By comparison with another flagship high-tech sector, the internet-based companies, Amazon spends just over 6% itself, and Google is at a robust 13.6% of its sales. Microsoft is at 13% itself.

The semiconductor companies are where the money really gets plowed back into the labs, though. Here's a roundup of 2011 spending, where you can see a company like Intel, with forty billion dollars of sales, still putting 17% of that back into R&D. And the smaller firms are (as you might expect) doing even more. AMD spends 22% of its sales on R&D, and Broadcom spends 28%. These are people who, like Alice's Red Queen, have to run as fast as they can if they even want to stay in the same place.

Now we come to the drug industry. The first thing to note is that some of its biggest companies already have their spending set at Intel levels or above: Roche is over 19%, Merck is over 17%, and AstraZeneca is over 16%. The others are no slouches, either: Sanofi and GSK are above 14%, and Pfizer (with the biggest R&D spending drop of all the big pharma outfits, I should add) is at 13.5%. They, J&J, and Abbott drag the average down by only spending in the 11-to-14% range - I don't think that there's such a thing as a drug discovery company that spends in the single digits compared to revenue. If any of us tried to get away with Apple's R&D spending levels, we'd be eaten alive.

All this adds up to a lot: if you take the top 20 biggest industrial R&D spenders in the world, eight of them are drug companies. No other industrial sector has that many on the list, and a number of companies just missed making it. Lilly, for one, spent 23% of revenues on R&D, and BMS spent 22%, as did Biogen.

And those are the big companies. As with the chip makers, the smaller outfits have to push harder. Where I work, we spent about 50% of our revenues on R&D last year, and that's projected to go up. I think you'll find similar figures throughout biopharma. So you can see why I find it sort of puzzling that someone can complain about the drug industry as a whole "only" spending 16% of its revenues. Outside of semiconductors, nobody spends more.

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

May 17, 2013

A Little Ranbaxy Example

Posted by Derek

Compare and contrast. Here we have Krishnan Ramalingam, from Ranbaxy's Corporate Communications department, in 2006:

Being a global pharmaceutical major, Ranbaxy took a deliberate decision to pool its resources to fight neglected disease segments. . .Ranbaxy strongly felt that generic antiretrovirals are essential in fighting the world-wide struggle against HIV/AIDS, and therefore took a conscious decision to embark upon providing high quality affordable generics for patients around the world, specifically for the benefit of Least Developed Countries. . .Since 2001, Ranbaxy has been providing antiretroviral medicines of high quality at affordable prices for HIV/AIDS affected countries for patients who might not otherwise be able to gain access to this therapy.

And here we have them in an advertorial section of the South African Mail and Guardian newspaper, earlier this year:

Ranbaxy has a long standing relationship with Africa. It was the first Indian pharmaceutical company to set up a manufacturing facility in Nigeria, in the late 1970s. Since then, the company has established a strong presence in 44 of the 54 African countries with the aim of providing quality medicines and improving access. . .Ranbaxy is a prominent supplier of Antiretroviral (ARV) products in South Africa through its subsidiary Sonke Pharmaceuticals. It is the second largest supplier of high quality affordable ARV products in South Africa which are also extensively used in government programs providing access to ARV medicine to millions.

Yes, as Ranbaxy says on its own web site: "At Ranbaxy, we believe that Anti-retroviral (ARV) therapy is an essential tool in waging the war against HIV/AIDS. . .We estimate currently close to a million patients worldwide use our ARV products for their daily treatment needs. We have been associated with this cause since 2001 and were among the first generic companies to offer ARVs to various National AIDS treatment programmes in Africa. We were also responsible for making these drugs affordable in order to improve access. . ."

And now we descend from the heights. Here, in a vivid example of revealed preference versus stated preference, is what was really going on, from that Fortune article I linked to yesterday:

. . .as the company prepared to resubmit its ARV data to WHO, the company's HIV project manager reiterated the point of the company's new strategy in an e-mail, cc'ed to CEO Tempest. "We have been reasonably successful in keeping WHO from looking closely at the stability data in the past," the manager wrote, adding, "The last thing we want is to have another inspection at Dewas until we fix all the process and validation issues once and for all."

. . .(Dinesh) Thakur knew the drugs weren't good. They had high impurities, degraded easily, and would be useless at best in hot, humid conditions. They would be taken by the world's poorest patients in sub-Saharan Africa, who had almost no medical infrastructure and no recourse for complaints. The injustice made him livid.

Ranbaxy executives didn't care, says Kathy Spreen, and made little effort to conceal it. In a conference call with a dozen company executives, one brushed aside her fears about the quality of the AIDS medicine Ranbaxy was supplying for Africa. "Who cares?" he said, according to Spreen. "It's just blacks dying."

I have said many vituperative things about HIV hucksters like Matthias Rath, who have told patients in South Africa to throw away their antiviral medications and take his vitamin supplements instead. What, then, can I say about people like this, who callously and intentionally provided junk, labeled as what were supposed to be effective drugs, to people with no other choice and no recourse? If this is not criminal conduct, I'd very much like to know what is.

And why is no one going to jail? I'm suggesting jail as a civilized alternative to a barbaric, but more appealingly direct form of justice: shipping the people who did this off to live in a shack somewhere in southern Africa, infected with HIV, and having them subsist as best they can on the drugs that Ranbaxy found fit for their sort.

Comments (43) + TrackBacks (0) | Category: Infectious Diseases | The Dark Side

May 16, 2013

Ranbaxy: Looking Under the Rock

Posted by Derek

Here's an excellent, detailed look from Fortune at how things went off the rails at Ranbaxy and their generic atorvastatin (Lipitor). The company has been hit by a huge fine, and no wonder. This will give you the idea:

On May 13, Ranbaxy pleaded guilty to seven federal criminal counts of selling adulterated drugs with intent to defraud, failing to report that its drugs didn't meet specifications, and making intentionally false statements to the government. Ranbaxy agreed to pay $500 million in fines, forfeitures, and penalties -- the most ever levied against a generic-drug company. (No current or former Ranbaxy executives were charged with crimes.) Thakur's confidential whistleblower complaint, which he filed in 2007 and which describes how the company fabricated and falsified data to win FDA approvals, was also unsealed. Under federal whistleblower law, Thakur will receive more than $48 million as part of the resolution of the case. . .

. . .(he says that) they stumbled onto Ranbaxy's open secret: The company manipulated almost every aspect of its manufacturing process to quickly produce impressive-looking data that would bolster its bottom line. "This was not something that was concealed," Thakur says. It was "common knowledge among senior managers of the company, heads of research and development, people responsible for formulation to the clinical people."

Lying to regulators and backdating and forgery were commonplace, he says. The company even forged its own standard operating procedures, which FDA inspectors rely on to assess whether a company is following its own policies. Thakur's team was told of one instance in which company officials forged and backdated a standard operating procedure related to how patient data are stored, then aged the document in a "steam room" overnight to fool regulators.

Company scientists told Thakur's staff that they were directed to substitute cheaper, lower-quality ingredients in place of better ingredients, to manipulate test parameters to accommodate higher impurities, and even to substitute brand-name drugs in lieu of their own generics in bioequivalence tests to produce better results.

You name it, it's probably there. Good thing the resulting generic drugs were cheap, eh? And I suppose these details render inoperative, as the Nixon staff used to say, the explanations that the company used to offer when talk of such problems came up: that it was all the efforts of their big pharma competitors and some unscrupulous stock market types. (Whenever you see a company's CEO going on about a conspiracy to depress his company's share price, you should worry).

The whole article is well worth reading - your eyebrows are guaranteed to go up a few times. This affair has been a damaging blow to the offshore generics business, India's in particular, and does not help them wear their "Low cost drugs for the poor" halo any better. Not when your pills have glass particles in them along with (or instead of) the active ingredient. . .

Comments (27) + TrackBacks (0) | Category: The Dark Side

The Atlantic on Drug R&D

Posted by Derek

"Can you respond to this tripe?" asked one of the emails that sent along this article in The Atlantic. I responded that I was planning to, but that things were made more complicated by my being extensively quoted in said tripe. Anyway, here goes.

The article, by Brian Till of the New America Foundation, seems somewhat confused, and is written in a confusing manner. The title is "How Drug Companies Keep Medicine Out of Reach", but the focus is on neglected tropical diseases, not all medicine. Well, the focus is actually on a contested WHO treaty. But the focus is also on the idea of using prizes to fund research, and on the patent system. And the focus is on the general idea of "delinking" R&D from sales in the drug business. Confocal prose not having been perfected yet, this makes the whole piece a difficult read, because no matter which of these ideas you're waiting to hear about, you end up having a long wait while you work your way through the other stuff. There are any number of sentences in this piece that reference "the idea" and its effects, but there is no sentence that begins with "Here's the idea."

I'll summarize: the WHO treaty in question is as yet formless. There is no defined treaty to be debated; one of the article's contentions is that the US has blocked things from even getting that far. But the general idea is that signatory states would commit to spending 0.01% of GDP on neglected diseases each year. Where this money goes is not clear. Grants to academia? Setting up new institutes? Incentives to commercial companies? And how the contributions from various countries are to be managed is not clear, either: should Angola (for example) pool its contributions with other countries (or send them somewhere else outright), or are they interested in setting up their own Angolan Institute of Tropical Disease Research?

The fuzziness continues. You will read and read through the article trying to figure out what happens next. The "delinking" idea comes in as a key part of the proposed treaty negotiations, with the reward for discovery of a tropical disease treatment coming from a prize for its development, rather than patent exclusivity. But where that money comes from (the GDP-linked contributions?) is unclear. Who sets the prize levels, at what point the money is awarded, who it goes to: hard to say.

And the "Who it goes to" question is a real one, because the article says that another part of the treaty would be a push for open-source discovery on these diseases (Matt Todd's malaria efforts at Sydney are cited). This, though, is to a great extent a whole different question than the source-of-funds one, or the how-the-prizes-work one. Collaboration on this scale is not easy to manage (although it might well be desirable) and it can end up replacing the inefficiencies of the marketplace with entirely new inefficiencies all its own. The research-prize idea seems to me to be a poor fit for the open-collaboration model, too: if you're putting up a prize, you're saying that competition between different groups will spur them on, which is why you're offering something of real value to whoever finishes first and/or best. But if it's a huge open-access collaboration, how do you split up the prize, exactly?

At some point, the article's discussion of delinking R&D and the problems with the current patent model spreads fuzzily outside the bounds of tropical diseases (where there really is a market failure, I'd say) and starts heading off into drug discovery in general. And that's where my quotes start showing up. The author did interview me by phone, and we had a good discussion. I'd like to think that I helped emphasize that when we in the drug business say that drug discovery is hard, we're not just putting on a show for the crowd.

But there's an awful lot of "Gosh, it's so cheap to make these drugs, why are they so expensive?" in this piece. To be fair, Till does mention that drug discovery is an expensive and risky undertaking, but I'm not sure that someone reading the article will quite take on board how expensive and how risky it is, and what the implications are. There's also a lot of criticism of drug companies for pricing their products at "what the market will bear", rather than as some percentage of what it cost to discover or make them. This is a form of economics I've criticized many times here, and I won't go into all the arguments again - but I will ask: what other products are priced in such a manner? Other than what customers will pay for them? Implicit in these arguments is the idea that there's some sort of reasonable, gentlemanly profit that won't offend anyone's sensibilities, while grasping for more than that is just something that shouldn't be allowed. But just try to run an R&D-driven business on that concept. I mean, the article itself details the trouble that Eli Lilly, AstraZeneca, and others are facing with their patent expirations. What sort of trouble would they be in if they'd said "No, no, we shouldn't make such profits off our patented drugs. That would be indecent." Even with those massive profits, they're in trouble.

And that brings up another point: we also get the "Drug companies only spend X pennies per dollar on R&D". That's the usual response to pointing out situations like Lilly's; that they took the money and spent it on fleets of yachts or something. The figure given in the article is 16 cents per dollar of revenue, and it's prefaced by an "only". Only? Here, go look at different industries, around the world, and find one that spends more. By any industrial standard, we are plowing massive amounts back into the labs. I know that I complain about companies doing things like stock buybacks, but that's a complaint at the margin of what is already pretty impressive spending.

To finish up, here's one of the places I'm quoted in the article:

I asked Derek Lowe, the chemist and blogger, for his thoughts on the principle of delinking R&D from the actual manufacture of drugs, and why he thought the industry, facing such a daunting outlook, would reject an idea that could turn fallow fields of research on neglected diseases into profitable ones. "I really think it could be viable," he said. "I would like to see it given a real trial, and neglected diseases might be the place to do it. As it is, we really already kind of have a prize model in the developed countries, market exclusivity. But, at the same time, you could look at it and it will say, 'You will only make this amount of money and not one penny more by curing this tropical disease.' Their fear probably is that if that model works great, then we'll move on to all the other diseases."

What you're hearing is my attempt to bring in the real world. I think that prizes are, in fact, a very worthwhile thing to look into for market failures like tropical diseases. There are problems with the idea - for one thing, the prize payoff itself, compared with the time and opportunity cost, is hard to get right - but it's still definitely worth thinking about. But what I was trying to tell Brian Till was that drug companies would be worried (and rightly) about the extension of this model to all other disease areas. Wrapped up in the idea of a research-prize model is the assumption that someone (a wise committee somewhere) knows just what a particular research result is worth, and can set the payout (and afterwards, the price) accordingly. This is not true.

There's a follow-on effect. Such a wise committee might possibly feel a bit of political pressure to set those prices down to a level of nice and cheap, the better to make everyone happy. Drug discovery being what it is, it would take some years before all the gears ground to a halt, but I worry that something like this might be the real result. I find my libertarian impulses coming to the fore whenever I think about this situation, and that prompts me to break out an often-used quote from Robert Heinlein:

Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded — here and there, now and then — are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty.

This is known as "bad luck."

Comments (44) + TrackBacks (0) | Category: Drug Development | Drug Prices | Why Everyone Loves Us

May 15, 2013

And The Award For Clinical Futility Goes To. . .

Email This Entry

Posted by Derek

I was talking with someone the other day about the most difficult targets and therapeutic areas we knew, and that brought up the question: which of these has had the greatest number of clinical failures? Sepsis was my nomination: I know that there have been several attempts, all of which have been complete washouts. And for mechanisms, defined broadly, I nominate PPAR ligands. The only ones to make it through were the earliest compounds, discovered even before their target had been identified. What other nominations do you have?

Comments (32) + TrackBacks (0) | Category: Clinical Trials | Drug Industry History

GSK's Published Kinase Inhibitor Set

Email This Entry

Posted by Derek

Speaking of open-source drug discovery (such as it is) and sharing of data sets (such as they are), I really should mention a significant example in this area: the GSK Published Kinase Inhibitor Set. (It was mentioned in the comments to this post). The company has made 367 compounds available to any academic investigator working in the kinase field, as long as they make their results publicly available (at ChEMBL, for example). The people at GSK doing this are David Drewry and William Zuercher, for the record - here's a recent paper from them and their co-workers on the compound set and its behavior in reporter-gene assays.

Why are they doing this? To seed discovery in the field. There's an awful lot of chemical biology to be done in the kinase field, far more than any one organization could take on, and the more sets of eyes (and cerebral cortices) that are on these problems, the better. So far, there have been about 80 collaborations, mostly in Europe and North America, all the way from broad high-content phenotypic screening to targeted efforts against rare tumor types.

The plan is to continue to firm up the collection, making more data available for each compound as work is done on them, and to add more compounds with different selectivity profiles and chemotypes. Now, the compounds so far are all things that have been published on by GSK in the past, obviating concerns about IP. There are, though, a multitude of other compounds in the literature from other companies, and you have to think that some of these would be useful additions to the set. How, though, does one get this to happen? That's the stage that things are in now. Beyond that, there's the possibility of some sort of open network to optimize entirely new probes and tools, but there's plenty that could be done even before getting to that stage.

So if you're in academia, and interested in kinase pathways, you absolutely need to take a look at this compound set. And for those of us in industry, we need to think about the benefits that we could get by helping to expand it, or by starting similar efforts of our own in other fields. The science is big enough for it. Any takers?

Comments (22) + TrackBacks (0) | Category: Academia (vs. Industry) | Biological News | Chemical News | Drug Assays

May 14, 2013

A Specific Crowdfunding Example

Email This Entry

Posted by Derek

I mentioned Microryza in that last post. Here's Prof. Michael Pirrung, at UC Riverside, with an appeal there to fund the resynthesis of a compound for NCI testing against renal cell carcinoma. It will provide an experienced post-doc's labor for a month to prepare an interesting natural-product-derived proteasome inhibitor that the NCI would like to take to their next stage of evaluation. Have a look - you might be looking at the future of academic research funding, or at least a real part of it.

Comments (14) + TrackBacks (0) | Category: Cancer | General Scientific News

Crowdfunding Research

Email This Entry

Posted by Derek

Crowdfunding academic research might be changing, from a near-stunt to a widely used method of filling gaps in a research group's money supply. At least, that's the impression this article at Nature Jobs gives:

The practice has exploded in recent years, especially as success rates for research-grant applications have fallen in many places. Although crowd-funding campaigns are no replacement for grants — they usually provide much smaller amounts of money, and basic research tends to be less popular with public donors than applied sciences or arts projects — they can be effective, especially if the appeals are poignant or personal, involving research into subjects such as disease treatments.

The article details several venues that have been used for this sort of fund-raising, including Indiegogo, Kickstarter, RocketHub, FundaGeek, and SciFund Challenge. I'd add Microryza to that list. And there's a lot of good advice for people thinking about trying it themselves, including how much money to try for (at least at first), the timelines one can expect, and how to get your message out to potential donors.

Overall, I'm in favor of this sort of thing, but there are some potential problems. This gives the general public a way to feel more connected to scientific research, and to understand more about what it's actually like, both of which are goals I feel a close connection to. But (as that quote above demonstrates), some kinds of research are going to be an easier sell than others. I worry about a slow (or maybe not so slow) race to the bottom, with lab heads overpromising what their research can deliver, exaggerating its importance to immediate human concerns, and overselling whatever results come out.

These problems have, of course, been noted. Ethan Perlstein, formerly of Princeton, used RocketHub for his crowdfunding experiment that I wrote about here. And he's written at Microryza with advice about how to get the word out to potential donors, but that very advice has prompted a worried response over at SciFund Challenge, where Jai Ranganathan had this to say:

His bottom line? The secret is to hustle, hustle, hustle during a crowdfunding campaign to get the word out and to get media attention. With all respect to Ethan, if all researchers running campaigns follow his advice, then that’s the end for science crowdfunding. And that would be a tragedy because science crowdfunding has the potential to solve one of the key problems of our time: the giant gap between science and society.

Up to a point, these two are talking about different things. Perlstein's advice is focused on how to run a successful crowdsourcing campaign (based on his own experience, which is one of the better guides we have so far), while Ranganathan is looking at crowdsourcing as part of something larger. Where they intersect, as he says, is that it's possible that we'll end up with a tragedy of the commons, where the strategy that's optimal for each individual's case turns out to be (very) suboptimal for everyone taken together. He's at pains to mention that Ethan Perlstein has himself done a great job with outreach to the public, but worries about those to follow:

Because, by only focusing on the mechanics of the campaign itself (and not talking about all of the necessary outreach), there lurks a danger that could sink science crowdfunding. Positive connections to an audience are important for crowdfunding success in any field, but they are especially important for scientists, since all we have to offer (basically) is a personal connection to the science. If scientists omit the outreach and just contact audiences when they want money, that will go a long way to poisoning the connections between science and the public. Science crowdfunding has barely gotten started and already I hear continuous complaints about audience exasperation with the nonstop fundraising appeals. The reason for this audience fatigue is that few scientists have done the necessary building of connections with an audience before they started banging the drum for cash. Imagine how poisonous the atmosphere will become if many more outreach-free scientists aggressively cold call (or cold e-mail or cold tweet) the universe about their fundraising pleas.

Now, when it comes to overpromising and overselling, a cynical observer might say that I've just described the current granting system. (And if we want even more of that sort of thing, all we have to do is pass a scheme like this one). But the general public will probably be a bit easier to fool than a review committee, at least, if you can find the right segment of the general public. Someone will probably buy your pitch, eventually, if you can throw away your pride long enough to keep on digging for them.

That same cynical observer might say that I've just described the way that we set up donations to charities, and indeed Ranganathan makes an analogy to NPR's fundraising appeals. That's the high end. The low end of the charitable-donation game is about as low as you can go - just run a search for the words "fake" and "charity" through Google News any day, any time, and you can find examples that will make you ashamed that you have the same number of chromosomes as the people you're reading about. (You probably do). Avoiding this state really is important, and I'm glad that people are raising the issue already.

What if, though, someone were to set up a science crowdfunding appeal, with hopes of generating something that could actually turn a profit, and portions of that to be turned over to the people who put up the original money? We have now arrived at the biopharma startup business, via a different road than usual. Angel investors, venture capital groups, shareholders in an IPO - all of these people are doing exactly that, at various levels of knowledge and participation. The pitch is not so much "Give us money for the good of science", but "Give us money, because here's our plan to make you even more". You will note that the scale of funds raised by the latter technique makes those raised by the former look like a roundoff error, which fits in pretty well with what I take as normal human motivations.

But academic science projects have no such pitch to make. They'll have to appeal to altruism, to curiosity, to mood affiliation, and other nonpecuniary motivations. Done well, that can be a very good thing, and done poorly, it could be a disaster.

Comments (20) + TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | General Scientific News

May 13, 2013

Astellas Closing the OSI and Perseid Sites?

Email This Entry

Posted by Derek

I've heard this morning that Astellas is closing the OSI site in Farmingdale, NY, and the Perseid Therapeutics site in Redwood City, CA. More details as I hear them (and check the comments section; people with more direct knowledge may be showing up in there).

Comments (12) + TrackBacks (0) | Category: Business and Markets

Pyrrolidines, Not the Usual Way

Email This Entry

Posted by Derek

I wanted to mention a new reaction that's come out in a paper in Science. It's from the Betley lab at Harvard, and it's a new way to make densely substituted saturated nitrogen heterocycles (pyrrolidines, in particular).
Iron%20cat.png
You start from a four-carbon chain with an azide at one end, and you end up with a Boc-protected pyrrolidine, by direct activation/substitution of the CH bond at the other end of the chain. Longer chains give you mixtures of different ring sizes (4, 5, and 6), depending on where the catalyst feels like inserting the new bond. I'd like to see how many other functional groups this chemistry is compatible with (can you have another tertiary amine in there somewhere, or a hydroxy?). But we have a huge lack of carbon-hydrogen functionalization reactions in this business, and this is a welcome addition to a rather short list.

There was a paper last year from the Groves group at Princeton on fluorination of aliphatic CH bonds using a manganese porphyrin complex. These two papers are similar in my mind - they're modeling themselves on the CYP enzymes, using high-valent metals to accomplish things that normally we wouldn't think of being able to do easily. The more of this sort of thing, the better, as far as I'm concerned: new reactions will make us think of entirely new things.

Comments (9) + TrackBacks (0) | Category: Chemical News

Another Big Genome Disparity (With Bonus ENCODE Bashing)

Email This Entry

Posted by Derek

I notice that the recent sequencing of the bladderwort plant is being played in the press in an interesting way: as the definitive refutation of the idea that "junk DNA" is functional. That's quite an about-face from the coverage of the ENCODE consortium's take on human DNA, the famous "80% Functional, Death of Junk DNA Idea" headlines. A casual observer, if there are casual observers of this sort of thing, might come away just a bit confused.

Both types of headlines are overblown, but I think that one set is more overblown than the other. The minimalist bladderwort genome (8.2 x 10^7 base pairs) is only about half the size of Arabidopsis thaliana, which rose to fame as a model organism in plant molecular biology partly because of its tiny genome. By contrast, humans (who make up so much of my readership) have about 3 x 10^9 base pairs, almost 40 times as many as the bladderwort. (I stole that line from G. K. Chesterton, by the way; it's from the introduction to The Napoleon of Notting Hill)

But pine trees have eight times as many base pairs as we do, so it's not a plant-versus-animal thing. And as Ed Yong points out in this excellent post on the new work, the Japanese canopy plant comes in at 1.5 x 10^11 base pairs, fifty times the size of the human genome and two thousand times the size of the bladderwort. This is the same problem as the marbled lungfish versus pufferfish one that I wrote about here, and it's not a new problem at all. People have been wondering about genome sizes ever since they were able to estimate the size of genomes, because it became clear very quickly that they varied hugely and according to patterns that often make little sense to us.
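
For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope calculation using the genome sizes quoted above (a rough illustration only; the figures are the approximate values cited, nothing more):

```python
# Rough check of the genome-size ratios quoted above (approximate haploid sizes, base pairs).
genomes = {
    "bladderwort (Utricularia gibba)": 8.2e7,
    "human": 3.0e9,
    "Japanese canopy plant (Paris japonica)": 1.5e11,
}

bladderwort = genomes["bladderwort (Utricularia gibba)"]
human = genomes["human"]
canopy = genomes["Japanese canopy plant (Paris japonica)"]

print(f"human / bladderwort:        {human / bladderwort:.0f}x")   # ~37x, "almost 40 times"
print(f"canopy plant / human:       {canopy / human:.0f}x")        # 50x
print(f"canopy plant / bladderwort: {canopy / bladderwort:.0f}x")  # ~1800x, "two thousand times"
```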

That's why the ENCODE hype met (and continues to meet) with such a savage reception. It did nothing to address this issue, and seemed, in fact, to pretend that it wasn't an issue at all. Function, function, everywhere you look, and if that means that you just have to accept that the Japanese canopy plant needs the most wildly complex functional DNA architecture in the living world, well, isn't Nature just weird that way?

Comments (18) + TrackBacks (0) | Category: Biological News

May 10, 2013

Why Not Share More Bioactivity Data?

Email This Entry

Posted by Derek

The ChEMBL database of compounds has been including bioactivity data for some time, and the next version of it is slated to have even more. There are a lot of numbers out in the open literature that can be collected, and a lot of numbers inside academic labs. But if you want to tap the deepest sources of small-molecule biological activity data, you have to look to the drug industry. We generate vast heaps of such; it's the driveshaft of the whole discovery effort.

But sharing such data is a very sticky issue. No one's going to talk about their active projects, of course, but companies are reluctant to open the books even to long-dead efforts. The upside is seen as small, and the downside (though unlikely) is seen as potentially large. Here's a post from the ChEMBL blog that talks about the problem:

. . .So, what would your answer be if someone asked you if you consider it to be a good idea if they would deposit some of their unpublished bioactivity data in ChEMBL? My guess is that you would be all in favour of this idea. 'Go for it', you might even say. On the other hand, if the same person would ask you what you think of the idea to deposit some of ‘your bioactivity data’ in ChEMBL the situation might be completely different.

First and foremost you might respond that there is no such bioactivity data that you could share. Well let’s see about that later. What other barriers are there? If we cut to the chase then there is one consideration that (at least in my experience) comes up regularly and this is the question: 'What’s in it for me?' Did you ask yourself the same question? If you did and you were thinking about ‘instant gratification’ I haven’t got a lot to offer. Sorry, to disappoint you. However, since when is science about ‘instant gratification’? If we would all start to share the bioactivity data that we can share (and yes, there is data that we can share but don’t) instead of keeping it locked up in our databases or spreadsheets this would make a huge difference to all of us. So far the main and almost exclusive way of sharing bioactivity data is through publications but this is (at least in my view) far too limited. In order to start to change this (at least a little bit) the concept of ChEMBL supplementary bioactivity data has been introduced (as part of the efforts of the Open PHACTS project, http://www.openphacts.org).

There's more on this in an article in Future Medicinal Chemistry. Basically, if an assay has been described in an open scientific publication, the data generated through it qualifies for deposit in ChEMBL. No one's asking for companies to throw open their books, but even when details of a finished (or abandoned) project are published, there are often many more data points generated than ever get included in the manuscript. Why not give them a home?
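
For the curious, here's a minimal sketch of what pulling deposited bioactivity records out of ChEMBL might look like from Python. It assumes the chembl_webresource_client package and its Django-style filters; the assay ID is a placeholder, not a real deposited GSK assay:

```python
# Minimal sketch: retrieving bioactivity records for one assay from ChEMBL.
# Assumes the chembl_webresource_client package is installed
# (pip install chembl-webresource-client); the assay ID below is a placeholder.
from chembl_webresource_client.new_client import new_client

activities = new_client.activity.filter(assay_chembl_id="CHEMBL1234567")

for i, act in enumerate(activities):
    if i >= 10:   # just peek at the first few records
        break
    print(act["molecule_chembl_id"], act["standard_type"],
          act["standard_value"], act["standard_units"])
```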

I get the impression, though, that GSK is the only organization so far that's been willing to give this a try. So I wanted to give it some publicity as well, since there are surely many people who aren't aware of the effort at all, and might be willing to help out. I don't expect that data sharing on this level is going to lead to any immediate breakthroughs, of course, but even though assay numbers like this have a small chance of helping someone, they have a zero chance of helping if they're stuck in the digital equivalent of someone's desk drawer.

What can be shared, should be. And there's surely a lot more that falls into that category than we're used to thinking.

Comments (18) + TrackBacks (0) | Category: Drug Assays | The Scientific Literature

May 9, 2013

An Anticoagulant Antidote

Email This Entry

Posted by Derek

Here's a drug-discovery problem that you don't often have to think about. The anticoagulant field is a huge one, with Plavix, warfarin, and plenty of others jostling for a share of a huge market (both for patients to take themselves, and for hospital use). The Factor Xa inhibitors are a recent entry into this area, with Bayer's Xarelto (rivaroxaban) as the key example so far.

But there's a problem with any Xa inhibitor: there's no antidote for them. Blood clotting therapies have a narrow window to work in - anything effective enough to be beneficial will be effective enough to be trouble under other circumstances. Anticoagulants need a corresponding way to cancel out their effects, in case of overdose or other trouble. (Vitamin K is the answer for warfarin). We don't often have to consider this issue, but it's a big one in this case.

Portola Pharmaceuticals has developed a Factor Xa mimic that binds the inhibitors, and thus titrates their effects. They have their own Xa inhibitor coming along (betrixaban), but if this protein makes it through, they'll have done the whole field a favor as well as themselves.

Comments (8) + TrackBacks (0) | Category: Cardiovascular Disease

Merck's Liptruzet: A Cause For Shame?

Email This Entry

Posted by Derek

Vytorin's been discussed several times around here. The combination of Zetia (ezetimibe), the cholesterol absorption inhibitor discovered at Schering-Plough, with Merck's simvastatin looked as if it should be a very effective cholesterol-lowering medication, but the real-world data have been consistently puzzling. There's a big trial going on that people are hoping will clarify things, but so far it's had the opposite effect. It's no exaggeration to say that the entire absorption inhibitor/statin combination idea is in doubt, and we may well learn a lot about human lipidology as we figure out what's happened. It will have been an expensive lesson.

So in the midst of all this, what does Merck do but trot out another ezetimibe/statin combination? Liptruzet has atorvastatin (generic Lipitor) in it, instead of simvastatin (generic Zocor), and what that is supposed to accomplish is a mystery to me. It's a mystery to Josh Bloom over at the American Council on Science and Health, too, and he's out with an op-ed saying that Merck should be ashamed of itself.

I can't see how he's wrong. What I'm seeing is an attempt by Merck to position itself should the ongoing Vytorin trial actually exonerate the combination idea. Vytorin, you see, doesn't have all that much patent lifetime left; its problems since 2008 have eaten the most profitable years right out of its cycle. So if Vytorin actually turns out to work, after all the exciting plot twists, Merck will be there to tell people that they shouldn't take it. No, they should take exciting new Liptruzet instead. It's newer.

If anyone can think of a reason why this doesn't make Merck look like shady marketeers, I'd like to hear it. And (as Bloom points out) it doesn't make the FDA look all that great, either, since I'm sure that Liptruzet will count towards the end-of-the-year press release about all the innovative new drugs that the agency has approved. Not this time.

Update: John LaMattina's concerned about that last part, too.

Comments (38) + TrackBacks (0) | Category: Cardiovascular Disease | Why Everyone Loves Us

Your Brain Shifts Gears

Email This Entry

Posted by Derek

Want to be weirded out? Study the central nervous system. I started off my med-chem career in CNS drug discovery, and it's still my standard for impenetrability. There's a new paper in Science, though, that just makes you roll your eyes and look up at the ceiling.

The variety of neurotransmitters is well appreciated - you have all these different and overlapping signaling systems using acetylcholine, dopamine, serotonin, and a host of lesser-known molecules, including such oddities as hydrogen sulfide and even carbon monoxide. And on the receiving end, the various subtypes of receptors are well studied, and those give a tremendous boost to the variety of signaling from a single neurotransmitter type. Any given neuron can have several of these going on at the same time - when you consider how many different axons can be sprawled out from a single cell, there's a lot of room for variety.

That, you might think, is a pretty fair amount of complexity. But note also that the density and population of these receptors can change according to environmental stimuli. That's why you get headaches if you don't have your accustomed coffee in the morning (you've made more adenosine A2 receptors, and you haven't put any fresh caffeine ligand into them). Then there are receptor dimers (homo- and hetero-) that act differently than the single varieties, constitutively active receptors that are always on, until a ligand turns them off (the opposite of the classic signaling mechanism), and so on. Now, surely, we're up to a suitable level of complex function.

Har har, says biology. This latest paper shows, by a series of experiments in rats, that a given population of neurons can completely switch the receptor system it uses in response to environmental cues:

Our results demonstrate transmitter switching between dopamine and somatostatin in neurons in the adult rat brain, induced by exposure to short- and long-day photoperiods that mimic seasonal changes at high latitudes. The shifts in SST/dopamine expression are regulated at the transcriptional level, are matched by parallel changes in postsynaptic D2R/SST2/4R expression, and have pronounced effects on behavior. SST-IR/TH-IR local interneurons synapse on CRF-releasing cells, providing a mechanism by which the brain of nocturnal rats generates a stress response to a long-day photoperiod, contributing to depression and serving as functional integrators at the interface of sensory and neuroendocrine responses.

This remains to be demonstrated in human tissue, but I see absolutely no reason why the same sort of thing shouldn't be happening in our heads as well. There may well be a whole constellation of these neurotransmitter switchovers that can take place in response to various cues, but which neurons can do this, involving which signaling regimes, and in response to what stimuli - those are all open questions. And what the couplings are between the environmental response and all the changes in transcription that need to take place for this to happen, those are going to have to be worked out, too.

There may well be drug targets in there. Actually, there are drug targets everywhere. We just don't know what most of them are yet.

Comments (15) + TrackBacks (0) | Category: The Central Nervous System

May 8, 2013

Total Synthesis in Print

Email This Entry

Posted by Derek

Over at the Baran group's "Open Flask" blog, there's a post on the number of total synthesis papers that show up in the Journal of the American Chemical Society. I'm reproducing one of the figures below, the percentage of JACS papers with the phrase "total synthesis" in their title.
Percent%20total%20synthesis.png
You can see that the heights of the early 1980s have never been reached again, and that post-2000 there has been a marked drought. As the post notes, JACS seems to have begun publishing many more papers in total around that time (anyone notice this or know anything about it?), and it appears that they certainly didn't fill the new pages with total synthesis. 2013, though, already looks like an outlier, and it's only May.

My own feelings about total synthesis are a matter of record, and have been for some time, if anyone cares. So I'm not that surprised to see the trend in this chart, if trend it is.

But that said, it would be worth running the same analysis on a few other likely journal titles. Has the absolute number of total synthesis papers gone down? Or have they merely migrated (except for the really exceptional ones) to the lower-impact journals? Do fewer papers put the phrase "Total synthesis of. . ." in their titles as compared to years ago? Those are a few of the confounding variables I can think of, and there are probably more. But I think, overall, that the statement "JACS doesn't publish nearly as much total synthesis as it used to" seems to be absolutely correct. Is this a good thing, a bad thing, or some of each?
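
If anyone wants to run that sort of count themselves, here's a rough sketch of the idea. The input file and its columns are hypothetical; real records would have to come from the publisher's search tools or a literature-database export:

```python
# Illustrative sketch: fraction of papers per year with "total synthesis" in the title.
# Assumes a hypothetical CSV export with "year" and "title" columns; the file name
# and columns are placeholders, not a real dataset.
import csv
from collections import Counter

totals, hits = Counter(), Counter()

with open("jacs_titles.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        year = row["year"]
        totals[year] += 1
        if "total synthesis" in row["title"].lower():
            hits[year] += 1

for year in sorted(totals):
    pct = 100.0 * hits[year] / totals[year]
    print(f"{year}: {hits[year]:4d} / {totals[year]:5d} papers  ({pct:.1f}%)")
```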

Comments (31) + TrackBacks (0) | Category: Chemical News | The Scientific Literature

Things I Won't Work With: Dimethylcadmium

Email This Entry

Posted by Derek

Cadmium is bad news. Lead and mercury get all the press, but cadmium is just as foul, even if far fewer people encounter it. Never in my career have I had any occasion to use any, and I like it that way. There was an organocadmium reaction in my textbook when I took sophomore organic chemistry, but it was already becoming obsolete, and good riddance, because this is one of those metals that's best avoided for life. It has acute toxic effects, chronic toxic effects, and if there are any effects in between those it probably has them, too.

Fortunately, cadmium is not well absorbed from the gut, and even more fortunately, no one eats it. But breathing it, now that's another matter, and if you're a nonchemist wondering how someone can breathe metallic elements, then read on. One rather direct way is if someone is careless enough to floof fine powders of them around you. That's how cadmium's toxicity was discovered in the first place, from miners dealing with the dust. But that's only the start. There's a bottom to the list for breathable cadmium, too, which is quite a thought. The general rule is, if you're looking for the worst organic derivatives of any metal, you should hop right on down to the methyl compounds. That's where the most choking vapors, the brightest flames, and the most panicked shouts and heartfelt curses are to be found. Methyl organometallics tend to be small, reactive, volatile, and ready to party.

Dimethyl cadmium, then, represents the demon plunked in the middle of the lowest circle as far as this element is concerned. I'll say only one thing in its favor: it's not quite as reactive as dimethyl zinc, its cousin one row up in the periodic table. No one ever has to worry about inhaling dimethyl zinc; since it bursts into ravenous flames as soon as it hits the air, the topic just never comes up. Then again, when organozincs burn, they turn into zinc oxide, which is inert enough to be used in cosmetics. But slathering your nose with cadmium oxide is not recommended.

Even though dimethylcadmium does not instantly turn into a wall of flame, it can still liven the place up. If you just leave the liquid standing around, hoping it'll go away, there are two outcomes. If you have a nice wide spill of it, with a lot of surface area, you fool, it'll probably still ignite on its own, giving off plenty of poisonous cadmium oxide smoke. If for some reason it doesn't do that, you will still regret your decision: the compound will react with oxygen anyway and form a crust of dimethyl cadmium peroxide, a poorly characterized compound (go figure) which is a friction-sensitive explosive. I've no idea how you get out of that tight spot; any attempts are likely to suddenly distribute the rest of the dimethylcadmium as a fine mist. Water is not the answer. One old literature report says that "When thrown into water, (dimethylcadmium) sinks to the bottom in large drops, which decompose in a series of sudden explosive jerks, with crackling sounds", and you could not ask for a clearer picture of the devil finding work for idle hands. Or idle heads.

Even without all this excitement, the liquid has an alarmingly high vapor pressure, and that vapor is alarmingly well absorbed on inhalation. A few micrograms (yep, millionths of a gram) of it per cubic meter of air hits the legal limits, and I'd prefer to be surrounded by far less. It's toxic to the lungs, naturally, but since it gets into the blood stream so well, it's also toxic to the liver, and to the kidneys (basically, the organs that are on the front lines when it's time to excrete the stuff), and to the brain and nervous system. Cadmium compounds in general have also been confirmed as carcinogenic, should you survive the initial exposure.

After all this, if you still feel the urge to experience dimethylcadmium - stay out of my lab - you can make this fine compound quite easily from cadmium chloride, which I've no particular urge to handle, either, and methyllithium or methyl Grignard reagent. Purifying it away from the ethereal solvents after that route, though, looks like extremely tedious work, which allows you the rare experience of being bored silly by something that's trying to kill you. It is safe to assume that the compound will swiftly penetrate latex gloves, just like deadly and hideous dimethylmercury, so you'll want to devote some time to thinking about how you'll handle the fruits of your labor.

I'm saddened to report that the chemical literature contains descriptions of dimethylcadmium's smell. Whoever provided these reports was surely exposed to far more of the vapor than common sense would allow, because common sense would tell you to stay about a half mile upwind at all times. At any rate, its odor is variously described as "foul", "unpleasant", "metallic", "disagreeable", and (wait for it) "characteristic", which is an adjective that shows up often in the literature with regard to smells, and almost always makes a person want to punch whoever thought it was useful. We can assume that dimethylcadmium is not easily confused with beaujolais in the blindfolded sniff test, but not much more. So if you're working with organocadmium derivatives and smell something nasty, but nasty in a new, exciting way that you've never quite smelled before, then you can probably assume the worst.

Now, as opposed to some of the compounds on my list, you can find people who've handled dimethylcadmium, or even prepared it, worse luck, although it is an (expensive) article of commerce. As mentioned above, it used to be in all the textbooks as a reliable way to form methyl ketones from acid chlorides, but there are far less evil reagents that can do that for you now. It's still used (on a research scale) to make exotic photosensitive and semiconducting materials, but even those hardy folk would love to find an alternative. No, this compound appears to have no fan club whatsoever. Start one at your own risk.

Comments (55) + TrackBacks (0) | Category: Things I Won't Work With

May 7, 2013

Another Germ Theory Victory - Back Pain?

Email This Entry

Posted by Derek

The "New Germ Theory" people may have notched up another one: a pair of reports out from a team in Denmark strongly suggest that many cases of chronic low back pain are due to low-grade bacterial infection. They've identified causative agents (Propionibacterium acnes) by isolating them from tissue, and showed impressive success in the clinic by treating back pain patients with a lengthy course of antibiotics. Paul Ewald is surely smiling about this news, although (as mentioned here) he has some ideas about the drug industry that I can't endorse.

So first we find out that stomach ulcers are not due to over-dominant mothers, and now this. What other hard-to-diagnose infections are we missing? Update - such as obesity, maybe?

Comments (25) + TrackBacks (0) | Category: Infectious Diseases

An Update on Deuterium Drugs

Email This Entry

Posted by Derek

In case you're wondering how the deuterated-drugs idea is coming along, the answer seems to be "just fine", at least for Concert Pharmaceuticals. They've announced their third collaboration inside of a year, this time with Celgene.

And they've got their own compound in development, CTP-499, in Phase II for diabetic nephropathy. That's a deutero analog of HDX (1-((S)-5-hydroxyhexyl)-3,7-dimethylxanthine), which is an active metabolite of the known xanthine drug pentoxifylline (which has also been investigated in diabetic kidney disease). You'd assume that deuteration makes this metabolite hang around longer, rather than being excreted, which is just the sort of profile shift that Concert is targeting.
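
The pharmacokinetic arithmetic behind that expectation is simple, if you're willing to assume first-order elimination dominated by the metabolic step being slowed: the half-life goes as the inverse of the rate constant, so a deuterium kinetic isotope effect that cuts that rate by some factor stretches the half-life by roughly the same factor. Here's a toy illustration (all numbers invented, not CTP-499 data):

```python
import math

# Toy first-order pharmacokinetics: t_half = ln(2) / k_elimination.
# Assume elimination is dominated by the C-H (or C-D) oxidation being slowed,
# and that deuteration reduces that rate constant by a kinetic isotope effect.
# All numbers below are invented for illustration.
k_H = 0.23          # per hour, hypothetical elimination rate for the all-H compound
kie = 3.0           # hypothetical observed kinetic isotope effect
k_D = k_H / kie

t_half_H = math.log(2) / k_H
t_half_D = math.log(2) / k_D

print(f"t1/2 (protio):  {t_half_H:.1f} h")
print(f"t1/2 (deutero): {t_half_D:.1f} h")   # ~3x longer if metabolism is rate-limiting
```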

Long-term, the deuteration idea has now diffused out into the general drug discovery world, and there will be no more easy pickings for it (well, at least not so many, depending on how competently patents are drafted). But if Concert can make a success out of what they have going already, they're already set for a longer term than most startups.

Comments (15) + TrackBacks (0) | Category: Pharmacokinetics

One Case of Plagiarism Down. Two Zillion to Go.

Email This Entry

Posted by Derek

You may remember this case from Chemistry - A European Journal earlier this year, where a paper appeared whose text was largely copy-pasted from a previous JACS paper from another lab. This one has finally been pulled; Retraction Watch has the details.

The most interesting part is that statement "The authors regret this approach", which I don't recall ever seeing in a situation like this. The comments at Retraction Watch build on this, and are quite interesting. There are many countries (and cultures) where it's considered acceptable (or at least a venial sin) to lift passages verbatim from other English-language papers when you're publishing in that language. I can see the attraction - I would hate to have to deliver a scientific manuscript in German, for example, which is the closest thing I have to a second language.

But I still wouldn't do it by copying and pasting big hunks of text, either. Reasons for resorting to that range from wanting to be absolutely sure that things are being expressed correctly in one's third or fourth language, all the way to "Isn't that how it's supposed to be done?" The latter situation obtains in parts of Asia, where apparently there's an emphasis in some schools on verbatim transcription of authoritative sources. There's an interesting cite to Yu Hua's China in Ten Words, where one of those ten words is "copycat" (shanzhai):

As a product of China’s uneven development, the copycat phenomenon has as many negative implications as it has positive aspects. The moral bankruptcy and confusion of right and wrong in China today, for example, find vivid expression in copycatting. As the copycat concept has gained acceptance, plagiarism, piracy, burlesque, parody, slander, and other actions originally seen as vulgar or illegal have been given a reason to exist; and in social psychology and public opinion they have gradually acquired respectability. No wonder that “copycat” has become one of the words most commonly used in China today. All of this serves to demonstrate the truth of the old Chinese saying: “The soil decides the crop, and the vine shapes the gourd.”

Four years ago I saw a pirated edition of [my novel] Brothers for sale on the pedestrian bridge that crosses the street outside my apartment; it was lying there in a stack of other pirated books. When the vendor noticed me running my eyes over his stock, he handed me a copy of my novel, recommending it as a good read. A quick flip through and I could tell at once that it was pirated. “No, it’s not a pirated edition,” he corrected me earnestly. “It’s a copycat.”

This tendency isn't a good fit with a lot of things, but it especially doesn't work out so well with scientific publication. I haven't seen it stated in so many words, but a key assumption is that every scientific paper is supposed to be different. If you take the time to read a new paper, you should learn something new and you should see something that you haven't seen before. It might be trivial, it might well be useless, but it should be at least slightly different from any other paper you've read or could find.

Now, as the Retraction Watch comments mention, some of these plagiarism cases are examples of "templating", where original (or sort of original) work was done, but the presentation of it was borrowed from an existing paper. That's not as bad as faking up results completely, of course, but you still have to wonder about the value of your work if you can lift big swaths of someone else's paper to describe it. Even when the manuscript itself has been written fresh from the ground up, there's plenty of stuff out in the literature like this. Someone gets an interesting reaction with a biphenyl and a zinc catalyst, and before you know it, there are all these quickie communications where someone else says "Hey, we got that with a naphthyl", or "Hey, we got that with a boron halide catalyst". Technically, yes, these are different, but we're in the land of least publishable units now, where the salami is sliced so thinly that you can read a newspaper through it.

So the authors regret this approach, do they? So does everyone else.

Comments (9) + TrackBacks (0) | Category: The Dark Side | The Scientific Literature

May 6, 2013

Ken Frazier at Merck: An Assessment

Email This Entry

Posted by Derek

Here's a fine profile of Merck's Ken Frazier at Forbes. Matthew Herper does a good job of showing the hole that Merck has been slowly sliding into over the past few years, and wonders if Frazier is going to be able to drag the company out of it:

But it is clear that Frazier still views himself through the prism of his lawyerly training–he has not yet grown into a commanding and decisive chief executive. He’s scrupulous about not making anyone else look bad–working almost too hard in interviews to be clear that Perlmutter’s predecessor was not fired–and seems to be afraid to be seen as making too many big changes. “I am a person who does not subscribe to the hero-CEO school of thought,” he says. His persona is the culmination of the careful lessons he learned from his long climb to the top and his masterful legal defense against the lawsuits related to the pain pill Vioxx, which saved Merck and got him the top job. In order to be a great leader, he’s going to have to unlearn them.

I don't subscribe much to the hero-CEO school, either, at least not for a company the size of Merck. But even for a huge company, I think a rotten CEO can do a lot more harm than a good one can help (there's some thermodynamic way to express that, I'm sure). Frazier is certainly not in that category, and I've enjoyed some of the things he's had to say in the past (although I've also wondered about the follow-through). I wonder, though: how much of what Merck needs is in Frazier's power to do anything about? Or any one person's?

Update: here's David Shaywitz at Forbes, wondering about similar issues and what biopharma CEOs can actually do about them.

Comments (14) + TrackBacks (0) | Category: Business and Markets

The Medical Periodic Table

Email This Entry

Posted by Derek

Here's the latest "medical periodic table", courtesy of this useful review in Chemical Communications. Element symbols in white are known to be essential in man. The ones with a blue background are found in the structures of known drugs, the orange ones are used in diagnostics, and the green ones are medically useful radioisotopes. (The paper notes that titanium and tantalum are colored blue due to their use in implants).
Medical%20periodic%20table.png
I'm trying to figure out a couple of these. Xenon I've heard of as a diagnostic (hyperpolarized and used in MRI of lung capacity), but argon? (The supplementary material for the paper says that argon plasma has been used locally to control bleeding in the GI tract). And aren't there marketed drugs with a bromine atom in them somewhere? At any rate, the greyed-out elements end up that way through five routes, I think. Some of them (francium, and other high-atomic-number examples) are just too unstable (and thus impossible to obtain) for anything useful to be done with them. Others (uranium) are radioactive, but have not found a use that other radioisotopes haven't filled already. Then you have the "radioactive and toxic" category, the poster child of which is plutonium. (That said, I'm pretty sure that popular reports of its toxicity are exaggerated, but it still ain't vanilla pudding). Then you have the nonradioactive but toxic crowd - cadmium, mercury, beryllium and so on. (There's another question - aren't topical mercury-based antiseptics still used in some parts of the world? And if tantalum gets on the list for metal implants, what about mercury amalgam tooth fillings?) Finally, you have elements that are neither hot nor poisonous, but that no one has been able to find any medical use for (scandium, niobium, hafnium). Scandium and beryllium, in fact, are my nominees for "lowest atomic-numbered elements that many people have never heard of", and because of nonsparking beryllium wrenches and the like, I think scandium might win out. I've never found a use for it myself, either. I have used a beryllium-copper wrench (they're not cheap) in a hydrogenation room.

The review goes on to detail the various classes of metal-containing drugs, most prominent of them being, naturally, the platinum anticancer agents. There are ruthenium complexes in the clinic in oncology, and some work has been done with osmium and iridium compounds. Ferrocenyl compounds have been tried several times over the years, often put in place of a phenyl ring, but none of them (as far as I know) have made it into the general pharmacopeia. What I didn't know was that titanocene dichloride has been in the clinic (but with disappointing results). And arsenic compounds have a long (though narrow) history in medicinal chemistry, but have recently made something of a comeback. The thioredoxin pathway seems to be a good fit for exotic elements - there's a gadolinium compound in development, and probably a dozen other metals have shown activity of one kind or another, both in oncology and against things like malaria parasites.

Many of these targets, though, are in sort of a "weirdo metal" category in the minds of most medicinal chemists, and that might not reflect reality very well. There's no reason why metal complexes wouldn't be able to inhibit more traditional drug targets as well, but that brings up another concern. For example, there have been several reports of rhodium, iridium, ruthenium, and osmium compounds as kinase inhibitors, but I've never quite been able to see the point of them, since you can generally get some sort of kinase inhibitor profile without getting that exotic. But what about the targets where we don't have a lot of chemical matter - protein/protein interactions, for example? Who's to say that metal-containing compounds wouldn't work there? But I doubt if that's been investigated to any extent at all - not many companies have such things in their compound collections, and it still might turn out to be a wild metallic goose chase to even look. No one knows, and I wonder how long it might be before anyone finds out.

In general, I don't think anyone has a feel for how such compounds behave in PK and tox. Actually "in general" might not even be an applicable term, since the number and types of metal complexes are so numerous. Generalization would probably be dangerous, even if our base of knowledge weren't so sparse, which sends you right back into the case-by-case wilderness. That's why a metal-containing compound, at almost any biopharma company, would be met with the sort of raised eyebrow that Mr. Spock used to give Captain Kirk. What shots these things have at becoming drugs will be in nothing-else-works areas (like oncology, or perhaps gram-negative antibiotics), or against exotic mechanisms in other diseases. And that second category, as mentioned above, will be hard to get off the ground, since almost no one tests such compounds, and you don't find what you don't test.

Comments (56) + TrackBacks (0) | Category: Cancer | Odd Elements in Drugs | Toxicology

May 3, 2013

Drug Assay Numbers, All Over the Place

Email This Entry

Posted by Derek

There's a truly disturbing paper out in PLoS ONE with potential implications for a lot of assay data out there in the literature. The authors are looking at the results of biochemical assays as a function of how the compounds are dispensed in them, pipet tip versus acoustic, which is the sort of idea that some people might roll their eyes at. But people who've actually done a lot of biological assays may well feel a chill at the thought, because this is just the sort of you're-kidding variable that can make a big difference.

Dispensing and dilution processes may profoundly influence estimates of biological activity of compounds. Published data show Ephrin type-B receptor 4 IC50 values obtained via tip-based serial dilution and dispensing versus acoustic dispensing with direct dilution differ by orders of magnitude with no correlation or ranking of datasets.

Lovely. There have been some alarm bells sounded before about disposable-pipet-tip systems. The sticky-compound problem is always out there, where various substances decide that they like the plastic walls of the apparatus a lot more than they like being in solution. That'll throw your numbers all over the place. And there have been concerns about bioactive substances leaching out of the plastic. (Those are just two recent examples - this new paper has several other references, if you're worried about this sort of thing).

This paper seems to have been set off by two recent AstraZeneca patents on the aforementioned EphB4 inhibitors. In the assay data tables, these list assay numbers as determined via both dispensing techniques, and they are indeed all over the place. One of the authors of this new paper is from Labcyte, the makers of the acoustic dispensing apparatus, and it's reasonable to suppose that their interactions with AZ called their attention to this situation. It's also reasonable to note that Labcyte itself has an interest in promoting acoustic dispensing technology, but that doesn't make the numbers any different. The fourteen compounds shown are invariably less potent via the classic pipet method, but by widely varying factors. So, which numbers are right?

The assumption would be that the more potent values have a better chance of being correct, because it's a lot easier to imagine something messing up the assay system than something making it read out at greater potency. But false positives certainly exist, too, so the authors used the data set to generate a possible pharmacophore for the compound series using both sets of numbers. And it turns out that the one from the acoustic dispensing runs gives you a binding model that matches pretty well with reality, while if you use the pipet data you get something broadly similar, but missing some important contributions from hydrophobic groups. That, plus the fact that the assay data shows a correlation with logP in the acoustic-derived data (but not so much with the pipet-derived numbers) makes it look like the sticky-compound effect might be what's operating here. But it's hard to be sure:

No previous publication has analyzed or compared such data (based on tip-based and acoustic dispensing) using computational or statistical approaches. This analysis is only possible in this study because there is data for both dispensing approaches for the compounds in the patents from AstraZeneca that includes molecule structures. We have taken advantage of this small but valuable dataset to perform the analyses described. Unfortunately it is unlikely that a major pharmaceutical company will release 100's or 1000's of compounds with molecule structures and data using different dispensing methods to enable a large scale comparison, simply because it would require exposing confidential structures. To date there are only scatter plots on posters and in papers as we have referenced, and critically, none of these groups have reported the effect of molecular properties on these differences between dispensing methods.

Acoustic.png
Some of those other references are to posters and meeting presentations, so this seems to be one of those things that floats around in the field without landing explicitly in the literature. One of the paper's authors was good enough to send along the figure shown, which brings some of these data together, and it's an ugly sight. This paper is probably doing a real service in getting this potential problem out into the cite-able world: now there's something to point at.

How many other datasets are hosed up because of this effect? Now there's an important question, and one that we're not going to have an answer for any time soon. For some sets of compounds, there may be no problems at all, while others (as that graphic shows) can be a mess. There are, of course, plenty of projects where the assay numbers seem (more or less) to make sense, but there are plenty of others where they don't. Let the screener beware.

Update: here's a behind-the-scenes look at how this paper got published. It was not an easy path into the literature, by any means.

Second update: here's more about this at Nature Methods.

Comments (47) + TrackBacks (0) | Category: Drug Assays

May 2, 2013

Aveo Gets Bad News on Tivozanib

Email This Entry

Posted by Derek

The kinase inhibitor tivozanib (for renal cell carcinoma) was shot down this morning at an FDA committee hearing. There are going to be a lot of arguments about this decision, because feelings have been running high on both sides of the issue.

And this has been an issue for over a year now. As that FierceBiotech story puts it:

Tivozanib hit its primary endpoint, demonstrating a slim but statistically significant improvement in progression-free-survival of patients with advanced renal cell carcinoma when compared to Nexavar (sorafenib). But the sorafenib arm experienced a slightly better overall survival rate, and Aveo has been trying to explain it away ever since.

That explaining had to start in the spring of 2012, at a pre-NDA meeting. According to the review document, "the FDA expressed concern about the adverse trend in overall survival in the single Phase III trial and recommended that the sponsor conduct a second adequately powered randomized trial in a population comparable to that in the US."

The Phase III in question was performed in Eastern Europe, and one of the outcomes of today's decision may be a reluctance to rely on that part of the world for pivotal trials. I'm honestly not sure how many of tivozanib's problems were due to that (if the data had been stronger, no one would be wondering). But if the patient population in the trial was far enough off from the intended US market to concern the FDA, then there was trouble coming from a long way away.

Aveo, though, may not have had many options by this time. This is one of those situations where a smaller company has barely enough resources to get something through Phase III, so they try to do it as inexpensively as they can (thus Eastern Europe). By the time things looked dicey, there wasn't enough cash to do anything over, so they took what they had to the FDA and hoped for the best. The agency's suggestion to do a US trial must have induced some despair, since (1) they apparently didn't have the money to do it, and (2) this meant that the chances of approval on the existing data were lower than they'd hoped.

One of the other big issues that this decision highlights is in trial design. This was a "crossover" trial, where patients started out on one medication and then could be switched to another as their condition progressed. So many crossed over to the comparison drug (Nexavar, sorafenib) that it seems to have impaired the statistics of the trial. Were the overall survival numbers slightly better in the eventual Nexavar group because they'd been switched to that drug, or because they'd gotten tivozanib first? That's something you'd hope that a more expensive/well-run Phase III would have addressed, but in the same way that this result casts some doubt on the Eastern European clinical data, it casts some doubt on crossover trial design in this area.

Update: a big problem here was that there were many more patients who crossed over to tivozanib from Nexavar than the other way around. That's a design problem for you. . .
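
For anyone who wants to see why heavy one-way crossover fouls up an overall-survival comparison, here's a toy simulation. These are my own invented numbers (patient counts, survival means, crossover rate) - nothing to do with the actual trial data:

```python
# Toy model: exponential survival times; control patients who cross over are
# assumed to pick up the full survival benefit of the new drug.
import random

random.seed(1)
N = 5000                 # hypothetical patients per arm
CONTROL_MEAN_OS = 24.0   # assumed mean overall survival (months) on comparator
DRUG_BENEFIT = 4.0       # assumed extra mean survival (months) from the new drug
CROSSOVER_RATE = 0.6     # assumed fraction of control patients who cross over

def mean_os(prob_of_benefit):
    total = 0.0
    for _ in range(N):
        benefits = random.random() < prob_of_benefit
        mean = CONTROL_MEAN_OS + (DRUG_BENEFIT if benefits else 0.0)
        total += random.expovariate(1.0 / mean)
    return total / N

print("New-drug arm:               %.1f months" % mean_os(1.0))
print("Control arm, no crossover:  %.1f months" % mean_os(0.0))
print("Control arm, 60%% crossover: %.1f months" % mean_os(CROSSOVER_RATE))
# With heavy crossover, the control arm inherits much of the drug's benefit,
# and the between-arm overall-survival difference shrinks toward the noise -
# which is exactly why the endpoint gets so hard to interpret.
```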

What a mess - and what a mess for Aveo, and their investors. I'm not sure if they've got anything else; it looks like they'd pretty much bet the company on this. Which must have been like coming to the showdown at the poker table with a low three-of-a-kind, knowing that someone else probably has it beat. . .

Comments (27) + TrackBacks (0) | Category: Cancer | Clinical Trials | Regulatory Affairs

E. O. Wilson's "Letters to a Young Scientist"

Email This Entry

Posted by Derek

I've been reading E. O. Wilson's new book, Letters to a Young Scientist. It's the latest addition to the list of "advice from older famous scientists" books, which also includes Peter Medawar's similarly titled Advice To A Young Scientist and what is probably the grandfather of the entire genre, Ramón y Cajal's Advice for a Young Investigator. A definite personal point of view comes across in this one, since its author is famously unafraid to express his strongly held opinions. There's some 100-proof Wilson in this book as well:

. . .Science is the wellspring of modern civilization. It is not just "another way of knowing", to be equated with religion or transcendental meditation. It takes nothing away from the genius of the humanities, including the creative arts. Instead it offers ways to add to their content. The scientific method has been consistently better than religious beliefs in explaining the origin and meaning of humanity. The creation stories of organized religions, like science, propose to explain the origin of the world, the content of the celestial sphere, and even the nature of time and space. These mythic accounts, based mostly on the dreams and epiphanies of ancient prophets, vary from one religion's belief to another. Colorful they are, and comforting to the minds of believers, but each contradicts all the others. And when tested in the real world they have so far proved wrong, always wrong.

And that brings up something else about all the books of this type: they're partly what their titles imply, guides for younger scientists. They're partly memoirs of their authors' lives (Francis Crick's What Mad Pursuit is in this category, although it has a lot of useful advice itself). And they're all attempts to explain what science really is and how it really works, especially to readers who may well not be scientists themselves.

Wilson does some of all three here, although he uses episodes from his own life and research mainly to illustrate the advice he's giving. And that advice, I think, is almost always on target. He has sections on how to pick areas of research, methods to use for discovery, how best to spend your time as a scientist, and so on. The book is absolutely, explicitly aimed at those who want to make their mark by discovering new things, not at those who would wish to climb other sorts of ladders. (For example, he tells academic scientists: "Avoid department-level administration beyond thesis committee chairmanships if at all fair and possible. Make excuses, dodge, plead, trade.") If your ambition is to become chairman of the department or a VP of this or that, this is not the book to turn to.

But I've relentlessly avoided being put onto the managerial track myself, so I can relate to a lot of what this book has to say. Wilson spent his life at Harvard, so much of his advice has an academic slant, but the general principles of it come through very clearly. Here's how to pick an area to concentrate on:

I believe that other experienced scientists would agree with me that when you are selecting a domain of knowledge in which to conduct original research, it is wise to look for one that is sparsely inhabited. . .I advise you to look for a chance to break away, to find a subject you can make your own. . .if a subject is already receiving a great deal of attention, if it has a glamorous aura, if its practitioners are prizewinners who receive large grants, stay away from that subject.

One of the most interesting parts of the book, for me, is its take on two abilities that most lay readers would assume are prerequisites for a successful scientist: mathematical ability and sheer intelligence in general. The first is addressed very early in the book, in what may well become a famous section:

. . .If, on the other hand, you are a bit short in mathematical training, even very short, relax. You are far from alone in the community of scientists, and here is a professional secret to encourage you: many of the most successful scientists in the world today are mathematically no more than semiliterate.

He recommends making up this deficiency, as much as you find it feasible to do so, but he's right. The topic has come up around here - I can tell you for certain that the math needed to do medicinal chemistry is not advanced, and mostly consists of being able to render (and understand) data in a variety of graphical forms. If you can see why a log/log plot tends to give you straightened-out lines, you've probably got enough math to do med-chem. You'll also need to understand something about statistics, but (again) mostly in how to interpret it so you aren't fooled by data. Pharmacokinetics gets a bit more mathematical, and (naturally) molecular modeling itself is as math-heavy as anyone could want, but the chemistry end of things is not.
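
(If you want to see that log/log business spelled out, here it is in a few lines of throwaway code - my example, not anything from the book. Any power-law relationship y = a*x^b turns into a straight line of slope b once you take logs of both axes.)

```python
# Why log/log plots straighten out power-law data:
# if y = a * x**b, then log10(y) = log10(a) + b * log10(x).
import math

a, b = 3.0, 2.5                          # arbitrary power-law parameters
for x in (1, 2, 5, 10, 50, 100):
    y = a * x ** b
    print("x=%6g  y=%12.1f  log10(x)=%6.3f  log10(y)=%6.3f"
          % (x, y, math.log10(x), math.log10(y)))

# The (log10(x), log10(y)) pairs all fall on a line with slope 2.5 and
# intercept log10(3), about 0.48 - which is all the "straightening" there is.
```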

As for intelligence, see what you think about this:

Original discoveries cannot be made casually, not by anyone at any time or anywhere. The frontier of scientific knowledge, often referred to as the cutting edge, is reached with maps drawn by earlier investigators. . .But, you may well ask, isn't the cutting edge a place only for geniuses? No, fortunately. Work accomplished on the frontier defines genius, not just getting there. In fact, both accomplishments along the frontier and the final eureka moment are achieved more by entrepreneurship and hard work than by native intelligence. This is so much the case that in most fields most of the time, extreme brightness may be a detriment. It has occurred to me, after meeting so many successful researchers in so many disciplines, that the ideal scientist is smart only in an intermediate degree: bright enough to see what can be done but not so bright as to become bored doing it.

By "entrepreneurship", he doesn't mean forming companies. That's Wilson's term for opportunistic science - setting up some quick and dirty experiments around a new idea to see what might happen, and being open to odd results as indicators of a new direction to take your work. I completely endorse that, in case anyone cares. As for the intelligence part, you have to keep in mind that this is E. O. Wilson telling you that you don't need to be fearsomely intelligent to be successful, and that his scale for evaluating this quality might be calibrated a bit differently from the usual. As Tom Wolfe put it in his essay in Hooking Up, one of Wilson's defining characteristics has been that you could put him down almost anywhere on Earth and he'd be the smartest person in the room. (I should note that Wolfe's essay overall is not exactly a paean, but he knows not to underestimate the guy).

I think that intelligence falls under the "necessary but not sufficient" heading. And I probably haven't seen that many people operate whom the likes of E. O. Wilson would consider extremely smart, so I can't comment much on what happens at that end of the scale. But the phenomenon of people who score very highly on attempted measures of intelligence, but never seem to make much of themselves, is so common as to be a cliché. You cannot be dumb and make a success of yourself as a research scientist. But being smart guarantees nothing.

As an alternative to mathematical ability and (very) high intelligence, Wilson offers the prescription of hard work. "Scientists don't take vacations," he says - they take field trips. That might work out better if you're a field biologist, but not so well for (say) an organic chemist. And actually, I think that clearing your head with some time off can help out a great deal when you're bogged down in some topic. But having some part of your brain always on the case really is important. Breaks aside, long-term sustained attention to a problem is worth a lot, and not everyone is capable of it.

Here's more on the opportunistic side of things:

Polymer chemistry, computer programs of biological processes, butterflies of the Amazon, galactic maps, and Neolithic sites in Turkey are the kinds of subjects worthy of a lifetime of devotion. Once deeply engaged, a steady stream of small discoveries is guaranteed. But stay alert for the main chance that lies to the side. There will always be the possibility of a major strike, some wholly unexpected find, some little detail that catches your peripheral attention that might very well, if followed, enlarge or even transform the subject you have chosen. If you sense such a possibility, seize it. In science, gold fever is a good thing.

I know exactly what he's talking about here, and I think he's completely right. Many, many big discoveries have their beginnings in just this sort of thing. Isaac Asimov was on target when he said that the real sound of a breakthrough was not the cry of "Eureka!" but a puzzled voice saying "Hmm. That's funny. . ."

Well, the book has much more where all this comes from. It's short, which tempts a person to read through it quickly. I did, and found that reading it that way slighted some of the points it tries to make. It improved on a second pass, in my case, so you may want to keep that in mind.

Comments (16) + TrackBacks (0) | Category: Book Recommendations | Who Discovers and Why

May 1, 2013

Best Sites for a Medicinal Chemist?

Email This Entry

Posted by Derek

I'm going to be traveling today, mostly through airports without good Wi-Fi (for which read "Wi-Fi that I don't feel like paying $10 for during my 90-minute layover"). But I wanted to put out a question sent in by a reader that I think would be worthwhile:

What are the best web sites for a medicinal chemist to have bookmarked? Resources for medicine and biology, organic chemistry, analytical chemistry, and pharma development would all be appropriate. There are shorter lists available here and there, but I don't think that there's One Big List that's easily findable, and I think that there needs to be one. Suggestions in the comments - that should put together something pretty useful.

Comments (31) + TrackBacks (0) | Category: Blog Housekeeping