Corante

About this Author
[Photo: College chemistry, 1983]

[Photo: Derek Lowe, the 2002 model]

[Photo: After 10 years of blogging. . .]

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Monthly Archives

April 30, 2012

India's First Drug Isn't India's First Drug

Posted by Derek

There have been a number of headlines over the last few days about Ranbaxy's Synriam, an antimalarial that's being touted as the first new drug developed inside the Indian pharma industry (with Ranbaxy as the first Indian company to do it).

But that's not quite true, as this post from The Allotrope makes clear. (Its author, Akshat Rathi, found one of my posts when he started digging into the story). Yes, Synriam is a mixture of a known antimalarial (piperaquine) and arterolane. And arterolane was definitely not discovered in India. It was part of a joint effort from the US, UK, Australia, and Switzerland, coordinated by the Swiss-based Medicines for Malaria Venture.

Ranbaxy did take on the late-stage development of this drug combination, after MMV backed out due to not-so-impressive performance in the clinic. As Rathi puts it:

Although Synriam does not qualify as ‘India’s first new drug’ (because none of its active ingredients were wholly developed in India), Ranbaxy deserves credit for being the first Indian pharmaceutical company to launch an NCE before it was launched anywhere else in the world.

And that's something that not many countries have done. I just wish that Ranbaxy were a little more honest about that in their press release.

Comments (8) + TrackBacks (0) | Category: Drug Development | Infectious Diseases

AstraZeneca Shuffles the Top Cards

Posted by Derek

So AstraZeneca's CEO is leaving. This wasn't necessarily a voluntary move, but if it was, I don't blame him. I would have some serious thoughts about sticking around, too. (If reports of David Brennan's severance package have any truth in them, though, he'll have some time to figure out what to do next.) The company has major problems in its drug pipeline, has had major problems for a long time now, and no obvious fixes come to mind that won't take years of sustained effort.

Time for a reprise of this chart:
[Chart: the patent cliff - revenue from existing drugs]


That's the revenue coming in from existing drugs, and there's not much that bids fair to replace it, either. Note, again, that Eli Lilly appears to be in a very similar fix. I would not expect things there to go smoothly over the next few years, either.

Comments (19) + TrackBacks (0) | Category: Business and Markets

April 27, 2012

How Do Drugs Get Into Cells? A Vicious Debate.

Posted by Derek

So how do drug molecules (and others) get into cells, anyway? There are two broad answers: they just sort of slide in through the membranes on their own (passive diffusion), or they're taken up by pores and proteins built for bringing things in (active transport). I've always been taught (and believed) that both processes can be operating in most situations. If the properties of your drug molecule stray too far out of the usual range, for example, your cell activity tends to drop, presumably because it's no longer diffusing past the cell membranes. There are other situations where you can prove that you're hitching a ride on active transport proteins, by administering a known inhibitor of one of these systems to cells and watching your compound suddenly become inactive, or by simply overloading and saturating the transporter.
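The saturation distinction is easy to see in a toy model. Here's a minimal sketch (with arbitrary invented parameters, not anything measured) contrasting Fick's-law passive flux, which keeps scaling with concentration, against Michaelis-Menten carrier flux, which plateaus once the transporter is fully loaded:

```python
# Toy model contrasting the two uptake mechanisms. All parameter
# values (p, vmax, km) are invented for illustration, not measured.

def passive_flux(c, p=0.5):
    """Passive diffusion (Fick's law): flux is linear in concentration,
    so it never saturates."""
    return p * c

def carrier_flux(c, vmax=10.0, km=5.0):
    """Carrier-mediated transport (Michaelis-Menten): flux levels off
    at vmax once the transporter is fully occupied."""
    return vmax * c / (km + c)

for c in (1, 10, 100, 1000):  # substrate concentration, arbitrary units
    print(f"C={c:4}: passive={passive_flux(c):7.1f}  carrier={carrier_flux(c):5.1f}")
```

Saturating (or inhibiting) the transporter knocks the carrier term flat while leaving the passive term untouched, which is exactly what the experiments described above are probing.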

There's another opinion, though, that's been advanced by Paul Dobson and Douglas Kell at Manchester, and co-workers. Their take is that carrier-mediated transport is the norm, and that passive diffusion is hardly important at all. This has been received with varying degrees of belief. Some people seem to find it a compelling idea, while others regard it as eccentric at best. The case was made a few years ago in Nature Reviews Drug Discovery, and again more recently in Drug Discovery Today:

All cells necessarily contain tens, if not hundreds, of carriers for nutrients and intermediary metabolites, and the human genome codes for more than 1000 carriers of various kinds. Here, we illustrate using a typical literature example the widespread but erroneous nature of the assumption that the ‘background’ or ‘passive’ permeability to drugs occurs in the absence of carriers. Comparison of the rate of drug transport in natural versus artificial membranes shows discrepancies in absolute magnitudes of 100-fold or more, with the carrier-containing cells showing the greater permeability. Expression profiling data show exactly which carriers are expressed in which tissues. The recognition that drugs necessarily require carriers for uptake into cells provides many opportunities for improving the effectiveness of the drug discovery process.

That's one of those death-or-glory statements: if it's right, a lot of us have been thinking about these things the wrong way, and missing out on some very important things about drug discovery as well. But is it? There's a rebuttal paper out in Drug Discovery Today that makes the case for the defense. It's by a long list of pharmacokinetics and pharmacology folks from industry and academia, and has the air of "Let's get this sorted out once and for all" about it:

Evidence supporting the action of passive diffusion and carrier-mediated (CM) transport in drug bioavailability and disposition is discussed to refute the recently proposed theory that drug transport is CM-only and that new transporters will be discovered that possess transport characteristics ascribed to passive diffusion. Misconceptions and faulty speculations are addressed to provide reliable guidance on choosing appropriate tools for drug design and optimization.

Fighting words! More of those occur in the body of the manuscript, phrases like "scientifically unsound", "potentially misleading", and "based on speculation rather than experimental evidence". Here's a rundown of the arguments, but if you don't read the paper, you'll miss the background noise of teeth being ground together.

Kell and Dobson et al. believe that cell membranes have more protein in them, and less lipid, than is commonly thought, which helps make their case for lots of protein transport/not a lot of lipid diffusion. But this paper says that their figures are incorrect and have been misinterpreted. Another K-D assertion is that artificial lipid membranes tend to have many transient aqueous pores in them, which make them look more permeable than they really are. This paper goes to some length to refute this, citing a good deal of prior art with examples of things which should then have crossed such membranes (but don't), and it also finds fault with the literature that K-D used to back up their own proposal.

This latest paper then goes on to show many examples of non-saturable passive diffusion, as opposed to active transport, which can always be overloaded. Another big argument is over the agreement between different cell-layer models of permeability. Two of the big ones are Caco-2 cells and MDCK cells, but (as all working medicinal chemists know) the permeability values between these two don't always agree, either with each other or with the situation in living systems. Kell and Dobson adduce this as showing the differences between the various transporters in these assays, but this rebuttal points out that there are a lot of experimental differences between literature Caco-2 and MDCK assays that can kick the numbers around. Their take is that the two assays actually agree pretty well, all things considered, and that if transporters were the end of the story, the numbers would be still farther apart.

The blood-brain barrier is a big point of contention between these two camps. This latest paper cites a large pile of literature showing that sheer physical properties (molecular weight, logP) account for most successful approaches to getting compounds into the brain, consistent with passive diffusion, while examples of using active transport are much more scarce. That leads into one of the biggest K-D points, which seems to be one of the ones that drives the existing pharmacokinetics community wildest: the assertion that thousands of transport proteins remain poorly characterized, and that these will come to be seen as the dominant players compared to passive mechanisms. The counterargument is that most of these, as far as we can tell to date, are selective for much smaller and more water-soluble substances than typical drug molecules (all the way from metal ions to things like glycerol and urea), and are unlikely to be important for most pharmaceuticals.
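As a rough illustration of what "sheer physical properties" means here, the classic medicinal-chemistry folklore for passive brain penetration looks something like the sketch below. The cutoffs are widely quoted rules of thumb, not values taken from either paper:

```python
# Crude CNS-penetration filter. The cutoffs (MW < 450, logP 1-3) are
# common rules of thumb, used here purely for illustration.

def looks_cns_penetrant(mol_weight, logp):
    """Small, moderately lipophilic compounds are the ones that tend to
    cross the blood-brain barrier by passive diffusion."""
    return mol_weight < 450 and 1.0 <= logp <= 3.0

print(looks_cns_penetrant(300, 2.1))  # True: in the classic sweet spot
print(looks_cns_penetrant(550, 4.5))  # False: too big and too greasy
```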

Relying on as-yet-uncharacterized transporters to save one's argument is a habit that really gets on the nerves of the Kell-Dobson critics as well - this paper calls it "pure speculation without scientific basis or evidence", which is about as nasty as we get in the technical literature. I invite interested readers to read both sides of the argument and make up their own minds. As for me, I fall about 80% toward the critics' side. I think that there are probably important transporters that are messing with our drug concentrations and that we haven't yet appreciated, but I just can't imagine that that's the whole story, nor that there's no such thing as passive diffusion. Thoughts?

Comments (37) + TrackBacks (0) | Category: Drug Assays | Pharma 101 | Pharmacokinetics

Different Worlds: A Last DHFR Paper Thought

Posted by Derek

Inspired by a discussion with a colleague, I'm going to take one more crack at the recent discussion here about the J. Med. Chem. DHFR paper. Those of you with an interest in the topic, read on. Those whose interest has waned, or who never had much interest to start with, take heart: other topics are coming.

It's clear that many people were disappointed with my take on this paper, and my handling of the whole issue. Let me state again that I mishandled the biology aspects of this one thoroughly, through carelessness, and I definitely owe an apology to the authors of the paper (and the readers of this site) for that.

Of course, that's not the only arguable thing about the way I handled this one. As I spent paragraphs rambling on about in yesterday's post, there's a chemical aspect to the whole issue as well, and that's what caught my eye to start with. I think one of the things that got me into trouble with this one is two different ways of looking at the world. I'll explain what I mean, and you can judge for yourself if I'm making any sense.

The authors of the paper (and its reviewer who commented here) are interested in R67 dihydrofolate reductase, from a biological/enzymological perspective. From this viewpoint - and it's a perfectly tenable one - the important thing is that R67 DHFR is an unusual and important enzyme, a problem in bacterial resistance, interesting in its own right as a protein with an odd binding site, and for all that, still has no known selective inhibitors. Anything that advances the understanding of the enzyme and points toward a useful inhibitor of it is therefore a good thing, and worth publishing in J. Med. Chem., too.

I come in from a different angle. As someone who's done fragment-based drug discovery and takes a professional interest in it, I'll take a look at any new paper using the technique. In this case, I gave the target much too cursory a look, and filed it as "DHFR, bacterial enzyme, soluble, X-ray structures known". In other words, a perfectly reasonable candidate for FBDD as we know it. Once I'd decided that this was a mainstream application of something I already have experience with, I turned my attention to how the fragment work was done. By doing so, I missed out on the significance of the DHFR enzyme, which means, to people in the first camp, that I whiffed on the most important part of the entire thing. I can understand their frustration as I brushed that off like a small detail and went on to what (to them) were secondary matters.

But here's where my view of the world comes in. As a drug discovery guy, when I read a paper in J. Med. Chem., I'd like to see progress in, well, the medicinal chemistry of the topic. That was the thrust of my blog post yesterday: that I found the med-chem parts of the paper uncompelling, and that the application of fragment-based techniques seemed to me to have gone completely off track. (I haven't mentioned the modeling and X-ray aspects of the paper, as Teddy Z did at Practical Fragments, but I also found that those parts added nothing to the worth of the manuscript as a whole.) The most potent compounds in the paper seem, to me, to be the sort that are very unlikely to lead to anything, and are unlikely to show selectivity in a cellular environment. If the paper's starting fragment hits are real (which is not something that's necessarily been proven, as I mentioned in yesterday's post), then it seems to me that everything interesting and useful about them is being thrown away as the paper goes on. From the other point of view, things are basically the opposite - the paper gets better and better as the compounds get more potent.

But here's where, perhaps, the two viewpoints I spoke of earlier might find something in common. If you believe that the important thing is that selective inhibitors of R67 DHFR have finally been discovered, then you should want these to be as potent and selective as possible, and as useful as possible in a variety of assays. This, I think, is what's in danger of being missed. I think that a fragment-based effort should have been able to deliver much more potent chemical matter than these compounds, with less problematic structures, which are more likely to be useful as tools.

I'll finish up by illustrating the different angles as starkly as I can. The authors of this paper have, in one view of the world, completed the first-ever fragment screen against an important enzyme, discovered the first-ever selective inhibitors of it, and have published these results in a prestigious journal: a success by any standard. From my end, if I were to lead a drug discovery team against the same enzyme, I might well see the same fragment hits the authors did, since I know that some of these are in the collections I use. But if I proceeded in the same fashion they did, prosecuting these hit compounds in the same way, I would, to be completely honest about it, face some very harsh questioning. And if I persevered in the same fashion, came up with the same final compounds, and presented them as the results of my team's work, I would run the serious risk of being fired. Different worlds.

Update: Prof. Pelletier sends the following:

I certainly have been following this with interest, and learning much from it – not just science.

Throughout the week, I have appreciated your civil tone – many thanks. I willingly accept your apology, just as I accept the constructive criticism that will improve our future work. I think your ‘two-worlds’ point of view smacks of truth. The bottom line from my point of view is that I’m open to collaboration with a real fragment library: if anyone is interested in making this better, they should contact me. I’d be delighted to work with more than what can be scavenged from neighbouring labs in an academic setting.

Your bloggers’ response to this come-and-go was fascinating: the process was admired to an extent that surprised me. A number of responders point out that there are currently few occurrences of open exchange on these blogs and – sorry to disappoint hard-core bloggers – it does not endear me to the blogging process. I don’t blog because I can’t stand anonymous, frequently disrespectful and sometimes poorly researched comments. I nonetheless hope that this will open the door to a more transparent blogging process in the long run.

For any who care, I am brave, not at all desperate, and definitely a woman. ; )

If you feel any of this would be of interest for your blog, please feel free to post. Thanks for seeing this through rather than shaking it off.

Comments (21) + TrackBacks (0) | Category: Academia (vs. Industry) | The Scientific Literature

April 26, 2012

Elsevier Picks Up the Pace

Posted by Derek

So perhaps I should rethink all those nasty things I've been saying about Elsevier journals. Here's someone who submitted a paper to Nuclear Instruments and Methods on a Friday evening, and got it accepted - with two referee reports, yet - on Monday morning. How is that possible, you say? That's what this author is wondering, too. . .

Comments (8) + TrackBacks (0) | Category: The Scientific Literature

April 25, 2012

DHFR Inhibitors Revisited: A Word From the Authors (and Reviewers)

Posted by Derek

The other day, I had some uncomplimentary things to say about a recent J. Med. Chem. paper on fragment-based dihydrofolate reductase inhibitors. Well, I know that I don't say these things into a vacuum, by any means, but in this case the lead author has written me about the work, and a reviewer of the paper has showed up in the comments. So perhaps this is a topic worth revisiting?

First, I'll give Prof. Joelle Pelletier of U. Montreal the floor to make the case for the defense. Links added are mine, for background; I take responsibility for those, and I hope they're helpful.

I was informed of your recent blog entitled ‘How do these things get published’. I am corresponding author of that paper. I would like to bring to your attention a crucial point that was incorrectly presented in your analysis: the target enzyme is not that which you think it is, i.e.: it is not a DHFR that is part of ‘a class of enzymes that's been worked on for decades’.

Indeed, it would make no sense to report weak and heavy inhibitors against ‘regular’ DHFRs (known as ‘type I DHFRs’), considering the number of efficient DHFR inhibitors we already know. But this target has no sequence or structural homology with type I DHFRs. It is a completely different protein that offers an alternate path to production of tetrahydrofolate (see top of second page of the article). It has apparently evolved recently, as a bacterial response to trimethoprim being introduced into the environment since the ‘60’s. Because that protein is evolutionarily unrelated to regular DHFRs, it doesn’t bind trimethoprim and is thus intrinsically trimethoprim resistant; it isn’t inhibited by other inhibitors of regular DHFRs either. There have been no efforts to date to inhibit this drug resistance enzyme, despite its increasing prevalence in clinical and veterinary settings, and in food and wastewater (see first page of article). As a result, we know nothing about how to prevent it from providing drug resistance. Our paper is thus the first foray into inhibiting this new target – one which presents both the beauty and the difficulty of complex symmetry.

Regular (type I) DHFRs are monomeric enzymes with an extended active-site cleft. They are chromosomally-encoded in all living cells where they are essential for cellular proliferation. Our target, type II R67 DHFR, is carried on a plasmid, allowing rapid dissemination between bacterial species. It is an unusual homotetrameric, doughnut-style enzyme with the particularity of having a single active site in the doughnut hole. That’s unusual because multimeric enzymes typically have the same number of active sites as they do monomers. The result is that the active site tunnel, shown in Figure 4 a, has 222 symmetry. Thus, the front and back entrances to the active site tunnel are identical. And that’s why designing long symmetrical molecules makes sense: they have the potential of threading through the tunnel, where the symmetry of the inhibitor would match the symmetry of the target. If they don’t string through but fold up into a ‘U”, it still makes sense: the top and bottom of the tunnel are also alike, again allowing a match-up of symmetry. Please note that this symmetry does create a bit of a crystallographer’s nightmare at the center of the tunnel where the axes of symmetry meet; again, it is an unusual system.

You have referred to our ‘small, poorly documented library of fragment compounds’. As for the poor documentation, the point is that we have very little prior information on the ligands of this new target, other than its substrates. We cast as wide a net as we could within a loosely defined chemical class, using the chemicals we have access to. Unfortunately, I don’t have access to a full fragment library, but am open to collaboration.

As a result of extending the fragments, the ligand efficiency does take a beating… so would it have been better not to mention it? No, that would have been dishonest. In addition, it is not a crucial point at this very early stage in discovery: this is a new target, and it IS important to obtain information on tighter binding, even if it comes at the cost of heavier molecules. In no way do we pretend that these molecules are ripe for application; we have presented the first set of crude inhibitors to ‘provide inspiration for the design of the next generation of inhibitors’ (last sentence of the paper).

Your blog is widely read and highly respected. In this case, it appears that your analysis was inaccurate due to a case of mistaken identity. I did appreciate your calm and rational tone, and hope that you will agree that there is redeeming value to the poor ligand efficiency, because of the inherent novelty of this discovery effort. I am appealing to you to reconsider the blog’s content in light of the above information, and respectfully request that you consider revising it.

Well, as for DHFRs, I'm guilty as charged. The bacterial ones really are way off from the mammalian ones - it appears that dihydro/tetrahydrofolate metabolism is a problem that's been solved a number of different ways and (as is often the case) the bacteria show all kinds of diversity compared to the rest of the living world. And there really aren't any good R67 DHFR inhibitors out there, not selective ones, anyway, so a molecule of that type would definitely be a very worthwhile tool (as well as a potential antibiotic lead).

But that brings us to the fragments, the chemical matter in the paper. I'm going to stand by my characterization of the fragment library. 100 members is indeed small, and claiming lack of access to a "full fragment collection" doesn't quite cover it. Because of the amount of chemical space that can be covered at these molecular weights, a 200-member library can be significantly more useful than a 100-member one, and so on. (Almost anything is more useful than a 100-member library). There aren't more compounds of fragment size on the shelves at the University of Montreal?

More of a case could be made for libraries this small if they covered chemical space well. Unfortunately, looking over the list of compounds tested (which is indeed in the Supplementary Material), it does not, at first glance, look like a very good collection. Not at all. There are some serious problems, and in a collection this small, mistakes are magnified. I have to point out, to start with, that compounds #59 and #81 are duplicates, as are compounds #3 and #40, and compounds #7 and #14. (There may be others; I haven't made a complete check.)

The collection is heavily biased towards carboxylic acids (which is a problem for several reasons, see below). Nearly half the compounds have a COOH group by my quick count, and it's not a good idea to have any binding motif so heavily represented. I realize that you intentionally biased your screening set, but then, an almost featureless hydrophobic compound like #46 has no business in there. Another problem is that some of the compounds are so small that they're unlikely to be tractable fragment hits - I note succinimide (#102) and propyleneurea (#28) as examples, but there are others. At the other end of the scale, compounds such as the Fmoc derivative #25 are too large (MW 373), and that's not the only offender in the group (nor the only Fmoc derivative). The body of the manuscript mentions the molecular weights of the collection as being from 150 to 250, but there are too many outliers. This isn't a large enough collection for this kind of noise to be in it.
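For what it's worth, every one of these problems is easy to screen for programmatically. Here's a sketch using RDKit; the SMILES list is a hypothetical stand-in for the actual screening set in the Supplementary Material:

```python
# Sanity-checking a fragment library: duplicates, molecular-weight
# outliers, and carboxylic-acid bias. The fragment list here is a
# made-up placeholder, not the paper's actual collection.
from rdkit import Chem
from rdkit.Chem import Descriptors

fragments = ["OC(=O)c1ccccc1", "OC(=O)c1ccccc1", "c1ccc2ccccc2c1"]  # hypothetical

acid = Chem.MolFromSmarts("C(=O)[OX2H1]")  # COOH pattern
seen, n_acids = {}, 0
for i, smi in enumerate(fragments, 1):
    mol = Chem.MolFromSmiles(smi)
    canon = Chem.MolToSmiles(mol)  # canonical SMILES catches duplicates
    if canon in seen:
        print(f"compound #{i} duplicates #{seen[canon]}")
    seen.setdefault(canon, i)
    mw = Descriptors.MolWt(mol)
    if not 150 <= mw <= 250:
        print(f"compound #{i} is outside the stated 150-250 MW window ({mw:.0f})")
    if mol.HasSubstructMatch(acid):
        n_acids += 1
print(f"{n_acids}/{len(fragments)} fragments carry a COOH group")
```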

There are a number of reactive compounds in the list, too, and while covalent inhibitors are a very interesting field, this was not mentioned as a focus of your efforts or as a component of the screening set. And even among these, compounds such as carbonyldiimidazole (#26), the isocyanate #82, and disuccinimidylcarbonate (#36) are really pushing it, as far as reactivity and hydrolytic stability. The imine #110 is also very small and likely to have hydrolytic stability problems. Finally, the fragment #101 is HEPES, which is rather odd, since HEPES is the buffer for the enzyme assays. Again, there isn't room for these kinds of mistakes. It's hard for me to imagine that anyone who's ever done fragment screening reviewed this manuscript.

The approach to following up these compounds also still appears inadequate to me. As Dan Erlanson pointed out in a comment to the Practical Fragments post, small carboxylic acids like the ones highlighted are not always legitimate hits. They can, as he says, form aggregates, depending on the assay conditions, and the most straightforward way of testing that is often the addition of a small amount of detergent, if the assay can stand it. The behavior of such compounds is also very pH-dependent, as I've had a chance to see myself on a fragment effort, so you need to make sure that you're as close to physiological conditions as you can get. I actually have seen some of your compounds show up as hits in fragment screening efforts, and they've been sometimes real, sometimes not.

But even if we stipulate that these compounds are actually hits, they need more work than they've been given. The best practice, in most cases when a fragment hit is discovered and confirmed, is to take as many closely related single-atom changes into the assay as possible. Scan a methyl group around the structure, scan a fluoro, make the N-for-C switches - at these molecular weights, these changes can make a big difference, and you may well find an even more ligand-efficient structure to work from.

Now, as for the SAR development that actually was done: I understand the point about the symmetry of the enzyme, and I can see why this led to the idea of making symmetrical dimer-type compounds. But, as you know, this isn't always a good idea. Doing so via flexible alkyl or alkyl ether chains is particularly risky, since such compounds will surely pay an entropic penalty in binding.

And here's one of the main things that struck both me and Teddy Z in his post: if the larger compounds were truly taking advantage of the symmetry, their ligand efficiency shouldn't go down. But in this case it does, and steeply. The size of the symmetrical inhibitors (and their hydrophobic regions, such as the featureless linking chains) makes it unsurprising that this effort found some micromolar activity. Lots of things will no doubt show micromolar activity in such chemical space. The paper notes that it's surprising that the fragment 4c showed no activity when its structural motif was used to build some of the more potent large compounds, but the most likely hypothesis is that the binding modes have nothing to do with each other.
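To put rough numbers on that point: ligand efficiency is commonly figured as LE = 1.37 x pIC50 / (heavy atom count), in kcal/mol per heavy atom. A quick sketch with invented but representative values shows how a thousand-fold potency gain can still be a large efficiency loss:

```python
# Ligand efficiency: LE = 1.37 * pIC50 / N_heavy (kcal/mol per heavy
# atom, from dG = -RT ln K at 298 K). Values below are invented for
# illustration, not taken from the paper.
import math

def ligand_efficiency(ic50_molar, heavy_atoms):
    return 1.37 * -math.log10(ic50_molar) / heavy_atoms

print(f"1 mM fragment, 12 heavy atoms: {ligand_efficiency(1e-3, 12):.2f}")  # ~0.34
print(f"5 uM dimer, 45 heavy atoms:    {ligand_efficiency(5e-6, 45):.2f}")  # ~0.16
```

A common rule of thumb asks for an LE of roughly 0.3 or better in a viable lead series, so a slide like that is a red flag, not progress.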

To be fair, compounds 8 and 9 are referred to as "poorly optimized", which is certainly true. But the paper goes on to say that they are starting points to develop potent and selective inhibitors, which they're not. The fragments are starting points, if they're really binding. The large compounds are dead ends. That's why Teddy Z and I have reacted as strongly as we have, because the path this paper takes is (to our eyes) an example of how not to do fragment-based drug discovery.

But still, I have to say that I'm very glad to hear a direct reply to my criticism of this paper. I hope that this exchange has been useful, and that it might be of use for others who read it.

Comments (24) + TrackBacks (0) | Category: Drug Assays | Infectious Diseases | The Scientific Literature

Breslow's Chirality Paper: More Than Just Alien Dinosaurs

Posted by Derek

Update: the paper has, for now, been pulled by JACS. More to come, no doubt.

I hadn't written anything about the Breslow origins-of-chirality paper that mentioned, as a joking aside, the possibility of intelligent alien dinosaurs. As most readers will know, the ACS publicity office, most cluelessly, decided to make that throwaway line the focus of their press release, and much confusion ensued.

But things have gotten weirder. Stu Cantrill read the Breslow paper, and realized that he'd already read a lot of it before. See these three pictures (1, 2, 3) and realize the extent to which this latest paper was apparently a cut-and-paste job.

I've met Breslow many times (he used to consult for one of my former employers), and I've enjoyed reading much of his work. But this really shouldn't be acceptable - we wouldn't put up with it from some unknown chemist, and we shouldn't put up with it from someone famous. Chembark has an excellent summary of the situation, with recommendations about what the ACS should do next. These range from fixing that idiotic press release, to retracting the paper, to barring Breslow from publishing in JACS for a period.

In retrospect, the alien dinosaurs are becoming my favorite part of the whole paper.

Update: Breslow defends himself to Nature News.

Comments (22) + TrackBacks (0) | Category: The Scientific Literature

Drug Company Culture: It's Not Helping

Posted by Derek

I wanted to call attention to a piece by Bruce Booth over at Forbes. He starts off from the Scannell paper in Nature Reviews Drug Discovery that we were discussing here recently, but he goes on to another factor. And it's a big one: culture.

Fundamentally, I think the bulk of the last decade’s productivity decline is attributable to a culture problem. The Big Pharma culture has been homogenized, purified, sterilized, whipped, stirred, filtered, etc and lost its ability to ferment the good stuff required to innovate. This isn’t covered in most reviews of the productivity challenge facing our industry, because its nearly impossible to quantify, but it’s well known and a huge issue.

You really should read the whole thing, but I'll mention some of his main points. One of those is "The Tyranny of the Committee". You know, nothing good can ever be decided unless there are a lot of people in the room - right? And then that decision has to move to another room full of people who give it a different working-over, with lots more PowerPoint - right? And then that decision moves up to a group of higher-level people, who look at the slides again - or summaries of them - and make a collective decision. That's how it's supposed to work - uh, right?

Another is "Stagnation Through Risk Avoidance". Projects go on longer, and keep everyone busy, if the nasty issues aren't faced too quickly. And everyone has room to deflect blame when things go wrong, if plenty of work has been poured into the project, from several different areas, before the bad news hits. Most of the time, you know, some sort of bad news is waiting out there, so you want to have yourself (and your career) prepared beforehand - right? After all, several high-level committees signed off on this project. . .

And then there's "Organizational Entropy", which we've discussed around here, too. When the New, Latest, Really-Going-to-Work reorganization hits, as it does every three years or so, things slow down. They have to. And a nice big merger doesn't just slow things down, it brings everything to a juddering halt. The cumulative effect of these things can be deadly.

As Booth says, there are other factors as well. I'd add a couple to the list, myself: the tendency to think that If This Was Any Good, Someone Else Would Be Doing It (which is another way of being able to run for cover if things don't work out), and the general human sunk-cost fallacy of We've Come This Far; We Have to Get Something Out of This. But his main point stands, and has stood for many years. The research culture in many big drug companies stands in the way of getting things done. More posts on this to follow.

Comments (36) + TrackBacks (0) | Category: Drug Industry History | Life in the Drug Labs | Who Discovers and Why

Merck Serono Cuts Back

Posted by Derek

Merck Serono is cutting jobs in Geneva, the (former) headquarters of Serono. They're transferring some of the jobs, but at least 500 are gone - and the people holding them are being put out into what (from here) looks like a very unfriendly jobs market. . .

Comments (1) + TrackBacks (0) | Category: Business and Markets

April 24, 2012

That's Some Fine Editorial Work There

Posted by Derek

Since I've mentioned the scientific publication business today, I thought I'd include this story from Retraction Watch as an example of what you're paying for when you pay Elsevier for quality control. When's the last time you saw a paper withdrawn because it "contains no scientific content"? Or noticed that the lead author's email contact address was at the domain "budweiser.com"?

Comments (12) + TrackBacks (0) | Category: The Scientific Literature

AstraZeneca Buys Ardea. And Who Else?

Posted by Derek

Since I've mentioned Ardea Pharmaceuticals and their gout drug RDEA594 (lesinurad) here a couple of times, I should note that AstraZeneca has decided to put up over a billion dollars to buy the entire company.

It's not surprising to see AZ getting the wallet out, considering the company's overall problems, which are also, naturally, the source of all the relentless cutbacks they've been announcing. I will not, however, endorse the following statement from the company's head of R&D:

"We’re building some momentum here in R&D,” Martin Mackay, head of research and development, said in a telephone interview today. “I would be disappointed if we didn’t announce further deals by the end of this year. We’ve taken our hits but we’re turning a corner.”

Comments (13) + TrackBacks (0) | Category: Business and Markets

Harvard's Had Enough

Posted by Derek

Several readers sent along this memo from Harvard's library: they see the current price structure of scientific journals as unsustainable, and they're asking the faculty to help them do something about it.

The Faculty Advisory Council to the Library, representing university faculty in all schools and in consultation with the Harvard Library leadership, reached this conclusion: major periodical subscriptions, especially to electronic journals published by historically key providers, cannot be sustained: continuing these subscriptions on their current footing is financially untenable. Doing so would seriously erode collection efforts in many other areas, already compromised.

They're asking faculty members to try to "move prestige" to open-access journals by favoring them for new publications, and in general to do whatever they can to get away from the current scientific publishing model. And that ties in with this post over at the Guardian, saying that the current publishers impose not only a financial burden, but other burdens that may be even more of a problem:

Research, especially scientific research, thrives in an atmosphere that allows the free exchange of ideas and information: open discussion and debate are essential if the scientific method is to operate properly. Before the arrival of the internet, academic publishers provided a valuable service that was a real benefit to the scientific community. Not any more. . .

. . .But open access isn't just about the end products of research. It's the entire process of scientific enquiry, including the collection and processing of data, scrutiny of the methods used in the analysis, questioning of assumptions, and discussion of alternative interpretations. In particular, it's about access to scientific data.

I believe all data resulting from publicly funded research should be in the public domain, for two reasons. First, it's public money that funds us, so we scientists have a moral responsibility to be as open as possible with the public. Second, the scientific method only works when analyses can be fully scrutinised and, if necessary, replicated by other researchers. In other words, to seek to prevent your data becoming freely available is plain unscientific.

We'll see how far this gets. But this is already the biggest upheaval that I can remember in the scientific literature, and it shows no signs of slowing down. . .

Comments (19) + TrackBacks (0) | Category: The Scientific Literature

April 23, 2012

Making Their Own ALS Drug

Posted by Derek

We should expect to see more of this sort of thing. The Wall Street Journal headline says it all: "Frustrated ALS Patients Concoct Their Own Drug". In this case, the drug appears to be sodium chlorite, which is under investigation as NP001 by Neuraltus Pharmaceuticals in Palo Alto. (Let's hope that isn't one of their lead structures at the top of their web site).

It is an accepted part of scientific lore that scientists sometimes use themselves in experiments, and cancer patients and others with life-threatening illnesses are known to self-medicate using concoctions of vitamins, special teas, and off-label medications. But the efforts of patients with ALS to come up with a home-brewed version of a drug still in early-stage clinical trials and not approved by the FDA is one of the most dramatic examples of how far the phenomenon of do-it-yourself science has gone.

A number of patients who have been involved in the Phase II trials of NP001 have been sharing information about it, and they and others have dug into the literature enough to be pretty sure that what Neuraltus is investigating is, indeed, some formulation of sodium chlorite. Here's one of them:

Mr. Valor first read about NP001 in a news release. He tracked down published papers that led him to believe the compound was sodium chlorite, a chemical that in various forms is used in municipal water treatment plants. A friend found online the scientists' patent filings. He also consulted an engineer in water treatment to learn more and read environmental reports to get insight into toxicity levels. The chemical is easy to order online and is inexpensive. He estimates he has spent less than $150 total.

Mixed in distilled water, the sodium chlorite is delivered through Mr. Valor's feeding tube three days a week, one week per month. He says he cautions participants that the chemical isn't as efficacious as NP001 and "that this is only to buy time until NP001 is available to all."

This case is the perfect situation for something like this to happen: a terrible disease, with an unfortunately fast clinical course, rare enough for a good fraction of the patient population to be very organized, along with an easily available active agent. If NP001 were some sort of modified antibody, we wouldn't be having this discussion (although eventually, who knows?) And as much as I agree that Phase II and Phase III trials are necessary to find out if something really works or not, if I had ALS myself, I'd be doing what these people are doing, and if it were a family member affected, I'd be helping them mix the stuff up. With a condition like ALS, honestly, the risk/benefit ratio is pretty skewed.

If NP001 progresses, look for comment along the lines of "How can this little company get a patent on the use of this common chemical for this dread disease?" But as the WSJ article reports, the sodium chlorite mixtures that people are whipping up in their kitchens don't seem to be as effective as whatever NP001 is, for one thing. And Neuraltus is basically betting much of their existence on whether it works or not; they're taking on the risk and trouble of a proper investigation, and good for them. But it's true that many people who have ALS right now will not be around to see the end of a Phase III trial, and I can't blame them at all for doing whatever they can to try to get some of the benefits of this research in the interim.

Comments (23) + TrackBacks (0) | Category: Clinical Trials | The Central Nervous System

April 20, 2012

Buckyball Longevity - There's A Problem

Posted by Derek

We have a problem here. The paper I blogged about yesterday, on life extension effects with C60 fullerene ("buckyballs") has a duplicated figure. This was first spotted by commenter "Flatland" yesterday. I was traveling all day, and when I came home in the evening I saw the comment and immediately realized that he was right. I've made the animation below (via Picasion) to illustrate the point:
[Animation: the two histology images from Figure 4, alternating to show the duplication]
These are from the part of the paper where they showed protective effects of C60 on animals that were being dosed with (toxic) carbon tetrachloride, and these are supposed to be the water/carbon tet control animals dosed by oral gavage (GAog) and by intraperitoneal injection (GAip). In other words, these are supposed to be separate animals, but as you can see, these are, in fact, the exact same histology slide. I've scaled the GAip image up about 120% and moved the two to correct the offset, but otherwise, I've done no image processing at all. The originals are screen shots from the PDF of the paper, the top two images of Figure 4.
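For anyone who wants to check this sort of thing quantitatively, here's a rough sketch: rescale one crop to match the other and compute a normalized pixel correlation. The file names are placeholders for the two Figure 4 crops, and a real check would also have to correct for the offset between them:

```python
# Compare two image crops for likely duplication via normalized
# cross-correlation. File names are placeholders; this simple version
# ignores the translational offset between the published panels.
import numpy as np
from PIL import Image

def correlation(path_a, path_b):
    a = Image.open(path_a).convert("L")                 # grayscale
    b = Image.open(path_b).convert("L").resize(a.size)  # undo the ~120% scaling
    x = np.asarray(a, dtype=float).ravel()
    y = np.asarray(b, dtype=float).ravel()
    x -= x.mean()
    y -= y.mean()
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Values near 1.0 suggest the same underlying image; unrelated
# histology fields should score far lower.
print(correlation("fig4_GAog.png", "fig4_GAip.png"))
```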

This is, at the very least, very sloppy work, on both the part of the authors and the editorial staff at Biomaterials. I didn't catch this one myself, true - but I wasn't asked to review the paper, either, and I can assure you that I spend more time critically studying the figures in a paper under review than one I'm writing a quick blog entry about. Under normal reading conditions, most of us don't look at histology slides in a paper while constantly asking ourselves "Is this right? Or is this just a duplicate of another image that's supposed to be something else?"

And while this image duplication does not directly bear on the most surprising and interesting results of the paper - life extension in rodents - it does not inspire confidence in those results, either. I'm emailing the editorial staff at Biomaterials and the corresponding author of the paper with this blog entry. We'll see what happens.

Comments (54) + TrackBacks (0) | Category: Aging and Lifespan

April 18, 2012

Buckyballs Prolong Life? Really?

Posted by Derek

I'm really, really not sure what to make of this paper (PDF). It's from a team that was studying the long-term toxicology of C60 (fullerene, "buckyballs") by giving them to rats as a solution in olive oil. The control groups were water and olive oil without C60. The compound has already been shown to have no noticeable short-term toxic effects, so they probably didn't expect anything dramatic in the lower-dose long-term mode.

Wrong. What they found was that the fullerene/olive oil group had their life spans extended by some 90%, which would make this mixture perhaps the most efficacious life-extension treatment ever seen in a rodent model. This is a very odd and interesting result.

There's nothing bizarre about the pharmacokinetics, anyway. A reasonable amount of the C60 is absorbed after an oral dose (they did both oral gavage and intraperitoneal dosing), with a time course consistent with the very high lipophilicity of the compound. Distribution is still being worked out, but a lot of any given dose ends up in the liver and spleen (although it doesn't accumulate with successive q.d. dosing), with detectable amounts even crossing the blood-brain barrier. It has a long half life, consistent with enterohepatic recirculation and elimination through the bile (no C60 was found in the urine).
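That profile is the standard one-compartment oral picture. A minimal sketch (the Bateman equation, with every parameter invented purely for illustration) shows how a small elimination constant stretches out the concentration-time curve:

```python
# One-compartment model with first-order absorption (ka) and
# elimination (ke): the Bateman equation. All parameters are invented
# for illustration; none come from the paper.
import math

def conc(t, dose=1.0, f=0.8, v=10.0, ka=1.0, ke=0.05):
    """Plasma concentration after a single oral dose at t = 0."""
    return (f * dose * ka) / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

print(f"half-life = {math.log(2) / 0.05:.1f} time units")  # ~13.9 for ke = 0.05
for t in (1, 6, 24, 48):
    print(f"t = {t:2}: C = {conc(t):.4f}")
```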

The most likely mechanism for the life-extension effects is through oxidative stress and free radical scavenging. There have been several reports of C60 as an antioxidant, although there have also been reports that it can be cytotoxic via lipid peroxidation. (One difference was that that report was with aggregates of C60 in water, versus soluble C60 in oil, but there are other reports that hydrated C60 does the opposite: there's clearly a lot that hasn't been cleared up here). In this study, even at very low doses, C60 appears to protect rodents against carbon tetrachloride-induced liver damage, for example, which is known to involve a radical process. Significantly, it does so while showing protection against glutathione depletion, which also suggests that it's directly scavenging reactive intermediates.

These are reasonable (but unproven) hypotheses, and I very much look forward to seeing this work followed up, to see some more light shed on them. The whole life-extension result needs to be confirmed as well, and in other species. I congratulate the authors of this work, though, for raising my eyebrows more than any scientific paper has in quite some time.

Comments (42) + TrackBacks (0) | Category: Aging and Lifespan

Build Your Own Reactive Reactors

Posted by Derek

I'd be interested in hearing people's thoughts on this technology, from the Cronin group at the University of Glasgow. (Here's a press release, and a piece from Chemistry World if you can't get in to Nature Chemistry).

They're adapting 3-D printing technology to make small reaction vessels out of silicone polymer. The design of these can be changed to directly alter the mixing, timing, and stoichiometry of reactions, and they've also gone as far as incorporating palladium catalyst into the walls of the newly formed reactors, making them active for hydrogenation reactions.

I can see this eventually being useful for multistep flow chemistry, a micro-scale analog of the sorts of systems that Steve Ley's group has published on. Perhaps an array of identical vessels could be used in parallel for scale-up if the design is taking advantage of the small size of the chambers (again, as is done in industrial flow applications). The speed with which new doped polymeric materials could be prototyped seems to be a real advantage as well, which should allow experimentation with immobilized reagents and catalysts which would be incompatible with each other in solution. Other ideas?

Comments (12) + TrackBacks (0) | Category: Chemical News

How Do These Things Get Published?

Posted by Derek

Update: I've heard from both the lead author of this paper and one of its reviewers, and I've written a follow-up post on this subject, as well as revising this one where shown below.

I've been saved the trouble of demolishing this J. Med. Chem. paper - the Practical Fragments blog has done it for me. I really hate to say such things, but this appears to be one of the worst papers that journal has published in quite a while.

The authors start out with a small, poorly documented (update: the compounds are, in fact, in the paper's supplementary information, but see the follow-up post) library of fragment compounds. They screen these against dihydrofolate reductase, and get a few possible hits - mind you, there's not much correlation between the numbers and any potency against the enzyme, but these aren't potent compounds, and fragment-level hits don't always perform in high-concentration enzyme assays. But what happens next? The authors string these things together into huge dimeric molecules, apparently because they think that this is a good idea, but they get no data to support this hypothesis at all.

Well, their potency goes from low millimolar to low micromolar, but as Teddy Z at Practical Fragments points out, this actually means taking a terrible beating in ligand efficiency. All that extra molecular weight should buy you a lot more potency than this. There's some hand-waving docking of these structures - which the authors themselves refer to as "poorly optimized" - and some inconclusive attempts at X-ray crystallography, leading to uninterpretable data.

And that's it. That's the paper. This on a class of enzymes that's been worked on for decades, yet. (Update: this characterization is completely wrong on my part - see the follow-up post linked to above for more). Again, I hate to be unkind about this, but I cannot imagine what this is doing in J. Med. Chem., or how it made it through the editorial process. When you submit a scientific manuscript for publication, you open yourself to comments from all comers, and those are mine.

Comments (29) + TrackBacks (0) | Category: Infectious Diseases | The Scientific Literature

April 17, 2012

Day Off

Posted by Derek

I'll be traveling today, so no time to do a blog post this morning. Regular blogging will resume tomorrow - see you then!

Comments (3) + TrackBacks (0) | Category: Blog Housekeeping

April 16, 2012

Phenotypic Screening's Comeback

Posted by Derek

Here's an excellent overview of phenotypic screening at SciBx. For those outside the field, phenotypic screening is the way things used to be all the time in the drug discovery business, decades ago: (1) give compounds to a living system, and watch what happens. (2) Wait until you find a compound that does what you want, and develop that one if you can.

That's as opposed to target-based drug discovery, which began taking over in the 1970s or so, and has grown ever since as molecular biology advanced. That's where you figure out enough about a biochemical pathway to know what enzyme/receptor/etc. you should try to inhibit, and you screen against that one alone to find your leads. That has worked out very well in some cases, but not as often as people would have imagined back at the beginning.

In fact, I (and a number of other people) have been wondering if the whole molecular-biology target-based approach has been something of a dead end. A recent analysis suggested that phenotypic screens have been substantially more productive in generating first-in-class drugs, and an overemphasis on individual targets has been suggested as a reason for the lack of productivity in drug discovery.

As that new article makes clear, though, in most cases of modern phenotypic screening, people are going back from their hit compounds and finding out how they work, when possible. That's actually an excellent platform for discoveries in biology, too, as well as for finding medicinally active compounds. I'm glad to see cell- and tissue-based assays making a comeback, and I hope that they can bail us all out a bit.

Comments (30) + TrackBacks (0) | Category: Drug Assays | Drug Industry History

April 13, 2012

More on the Federation of Independent Scientists: Journal Access

Posted by Derek

Update: just to make things clear, this is only one aspect of the whole problem, but perhaps an easier one to tackle. More on the rest of the proposals to come!

Yesterday's post brought in a lot of welcome comment, and I want to follow up on the ideas in it. The first problem I wanted to tackle was journal access for entrepreneurs, the recently unemployed, and small shops. Here are some of the comments from the first post, consolidated:

A better bet would be to negotiate a group agreement with DeepDyve. "You can rent the article and view it at DeepDyve for 24 hours or more" for $1. The catch? No printing- you have to read it online (hence the renting not purchasing model).

Something like that might be necessary, because others pointed out that:

I like the idea of the journal access, HOWEVER, you'll pay handsomely for such access. It's no different than a large corp or large library. You have 300 people in your organization? You need to buy a group license.

And also:

OK, how you going to limit access? Is it just single, non-employed people who can get access? Perhaps you'd like to expand it to small companies? What size cut off? 10? 100? 500? Perhaps the mid-size biotechs should join up, and drop their own subscriptions. And maybe the smaller colleges?

And from "Mrs. McGreevy" herself, who kicked off the discussion:

Alternatively, if enough people think this is something the ACS should be doing, and start demanding it loudly and frequently, perhaps the ACS will get around to doing it. Maybe they don't know what to do, either. Cheaper access to ACS publications? That would be nice. Perhaps even negotiating group payment rates or even (*gasp*) subsidies for access to other publishers' papers? I wonder if that's even possible, but I'll bet the ACS hasn't even considered it up until now. Perhaps there wouldn't need to be an independent library if the ACS were willing to take on the job. We should ask them. (In fact, why stop at the ACS? Maybe the unemployed biologists would be willing to pony up some time, money and lobbying power as well.)

Another idea would be to subsidize journal access with website ads as well as membership dues. Perhaps start a paid online directory of consultants or CROs, sort of the Ye Olde Yellowe Pages model. Perhaps there could be a small fee for posting an RFP or a project up for bid, sort of a classified ad business model to help clients, CROs and consultants find each other. Various small fees for various small services ==> money for subsidized journal access.

OK, those are the journal thoughts so far (other than a number of people who agree that it's a major problem, as do I!) Any more ideas on this aspect to add to the pile? I'd never heard of DeepDyve myself, and they sound interesting: anyone have any experience with them, and is there anyone else in that market niche? We'll put together some action points after this round of ideas. . .

Comments (25) + TrackBacks (0) | Category: Business and Markets | The Scientific Literature

AstraZeneca Cuts Again

Posted by Derek

I have word today that AstraZeneca has told the scientists at Reims that the French site will be closed by the end of the year, with oncology being "consolidated" in Alderley Park and the Boston area.

It's a good guess, given the patent situation the company's facing and the pipeline it has to deal with, that this won't be the last announcement of its kind. . .

Comments (13) + TrackBacks (0) | Category: Business and Markets

April 12, 2012

A Federation of Independent Researchers?

Posted by Derek

I've had an interesting e-mail from a reader who wants to be signed as "Mrs. McGreevy", and it's comprehensive enough that I'm going to reproduce it in full below.

As everyone but the editorial board of C&E News has noticed, jobs in chemistry are few and far between right now. I found your post on virtual biotechs inspiring, but it doesn't look like anyone has found a good solution for how to support these small firefly businesses until they find their wings, so to speak. Lots of editorials, lots of meetings, lots of rueful headshaking, no real road map forward for unemployed scientists.

I haven't seen this proposed anywhere else, so I'm asking you and your readership if this idea would fly:

What about a voluntary association of independent research scientists?

I'm thinking about charging a small membership fee (for non-profit administration and hard costs) and using group buying power for the practical real-world support a virtual biotech would need:

1. Group rates on health and life insurance.

How many would-be entrepreneurs are stuck in a job they hate because of the health care plan, or even worse, are unemployed or underemployed and uninsurable, quietly draining their savings accounts and praying no one gets really sick? I have no idea how this would work across state lines, or if it is even possible, but would it hurt to find out? Is anyone else looking?

2. Group rates on access to journals and library services.

This is something I do know a bit about. My M.S. is in library science, and I worked in the Chemistry Library in a large research institution for years during grad school. What if there were one centralized virtual library to which unaffiliated researchers across the country could log in for ejournal access? What if one place could buy and house the print media that start-ups would need to access every so often, and provide a librarian to look things up-- it's not like everyone needs their own print copy of the Canada & US Drug Development Industry & Outsourcing Guide 2012 at $150 a pop. (But if 350 people paid $1 a year for a $350/yr online subscription . . . )

Yes, some of you could go to university libraries and look these things up and print off articles to read at home, but some of you can't. You're probably violating some sort of terms of service agreement the library and publisher worked out anyway. It's not like anyone is likely to bust you unless you print out stacks and stacks of papers, but still. It's one more hassle for a small company to deal with, and everyone will have to re-invent the wheel and waste time and energy negotiating access on their own.

3. How about an online community for support and networking-- places for blogs, reviews, questions, answers, exchanges of best practices, or even just encouragement for that gut-wrenching feeling of going out on your own as a new entrepreneur?

4. What sort of support for grantwriting is out there? Is there a hole that needs to be filled?

5. How about a place to advertise your consulting services or CRO, or even bid for a contract? Virtual RFP posting?

6. Would group buying power help negotiate rates with CROs? How about rates for HTS libraries, for those of you who haven't given up on it completely?

Is there a need for this sort of thing? Would anyone use it if it were available? How much would an unaffiliated researcher be willing to pay for the services? Does anyone out there have an idea of what sort of costs are involved, and what sort of critical mass it would take to achieve the group buying power needed to make this possible?

I'd be happy to spark a discussion on what a virtual biotech company needs besides a spare bedroom and a broadband connection, even if the consensus opinion is that the OP is an ill-informed twit with an idea that will never fly. What do you need to get a virtual biotech started? How do we make it happen? There are thousands of unemployed lab scientists, and I refuse to believe that the only guy making a living these days from a small independently-funded lab is Bryan Cranston.

A very worthy topic indeed, and one whose time looks to have come. Thoughts on how to make such a thing happen?

Comments (59) + TrackBacks (0) | Category: Business and Markets | Drug Development | General Scientific News | The Scientific Literature

April 11, 2012

A New Journal (With Bonus Elsevier-Bashing)

Posted by Derek

You know, I think that we really are seeing the breakup of the current model of scientific publishing. Spät kommt er, doch er kommt ("late he comes, but come he does"): it's coming on slowly, and in stages, but it's looking more and more inevitable. Take this news from the Wellcome Trust, one of the largest funding agencies in the world for medical research:

Sir Mark Walport, the director of Wellcome Trust, said that his organisation is in the final stages of launching a high calibre scientific journal called eLife that would compete directly with top-tier publications such as Nature and Science, seen by scientists as the premier locations for publishing. Unlike traditional journals, however, which cost British universities hundreds of millions of pounds a year to access, articles in eLife will be free to view on the web as soon as they are published. . .

Walport, who is a fellow of the Royal Society, Britain's premier scientific academy, said the results of public and charity-funded scientific research should be freely available to anyone who wants to read it, for whatever purpose they need it. His comments echo growing concerns from scientists who baulk at the rising costs of academic journals, particularly in a time of shrinking university budgets.

That journal is being launched with the Max Planck Gesellschaft and the Howard Hughes Medical Institute, so it's definitely something worth taking seriously. And outside of that new journal, they're going as far as considering sanctions in funding for any researchers who don't adhere to their open-access policies (basically, free within six months of publication in a journal). I was looking up some papers from back in the 1980s the other day, and was reminded, by contrast, of the policies of some of the commercial publishers: never free, ever, no matter how old it is, or who funded it. Gold, those journal archives are: the costs are long gone; it's all been digitized for years and sits on the servers. But anyone who wants to look at a thirty-year-old paper in Tet Lett had better get ready to pony up. Patents expire, as they should, but copyright? Hah!

Elsevier says in the article that they're committed to offering their customers "choice", and that gosh, the subscription model is just so darn popular that they don't see how they can go against the wishes of their customers by getting rid of it. I particularly enjoyed this quote:

A spokesperson for Elsevier said the company was open to any "mechanism or business model, as long as they are sustainable and maintain or improve existing levels of quality control".

This is the Elsevier that can't manage to fix their own RSS feeds for months, that set up a whole fake-but-real journal division for the advertising revenue, that solicits good textbook reviews on Amazon in exchange for $25 gift cards, that charged people $4500 a year to read a journal stuffed with the editor's own nonsensical papers, and whose chemistry titles repeatedly let through howlers that even undergraduates could have spotted?

That level of quality control is going to be quite a strain to keep up. And yes, I know that the other publishers hardly have clean records, but they managed not to be quoted about their quality control. This time, anyway.

Comments (28) + TrackBacks (0) | Category: The Scientific Literature

April 10, 2012

Biomarker Caution

Posted by Derek

After that news of the Stanford professor who underwent just about every "omics" test known, I wrote that I didn't expect this sort of full-body monitoring to become routine in my own lifetime:

It's a safe bet, though, that as this sort of thing is repeated, we'll find all sorts of unsuspected connections. Some of these connections, I should add, will turn out to be spurious nonsense, noise and artifacts, but we won't know which are which until a lot of people have been studied for a long time. By "lot" I really mean "many, many thousands" - think of how many people we need to establish significance in a clinical trial for something subtle. Now, what if you're looking at a thousand subtle things all at once? The statistics on this stuff will eat you (and your budget) alive.

I can now adduce some evidence for that point of view. The Institute of Medicine has warned that a lot of biomarker work is spurious. The recent Duke University scandal has brought these problems into higher relief, but there are plenty of less egregious (and not even deliberate) examples that are still a problem:

The request for the IOM report stemmed in part from a series of events at Duke University in which researchers claimed that their genomics-based tests were reliable predictors of which chemotherapy would be most effective for specific cancer patients. Failure by many parties to detect or act on problems with key data and computational methods underlying the tests led to the inappropriate enrollment of patients in clinical trials, premature launch of companies, and retraction of dozens of research papers. Five years after they were first made public, the tests were acknowledged to be invalid.

Lack of clearly defined development and evaluation processes has caused several problems, noted the committee that wrote the report. Omics-based tests involve large data sets and complex algorithms, and investigators do not routinely make their data and computational procedures accessible to others who could independently verify them. The regulatory steps that investigators and research institutions should follow may be ignored or misunderstood. As a result, flaws and missteps can go unchecked.

So (Duke aside) the problem isn't fraud, so much as it is wishful thinking. And that's what statistical analysis is supposed to keep in check, but we've got to make sure that that's really happening. But to keep everyone honest, we also have to keep everything out there where multiple sets of eyes can check things over, and this isn't always happening:

Investigators should be required to make the data, computer codes, and computational procedures used to develop their tests publicly accessible for independent review and ensure that their data and steps are presented comprehensibly, the report says. Agencies and companies that fund omics research should require this disclosure and support the cost of independently managed databases to hold the information. Journals also should require researchers to disclose their data and codes at the time of a paper's submission. The computational procedures of candidate tests should be recorded and "locked down" before the start of analytical validation studies designed to assess their accuracy, the report adds.

This is (and has been for some years) a potentially huge field of medical research, with huge implications. But it hasn't been moving forward as quickly as everyone thought it would. We have to resist the temptation to speed things up by cutting corners, consciously or unconsciously.
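
Just to put a number on that "thousand subtle things" worry: here's a minimal simulation (a Python sketch with invented data, not anyone's real omics panel). Test a thousand markers that truly do nothing, and a nominal p < 0.05 cutoff will still hand you a pile of "discoveries":

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_markers, n_patients = 1000, 50

    # Two groups of patients - and not one of these markers actually differs.
    healthy = rng.normal(size=(n_markers, n_patients))
    disease = rng.normal(size=(n_markers, n_patients))

    _, pvals = stats.ttest_ind(healthy, disease, axis=1)

    print("Nominal p<0.05 'hits' from pure noise:", int((pvals < 0.05).sum()))   # ~50
    # Bonferroni: demand p < 0.05/1000 before believing any single marker.
    print("Hits surviving Bonferroni:", int((pvals < 0.05 / n_markers).sum()))   # usually 0

Bonferroni is the bluntest correction there is, but the point stands: without some such discipline, the noise wins.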

Comments (14) + TrackBacks (0) | Category: Biological News | Clinical Trials

April 9, 2012

Would I Take Resveratrol? Would You?

Posted by Derek

I've written many times here about sirtuins, and their most famous associated small molecule, resveratrol. And I've been asked more than once by people outside the med-chem field if I take (or would take) resveratrol, given the available evidence. My reply has been the same for several years: no, not yet.

Why so cautious, for a compound that's found in red grapes and other foods, and to which I've presumably been exposed many times? Several reasons - I'll lay them out and let readers decide how valid they are and how they'd weight these factors themselves.

First off, we can dispose of the "it's in food already, and it's natural, so why worry?" line of thinking. Strychnine is all-natural too, as are any number of other hideous molecules that are capable of terrible effects, so that's no defense at all - it never is. And as for being exposed to it already, that's true - but the dose makes the poison, and the dose makes the drug. I've no idea how much resveratrol I've ingested over the years, but it's safe to say that it's been in small amounts and at irregular intervals. Going from that to regular higher dosages is worth some forethought.

So what do we know about what resveratrol does? A lot, and not nearly enough. Its pharmacology is very complex indeed, and the one thing that you can clearly draw from the (large) scientific literature is that (a) it's a very biochemically active compound and (b) we haven't figured out many of those actions yet. Not even close. Even if all it did was act on one or more sirtuins, that would be enough to tell us that we didn't understand it.

That's because the sirtuins, along with many other enzymes, are involved in epigenetic signaling, a catch-all term for everything in the DNA-to-RNA-to-protein sequence that doesn't depend on just the DNA sequence itself. (And as everyone discovered when the number of human genes came in on the low end of the low estimates, these processes are very important indeed). There are a lot of mechanisms, and it's safe to say that we haven't found them all, either, but the sirtuins modify histones, the proteins that DNA is wrapped around, and thus affect how genes are transcribed. All these transcriptional processes are wildly complex, with hundreds and thousands of genes being up- (and down-) regulated in different tissues, at different times, under different conditions. Anyone that tells you that we're close to unraveling those balls of yarn is not keeping up with the literature, or not understanding what they read.

Of course, one of the controversies about resveratrol (and some of the other sirtuin modulators) is whether they act directly on these enzymes or not. Opinion is very much divided on that, but resveratrol seems to have a number of other effects, mediated through processes that (again!) are best described as "unclear". For example, its metabolic effects seem to be at least partially driven by its actions on an enzyme called AMPK, a key player in a number (brace yourself) of important cellular processes. It might well be that AMPK (activated by resveratrol) is what's having an effect on the sirtuins. A very recent paper implicates another step in the process: resveratrol may well be acting on a set of phosphodiesterase (PDE) enzymes, which affect AMPK, which in turn affects the sirtuins. But then again, there's another paper from earlier this year that suggests that resveratrol's activity against sphingosine kinase might be the key. So your guess is as good as mine.

One objection to all this is that there's room to wonder about the mechanisms of a number of drugs. Indeed, there have been many that have made it to market (and stayed there for many years) without anyone knowing their mechanisms at all. We're still finding things out about aspirin; how much can one expect? Well, one response to that is that aspirin has been used widely in the human population for quite a long time now, and resveratrol hasn't. So the question is, what do we know about what resveratrol actually does in living creatures? If it has beneficial effects, why not go ahead and take advantage of them?

Unfortunately, the situation is wildly confusing (for an overview, see here). The first thing that brought resveratrol into the spotlight was life extension in animal models, so you'd think that that would be well worked out by now, but boy, would you be wrong. The confusion extends up to mouse models, where some of the conclusions - all from respectable groups in respectable publications - seem to flatly contradict each other. No, the animal-model work on resveratrol is such a bubbling swamp that I don't see how anyone can safely draw conclusions from it.

How about people, then? There have been some clinical trials reported, with this one the most recent, and these are summed up in this open-access paper. The longest reported trials are on the order of weeks, which is useful, but not necessarily indicative of what might happen out in the real world. But there have been some beneficial metabolic effects seen (although not in all trials), and these constitute some of the biggest arguments for taking resveratrol at all.

One of the things that seems to be possible, from both the animal and human studies, is that the compound might exert these beneficial effects mostly in systems that are already under metabolic stress. Does this translate to people as well? If you're healthy already, what does resveratrol do for (or to) you? No one knows yet, and no one knows how much resveratrol you'd have to take to see things happen. Here's another article (PDF) summarizing the known effects, and here's the way the authors sum up:

"It is no exaggeration to say that the literature on resveratrol is contradictory and confusing. The wide range of concentrations and doses used to achieve the various effects reported for resveratrol in both in vitro cell culture and animal studies raises many questions about the concentrations achievable in vivo. . .

The bottom line? Resveratrol is a very interesting compound, and potentially useful. But the details of its actions aren't clear, and neither, honestly, are the actions themselves. Given the importance of the processes we're talking about - cellular metabolism, which is intimately involved with aging and lifespan, which is intimately involved with defenses against cancer - I don't feel that the situation is clear enough yet to make an intelligent decision. So no, I don't take resveratrol. But I'd be willing to if the fog ever clears.

Comments (73) + TrackBacks (0) | Category: Aging and Lifespan

April 6, 2012

Europe Wants Some of That Molecular Library Action

Posted by Derek

We've talked about the NIH's Molecular Libraries Initiative here a few times, mostly in the context of whether it reached its goals, and what might happen now that it looks as if it might go away completely. Which makes this item a little surprising:

Almost a decade ago, the US National Institutes of Health kicked off its Molecular Libraries Initiative to provide academic researchers with access to the high-throughput screening tools needed to identify new therapeutic compounds. Europe now seems keen on catching up.

Last month, the Innovative Medicines Initiative (IMI), a €2 billion ($2.6 billion) Brussels-based partnership between the European Commission and the European Federation of Pharmaceutical Industries and Associations (EFPIA), invited proposals to build a molecular screening facility for drug discovery in Europe that will combine the inquisitiveness of academic scientists with industry know-how. The IMI's call for tenders says the facility will counter “fragmentation” between these sectors.

I can definitely see the worth in that part of the initiative. Done properly, Screening Is Good. But they'll have to work carefully to make sure that their compound collection is worth screening, and to format the assays so that the results are worth looking at. Both those processes (library generation and high-throughput screening) are susceptible (are they ever) to "garbage in, garbage out" factors, and it's easy to kid yourself into thinking that you're doing something worthwhile just because you're staying so busy and you have so many compounds.

There's another part of this announcement that worries me a bit, though. Try this on for size:

Major pharmaceutical companies have more experience with high-throughput screening than do most academic institutes. Yet companies often limit tests of their closely held candidate chemicals to a fraction of potential disease targets. By pooling chemical libraries and screening against a more diverse set of targets—and identifying more molecular interactions—both academics and pharmaceutical companies stand to gain, says Hugh Laverty, an IMI project manager.

Well, sure, as I said above, Screening Is Good, when it's done right, and we do indeed stand to learn things we didn't know before. But is it really true that we in the industry only look at a "fraction of potential disease targets"? This sounds like someone who's keen to go after a lot of the tough ones: the protein-protein interactions, protein-nucleic acid interactions, and even further afield. Actually, I'd encourage these people to go for it - but with eyes open and brain engaged. The reason that we don't screen against such things as often is that hit rates tend to be very, very low, and even those are full of false positives and noise. In fact, for many of these things, "very, very low" is not distinguishable from "zero". Of course, in theory you just need one good hit, which is why I'm still encouraging people to take a crack. But you should do so knowing the odds, and be ready to give your results some serious scrutiny. If you think that there must be thousands of great things out there that the drug companies are just too lazy (or blinded by the thought of quick profits elsewhere) to pursue, you're not thinking this through well enough.
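
Here's a back-of-the-envelope Bayes calculation to show why those low hit rates bite so hard (the rates below are invented for illustration, not from any real screen). Even a well-behaved assay will bury a rare true hit under false positives:

    # Positive predictive value of a screening "hit", under assumed rates.
    def hit_ppv(true_hit_rate, sensitivity, false_positive_rate):
        """Fraction of flagged hits that are genuinely active (Bayes' rule)."""
        true_pos = true_hit_rate * sensitivity
        false_pos = (1 - true_hit_rate) * false_positive_rate
        return true_pos / (true_pos + false_pos)

    # Tractable target: 1-in-1,000 true hit rate, 0.5% assay false positives.
    print(f"{hit_ppv(1e-3, 0.8, 0.005):.1%}")   # ~13.8% of the hit list is real
    # Tough target (protein-protein interaction): 1-in-100,000 true hit rate.
    print(f"{hit_ppv(1e-5, 0.8, 0.005):.1%}")   # ~0.2% - the list is nearly all noise

Same assay, same compound deck; the only thing that changed was the prior.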

You might say that what these efforts are looking for are tool compounds, not drug candidates. And I think that's fine; tool compounds are valuable. But if you read that news link in the first paragraph, you'll see that they're already talking about how to manage milestone payments and the like. That makes me think that someone, at any rate, is imagining finding valuable drug candidates from this effort. The problem with that is that if you're screening all the thousands of drug targets that the companies are ignoring, you're by definition working with targets that aren't very validated. So any hits that you do find (and there may not be many, as said above) will still be against something that has a lot of work yet to be done on it. It's a bit early to be wondering how to distribute the cash rewards.

And if you're screening against validated targets, the set of those that don't have any good chemical matter against them already is smaller (and it's smaller for a reason). It's not that there aren't any, though: I'd nominate PTP1B as a well-defined enzymatic target that's just waiting for a good inhibitor to come along to see if it performs as well in humans as it does in, say, knockout mice. (It's a metabolic target and a potential cancer target as well.) Various compounds have been advanced over the years, but it's safe to say that they've been (for the most part) quite ugly and not as selective as they could have been. People are still whacking away at the target.

So any insight into decent-looking selective phosphatase inhibitors would be most welcome. And most unlikely, damn it all, but all great drug ideas are most unlikely. The people putting this initiative together will have a lot to balance.

Comments (20) + TrackBacks (0) | Category: Academia (vs. Industry) | Biological News | Drug Assays

April 5, 2012

What Makes a Beautiful Molecule?

Posted by Derek

A reader sent along this question for the medicinal chemists in the crowd: we spend a lot of time thinking about what makes a molecule ugly (by our standards). But what about the flip side? What makes a molecule beautiful?

That's a hard one to answer, because, well, eye of the beholder and all that. One answer is that if it works well as a drug, how ugly can it be? (See the recent post here about the ugliest drugs in that light). Then there are all sorts of striking molecular structures that have nothing to do with medicinal chemistry, but for the purposes of today's discussion, I think we should rule those out. So, what makes a drug molecule (or candidate molecule) beautiful?

Size matters, for one thing. It may be my bias towards ligand efficiency, but I'm more impressed with potent, selective molecules that can get the job done with lower molecular weight. And you know that in a huge structure, a lot of the atoms are just scaffolding to get the business end(s) of the molecule in the right place, and I can't see giving points for that.
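
That bias can actually be made quantitative. Ligand efficiency is usually expressed as binding free energy per heavy atom, with something like 0.3 kcal/mol per atom commonly quoted as the mark of a good compound. A quick sketch (the potencies and atom counts are round illustrative numbers, not any particular drug's, and IC50 is standing in for Kd, as is common in practice):

    import math

    def ligand_efficiency(ic50_nM, heavy_atoms, temp_K=298.0):
        """Binding free energy per heavy atom, kcal/mol: LE = -dG / N_heavy."""
        R = 1.987e-3  # gas constant, kcal/(mol*K)
        dG = R * temp_K * math.log(ic50_nM * 1e-9)  # IC50 converted to molar
        return -dG / heavy_atoms

    # Two compounds with the same 10 nM potency:
    print(f"{ligand_efficiency(10, 25):.2f} kcal/mol/atom")  # ~0.44 - efficient
    print(f"{ligand_efficiency(10, 60):.2f} kcal/mol/atom")  # ~0.18 - the scaffolding tax

The big molecule binds just as tightly, but look at how many atoms it spent to do it.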

Points should also go for originality. I enjoy seeing a functional motif that hasn't turned up in a dozen other drugs. That may be because I can imagine that the team that developed the compound probably ran through the more usual stuff first and ended up having to go with the newer-looking group, in spite of their own reservations about what might happen. For similar reasons, I also have a bias towards three-dimensional character. Drug binding pockets are generally 3-D (and chiral), so a compound that takes advantage of those seems more elegant than a completely flat structure. (Although you can argue that a flat structure that works is easier to make, and that's definitely not a trivial consideration).
[Image: structure of escitalopram]
These tend to lead me, when I look through tables of drugs, to CNS ligands, and perhaps that reflects the influence of my first few years in the industry. But for whatever reason, something like escitalopram just looks like a drug molecule to me. As came up in the "ugly drug" post, though, it's instructive to look over a list of, say, the 200 biggest-selling compounds and realize how many structures a person can find aesthetic fault with. Which shows you how far you can get with aesthetics in this business. . .

Which reminds me: coming soon is a large post with graphics of many of the nominated compounds in the "ugliest drug" category. It'll be worth looking them over, and reflecting that they're out there treating patients and making money.

Comments (30) + TrackBacks (0) | Category: Life in the Drug Labs

April 4, 2012

The Artificial Intelligence Economy?

Posted by Derek

Now here's something that might be about to remake the economy, or (on the other robotic hand) it might not be ready to do that just yet. And it might be able to help us out in drug R&D, or it might turn out to be mostly beside the point. What the heck am I talking about, you ask? The so-called "Artificial Intelligence Economy". As Adam Ozimek says, things are looking a little more futuristic lately.

He's talking about things like driverless cars and quadrotors, and Tyler Cowen adds the examples of things like Apple's Siri and IBM's Watson, as part of a wider point about American exports:

First, artificial intelligence and computing power are the future, or even the present, for much of manufacturing. It’s not just the robots; look at the hundreds of computers and software-driven devices embedded in a new car. Factory floors these days are nearly empty of people because software-driven machines are doing most of the work. The factory has been reinvented as a quiet place. There is now a joke that “a modern textile mill employs only a man and a dog—the man to feed the dog, and the dog to keep the man away from the machines.”

The next steps in the artificial intelligence revolution, as manifested most publicly through systems like Deep Blue, Watson and Siri, will revolutionize production in one sector after another. Computing power solves more problems each year, including manufacturing problems.

Two MIT professors have written a book called Race Against the Machine about all this, and it appears to be sort of a response to Cowen's earlier book The Great Stagnation. (Here's an article of theirs in The Atlantic making their case).

One of the export-economy factors that it (and Cowen) brings up is that automation makes a country's wages (and labor costs in general) less of a factor in exports, once you get past the capital expenditure. And as the size of that expenditure comes down, it becomes easier to make that leap. One thing that means, of course, is that less-skilled workers find it harder to fit in. Here's another Atlantic article, from the print magazine, which looked at an auto-parts manufacturer with a factory in South Carolina (the whole thing is well worth reading):

Before the rise of computer-run machines, factories needed people at every step of production, from the most routine to the most complex. The Gildemeister (machine), for example, automatically performs a series of operations that previously would have required several machines—each with its own operator. It’s relatively easy to train a newcomer to run a simple, single-step machine. Newcomers with no training could start out working the simplest and then gradually learn others. Eventually, with that on-the-job training, some workers could become higher-paid supervisors, overseeing the entire operation. This kind of knowledge could be acquired only on the job; few people went to school to learn how to work in a factory.
Today, the Gildemeisters and their ilk eliminate the need for many of those machines and, therefore, the workers who ran them. Skilled workers now are required only to do what computers can’t do (at least not yet): use their human judgment.

But as that article shows, more than half the workers in that particular factory are, in fact, rather unskilled, and they make a lot more than their Chinese counterparts do. What keeps them employed? That calculation on what it would take to replace them with a machine. The article focuses on one of those workers in particular, named Maddie:

It feels cruel to point out all the Level-2 concepts Maddie doesn’t know, although Maddie is quite open about these shortcomings. She doesn’t know the computer-programming language that runs the machines she operates; in fact, she was surprised to learn they are run by a specialized computer language. She doesn’t know trigonometry or calculus, and she’s never studied the properties of cutting tools or metals. She doesn’t know how to maintain a tolerance of 0.25 microns, or what tolerance means in this context, or what a micron is.

Tony explains that Maddie has a job for two reasons. First, when it comes to making fuel injectors, the company saves money and minimizes product damage by having both the precision and non-precision work done in the same place. Even if Mexican or Chinese workers could do Maddie’s job more cheaply, shipping fragile, half-finished parts to another country for processing would make no sense. Second, Maddie is cheaper than a machine. It would be easy to buy a robotic arm that could take injector bodies and caps from a tray and place them precisely in a laser welder. Yet Standard would have to invest about $100,000 on the arm and a conveyance machine to bring parts to the welder and send them on to the next station. As is common in factories, Standard invests only in machinery that will earn back its cost within two years. For Tony, it’s simple: Maddie makes less in two years than the machine would cost, so her job is safe—for now. If the robotic machines become a little cheaper, or if demand for fuel injectors goes up and Standard starts running three shifts, then investing in those robots might make sense.
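
That two-year payback rule is just a breakeven calculation, and it's worth seeing how thin the margin is. A sketch (the machine cost is the article's figure; the labor cost is my own assumption for illustration):

    # Breakeven for replacing a worker with a machine, per the two-year rule above.
    machine_cost = 100_000        # robot arm plus conveyance, from the article
    annual_labor_cost = 40_000    # hypothetical fully loaded cost of the job
    shifts = 1                    # machines don't sleep; extra shifts multiply the savings

    payback_years = machine_cost / (annual_labor_cost * shifts)
    print(f"Payback: {payback_years:.1f} years")  # 2.5 > 2.0, so the job survives - for now

Run the plant on three shifts and the same machine pays back in under a year, which is exactly the tipping point the article describes.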

At this point, some similarities to the drug discovery business will be occurring to readers of this blog, along with some differences. The automation angle isn't as important, or not yet. While pharma most definitely has a manufacturing component (and how), the research end of the business doesn't resemble it very much, despite numerous attempts by earnest consultants and managers to make it so. From an auto-parts standpoint, there's little or no standardization at all in drug R&D. Every new drug is like a completely new part that no one's ever built before; we're not turning out fuel injectors or alternators. Everyone knows how a car works. Making a fundamental change in that plan is a monumental challenge, so the auto-parts business is mostly about making small variations on known components to the standards of a given customer. But in pharma - discovery pharma, not the generic companies - we're wrenching new stuff right out of thin air, or trying to.

So you'd think that we wouldn't be feeling the low-wage competitive pressure so much, but as the last ten years have shown, we certainly are. Outsourcing has come up many a time around here, and the very fact that it exists shows that not all of drug research is quite as bespoke as we might think. (Remember, the first wave of outsourcing, which is still very much a part of the business, was the move to send the routine methyl-ethyl-butyl-futile analoging out somewhere cheaper). And this takes us, eventually, to the Pfizer-style split between drug designers (high-wage folks over here) and the drug synthesizers (low-wage folks over there). Unfortunately, I think that you have to go the full reductio ad absurdum route to get that far, but Pfizer's going to find out for us if that's an accurate reading.

What these economists are also talking about is, I'd say, the next step beyond Moore's Law: once we have all this processing power, how do we use it? The first wave of computation-driven change happened because of the easy answers to that question: we had a lot of number-crunching that was being done by hand, or very slowly by some route, and we now had machines that could do what we wanted to do more quickly. This newer wave, if wave it is, will be driven more by software taking advantage of the hardware power that we've been able to produce.

The first wave didn't revolutionize drug discovery in the way that some people were hoping for. Sheer brute force computational ability is of limited use in drug discovery, unfortunately, but that's not always going to be the case, especially as we slowly learn how to apply it. If we really are starting to get better at computational pattern recognition and decision-making algorithms, where could that have an impact?

It's important to avoid what I've termed the "Andy Grove fallacy" in thinking about all this. I think that it is a result of applying first-computational-wave thinking too indiscriminately to drug discovery, which means treating it too much like a well-worked-out human-designed engineering process. Which it certainly isn't. But this second-wave stuff might be more useful.

I can think of a few areas: in early drug discovery, we could use help teasing patterns out of large piles of structure-activity relationship data. I know that there are (and have been) several attempts at doing this, but it's going to be interesting to see if we can do it better. I would love to be able to dump a big pile of structures and assay data points into a program and have it say the equivalent of "Hey, it looks like an electron-withdrawing group in the piperidine series might be really good, because of its conformational similarity to the initial lead series, but no one's ever gotten back around to making one of those because everyone got side-tracked by the potency of the chiral amides".
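
As a toy illustration of what that kind of SAR-mining software might look for, here's a sketch using RDKit (the structures and potencies are invented, and real tools - matched molecular pair analysis, for instance - go much further). The idea is to surface "activity cliffs", near-identical structures with very different potencies, since that's where the unexplored leverage tends to hide:

    from itertools import combinations
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    # Invented (SMILES, pIC50) entries standing in for a project's SAR table.
    series = [
        ("c1ccccc1CCN1CCCCC1",    6.1),
        ("c1ccc(F)cc1CCN1CCCCC1", 7.9),   # one fluorine, nearly two log units
        ("c1ccc(C)cc1CCN1CCCCC1", 6.3),
    ]

    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, 2048)
           for smi, _ in series]

    # High structural similarity plus a big potency gap flags an activity cliff.
    for i, j in combinations(range(len(series)), 2):
        sim = DataStructs.TanimotoSimilarity(fps[i], fps[j])
        gap = abs(series[i][1] - series[j][1])
        print(f"sim {sim:.2f}  delta-pIC50 {gap:.1f}  {series[i][0]} | {series[j][0]}")

Scale that up to a few thousand compounds and a ranking by similarity-times-potency-gap, and you'd start to get the "nobody ever went back and made that analog" observations I'm asking for.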

Software that chews through stacks of PK and metabolic stability data would be worth having, too, because there sure is a lot of it. There are correlations in there that we really need to know about, that could have direct relevance to clinical trials, but I worry that we're still missing some of them. And clinical trial data itself is the most obvious place for software that can dig through huge piles of numbers, because those are the biggest we've got. From my perspective, though, it's almost too late for insights at that point; you've already been spending the big money just to get the numbers themselves. But insights into human toxicology from all that clinical data, that stuff could be gold. I worry that it's been like the concentration of gold in seawater, though: really there, but not practical to extract. Could we change that?

All this makes me actually a bit hopeful about experiments like this one that I described here recently. Our ignorance about medicine and human biochemistry is truly spectacular, and we need all the help we can get in understanding it. There have to be a lot of important things out there that we just don't understand, or haven't even realized the existence of. That lack of knowledge is what gives me hope, actually. If we'd already learned what there is to know about discovering drugs, and were already doing the best job that could be done, well, we'd be in a hell of a fix, wouldn't we? But we don't know much, we're not doing it as well as we could, and that provides us with a possible way out of the fix we're in.

So I want to see as much progress as possible in the current pattern-recognition and data-correlation driven artificial intelligence field. We discovery scientists are not going to automate ourselves out of business so quickly as factory workers, because our work is still so hypothesis-driven and hard to define. (For a dissenting view, with relevance to this whole discussion, see here). It's the expense of applying the scientific method to human health that's squeezing us all, instead, and if there's some help available in that department, then let's have it as soon as possible.

Comments (32) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History | In Silico | Pharmacokinetics | Toxicology

April 3, 2012

Information Density

Posted by Derek

This is a small thing, but nonetheless irritating, at least to me. Can anyone explain why some of the pharma-and-tech news sites (such as Xconomy and FiercePharma) have been redesigning their sites with lots of great, big, headline fonts set in plenty of roomy white space? First it was Gmail going to "you-don't-need-all-that" mode, which makes me wonder if this is some sort of foul trend. I may be an oddball, but I like information-dense pages, at least in a news site. All these newer versions look like the low-calorie versions, a bit of colorful stuff dabbed onto an oversized white plate. OK, /grumble for now. . .

Comments (24) + TrackBacks (0) | Category: Current Events

Bapineuzumab: An Alzheimer's Update

Posted by Derek

The bapineuzumab saga has been going on for years now. (Every Alzheimer's therapy attempt either has gone on or will go on for years; it's such a slow-moving and heterogeneous disease that the clinical trials are some of the worst in the business). The results so far have not been all that encouraging, but they haven't been discouraging enough (given the state of the field) to give up on, either. Now there's another bit of data, and it's of a piece with the rest.

This Archives of Neurology paper has some results from two small 12-month patient cohorts looking at the antibody's effect on markers for Alzheimer's in the cerebrospinal fluid (CSF). (These were patients from two larger studies who agreed to be sampled for this part). And it's. . .well, you'd hope for better. Two types of tau protein (total and phosphorylated) were monitored, and while they did show an effect compared to the beginning of the study, only the phosphorylated tau was significant versus the placebo group. And what about soluble beta-amyloid? No change at all, looking over several forms (N-terminal modified, etc.)

The paper tries (in my opinion) to put a good face on these numbers, saying that the CSF phosphorylated tau levels have correlated with brain pathology in other studies, and that the amyloid levels may well reflect other clearance pathways or binding to bapineuzumab itself, and should thus be interpreted with caution. (If there had been any trend in the numbers, though, we probably wouldn't be acting so cautiously). But as the paper says, "An important question remains whether such changes in CSF biomarkers correlate with clinical benefit". That it does, and we're going to have to wait for the phase III results in order to say anything. That has been one long-running and expensive trial, for sure, and I hope that there's something worthwhile waiting at the end of it. Alzheimer's patients (and their families) really need something to give them some hope. Maybe the ApoE4 connection will help; I understand that the Phase III trials are focusing on that. But in any case, I'm hoping for a surprise, to be honest, because my expectations aren't high.

Comments (9) + TrackBacks (0) | Category: Alzheimer's Disease

April 2, 2012

"Taking the Ax to the Scientists Is Probably a Mistake"

Posted by Derek

So says Matthew Herper in Forbes, and I'm certainly not going to argue with him. His point is what he calls lack of appreciation for the human capital in drug discovery:

An ideal drug company would follow all sorts of crazy ideas in early research, with the goal of selecting those where there was a high probability of believing they would actually prove effective in clinical development. It would bulk up on scientists, and try to limit the number of large clinical trials it conducted to those where some kind of test — blood levels of some protein, perhaps — led researchers to think they had a high probability of success. (Novartis, the most successful company in terms of getting new drugs to market, has moved in this direction.) But the tendency of the shutdowns has been to shut laboratories, too. Look at Merck’s stance toward the old Organon labs or Pfizer’s decision to shut the Michigan labs where Lipitor was invented. Taking the ax to the scientists is probably a mistake.

There's always been a disconnect between the business end and the scientific end, but the stresses of the last few years have opened it up wider than ever. The business of making money from drug discovery has never been trickier (or more expensive), and the scientists themselves have never felt more threatened. I can see it in the comments here on this site, whenever the topic of layoffs or top-management incompetence comes up. There are a lot of hard feelings out there - and, really, given the way things have been going, why wouldn't there be?

But at the risk of collecting some thrown bricks myself, I see where the business people are coming from. Our current cost structures are unsustainable. And although I don't agree with the solution of laying everyone off, I don't know what I would do instead. For many companies, it would have been better to have started adjusting years ago, although there's hindsight bias to keep in mind when you think that way. Many companies did try to start adjusting years ago, only to be overwhelmed by conditions even worse than they'd counted on. Then there are a few organizations that just look unfixable by any means anyone can think up.

But I think it's safe to say that relations between the two lobes of the drug R&D enterprise, the financial one and the scientific one, have probably never been worse. It's nothing that some success and hiring couldn't fix, but those are thin on the ground these days.

Comments (84) + TrackBacks (0) | Category: Business and Markets | Drug Industry History