Corante

About this Author
College chemistry, 1983

Derek Lowe, the 2002 model

After 10 years of blogging. . .

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Category Archives

August 26, 2014

A New Look at Phenotypic Screening


Posted by Derek

Several analyses have suggested that phenotypic drug discovery has been unusually effective in delivering "first in class" drugs. Now comes a reworking of that question, and these authors (Jörg Eder, Richard Sedrani, and Christian Wiesmann of Novartis) find plenty of room to question that conclusion.

What they've done is to deliberately focus on the first-in-class drug approvals from 1999 to 2013, and take a detailed look at their origins. There have been 113 such drugs, and they find that 78 of them (45 small molecules and 33 biologics) come from target-based approaches, and 35 from "systems-based" approaches. They further divide the latter into "chemocentric" discovery, based around known pharmacophores, and so on, versus pure from-the-ground-up phenotypic screening, and the 33 systems compounds then split out 25 to 8.

As you might expect, a lot of these conclusions depend on what you classify as "phenotypic". The earlier paper stopped at the target-based/not-target-based distinction, but this one is stricter: phenotypic screening is the evaluation of a large number of compounds (likely a random assortment) against a biological system, where you look for a desired phenotype without knowing what the target might be. And that's why this paper comes up with the term "chemocentric drug discovery", to encompass isolation of natural products, modification of known active structures, and so on.

Such conclusions also depend on knowing what approach was used in the original screening, and as everyone who's written about these things admits, this isn't always public information. The many readers of this site who've seen a drug project go from start to finish will appreciate how hard it is to find an accurate retelling of any given effort. Stuff gets left out, forgotten, is un- (or over-)appreciated, swept under the rug, etc. (And besides, an absolutely faithful retelling, with every single wrong turn left in, would be pretty difficult to sit through, wouldn't it?) At any rate, by the time a drug reaches FDA approval, many of the people who were present at the project's birth have probably scattered to other organizations entirely, have retired or been retired against their will, and so on.

But against all these obstacles, the authors seem to have done as thorough a job as anyone could possibly do. So looking further at their numbers, here are some more detailed breakdowns. Of those 45 first-in-class small molecules, 21 were from screening (18 of those from high-throughput screening, one fragment-based, one in silico, and one from low-throughput/directed screening). 18 came from chemocentric approaches, and 6 from modeling off of a known compound.

Of the 33 systems-based drugs, those 8 that were "pure phenotypic" feature one antibody (alemtuzumab) which was raised without knowledge of its target, and seven small molecules: sirolimus, fingolimod, eribulin, daptomycin, artemether–lumefantrine, bedaquiline and trametinib. The first three of those are natural products, or derived from natural products. Outside of fingolimod, all of them are anti-infectives or antiproliferatives, which I'd bet reflects the comparative ease of running pure phenotypic assays with those readouts.

Here are the authors on the discrepancies between their paper and the earlier one:

At first glance, the results of our analysis appear to significantly deviate from the numbers previously published for first-in-class drugs, which reported that of the 75 first-in-class drugs discovered between 1999 and 2008, 28 (37%) were discovered through phenotypic screening, 17 (23%) through target-based approaches, 25 (33%) were biologics and five (7%) came from other approaches. This discrepancy occurs for two reasons. First, we consider biologics to be target-based drugs, as there is little philosophical distinction in the hypothesis-driven approach to drug discovery for small-molecule drugs versus biologics. Second, the past 5 years of our analysis time frame have seen a significant increase in the approval of first-in-class drugs, most of which were discovered in a target-based fashion.

Fair enough, and it may well be that many of us have been too optimistic about the evidence for the straight phenotypic approach. But the figure we don't have (and aren't going to get) is the overall success rate for both techniques. The number of target-based and phenotypic-based screening efforts that have been quietly abandoned - that's what we'd need to have to know which one has the better delivery percentage. If 78/113 drugs, 69% of the first-in-class approvals from the last 15 years, have come from target-based approaches, how does that compare with the total number of first-in-class drug projects? My own suspicion is that target-based drug discovery has accounted for more than 70% of the industry's efforts over that span, which would mean that systems-based approaches have been relatively over-performing. But there's no way to know this for sure, and I may just be coming up with something that I want to hear.
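
Just to make that back-of-the-envelope comparison explicit, here's the arithmetic I'm doing in my head (a quick Python sketch; the 70% effort share is my own guess, not a number from the paper):

    # Share of first-in-class approvals (1999-2013, from the paper) divided by
    # an assumed share of industry effort. Anything that pushes the target-based
    # effort share past ~69% makes the systems-based side the relative
    # over-performer.
    target_approvals, systems_approvals = 78, 35
    total = target_approvals + systems_approvals            # 113

    assumed_target_effort = 0.70                            # my guess, not data
    assumed_systems_effort = 1.0 - assumed_target_effort

    target_yield = (target_approvals / total) / assumed_target_effort     # ~0.99
    systems_yield = (systems_approvals / total) / assumed_systems_effort  # ~1.03
    print(target_yield, systems_yield)

If the real effort split has been even more lopsided than 70/30 (which I suspect), those ratios move further apart in favor of the systems-based side. But the key number - the denominator of quietly abandoned projects - is the one nobody has.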

That might especially be true when you consider that there are many therapeutic areas where phenotypic screening is basically impossible (Alzheimer's, anyone?). But there's a flip side to that argument: it means that there's no special phenotypic sauce that you can spread around, either. The fact that so many of those pure-phenotypic drugs are in areas with such clear cellular readouts is suggestive. Even if phenotypic screening were to have some statistical advantage, you can't just go around telling people to be "more phenotypic" and expect increased success, especially outside anti-infectives or antiproliferatives.

The authors have another interesting point to make. As part of their analysis of these 113 first-in-class drugs, they've tried to see what the timeline is from the first efforts in the area to an approved drug. That's not easy, and there are some arbitrary decisions to be made. One example they give is anti-angiogenesis. The first report of tumors being able to stimulate blood vessel growth was in 1945. The presence of soluble tumor-derived growth factors was confirmed in 1968. VEGF, the outstanding example of these, was purified in 1983, and was cloned in 1989. So when did the starting pistol fire for drug discovery in this area? The authors choose 1983, which seems reasonable, but it's a judgment call.

So with all that in mind, they find that the average lead time (from discovery to drug) for a target-based project is 20 years, and for a systems-based drug it's been 25 years. They suggest that since target-based drug discovery has only been around since the late 1980s or so, that its impact is only recently beginning to show up in the figures, and that it's in much better shape than some would suppose.

The data also suggest that target-based drug discovery might have helped reduce the median time for drug discovery and development. Closer examination of the differences in median times between systems-based approaches and target-based approaches revealed that the 5-year median difference in overall approval time is largely due to statistically significant differences in the period from patent publication to FDA approval, where target-based approaches (taking 8 years) took only half the time as systems-based approaches (taking 16 years). . .

The pharmaceutical industry has often been criticized for not being sufficiently innovative. We think that our analysis indicates otherwise and perhaps even suggests that the best is yet to come as, owing to the length of time between project initiation and launch, new technologies such as high-throughput screening and the sequencing of the human genome may only be starting to have a major impact on drug approvals. . .

Now that's an optimistic point of view, I have to say. The genome certainly still has plenty of time to deliver, but you probably won't find too many other people saying in 2014 that HTS is only now starting to have an impact on drug approvals. My own take on this is that they're covering too wide a band of technologies with such statements, lumping together things that have come in at different times during this period and which would be expected to have differently-timed impacts on the rate of drug discovery. On the other hand, I would like this glass-half-full view to be correct, since it implies that things should be steadily improving in the business, and we could use it.

But the authors take pains to show, in the last part of their paper, that they're not putting down phenotypic drug discovery. In fact, they're calling for it to be strengthened as its own discipline, not (as they put it) treated as just a falling back to the older "chemocentric" methods of the 1980s and before:

Perhaps we are in a phase today similar to the one in the mid-1980s, when systems-based chemocentric drug discovery was largely replaced by target-based approaches. This allowed the field to greatly expand beyond the relatively limited number of scaffolds that had been studied for decades and to gain access to many more pharmacologically active compound classes, providing a boost to innovation. Now, with an increased chemical space, the time might be right to further broaden the target space and open up new avenues. This could well be achieved by investing in phenotypic screening using the compound libraries that have been established in the context of target-based approaches. We therefore consider phenotypic screening not as a neoclassical approach that reverts to a supposedly more successful systems-based method of the past, but instead as a logical evolution of the current target-based activities in drug discovery. Moreover, phenotypic screening is not just dependent on the use of many tools that have been established for target-based approaches; it also requires further technological advancements.

That seems to me to be right on target: we probably are in a period just like the mid-to-late 1980s. In that case, though, a promising new technology was taking over because it seemed to offer so much more. Today, it's more driven by disillusionment with the current methods - but that means, even more, that we have to dig in and come up with some new ones and make them work.

Comments (7) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

August 21, 2014

Fragonomics, Eh?


Posted by Derek

Edward Zartler ("Teddy Z" of the Practical Fragments blog) has a short piece in the latest ACS Medicinal Chemistry Letters on fragment-based drug discovery. He applies the term "fragonomics" to the field (more on this in a moment), and provides a really useful overview of how it should work.

One of his big points is that fragment work isn't so much about using smaller-than-usual molecules as it is about using molecules that make only good interactions with the target. It's just that smaller molecules are far more likely to achieve that - a larger one will have some really strong interactions, along with some things that actually hurt the binding. You can start with something large and hack pieces of it off, but that's often a difficult process (and you can't always recapitulate the binding mode, either). But if you have a smaller piece that only makes a positive interaction or two, then you can build out from that, tiptoeing around the various landmines as you go. That's the concept of "ligand efficiency", without using a single equation.
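
For anyone who does want the single equation: ligand efficiency is usually quoted as the binding free energy divided by the number of heavy (non-hydrogen) atoms, with a measured IC50 or Kd standing in for the thermodynamics. A quick sketch of the standard approximation - nothing Zartler-specific, and the example numbers are made up:

    import math

    def ligand_efficiency(ic50_molar, heavy_atoms, temp_k=298.15):
        """Binding free energy (kcal/mol) per heavy atom, using the common
        rough assumption that the measured IC50 tracks Kd."""
        R = 1.987e-3                                  # gas constant, kcal/(mol*K)
        delta_g = R * temp_k * math.log(ic50_molar)   # negative for real binders
        return -delta_g / heavy_atoms

    # A weak but tiny fragment beats a potent but bloated lead on this metric:
    print(ligand_efficiency(200e-6, 12))   # 200 uM, 12 heavy atoms -> ~0.42
    print(ligand_efficiency(10e-9, 38))    # 10 nM, 38 heavy atoms  -> ~0.29

That's the whole argument of his piece in two numbers: the fragment is getting more binding out of every atom it carries.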

He also emphasizes that having a simpler molecule to work on means that the SAR can be tested and expanded quickly, often without anyone hitting the lab bench at all. You can order things up from the vendors or raid your own screening collection for close analogs. This delays the entry of the medicinal chemists to the project, which (considering that their time is always in demand) is a feature to be happy about.

The article ends up by saying that "Fragonomics has won the field. . .The age of the medchemist is over; now is the time of the biophysicist." I don't know if that's quite the way to win friends and influence people, though. Medicinal chemists are rather sensitive to threats to their existence (with good reason), so my worry is that coming on like this will make chemists who haven't tried it leery of fragment-based drug design in general. I'm also not thrilled with "fragonomics" as a term (just as I'm not thrilled with most of the newly-coined "omics" terms). The word doesn't add anything; it's just a replacement for having to say "fragment-based drug discovery" or "FBDD" all the time. It's not that we don't need a replacement for the unwieldy phrase - it's just that I think that many people might (by now) be ready to dismiss anything that's had "omics" slapped on it. I wish I had something better to offer, but I'm coming up blank myself.

Comments (39) + TrackBacks (0) | Category: Drug Assays

August 19, 2014

Don't Optimize Your Plasma Protein Binding


Posted by Derek

Here's a very good review article in J. Med. Chem. on the topic of protein binding. For those outside the field, that's the phenomenon of drug compounds getting into the bloodstream and then sticking to one or more blood proteins. Human serum albumin (HSA) is a big player here - it's a very abundant blood protein that's practically honeycombed with binding sites - but there are several others. The authors (from Genentech) take on the disagreements about whether low plasma protein binding is a good property for drug development (and conversely, whether high protein binding is a warning flag). The short answer, according to the paper: neither one.

To further examine the trend of PPB for recently approved drugs, we compiled the available PPB data for drugs approved by the U.S. FDA from 2003 to 2013. Although the distribution pattern of PPB is similar to those of the previously marketed drugs, the recently approved drugs generally show even higher PPB than the previously marketed drugs (Figure 1). The PPB of 45% newly approved drugs is >95%, and the PPB of 24% is >99%. These data demonstrate that compounds with PPB > 99% can still be valuable drugs. Retrospectively, if we had posed an arbitrary cutoff value for the PPB in the drug discovery stage, we could have missed many valuable medicines in the past decade. We suggest that PPB is neither a good nor a bad property for a drug and should not be optimized in drug design.

That topic has come up around here a few times, as could be expected - it's a standard med-chem argument. And this isn't even the first time that a paper has come out warning people that trying to optimize on "free fraction" is a bad idea: see this 2010 one from Nature Reviews Drug Discovery.

But it's clearly worth repeating - there are a lot of people who get quite worked up about this number - in some cases, because they have funny-looking PK and are trying to explain it, or in some cases, just because it's a number and numbers are good, right?
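
Part of the reason people fixate on it is that the arithmetic looks dramatic at the high end of the scale. Free fraction is just one minus the bound fraction, so (a quick illustration, nothing more):

    # Going from 90% to 95% bound halves the free fraction; going from 99% to
    # 99.9% cuts it tenfold. That's why the top of the scale looks so scary on
    # paper - and why the percentage by itself, without potency and clearance
    # alongside it, tells you so little about whether a compound will work.
    for ppb in (90.0, 95.0, 99.0, 99.9):
        fu = 1.0 - ppb / 100.0      # fraction unbound
        print(f"{ppb:5.1f}% bound -> fraction unbound {fu:.4f}")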

Comments (14) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics

July 24, 2014

Phenotypic Assays in Cancer Drug Discovery


Posted by Derek

The topic of phenotypic screening has come up around here many times, as indeed it comes up very often in drug discovery. Give your compounds to cells or to animals and look for the effect you want: what could be simpler? Well, a lot of things could, as anyone who's actually done this sort of screening will be glad to tell you, but done right, it's a very powerful technique.

It's also true that a huge amount of industrial effort is going into cancer drug discovery, so you'd think that there would be a natural overlap between these: see if your compounds kill or slow cancer cells, or tumors in an animal, and you're on track, right? But there's a huge disconnect here, and that's the subject of a new paper in Nature Reviews Drug Discovery. (Full disclosure: one of the authors is a former colleague, and I had a chance to look over the manuscript while it was being prepared). Here's the hard part:

Among the factors contributing to the growing interest in phenotypic screening in drug discovery in general is the perception that, by avoiding oversimplified reductionist assumptions regarding molecular targets and instead focusing on functional effects, compounds that are discovered in phenotypic assays may be more likely to show clinical efficacy. However, cancer presents a challenge to this perception as the cell-based models that are typically used in cancer drug discovery are poor surrogates of the actual disease. The definitive test of both target hypotheses and phenotypic models can only be carried out in the clinic. The challenge of cancer drug discovery is to maximize the probability that drugs discovered by either biochemical or phenotypic methods will translate into clinical efficacy and improved disease control.

Good models in living systems, which are vital to any phenotypic drug discovery effort, are very much lacking in oncology. It's not that you can't get plenty of cancer cells to grow in a dish - they'll take over your other cell cultures if they get a chance. But those aren't the cells that you're going to be dealing with in vivo, not any more. Cancer cells tend to be genetically unstable, constantly throwing off mutations, and the in vitro lines are adapted to living in cell culture. That's true even if you implant them back into immune-compromised mice (the xenograft models). The number of drugs that have looked great in xenograft models and then failed in the real world is too large to count.

So doing pure phenotypic drug discovery against cancer is very difficult - you go down a lot of blind alleys, which is what phenotypic screening is supposed to prevent. The explosion of knowledge about cellular pathways in tumor cells has led to uncountable numbers of target-driven approaches instead, but (as everyone has had a chance to find out), it's rare to find a real-world cancer patient who can be helped by a single-target drug. Gleevec is the example that everyone thinks of, but the cruel truth is that it's the exceptional exception. All those newspaper articles ten years ago that heralded a wonderful era of targeted wonder drugs for cancer? They were wrong.

So what to do? This paper suggests that the answer is a hybrid approach:

For the purpose of this article, we consider ‘pure’ phenotypic screening to be a discovery process that identifies chemical entities that have desirable biological (phenotypic) effects on cells or organisms without having prior knowledge of their biochemical activity or mode of action against a specific molecular target or targets. However, in practice, many phenotypically driven discovery projects are not target-agnostic; conversely, effective target-based discovery relies heavily on phenotypic assays. Determining the causal relationships between target inhibition and phenotypic effects may well open up new and unexpected avenues of cancer biology.

In light of these considerations, we propose that in practice a considerable proportion of cancer drug discovery falls between pure PDD and TDD, in a category that we term ‘mechanism-informed phenotypic drug discovery’ (MIPDD). This category includes inhibitors of known or hypothesized molecular targets that are identified and/or optimized by assessing their effects on a therapeutically relevant phenotype, as well as drug candidates that are identified by their effect on a mechanistically defined phenotype or phenotypic marker and subsequently optimized for a specific target-engagement MOA.

I've heard these referred to as "directed phenotypic screens", and while challenging, it can be a very fruitful way to go. Balancing the two ways of working is the tricky part: you don't want to slack up on the model just so it'll give you results, if those results aren't going to be meaningful. And you don't want to be so dogmatic about your target ideas that you walk away from something that could be useful, but doesn't fit your scheme. If you can keep all these factors in line, you're a real drug discovery scientist, and no mistake.

How hard this is can be seen from the paper's Table 1, where they look over the oncology approvals since 1999, and classify them by what approaches were used for lead discovery and lead optimization. There's a pile of 21 kinase inhibitors (and eight other compounds) over in the box where both phases were driven by inhibition of a known target. And there are ten compounds whose origins were in straight phenotypic screening, with various paths forward after that. But the "mechanism-informed phenotypic screen" category is the shortest list of the three lead discovery approaches: seven compounds, optimized in various ways. (The authors are upfront about the difficulties of assembling this sort of overview - it can be hard to say just what really happened during discovery and development, and we don't have the data on the failures).

Of those 29 pure-target-based drugs, 18 were follow-ons to mechanisms that had already been developed. At this point, you'd expect to hear that the phenotypic assays, by contrast, delivered a lot more new mechanisms. But this isn't the case: 14 follow-ons versus five first-in-class. This really isn't what phenotypic screening is supposed to deliver (and has delivered in the past), and I agree with the paper that this shows how difficult it has been to do real phenotypic discovery in this field. The few assays that translate to the clinic tend to keep discovering the same sorts of things. (And once again, the analogy to antibacterials comes to mind, because that's exactly what happens if you do a straight phenotypic screen for antibacterials. You find the same old stuff. That field, too, has been moving toward hybrid target/phenotypic approaches).

The situation might be changing a bit. If you look at the drugs in the clinic (Phase II and Phase III), as opposed to the older ones that have made it all the way through, there is still a vast pile of target-driven ones (mostly kinase inhibitors). But you can find more examples of phenotypic candidates, and among them an unusually high proportion of outright no-mechanism-known compounds. Those are tricky to develop in this field:

In cases where the efficacy arises from the engagement of a cryptic target (or mechanism) other than the nominally identified one, there is potential for substantial downside. One of the driving rationales of targeted discovery in cancer is that patients can be selected by predictive biomarkers. Therefore, if the nominal target is not responsible for the actions of the drug, an incorrect diagnostic hypothesis may result in the selection of patients who will — at best — not derive benefit. For example, multiple clinical trials of the nominal RAF inhibitor sorafenib in melanoma showed no benefit, regardless of the BRAF mutation status. This is consistent with the evidence that the primary target and pharmacodynamic driver of efficacy for sorafenib is actually VEGFR2. The more recent clinical success of the bona fide BRAF inhibitor vemurafenib in melanoma demonstrates that the target hypothesis of BRAF for melanoma was valid.

So, if you're going to do this mechanism-informed phenotypic screening, just how do you go about it? High-content screening techniques are one approach: get as much data as possible about the effects of your compounds, both at the molecular and cellular level (the latter by imaging). Using better cell assays is crucial: make them as realistic as you can (three-dimensional culture, co-culture with other cell types, etc.), and go for cells that are as close to primary tissue as possible. None of this is easy, or cheap, but the engineer's triangle is always in effect ("Fast, Cheap, Good: Pick Any Two").

Comments (22) + TrackBacks (0) | Category: Cancer | Drug Assays | Drug Development

July 22, 2014

Put Them in Cells and Find Out


Posted by Derek

So, when you put some diverse small molecules into cellular assays, how many proteins are they really hitting? You may know a primary target or two that they're likely to interact with, or (if you're doing phenotypic screening), you may not have any idea at all. But how many proteins (or other targets) are there that bind small molecules at all?

This is a question that many people are interested in, but hard data to answer it are not easily obtained. There have been theoretical estimates via several techniques, but (understandably) not too much experimental evidence. Now comes this paper from Ben Cravatt's group, and it's one of the best attempts yet.

What they've done is to produce a library of compounds, via Ugi chemistry, containing both a photoaffinity handle and an alkyne (for later "click" tagging). They'd done something similar before, but the photoaffinity group in that case was a benzophenone, which is rather hefty. This time they used a diazirine, which is both small and the precursor to a very reactive carbene once it's irradiated. (My impression is that the diazirine is the first thing to try if you're doing photoaffinity work, for just those reasons). They made a small set of fairly diverse compounds (about 60), with no particular structural biases in mind, and set out to see what these things would label.

They treated PC-3 cells (human prostate-cancer derived) with each member of the library at 10 µM, then hit them with UV to do the photoaffinity reaction, labeled with a fluorescent tag via the alkyne, and fished for proteins. What they found was a pretty wide variety, all right, but not in the nonselective shotgun style. Most compounds showed distinct patterns of protein labeling, and most proteins picked out distinct SAR from the compound set. They picked out six members of the library for close study, and found that these labeled about 24 proteins (one compound only picked up one target, while the most promiscuous compound labeled nine). What's really interesting is that only about half of these were known to have any small-molecule ligands at all. There were proteins from a number of different classes, and some (9 out of 24) weren't even enzymes, but rather scaffolding and signaling proteins (which wouldn't be expected to have many small-molecule binding possibilities).
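
The underlying data structure here is just a compound-by-protein hit table, and the two tallies in that paragraph (proteins per compound, and compounds per protein) fall straight out of it. A toy version, with invented hits rather than the paper's actual data:

    from collections import Counter

    # Invented example: each probe compound maps to the proteins it labeled.
    hits = {
        "ugi_01": {"NAMPT"},
        "ugi_02": {"PTGR2", "VDAC1"},
        "ugi_03": {"PTGR2", "VDAC1", "NAMPT", "ALDH2"},
    }

    # Promiscuity per compound: one target only, up to several.
    for compound, proteins in hits.items():
        print(compound, len(proteins), "proteins labeled")

    # And the other direction: which proteins keep turning up across the library?
    per_protein = Counter(p for proteins in hits.values() for p in proteins)
    print(per_protein.most_common())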

A closer look at non-labeled versions of the probe compounds versus more highly purified proteins confirmed that the compounds really are binding as expected (in some cases, a bit better than the non-photoaffinity versions, in some cases worse). So even as small a probe as a diazirine is not silent, which is just what medicinal chemists would have anticipated. (Heck, even a single methyl or fluoro isn't always silent, and a good thing, too). But overall, what this study suggests is that most small molecules are going to hit a number of proteins (1 up to a dozen?) in any given cell with pretty good affinity. It also (encouragingly) suggests that there are more small-molecule binding sites than you'd think, with proteins that have not evolved for ligand responses still showing the ability to pick things up.

There was another interesting thing that turned up: while none of the Ugi compounds was a nonselective grab-everything compound, some of the proteins were. A subset of proteins tended to pick up a wide variety of the non-clickable probe compounds, and appear to be strong, promiscuous binders. Medicinal chemists already know a few of these things - CYP metabolizing enzymes, serum albumin, and so on. This post has some other suggestions. But there are plenty more of them out there, unguessable ones that we don't know about yet (in this case, PTGR and VDAC subtypes, along with NAMPT). There's a lot to find out.

Comments (7) + TrackBacks (0) | Category: Chemical Biology | Drug Assays

July 9, 2014

Outsourced Assays, Now a Cause For Wonder?


Posted by Derek

Here's a look at Emerald Biotherapeutics (a name that's unfortunately easy to confuse with several other former Emeralds in this space). They're engaged in their own drug research, but they also have lab services for sale, using a proprietary system that they say generates fast, reproducible assays.

On July 1 the company unveiled a service that lets other labs send it instructions for their experiments via the Web. Robots then complete the work. The idea is a variation on the cloud-computing model, in which companies rent computers by the hour from Amazon.com, Google, and Microsoft instead of buying and managing their own equipment. In this case, biotech startups could offload some of their basic tasks—counting cells one at a time or isolating proteins—freeing their researchers to work on more complex jobs and analyze results. To control the myriad lab machines, Emerald has developed its own computer language and management software. The company is charging clients $1 to $100 per experiment and has vowed to return results within a day.
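
For what it's worth, the customer-facing side of that model presumably looks something like the sketch below. This is entirely hypothetical - I haven't seen Emerald's interface, and the endpoint, field names and protocol name here are invented for illustration:

    import requests

    # Hypothetical cloud-lab submission; the URL and fields are made up,
    # not Emerald's actual API.
    experiment = {
        "protocol": "protein_isolation",
        "samples": ["plate_A01", "plate_A02"],
        "parameters": {"temperature_c": 37, "incubation_min": 30},
    }

    resp = requests.post("https://cloudlab.example.com/v1/experiments", json=experiment)
    run_id = resp.json()["run_id"]

    # Results come back within a day (so the pitch goes) as structured data,
    # not as a printout to paste into a notebook.
    results = requests.get(f"https://cloudlab.example.com/v1/experiments/{run_id}/results")
    print(results.json())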

The Bloomberg Businessweek piece profiling them does a reasonable job, but I can't tell if its author knows that there's already a good amount of outsourcing of this type. Emerald's system does indeed sound fast, though. But rarely does the quickness of an assay turn out to be the real bottleneck in any drug discovery effort, so I'm not sure how much of a selling point that is. The harder parts are the ones that can't be automated: figuring out what sort of assay to run, and troubleshooting it so that it can be run reliably on high-throughput machines, are not trivial processes, and they can take a lot of time and effort. Even more difficult is the step before any of that: figuring out what you're going to be assaying at all. What's your target? What are you screening for? What's the great idea behind the whole project? That stuff is never going to be automated at all, and it's the key to the whole game.

But when I read things like this, I wonder a bit:

While pursuing the antiviral therapy, Emerald began developing tools to work faster. Each piece of lab equipment, made by companies including Agilent Technologies (A) and Thermo Fisher Scientific (TMO), had its own often-rudimentary software. Emerald’s solution was to write management software that centralized control of all the machines, with consistent ways to specify what type of experiment to run, what order to mix the chemicals in, how long to heat something, and so on. “There are about 100 knobs you can turn with the software,” says Frezza. Crucially, Emerald can store all the information the machines collect in a single database, where scientists can analyze it. This was a major advance over the still common practice of pasting printed reports into lab notebooks.

Well, that may be common in some places, but in my own experience, that paste-the-printed-report stuff went out a long time ago. Talking up the ability to have all the assay data collected in one place sounds like something from about fifteen or twenty years ago, although the situation can be different for the small startups who would be using Emerald (or their competitors) for outsourced assay work. But I would still expect any CRO shop to provide something better than a bunch of paper printouts!

Emerald may well have something worth selling, and I wish them success with it. Reproducible assays with fast turnaround are always welcome. But this article's "Gosh everything's gone virtual now wow" take on it isn't quite in line with reality.

Comments (13) + TrackBacks (0) | Category: Drug Assays

June 25, 2014

Where's the Widest Variety of Chemical Matter?


Posted by Derek

A look through some of the medicinal chemistry literature this morning got me to thinking: does anyone have any idea of which drug target has the most different/diverse chemical matter that's been reported against it? I realize that different scaffolds are in the eye of the beholder, so it's going to be impossible to come up with any exact counts. But I think that all the sulfonamides that hit carbonic anhydrase, for example, should for this purpose be lumped together: that interaction with the zinc is crucial, and everything else follows after. Non-sulfonamide CA inhibitors would each form a new class for each new zinc-interacting motif, and any compounds that don't hit the zinc at all (are there any?) would add to the list, too. Then you have allosteric compounds, which are necessarily going to look different than active-site inhibitors.
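
One rough way to put numbers on this would be to pull the reported actives for each target (from ChEMBL or the like) and count distinct Bemis-Murcko frameworks - crude, since it will still lump or split some series in ways a chemist wouldn't, but it's a start. A sketch with RDKit, using a placeholder list of SMILES rather than a real download:

    from rdkit import Chem
    from rdkit.Chem.Scaffolds import MurckoScaffold

    # Placeholder actives - in practice these would come from ChEMBL for one target.
    actives = [
        "NS(=O)(=O)c1ccc(NC(C)=O)cc1",   # an arylsulfonamide
        "NS(=O)(=O)c1ccc2c(c1)CCCC2",    # another sulfonamide, different framework
        "c1ccc2[nH]ccc2c1",              # an indole, for contrast
    ]

    frameworks = set()
    for smi in actives:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            continue
        core = MurckoScaffold.GetScaffoldForMol(mol)
        frameworks.add(Chem.MolToSmiles(core))

    print(len(frameworks), "distinct Murcko frameworks")

Note how crude this really is: the Murcko framework throws away the sulfonamide entirely, which is exactly the part I just argued should define the class. Any real count needs a layer of chemist's judgment on top.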

My guess is that some of the nuclear receptors would turn out to win this competition. They can have large, flexible binding pockets that seem to recognize a variety of chemotypes. So maybe this question should be divided up a bit more:

1. What enzyme is known to have the widest chemical variety of active-site inhibitors?

2. Which GPCR has the widest chemical variety of agonists? Antagonists? (The antagonists are going to win this one, surely).

3. And the open-field question asked above: what drug target of any kind has had the widest variety of molecules reported to act on it, in any fashion?

I don't imagine that we'll come to any definitive answer to any of these, but some people may have interesting nominations.

Update: in response to a query in the comments, maybe we should exempt the drug-metabolizing enzymes from the competition, since their whole reason for living is to take on a wide variety of unknown chemical structures.

Comments (25) + TrackBacks (0) | Category: Chemical News | Drug Assays

June 19, 2014

Dark Biology And Small Molecules


Posted by Derek

Here's a discussion I got into the other day, in which I expressed some forceful opinions. I wanted to run it past a wider audience to see if I'm grounded in reality, or out on my own island (which has happened before).

Without getting into any details, we were talking about an area of potential drug research that has to do with transcriptional regulation. This one is clearly complicated - what part of transcription isn't complicated? But it's known that you can get things to happen by using things like epigenetic tool compounds (bromodomain ligands, HDAC inhibitors, methyltransferase inhibitors) and nuclear receptor ligands. None of these give you everything you want to see, by any means, but you do see some effect.

My take on this was that an effort to follow up with more epigenetic compounds and nuclear receptor ligands might well be a case of the classic "looking under the lamp-post because that's where the light is" syndrome. We don't have many small-molecule handles for affecting transcription, went my reasoning, and although such things as bromodomain binding, HDAC inhibition, and nuclear receptor signaling are wide-ranging, there's surely a lot more that the compounds in these spaces don't cover. In fact, given the wide range of these mechanisms, seeing a little tickling of any given transcriptional mechanism is about what I would expect from almost any of them, applied to almost anything. But that, to my mind, didn't necessarily mean that it was a lead worth following up.

My recommendation was for a phenotypic screen, if a good one could be worked up. There must be plenty of stuff going on with this system that we don't have any idea about, went my thinking. In the same way that the matter we can see through a telescope is only a tiny fraction of what appears to be really out there in the universe, I think that there's a vast amount of "dark biology" that we don't know much about. And the overwhelming majority of it has to be considered dark if we only consider the parts that we can light up with small molecules. For something that has to involve a huge array of protein-protein interactions, protein-nucleic acid interactions, and who knows what ancillary enzymes and binding sites, I wondered, what are the odds that the things that we happen to know how to do with small molecules are the real answer?

So if you're going to dive into such waters (and many of you out there might be swimming around in them right now), by all means test whatever epigenetic and nuclear receptor compounds you might have around. Maybe you'll get a strong response. But if it all comes back as a little bit of this and a tiny bit of that, I'd say that these are unlikely to be convertible into robust drug mechanisms - the odds are that if there even is a robust drug mechanism out there, you haven't hit it yet, and that it will announce itself a bit more clearly if you manage to. A well-designed phenotypic screen might well be the best way to find such things, always keeping in mind that a badly designed phenotypic screen is the tar pit itself, the worst of both worlds.

So, am I too gloomy? Too jaded? Or simply a well-meaning realist? Thoughts welcome.

Comments (28) + TrackBacks (0) | Category: Drug Assays

June 13, 2014

Med-Chem, Automated?


Posted by Derek

See what you think about this PDF: Cyclofluidics is advertising the "Robot Medicinal Chemist". It's an integrated microfluidics synthesis platform and assay/screening module, with software to decide what the next round of analogs should be:

Potential lead molecules are synthesised, purified and screened in fast serial mode, incorporating activity data from each compound as it is generated before selecting the next compound to make.

To ensure data quality, each compound is purified by integrated high pressure liquid chromatography (HPLC), its identity confirmed by mass spectrometry and the concentration entering the assay determined in real time by evaporative light scattering detection (ELSD). The compound's IC50 is then measured in an on-line biochemical assay and this result fed into the design software before the algorithm selects the next compound to make – thus generating structure-activity relationship data. The system is designed to use interchangeable design algorithms, assay formats and chemistries and at any stage a medicinal chemist can intervene in order to adjust the design strategy.
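
Stripped of the flow chemistry and the analytics, the loop being described is easy enough to write down. Here's a bare-bones sketch - the function names are placeholders standing in for the synthesis, assay and design modules, not anything from Cyclofluidics itself:

    # Generic synthesize-assay-design loop; make_compound, measure_ic50 and
    # pick_next are stand-ins for the platform's hardware and design algorithm.
    def closed_loop_sar(candidate_blocks, pick_next, make_compound, measure_ic50,
                        max_rounds=20):
        sar = {}                                   # building block -> IC50 (M)
        remaining = list(candidate_blocks)
        for _ in range(max_rounds):
            choice = pick_next(sar, remaining)     # design step sees all data so far
            if choice is None:
                break
            compound = make_compound(choice)       # synthesis, purification, QC
            sar[choice] = measure_ic50(compound)   # on-line biochemical assay
            remaining.remove(choice)
        return sar

The interesting part, of course, is what goes into pick_next - that's where the interchangeable design algorithms (or the intervening medicinal chemist) come in.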

I can see where this might work, but only in special cases. The chemistry part would seem to require a "core with substituents" approach, where a common intermediate gets various things hung off of it. (That's how a lot of medicinal chemistry gets done anyway). Flow chemistry has improved to where many reactions would be possible, but each ne