Corante

About this Author
[Photo: College chemistry, 1983]

[Photo: Derek Lowe, the 2002 model]

[Photo: After 10 years of blogging. . .]

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Category Archives

June 23, 2014

The Virtual Clinical Trial: Not Quite Around the Corner

Posted by Derek

Here's one of those "Drug Discovery of. . .the. . .Future-ure-ure-ure" articles in the popular press. (I need a reverb chamber to make that work properly). At The Atlantic, they're talking with "medical futurists" and coming up with this:

The idea is to combine big data and computer simulations—the kind an engineer might use to make a virtual prototype of a new kind of airplane—to figure out not just what's wrong with you but to predict which course of treatment is best for you. That's the focus of Dassault Systèmes, a French software company that's using broad datasets to create cell-level simulations for all different kinds of patients. In other words, by modeling what has happened to patients like you in previous cases, doctors can better understand what might happen if they try certain treatments for you—taking into consideration your age, your weight, your gender, your blood type, your race, your symptoms, any number of other biomarkers. And we're talking about a level of precision that goes way beyond textbooks and case studies.

I'm very much of two minds about this sort of thing. On the one hand, the people at Dassault are not fools. They see an opportunity here, and they think that they might have a realistic chance at selling something useful. And it's absolutely true that this is, broadly, the direction in which medicine is heading. As we learn more about biomarkers and individual biochemistry, we will indeed be trying to zero in on single-patient variations.

But on that ever-present other hand, I don't think that you want to make anyone think that this is just around the corner, because it's not. It's wildly difficult to do this sort of thing, as many have discovered at great expense, and our level of ignorance about human biochemistry is a constant problem. And while tailoring individual patients' therapies with known drugs is hard enough, it gets really tricky when you talk about evaluating new drugs in the first place:

Charlès and his colleagues believe that a shift to virtual clinical trials—that is, testing new medicines and devices using computer models before or instead of trials in human patients—could make new treatments available more quickly and cheaply. "A new drug, a successful drug, takes 10 to 12 years to develop and over $1 billion in expenses," said Max Carnecchia, president of the software company Accelrys, which Dassault Systèmes recently acquired. "But when it is approved by FDA or other government bodies, typically less than 50 percent of patients respond to that therapy or drug." No treatment is one-size-fits-all, so why spend all that money on a single approach?

Carnecchia calls the shift toward algorithmic clinical trials a "revolution in drug discovery" that will allow for many quick and low-cost simulations based on an endless number of individual cellular models. "Those models start to inform and direct and focus the kinds of clinical trials that have historically been the basis for drug discovery," Carnecchia told me. "There's the benefit to drug companies from reduction of cost, but more importantly being able to get these therapies out into the market—whether that's saving lives or just improving human health—in such a way where you start to know ahead of time whether that patient will actually respond to that therapy."

Speed the day. The cost of clinical trials, coupled with their low success rate, is eating us alive in this business (and it's getting worse every year). This is just the sort of thing that could rescue us from the walls that are closing in more tightly all the time. But this talk of shifts and revolutions makes it sound as if this sort of thing is happening right now, which it isn't. No such simulated clinical trial, one that could serve as the basis for a drug approval, is anywhere near even being proposed. How long before one is, then? If things go really swimmingly, I'd say 20 to 25 years from now, personally, but I'd be glad to hear other estimates.

To be fair, the article does go on to mention something like this, but it just says that "it may be a while" before said revolution happens. And you get the impression that what's most needed is some sort of "cultural shift in medicine toward openness and resource sharing". I don't know. . .I find that when people call for big cultural shifts, they're sometimes diverting attention (even their own attention) from the harder parts of the problem under discussion. Gosh, we'd have this going in no time if people would just open up and change their old-fashioned ways! But in this case, I still don't see that as the rate-limiting step at all. Pouring on the openness and sharing probably wouldn't hurt a bit in the quest for understanding human drug responses and individual toxicology, but it's not going to suddenly open up any blocked-up floodgates, either. We don't know enough. Pooling our current ignorance can only take us so far.

Remember there are hundreds of billions of dollars waiting to be picked up off the ground by anyone who can do these things. It's not like there are no incentives to find ways to make clinical trials faster and cheaper. Anything that gives the impression that there's this one factor (lack of cooperation, too much regulation, Evil Pharma Executives, what have you) holding us back from the new era, well. . .that just might be an oversimplified view of the situation.

Comments (15) + TrackBacks (0) | Category: Clinical Trials | In Silico | Regulatory Affairs | Toxicology

June 9, 2014

Hosed-Up X-Ray Structures: A Big Problem

Posted by Derek

X-ray crystallography is great stuff, no doubt about it. But it's not magic. It takes substantial human input to give a useful structure of a ligand bound to a protein - there are decisions to be made and differences to be split. It's important to emphasize, for those of us who are not crystallographers, that unless you have resolution down below 1Å - and I'll bet you don't - then your X-ray structures are not quite "structures"; they're models. A paper several years ago emphasized these factors for chemists outside the field.

About ten years ago, I wrote about this paper, which suggested that many ligand-bound structures seemed to have strain energy in them that wouldn't have been predicted. One interpretation is that there's more to ligand (and binding site) reorganization than people tend to realize, and that ligands don't always bind in their lowest-energy conformations. And while I still think that's true, the situation is complicated by another problem that's become more apparent over the years: many reported X-ray structures for ligand-bound proteins are just messed up.
[Figure: ligand models from PDB entry 3qad (top) and its revision 3rzf (middle)]
Here's an editorial in ACS Medicinal Chemistry Letters that shows how bad the problem may well be. Reviews of the crystallographic databases have suggested that there are plenty of poorly refined structures hiding in there. But I didn't realize that they were as poorly refined as some of these. Take a look at the phosphate in 1xqd, and note how squashed-out those oxygens are around the first phosphorus. Or try the olefin in 4g93, which has been yanked 90 degrees out of plane. It's bad that there are such ridiculous structures in the literature, but the larger number of semi-plausible (but still wrong) structures is even worse.

Those structures at the left illustrate what's going on. The top one is an old PDB structure, 3qad, for an IKK inhibitor. It's a mess. Note that there's a tetrahedralish aromatic carbon (not happening), and a piperazine in a boat conformation (only slightly less unlikely). After this was pointed out, the structure was revised to the middle version (3rzf), but that one still has some odd features - those two aromatic groups are flat-on in the same plane, and the amine between them and the next aryl is rather odd, too. Might be right, might be wrong - who's to know?

The most recent comprehensive look (from 2012) suggests that about 25% of the reported ligand-bound structures are mangled to the point of being misleading. This new editorial goes on to mention some computational tools that could help to keep this from happening, such as this one. If we're all going to draw conclusions from these things (and that's what they're there for, right?) we'd be better off using the best ones we can.
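
For what it's worth, a crude version of this kind of sanity check is easy to sketch with RDKit: take the deposited ligand geometry, compute its apparent MMFF strain relative to the nearest local minimum, and flag anything wildly high. This is only an illustration (the file name is hypothetical, and gas-phase minimization ignores the protein entirely), not one of the validation tools the editorial has in mind:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Hypothetical file holding a ligand's coordinates as deposited in the PDB
mol = Chem.MolFromMolFile("ligand_as_deposited.sdf", removeHs=False)
mol = Chem.AddHs(mol, addCoords=True)

props = AllChem.MMFFGetMoleculeProperties(mol)
ff = AllChem.MMFFGetMoleculeForceField(mol, props)

e_deposited = ff.CalcEnergy()  # MMFF energy of the pose as modeled
ff.Minimize()                  # relax to the nearest local minimum
e_relaxed = ff.CalcEnergy()

# Tens of kcal/mol of apparent strain in a small ligand is a red flag
# for a mangled model (squashed phosphates, out-of-plane olefins, etc.)
print(f"Apparent ligand strain: {e_deposited - e_relaxed:.1f} kcal/mol")
```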

Comments (20) + TrackBacks (0) | Category: Drug Assays | In Silico

June 4, 2014

Predicting New Targets - Another Approach

Posted by Derek

So you make a new chemical structure as part of a drug research program. What's it going to hit when it goes into an animal?

That question is a good indicator of the divide between the general public and actual chemists and pharmacologists. People without any med-chem background tend to think that we can predict these things, and people with it know that we can't predict much at all. Even just predicting activity at the actual desired target is no joke, and guessing what other targets a given compound might hit is, well, usually just guessing. We get surprised all the time.

That hasn't been for lack of trying, of course. Here's an effort from a few years ago on this exact question, and a team from Novartis has just published another approach. It builds on some earlier work of theirs (HTS fingerprints, HTSFP) that tries to classify compounds according to similar fingerprints of biological activity in suites of assays, rather than by their structures, and this latest one is called HTSFP-TID (target ID, and I think the acronym is getting a bit overloaded at that point).

We apply HTSFP-TID to make predictions for 1,357 natural products (NPs) and 1,416 experimental small molecules and marketed drugs (hereafter generally referred to as drugs). Our large-scale target prediction enables us to detect differences in the protein classes predicted for the two data sets, reveal target classes that so far have been underrepresented in target elucidation efforts, and devise strategies for a more effective targeting of the druggable genome. Our results show that even for highly investigated compounds such as marketed drugs, HTSFP-TID provides fresh hypotheses that were previously not pursued because they were not obvious based on the chemical structure of a molecule or against human intuition.

They have up to 230 or so assays to pick from, although it's for sure that none of the compounds have been through all of them. They required that any given compound have at least 50 different assays to its name, though (and these were dealt with as standard deviations off the mean, to keep things comparable). And what they found shows some interesting (and believable) discrepancies between the two sets of compounds. The natural product set gave mostly predictions for enzyme targets (70%), half of them being kinases. Proteases were about 15% of the target predictions, and only 4% were predicted GPCR targets. The drug-like set also predicted a lot of kinase interactions (44%), and this from a set where only 20% of the compounds were known to hit any kinases before. But it had only 5% protease target predictions, as opposed to 23% GPCR target predictions.
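
As a rough sketch of how that kind of comparison works (my own reconstruction, not the Novartis code): each compound becomes a vector of per-assay z-scores, with gaps where it was never tested, and two fingerprints are compared only over the assays they share:

```python
import numpy as np

def htsfp_similarity(fp_a, fp_b, min_shared=50):
    """Pearson correlation between two HTS activity fingerprints.
    fp_a, fp_b: 1-D arrays of per-assay z-scores, NaN where untested.
    Returns None if the compounds share too few assays to compare."""
    shared = ~np.isnan(fp_a) & ~np.isnan(fp_b)
    if shared.sum() < min_shared:
        return None
    return float(np.corrcoef(fp_a[shared], fp_b[shared])[0, 1])
```

Target prediction then amounts to ranking the annotated compounds by similarity to the query compound and transferring their known targets as hypotheses.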

The group took a subset of compounds and ran them through new assays to see how the predictions came out, and the results weren't bad - overall, about 73% of the predictions were borne out by experiment. The kinase predictions, especially, seemed fairly accurate, although the GPCR calls were less so. They identified several new modes of action for existing compounds (a few of which they later discovered buried in the literature). They also tried a set of predictions based on chemical descriptors (the other standard approach), but found a lower hit rate. Interestingly, though, the two methods tended to give orthogonal predictions, which suggests that you might want to run things both ways if you care enough. Such efforts would seem particularly useful as you push into weirdo chemical or biological space, where we'll take whatever guidance we can get.

Novartis has 1.8 million compounds to work with, and plenty of assay data. It would be worth knowing what some other large collections would yield with the same algorithms: if you used (say) Merck's in-house data as a training set, and then applied it to all the compounds in the CHEMBL database, how similar would the set of predictions for them be? I'd very much like for someone to do something like this (and publish the results), but we'll see if that happens or not.

Comments (11) + TrackBacks (0) | Category: Drug Assays | In Silico

April 4, 2014

Ancient Modeling

Posted by Derek

I really got a kick out of this picture that Wavefunction put up on Twitter last night. It's from a 1981 article in Fortune, and you'll just have to see the quality of the computer graphics to really appreciate it.

That sort of thing has hurt computer-aided drug design a vast amount over the years. It's safe to say that in 1981, Merck scientists did not (as the article asserts) "design drugs and check out their properties without leaving their consoles". It's 2014 and we can't do it like that yet. Whoever wrote that article, though, picked those ideas up from the people at Merck, with their fuzzy black-and-white monitor shots of DNA from three angles. (An old Evans and Sutherland terminal?) And who knows, some of the Merck folks may have even believed that they were close to doing it.

But computational power, for the most part, only helps you out when you already know how to calculate something. Then it does it for you faster. And when people are impressed (as they should be) with all that processing power can do for us now, from smart phones on up, they should still realize that these things are examples of fast, smooth, well-optimized versions of things that we know how to calculate. You could write down everything that's going on inside a smart phone with pencil and paper, and show exactly what it's working out when it displays this pixel here, that pixel there, this call to that subroutine, which calculates the value for that parameter over there as the screen responds to the presence of your finger, and so on. It would be wildly tedious, but you could do it, given time. Someone, after all, had to program all that stuff, and programming steps can be written down.

The programs that drove those old DNA pictures could be written down, too, of course, and in a lot less space. But while the values for which pixels to light up on the CRT display were calculated exactly, the calculations behind those were (and are) a different matter. A very precise-looking picture can be drawn and animated of an animal that does not exist, and there are a lot of ways to draw animals that do not exist. The horse on your screen might look exact in every detail, except with a paisley hide and purple hooves (my daughter would gladly pay to ride one). Or it might have a platypus bill instead of a muzzle. Or look just like a horse from outside, but actually be filled with helium, because your program doesn't know how to handle horse innards. You get the idea.

The same for DNA, or a protein target. In 1981, figuring out exactly what happened as a transcription factor approached a section of DNA was not possible. Not to the degree that a drug designer would need. The changing conformation of the protein as it approaches the electrostatic field of the charged phosphate residues, what to do with the water molecules between the two as they come closer, the first binding event (what is it?) between the transcription factor and the double helix, leading to a cascade of tradeoffs between entropy and enthalpy as the two biomolecules adjust to each other in an intricate tandem dance down to a lower energy state. . .that stuff is hard. It's still hard. We don't know how to model some of those things well enough, and the (as yet unavoidable) errors and uncertainties in each step accumulate the further you go along. We're much better at it than we used to be, and getting better all the time, but there's a good way to go yet.

But while all that's true, I'm almost certainly reading too much into that old picture. The folks at Merck probably just put one of their more impressive-looking things up on the screen for the Fortune reporter, and hey, everyone's heard of DNA. I really don't think that anyone at Merck was targeting protein-DNA interactions 33 years ago (and if they were, they splintered their lance against that one, big-time). But the reporter came away with the impression that the age of computer-designed drugs was at hand, and in the years since, plenty of other people have seen progressively snazzier graphics and thought the same thing. And it's hurt the cause of modeling for them to think that, because the higher the expectations get, the harder it is to come back to reality.

Update: I had this originally as coming from a Forbes article; it was actually in Fortune.

Comments (22) + TrackBacks (0) | Category: Drug Industry History | In Silico

March 17, 2014

Predicting What Group to Put On Next

Posted by Derek

Here's a new paper in J. Med. Chem. on software that tries to implement matched-molecular-pair type analysis. The goal is a recommendation - what R group should I put on next?

Now, any such approach is going to have to deal with this paper from Abbott in 2008. In that one, an analysis of 84,000 compounds across 30 targets strongly suggested that most R-group replacements had, on average, very little effect on potency. That's not to say that they don't or can't affect binding, far from it - just that over a large series, those effects are pretty much a normal distribution centered on zero. There are also analyses that claim the same thing for adding methyl groups - to be sure, there are many dramatic "magic methyl" enhancement examples, but are they balanced out, on the whole, by a similar number of dramatic drop-offs, along with a larger cohort of examples where not much happened at all?
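
In matched-pair terms, the claim is that if you histogram the potency shift for a given R-group swap across many different scaffolds, the distribution comes out roughly centered on zero. A toy version of that tally (the data layout is invented for illustration):

```python
import numpy as np

def r_group_swap_deltas(pairs):
    """pairs: (pIC50_with_R1, pIC50_with_R2) tuples, one per matched
    molecular pair sharing the same core, measured in the same assay."""
    deltas = np.array([p2 - p1 for p1, p2 in pairs])
    # Per the Abbott analysis, the mean shift for most swaps sits near
    # zero, even though individual pairs can move potency a great deal.
    return deltas.mean(), deltas.std()
```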

To their credit, the authors of this new paper reference these others right up front. The answer to these earlier papers, most likely, is that when you average across all sorts of binding sites, you're going to see all sorts of effects. For this to work, you've got a far better chance of getting something useful if you're working inside the same target or assay. Here we get to the nuts and bolts:

The predictive method proposed, Matsy, relies on the hypothesis that a particular matched series tends to have a preferred activity order, for example, that not all six possible orders of [Br, Cl, F] are equally frequent. . .Although a rather straightforward idea, we have been unable to find any quantitative analysis of this question in the literature.

So they go on to provide one, with halogen substituents. There's not much to be found comparing pairs of halogen compounds head to head, but when you go to the longer series, you find that the order Br > Cl > F > H is by far the most common (and that appears to be just a good old grease effect). The next most common order just swaps the bromine and chlorine, but the third most common is the original order, in reverse. The other end of the distribution is interesting, too - for example, the least common order is Br > H > F > Cl, which is believable, since it doesn't make much sense along any property axis.
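
Counting those orders is simple enough. Here's a minimal sketch of the tally (the data structure is invented for illustration):

```python
from collections import Counter

SUBSTITUENTS = ["Br", "Cl", "F", "H"]

def activity_order_counts(series_list):
    """series_list: one dict per matched series, mapping substituent to
    potency (e.g. pIC50) against the same target, such as
    {"Br": 7.2, "Cl": 6.9, "F": 6.3, "H": 5.8}."""
    counts = Counter()
    for series in series_list:
        if all(s in series for s in SUBSTITUENTS):
            order = tuple(sorted(SUBSTITUENTS, key=series.get, reverse=True))
            counts[order] += 1
    return counts

# If the paper's finding holds in your data, ("Br", "Cl", "F", "H")
# dominates the tally and ("Br", "H", "F", "Cl") sits near the bottom.
```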

They go on to do the same sorts of analyses for other matched series, and the question then becomes, if you have such a matched series in your own SAR, what does that order tell you about what to make next? The idea of "SAR transfer" has been explored, and older readers will remember the Topliss tree for picking aromatic substituents (do younger ones?)

The Matsy algorithm may be considered a formalism of aspects of how a medicinal chemist works in practice. Observing a particular trend, a chemist considers what to make next on the basis of chemical intuition, experience with related compounds or targets, and ease of synthesis. The structures suggested by Matsy preserve the core features of molecules while recommending small modifications, a process very much in line with the type of functional group replacement that is common in lead optimization projects. This is in contrast to recommendations from fingerprint-based similarity comparisons where the structural similarity is not always straightforward to rationalize and near-neighbors may look unnatural to a medicinal chemist.

And there's a key point: prediction and recommendation programs walk a fine line, between "There's no way I'm going out of my way to make that" and "I didn't need this program to tell me this". Sometimes there's hardly any space between those two territories at all. Where do this program's recommendations fall? As companies try this out in-house, some people will be finding out. . .

Comments (13) + TrackBacks (0) | Category: Drug Development | In Silico

February 28, 2014

Computational Nirvana

Posted by Derek

Wavefunction has a post about this paper from J. Med. Chem. on a series of possible antitrypanosomals from the Broad Institute's compound collection. It's a good illustration of the power of internal hydrogen bonds - in this case, one series of isomers can make the bond, but that ties up their polar groups, making them less soluble but more cell-permeable. The isomer that doesn't form the internal H-bond is more polar and more soluble, but less able to get into cells. Edit - fixed this part.

So if your compound has too many polar functionalities, an internal hydrogen bond can be just the thing to bring on better activity, because it tones things down a bit. And there are always the conformational effects to keep in mind. Tying a molecule up like that is the same as any other ring-forming gambit in medicinal chemistry: death or glory. Rarely is a strong conformational restriction silent in the SAR - usually, you either hit the magic conformer, or you move it forever out of reach.
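
As an aside, a very crude geometric check for whether a structure can even make such an internal hydrogen bond can be sketched with RDKit - one embedded conformer, a plain distance cutoff, no angle criteria, so treat it as a rough flag and nothing more:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

def maybe_internal_hbond(smiles, max_dist=2.5):
    """Flag a possible intramolecular H-bond: any N-H or O-H hydrogen
    within max_dist angstroms of an N/O acceptor at least three bonds
    away, in a single MMFF-relaxed conformer."""
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    if AllChem.EmbedMolecule(mol, randomSeed=42) != 0:
        return False  # 3D embedding failed
    AllChem.MMFFOptimizeMolecule(mol)
    dmat = Chem.Get3DDistanceMatrix(mol)
    for h in mol.GetAtoms():
        if h.GetAtomicNum() != 1:
            continue
        donor = h.GetNeighbors()[0]
        if donor.GetSymbol() not in ("N", "O"):
            continue  # only polar donor hydrogens count
        for acc in mol.GetAtoms():
            if acc.GetSymbol() not in ("N", "O") or acc.GetIdx() == donor.GetIdx():
                continue
            # require at least three bonds between donor and acceptor, so
            # close through-bond neighbors don't trigger the flag
            path = Chem.GetShortestPath(mol, donor.GetIdx(), acc.GetIdx())
            if len(path) >= 4 and dmat[h.GetIdx(), acc.GetIdx()] < max_dist:
                return True
    return False
```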

I particularly noticed Wavefunction's line near the close of his post: "If nothing else they provide a few more valuable data points on the way to prediction nirvana." I know what he's talking about, and I think he's far from the only computational chemist with eschatological leanings. Eventually, you'd think, we'd understand enough about all the things we're trying to model for the models to, well, work. And yes, I know that there are models that work right now, but you don't know that they're going to work until you've messed with them a while, and there are other models that don't work but look equally plausible at first, etc., and very much etc. "Prediction nirvana" would be the state where you have an idea for a new structure, you enter it into your computational model, and it immediately tells you the right answer, every single time. In theory, I think this is a reachable state of affairs. In practice, it is not yet implemented.

And remember, people have spotted glows on that horizon before and proclaimed the imminent dawn. The late 1980s were such a time, but experiences like those tend to make people more reluctant to immanentize the eschaton, or at least not where anyone can hear. But we are learning more about enthalpic and entropic interactions, conformations, hydrogen bonds, nonpolar interactions, all those things that go into computational prediction of structure and binding interactions. And if we continue to learn more, as seems likely, won't there come a point when we've learned what we need to know? If not true computational nirvana, then surely (shrink those epsilons and deltas) as arbitrarily close an approach as we like?

Comments (8) + TrackBacks (0) | Category: In Silico

February 19, 2014

Ligand Efficiency: A Response to Shultz

Posted by Derek

I'd like to throw a few more logs on the ligand efficiency fire. Chuck Reynolds of J&J (author of several papers on the subject, as aficionados know) left a comment to an earlier post that I think needs some wider exposure. I've added links to the references:

An article by Shultz was highlighted earlier in this blog and is mentioned again in this post on a recent review of Ligand Efficiency. Shultz’s criticism of LE, and indeed of drug discovery “metrics” in general, hinges on: (1) a discussion about the psychology of various metrics on scientists' thinking, (2) an assertion that the original definition of ligand efficiency, DeltaG/HA, is somehow flawed mathematically, and (3) counter examples where large ligands have been successfully brought to the clinic.

I will abstain from addressing the first point. With regard to the second, the argument that there is some mathematical rule that precludes dividing a logarithmic quantity by an integer is wrong. LE is simply a ratio of potency per atom. The fact that a log is involved in computing DeltaG, pKi, etc. is immaterial. He makes a more credible point that LE itself is on average non-linear with respect to large differences in HA count. But this is hardly a new observation, since exactly this trend has been discussed in detail by previous published studies (here, here, here, and here). It is, of course, true that if one goes to very low numbers of heavy atoms the classical definition of LE gets large, but as a practical matter medicinal chemists have little interest in extremely small fragments, and the mathematical catastrophe he warns us against only occurs when the number of heavy atoms goes to zero (with a zero in the denominator it makes no difference if there is a log in the numerator). Why would HA=0 ever be relevant to a med. chem. program? In any case a figure essentially equivalent to the prominently featured Figure 1a in the Shultz manuscript appears in all of the four papers listed above. You just need to know they exist.

With regard to the third argument, yes of course there are examples of drugs that defy one or more of the common guidelines (e.g MW). This seems to be a general problem of the community taking metrics and somehow turning them into “rules.” They are just helpful, hopefully, guideposts to be used as the situation and an organization’s appetite for risk dictate. One can only throw the concept of ligand efficiency out the window completely if you disagree with the general principle that it is better to design ligands where the atoms all, as much as possible, contribute to that molecule being a drug (e.g. potency, solubility, transport, tox, etc.). The fact that there are multiple LE schemes in the literature is just a natural consequence of ongoing efforts to refine, improve, and better apply a concept that most would agree is fundamental to successful drug discovery.

Well, as far as the math goes, dividing a log by an integer is not any sort of invalid operation. I believe that [log(x)]/y is the same as saying log(x to the one over y). That is, log(16) divided by 2 is the same as the log of 16 to the one-half power, or log(4). They both come out to about 0.602. Taking a BEI calculation as a real chemistry example, a one-micromolar compound that weighs 250 would, by the usual definition, -log(Ki)/(MW/1000), have a BEI of 6/0.25, or 24. By the above rule, if you want to keep everything inside the log function, then instead of taking -log(0.000001) and dividing by 0.25, that one-micromolar figure should be raised to the fourth power (one over 0.25), and then you take the log of the result (and flip the sign). One-millionth to the fourth power is one times ten to the minus twenty-fourth, so that gives you. . .24. No problem.
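
That arithmetic is easy to double-check in a few lines of Python:

```python
import math

ki = 1e-6            # one micromolar, in molar units
mw_kda = 250 / 1000  # molecular weight in kilodaltons

# Log outside: the usual BEI definition, -log(Ki) / (MW/1000)
bei = -math.log10(ki) / mw_kda                 # 6 / 0.25 = 24
# Log inside: raise Ki to the (1 / 0.25) = 4th power first
bei_inside = -math.log10(ki ** (1 / mw_kda))   # -log10(1e-24) = 24
print(bei, bei_inside)  # 24.0 24.0, up to floating-point noise
```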

Shultz's objection that LE is not linear per heavy atom, though, is certainly valid, as Reynolds notes above as well. You have to realize this and bear it in mind while you're thinking about the topic. I think that one of the biggest problems with these metrics - and here's a point that both Reynolds and Shultz can agree on, I'll bet - is that they're tossed around too freely by people who would like to use them as a substitute for thought in the first place.

Comments (19) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico

February 14, 2014

"It Is Not Hard to Peddle Incoherent Math to Biologists"

Posted by Derek

Here's a nasty fight going on in molecular biology/bioinformatics. Lior Pachter of Berkeley describes some severe objections he has to published work from the lab of Manolis Kellis at MIT. (His two previous posts on these issues are here and here). I'm going to use a phrase that Pachter hears too often and say that I don't have the math to address those two earlier posts. But the latest one wraps things up in a form that everyone can understand. After describing what does look like a severe error in one of the Kellis group's conference presentations, which Pachter included in a review of the work, he says that:

. . .(they) spun the bad news they had received as “resulting from combinatorial connectivity patterns prevalent in larger network structures.” They then added that “…this combinatorial clustering effect brings into question the current definition of network motif” and proposed that “additional statistics…might well be suited to identify larger meaningful networks.” This is a lot like someone claiming to discover a bacteria whose DNA is arsenic-based and upon being told by others that the “discovery” is incorrect – in fact, that very bacteria seeks out phosphorous – responding that this is “really helpful” and that it “raises lots of new interesting open questions” about how arsenate gets into cells. Chutzpah. When you discover your work is flawed, the correct response is to retract it.

I don’t think people read papers very carefully. . .

He goes on to say:

I have to admit that after the Grochow-Kellis paper I was a bit skeptical of Kellis’ work. Not because of the paper itself (everyone makes mistakes), but because of the way he responded to my review. So a year and a half ago, when Manolis Kellis published a paper in an area I care about and am involved in, I may have had a negative prior. The paper was Luke Ward and Manolis Kellis “Evidence for Abundant and Purifying Selection in Humans for Recently Acquired Regulatory Functions”, Science 337 (2012) . Having been involved with the ENCODE pilot, where I contributed to the multiple alignment sub-project, I was curious wha