

In the Pipeline


September 2, 2011

How Many New Drug Targets Aren't Even Real?


Posted by Derek

So, are half the interesting new results in the medical/biology/med-chem literature impossible to reproduce? I linked earlier this year to an informal estimate from venture capitalist Bruce Booth, who said that this was his (and others') experience in the business. Now comes a new study from Bayer Pharmaceuticals that helps put some backing behind those numbers.

To mitigate some of the risks of such investments ultimately being wasted, most pharmaceutical companies run in-house target validation programmes. However, validation projects that were started in our company based on exciting published data have often resulted in disillusionment when key data could not be reproduced. Talking to scientists, both in academia and in industry, there seems to be a general impression that many results that are published are hard to reproduce. However, there is an imbalance between this apparently widespread impression and its public recognition. . .

Yes, indeed. The authors looked back at the last four years worth of oncology, women's health, and cardiovascular target validation efforts inside Bayer (this would put it right after they combined with Schering AG of Berlin). They surveyed all the scientists involved in early drug discovery in those areas, and had them tally up the literature results they'd acted on and whether they'd panned out or not. I should note that this is the perfect place to generate such numbers, since the industry scientists are not in it for publication glory, grant applications, or tenure reviews: they're interested in finding drug targets that look like they can be prosecuted, in order to find drugs that could make them money. You may or may not find those to be pure or admirable motives (I have no problem at all with them, personally!), but I think we can all agree that they're direct and understandable ones. And they may be a bit orthogonal to the motives that led to the initial publications. . .so, are they? The results:

"We received input from 23 scientists (heads of laboratories) and collected data from 67 projects, most of them (47) from the field of oncology. This analysis revealed that only in ~20–25% of the projects were the relevant published data completely in line with our in-house findings. In almost two-thirds of the projects, there were inconsistencies between published data and in-house data that either considerably prolonged the duration of the target validation process or, in most cases, resulted in termination of the projects. . ."

So Booth's estimate may actually have been too generous. How does this gap get so wide? The authors suggest a number of plausible reasons: small sample sizes in the original papers, leading to statistical problems, for one. The pressure to publish in academia has to be a huge part of the problem - you get something good, something hot, and you write that stuff up for the best journal you can get it into - right? And it's really only the positive results that you hear about in the literature in general, which can extend so far as (consciously or unconsciously) publishing just on the parts that worked. Or looked like they worked.
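To put a rough number on how those factors compound, here's a toy simulation. The inputs are my own illustrative guesses, not anything from the Bayer paper: one candidate target in ten is real, the original studies run at 40% power with the usual 5% false-positive cutoff, and only positive results ever get written up.

    # Toy model of publication bias with underpowered studies (Python).
    # All numbers are assumptions for illustration, not data from the paper.
    import random

    random.seed(0)
    trials = 100_000                  # candidate targets tested across many labs
    prior_real, power, alpha = 0.10, 0.40, 0.05

    published_real = published_false = 0
    for _ in range(trials):
        real = random.random() < prior_real
        positive = random.random() < (power if real else alpha)
        if positive:                  # negative results stay in the file drawer
            if real:
                published_real += 1
            else:
                published_false += 1

    total = published_real + published_false
    print(f"published positives that are spurious: {published_false / total:.0%}")
    # With these inputs, a bit over half of the published "hits" aren't real.

Lower the power or make real targets rarer, and the published record looks even worse - no fraud required.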

But the Bayer team is not alleging fraud - just irreproducibility. And it seems clear that irreproducibility is a bigger problem than a lot of people realize. But that's the way that science works, or is supposed to. When you see some neat new result, your first thought should be "I wonder if that's true?" You may have no particular reason to doubt it, but in an area with as many potential problems as discovery of new drug targets, you don't need any particular reasons. Not all this stuff is real. You have to make every new idea perform the same tricks in front of your own audience, on your own stage under bright lights, before you get too excited.

Comments (51) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | Drug Development


COMMENTS

1. Innovorich on September 2, 2011 9:06 AM writes...

The implications of this are huge. The upshot is that government research grants are paying to stimulate biotech and pharma companies into wasting money - while at the same time being a waste of money!

As if it's not hard enough already to find validated targets, disease models, tool compounds and biomarkers. We see it all the time - the inability to reproduce results (often obtaining the proof that the results are completely wrong). (Don't get me started on academic crystal structures - although obviously that's less financially damaging - just equally personally irritating and happening for the same reason!)

That is, as you imply: as long as there is an academic incentive to obtain tenure via quantity of publications, instead of the true/direct value, applicability, and downstream multiplier effect of your research, this will continue.

Permalink to Comment

2. drongo on September 2, 2011 9:30 AM writes...

This resonates with my experience too. I wish this were better appreciated even by research management. I have worked on several projects where we were unable to reproduce key results from work done in academia or small biotechs. In each case we went through a similar pattern: people are initially naively optimistic; then, when results are not reproduced, there are subtle (or not so subtle) insinuations that we are incompetent -- perhaps even to the extent of assigning another team; and ultimately (and only after quite a bit of acrimony) there is acceptance that the original results weren't "real".

Permalink to Comment

3. Sirtris on September 2, 2011 9:52 AM writes...

Clearly, you synthesized the compound wrong.

Permalink to Comment

4. Anonymous on September 2, 2011 10:08 AM writes...

Derek, I'm seeing a double post. Are you expecting heavy blogging on this issue?

Permalink to Comment

5. Cellbio on September 2, 2011 10:08 AM writes...

This is right in line with my observations over a couple of decades. In one instance, there was outright fraud that came to light, but in the others, it appears to be the product of publication bias: interesting, unexpected results get visibility, failure to observe an effect does not. Over the volume of work being done, this adds up to a considerable amount of a-hah stuff being nonsense.

So, how do you think that business of outsourcing discovery to academia is going to work? I don't think the incentives will change (as Innovorich points out), so I think it will be hard to distill down to the quality stuff as academics will think the value is in showing something new rather than in confirming that it really makes sense in the real world.

I have also seen that management, especially if just hired from academia, tends to favor the potential of new projects over those that have known attributes. Amazing how understanding metabolism, PK, and therapeutic index can lessen the value of a project in favor of one where none of these attributes are yet knowable but will certainly exist.

Permalink to Comment

6. Anonymous on September 2, 2011 10:19 AM writes...

I feel this stems from too many people doing research with too little funding, and in the end it results in half of those funds (discovery-related ones, anyway) being pure waste.
We know the funds won't increase, so train fewer researchers.
Competition is so high that we are beyond mere "inefficiency." If someone can't hire a statistician, or only has the money for the fewest replicates (ELISAs, animals, etc.), you get subpar findings. Couple that with investigators holding a degree/publication over the heads of their labor force (predominantly grad students and postdocs) and you have a BIG incentive for people to "drop outliers." This isn't a service-based field; working hard will not put you at your goal; only being RIGHT will.

Permalink to Comment

7. Nick K on September 2, 2011 10:24 AM writes...

Old French proverb: "Trust is the mother of disappointment".

Permalink to Comment

8. gyges on September 2, 2011 10:30 AM writes...

@2 drongo. My experience too.

Permalink to Comment

9. Innovorich on September 2, 2011 10:55 AM writes...

I really like Bruce Booth's idea (at the bottom of his March blog post) that university tech transfer departments should apply some of their seed money to validating the research, via independent CROs or academic labs, prior to attempting to spin it out.

Maybe the NIH, MRC, etc., should have "validate our results" grants that industry could apply for!

Permalink to Comment

10. Anon_me on September 2, 2011 12:48 PM writes...

Yes, this sounds about right. Interesting, isn't it - industry has known for years about the pitfalls of screening and how many screening hits don't hold up on follow-up. Now all this is being rediscovered by the NIH, while industry bets its future on dodgy academic results and sacks everyone with the experience to know better. Recipe for success? I think not...

Permalink to Comment

11. Nick K on September 2, 2011 1:02 PM writes...

It's not just academic biomedicine that's dodgy - the same seems to apply in chemistry as well. I've never had any trouble reproducing reactions reported by industrial groups, whereas academic stuff often fails in my hands. At best it works, but in much poorer yield. Remember Bengu Sezen?

Permalink to Comment

12. Curious Wavefunction on September 2, 2011 1:14 PM writes...

I am not too surprised by this finding. Just like other aspects of biology, target validation can often be messy, and your results can lie on the fringe of statistical significance with a generous margin of error.

The problem is that many academic researchers don't have the inclination to wait around for more meticulous validation and are happy to pitch preliminary or relatively rough data. Nothing wrong in doing this per se, but companies should not then find it surprising if they cannot validate the data.

Permalink to Comment

13. Cellbio on September 2, 2011 1:29 PM writes...

Inno,

Often (almost universally?), tech transfer departments are seen as money sinks, with costs from filings and salaries standing out in comparison to grant-funded departments that provide revenue (overhead). Tech transfer departments providing seed funding would be extremely rare, I think. It is more appropriate for the VCs to continue to take the risk, but to do a better job of vetting the technology. The lack of repeatability is nothing new to those with experience, but the optimism of the VC world often shuts its ears to seasoned advice. It is a tough marriage to bring experience together with the optimism required for early investing.

Permalink to Comment

14. JC on September 2, 2011 1:39 PM writes...

I thought a Validated Target was one that had an animal model and a drug approved in humans. However, other people use the term to mean just a biological assay with some kind of read-out.

Permalink to Comment

15. Robur on September 2, 2011 1:42 PM writes...

The general lack of surprise around these findings is telling and brings to mind the quote by Paul Johnson:

"It is humbling to discover how many of our glib assumptions, which seem to us novel and plausible, have been tested before, not once but many times and in innumerable guises; and discovered to be, at great human cost, wholly false."

What we are witnessing is another generation of R&D managers 'learning on the job' and at great cost to everyone else in the meantime....

Permalink to Comment

16. Anonymous on September 2, 2011 1:48 PM writes...

As an academic cancer biologist involved in drug discovery, I can confirm that these numbers seem about right. Part of the blame lies with the drug companies, which tend to source new targets from only the highest-profile academic labs. It is often the case that those are the labs that care the least about whether a new finding is actually true - their main goal is to publish in a high profile journal to enhance their scientific prestige and then move on.

Permalink to Comment

17. SweetPea on September 2, 2011 1:58 PM writes...

@15 Robur

...and no sooner will those managers be sufficiently humbled by the challenge of succeeding at drug discovery, to actually stand a chance of doing so, than they themselves will be turned over in favour of the next generation of managers.
And so another 'learning' cycle will begin.

Back at you Robur with George Santayana:

"Those who cannot remember the past are condemned to repeat it."

Permalink to Comment

18. RespiSci on September 2, 2011 2:09 PM writes...

Is there an "acceptable" level of irreproducible findings? What would people say: 10%, 20%, 50%? I don't think I have an extraordinary level of scientific integrity, yet I never had a former supervisor from my degree/post-doc days come back to me to say that the lab could not reproduce my results. Often it was the opposite: a new lab member would report that they were able to get the same initial findings as I had and were progressing with the project. As to the interpretation of the findings, that can be expected to change from the original manuscript as understanding in the field grows and develops over time. However, the actual data set should have been solid. I can't see that my experience should be unique. Are my expectations unrealistic?

Permalink to Comment

19. Pete on September 2, 2011 2:23 PM writes...

Maybe the publication of negative results needs to be better rewarded. Maybe a special journal for target invalidation studies?

Permalink to Comment

20. Robur on September 2, 2011 2:40 PM writes...

Is it me...

or is the Drug Discovery literature increasingly rediscovering (and re-attributing) what used to be regarded as common sense?

Too many new journals, too few experienced scientists left standing or simply the need for everything to be re-packaged and re-sold?

Maybe someone should write down all this "stuff we know to be true" once and for all, and for everyone.


Permalink to Comment

21. hn on September 2, 2011 2:53 PM writes...

Publishing negative findings or experiment validations seems to be a good use of arXiv.

Permalink to Comment

22. Cellbio on September 2, 2011 3:09 PM writes...

@16

I agree about the problem of focusing on high-profile labs. I worked for a brief stint in a firm investing in start-ups. The idea was to use the prestige of the founders as value. The problem was that this selected for folks who play the game of illustrating "potential" so well that it went hand in hand with a higher probability that the work never actually tested the potential. On the contrary, I argued (and believe) that those less well known have to go further to prove the merit of their work prior to publication (or grant funding, or start-up funding), and so there was a greater chance of real value.

Permalink to Comment

23. newnickname on September 2, 2011 4:38 PM writes...

Maybe I need my newnickname to be changed to "Validated Target".

When I was in biotech, part of what I did was vet projects being pitched to us by academics. There were some doozies. Most of the doozies came from famous academic labs and the ones that I found more sensible or at least more "do-able" (quickly testable with simple targets and simple assays; cheap) were from less well known, smaller academics. If you remember my previous stories, then it should come as no surprise that every MBA decision went contrary to my scientific but not necessarily correct recommendation. Money flew out the door to the Big Guys ("They're Big and Famous! It's good for business!") and other projects died on the vine. Some of those little projects are STILL good things to try (cheap and easy).

My honest opinions about the science claims in the business plans, our own research plans and projects, etc. did not go over well with the MBAs. I pointed out the difficulty with one of our to-be-licensed cancer projects having too many KNOWN alternative pathways. Nobody cared (noMBA cared); it was a hot area and we had to be in it, too. Today, it's pretty much a dead area.

Those guys were not interested in investing our funds in drug discovery or curing disease. They were interested in investing in getting MORE investments, even though they utilized smoke and mirrors.

Eventually, I became the Validated Target. And I'm sure you know what that means. They didn't find a potent inhibitor; they performed a surgical excision.

Permalink to Comment

24. Zippy on September 2, 2011 5:07 PM writes...

These results are not altogether surprising. Other than human genetic data, or clinical data coupled with good mechanism-of-action studies, the common target ID approaches are at best modest enrichments over random selection of targets. The reliability might look even worse for targets that require behavioral assays. In my experience, the more recently adopted industrialized approaches to target ID at larger pharma (genetic screens, siRNA, etc.) are less reliable than the academic literature.
Some other considerations:
1. Perhaps some would view this analysis as a measure of internal Bayer reliability. Their correlation did not improve for targets claimed in multiple publications.
2. Some of the spurious nature of the publications can simply result from regression to the mean. Suppose that five labs are working on a target. One gets a positive signal and publishes; the other four get negative results and do not publish. On repeat by a sixth lab, a negative result seems likely. The bias comes from the nature of the publishing system (see the sketch below).
3. This situation underscores the need for standardized assay formats. This could remove at least some variability between labs.
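
To put hypothetical numbers on point 2 (mine, purely for illustration): if five labs independently test a target with no real effect at the usual 5% significance cutoff, and only a positive result gets written up, the arithmetic is short.

    # Back-of-the-envelope sketch of the five-labs scenario in point 2.
    # The alpha level and the lab count are assumptions for illustration.
    alpha, labs = 0.05, 5

    p_published = 1 - (1 - alpha) ** labs
    print(f"chance a dead target still yields a publishable positive: {p_published:.0%}")
    # ~23%. The sixth lab repeating the experiment has only a 5% chance of
    # seeing another positive, so the published finding usually fails to reproduce.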

Permalink to Comment

25. Clueless on September 2, 2011 7:21 PM writes...

Because 75-80% of those so-called scientists are fake. . .

Permalink to Comment

26. bioguy on September 2, 2011 10:45 PM writes...

This is regression to the mean. So many studies, and only the 'significant' ones get published; negative results are swept under the rug. This means a lot of positive results are published when in fact there's no effect.

Permalink to Comment

27. BiotechTranslated on September 2, 2011 11:48 PM writes...

How much of this is caused by poor documentation?

I know when I was working in the lab, it wasn't unheard of for one person to be able to successfully repeat a reaction when someone else wasn't able to.

On more than one occasion it was due to some small detail that the scientist assumed wasn't important. Oh, you used THF that was dried over sieves for the reaction? Why wasn't that in your notebook? Oh, you purified the reagent before using it? Why didn't you say so?

Combine that with the way procedures are written up for journal publication, and I assume that a number of those "unreproducible" reactions do actually work - it's just that you don't know exactly what's required to make them work.

Mike

Permalink to Comment

28. Start-Up Package on September 3, 2011 3:02 AM writes...

I am not sure this is purely an academic issue. Last month we (an academic lab) resynthesized a compound published by a Boston-area biotech company in a high-profile journal. The biological activity results were underwhelming, and inconsistent with specific reported data. Sorry for the small sample size, but it goes both ways.

As I hear biotech companies vaguely and brashly present research at conferences (historically an important form of publishing in science, and one my group takes very seriously), it strikes me that these types of companies use science as marketing. To the point raised above, then, the guilt of over-promising science is commonly, if not systematically, shared by industry.

Those who follow the literature of a given field long enough, rather than dabbling in a target space for 1-2 years as part of an industrial discovery team, will realize that academic research fields are highly self-policing: discourse regarding irreproducibility is common in the literature and standard at meetings. There may be nothing we enjoy better than proving our colleagues wrong. It is in fact much more common to see such an inconsistency reported by a competing academic lab than by a commercial entity.

This is a long way of saying: let's see your data.

Permalink to Comment

29. MTK on September 3, 2011 6:43 AM writes...

This is not my area of expertise, so excuse my naivete on the matter, but isn't it called target validation for a reason?

I guess I thought that the whole reason there was a process called that was because it was generally assumed that a lot of things coming out of target ID were bound to not hold up under greater scrutiny.

What am I missing here? Is it how many don't pan out? Is it how many are claimed to be validated that never really were?

In some ways I'm questioning Innovorich's comment (#1). How is it a waste of money? You ID a target and then you validate it. Some are legit, most are not. Some results are reproducible, many are not. Isn't that the way it's supposed to work? Now I'll grant you that if a result is not reproducible intralab it probably shouldn't be published, but is this what is happening?

Permalink to Comment

30. drug_hunter on September 3, 2011 7:00 AM writes...

@28 ("startup package") - and building on this, we (a medium-sized company) often see that compounds reported by big Pharma don't work as well as reported in the literature. It isn't that the compounds are completely dead in our hands -- rather, they just don't work as well in our (very carefully validated) cell and animal models as they did in whatever models were used by the Pharma. Happens all the time. It always comes back to your understanding of the disease biology, and it is easy to fool yourself into believing that YOUR cell or animal model is recapitulating the disease, when in fact it is mostly giving you misleading info.

Permalink to Comment

31. Cellbio on September 3, 2011 9:01 AM writes...

@28,

There is certainly a similar issue in start-ups. Often founded on one key asset or technology, management at start-ups push their technology hard and, yes, do market it. Newnickname points out his difficulties in raising problems during the vetting of ideas, and the fact that no one seemed to care (I have seen this too). In cases like this, when funding occurs, a team is selected that is driven to push forward, as if their optimism and "passion" can change reality. I think there was a lot of this prior to the crash, and I hope one small benefit of post-crash investing is that it is done on better merit. However, there will always be a drive to keep the company alive when salaries etc. depend on it, and in my recent job searches an apparent desire for "sellers" over scientists is evident from job descriptions and interview comments.

Permalink to Comment

32. tuky tuky on September 3, 2011 9:55 AM writes...

@28 & 30 - On the other hand, we (academic lab) recently prepared one compound from a big pharma patent containing only two examples (!?) and it did better than we could have ever expected.

Permalink to Comment

33. Moneyshot on September 3, 2011 10:05 AM writes...

Right in line for i&i indications as well

Permalink to Comment

34. matt on September 3, 2011 1:33 PM writes...

Seems like another call for a public forum for reproducing results, either hosted externally (like sciencecheck.org) or by each journal, discussing its own published papers. These attempts to reproduce the findings should be public, in some fashion. If this were done, and if out of the noise and chaos some consensus could be formed (or a lack of current consensus acknowledged), that could moderate the abuse of rushing to publication.

It would also help for #27 BioTechTranslated's case: it might help propagate the tips and tricks a lab didn't mention.

@#28. Start-Up Package: you say some academic fields are self-policing... but is this public? Do the academics in question continue to publish prominently? They certainly won't lose tenure, and likely won't be hurt with regard to funding. What effect does the "policing" have beyond some internal reputation (which might be hard to distinguish from a social clique)?

Permalink to Comment

35. Chris on September 3, 2011 3:57 PM writes...

@Cellbio. No one seems to care because they would rather look productive and offer ideas than have only the answer "we are still researching ideas".

Permalink to Comment

36. WideScreen on September 4, 2011 11:55 AM writes...

Re. all the comments on how unsurprising these numbers are - as a Big Pharma researcher, it's still incredibly helpful to have them out in the open, and in a high profile publication.

Much needed ammo for reminding line managers, "Hey, let's be careful out there."

Permalink to Comment

37. alex on September 4, 2011 12:09 PM writes...

It doesn't necessarily mean that the research is fake; all it takes is forgetting to write up some conditions during work-up, extraction, or whatever. "Everybody knows you should perform reaction X under argon", so we don't mention it.

Permalink to Comment

38. Pete on September 4, 2011 12:22 PM writes...

Pipeliners on LinkedIn might want to look at the discussion recently started in the Society for Laboratory Automation and Screening group.

www(dot)linkedin(dot)com/groups?home&gid=3363923

Permalink to Comment

39. Anonymous BMS Researcher on September 4, 2011 4:17 PM writes...

My colleagues and I have long noticed major issues with the reproducibility of literature findings. Even when something does turn out to be reproducible, it very frequently turns out that making it work as described requires optimizing other aspects of the experimental conditions, which were not mentioned in the publication. I have even said to colleagues, "I wish there were special funding, maybe partly from an industry consortium, to support the validation and debugging of potentially useful findings from academic labs."

Permalink to Comment

40. Kaleberg on September 4, 2011 10:56 PM writes...

I think there are a lot of results on the statistical edge, but there is also the effect of working with complex systems. Didn't Lewis Thomas try to duplicate penicillin toxicity in rabbits, only to fail? It turned out to be a seasonal effect. (Or was it guinea pigs?)

Permalink to Comment

41. Anonymous on September 5, 2011 1:29 AM writes...

It seems like there are two categories of problems:
1) data are not reproducible/strong due to experimental and/or statistical problems
2) data are not reproducible because the methods used were not described properly (ranges from "tips and tricks" to outright "actually used a completely different method but referenced that paper from years ago because we thought it was similar enough/whoever wrote up the paper didn't know the method had been changed.")

Category 2 can be problematic partly because the people who "polish up" papers are often removed from the actual work and seem to view the M&M (materials and methods) section as a boring but necessary evil; the more details put in, the greater the liability that someone will spot something that could have been done better. Basically, there are incentives to write methods that are slightly vague.

Even better than a public journal for reproducing results, it'd be nice if comments/discussions--with data--could be appended to or otherwise linked with the electronic records of papers. You'd still run into problems with egos getting hurt when lackluster data or methods are questioned, but that's hard to avoid...and might not be as bad in a less formal setting.

Permalink to Comment

42. Middleground Guy on September 5, 2011 10:40 AM writes...

The reason for this is simple: to be published, data needs to convince your peers that it is interesting; to be a drug target, it needs to be right. There is a huge gap between the two. As someone who has worked between academia and industry for many years, I believe there are very few people who deliberately put false data out there. But they do put out just enough to get a publication and then a grant. Buyer beware.....

Permalink to Comment

43. jason reeves on September 5, 2011 6:17 PM writes...

It stems from those who pay for research requiring a certain result. Independent research is almost non-existent. It is really scary how much pressure there is on researchers when there is so much money involved, and how difficult it is to get money for research these days.

Jason Reeves

Permalink to Comment

44. MoMo on September 5, 2011 9:28 PM writes...

Ye Gods! Now it's found out that biology can't always be reproduced, while THE CHEMISTS cringe! Targets! Targets! Targets! This is all the good drug designers ever hear, yet biochemists and biologists are now seen to drop and then gold-leaf the ball!

Between simpleton science, bad and lenient reviewers and the quest to cure human diseases we are in for a tough ride!

Maybe if we did not publish real science and only patented it we could control it.

Gotta go- I am following a radiolabelled bear in VT.

Permalink to Comment

45. ug on September 5, 2011 9:54 PM writes...

I wonder if this reality will get through to the big pharma SVPs whose big idea is to lay off more of their headcount in order to free up money to pay academics to do 'target validation' for them. I have heard rumors of this from a couple -- both of them former big-shot famous academics themselves, of course. Pfizer might be moving towards this kind of setup. Oh well, probably not, if the same disregard for reality that led to the Sirtris and Sirna acquisitions persists....

As a post doc in a big shot biology lab, I saw an ambitious grad student who had an article all set to go to Nature, just one last figure left to be done. This figure was a ChIP reaction, notoriously unreliable. The guy tried it literally 30 times and it always clearly came back negative. Our boss kept saying "you better get that reaction to work!", which led the rest of us to say "Um, it worked great, it just gave the answer you didn't want."

You know the punchline: it "worked" on the 31st try, and they sent out the paper. They never repeated it or tried it again. Victory!

Permalink to Comment

46. cliffintokyo on September 6, 2011 4:02 AM writes...

My version of Santayana, with great respect:
"The only thing we learn from history is that human beings never learn from history."
Formulated in a different context, but probably equivalent, and emphasizes the futility of studying history.
Personally, I prefer powders I can put in a vial over SNPed DNA fragments that can only be proven to exist by a Southern blot, or whatever.
(I am, hopefully, not actually this ignorant, only making a point!)

Permalink to Comment

47. Anonymous on September 6, 2011 11:21 AM writes...

I worked in a hot research lab where someone had written on the wall above the door "it's better to be first than right".

Permalink to Comment

49. RB Woodweird on September 6, 2011 11:39 AM writes...

44. MoMo said "Gotta go- I am following a radiolabelled bear in VT."

SOP for radiolabelling bear:
1 bear
100 grams 5% Pd/C
100 Curies tritium gas
5 gallons ethanol
1 large reaction vessel
etc.

Permalink to Comment

50. Cialisize Me on September 6, 2011 12:13 PM writes...

This strongly supports the biotech model of research, where you have *in-house* experts that propose, discover, and validate targets. The incentives for the in-house biologists are: 1. That you could discover a successful drug that leads to company prosperity, and maybe a payout to you; and 2. You might actually get to cure a patient. But your data must be real and robust.

The biotech model continues to prove itself in spite of the many over-hyped failures, which in the end are really big pharma's fault for believing the hype. But don't worry, someone in Pharma Bus. Dev. still got big bonuses for signing all of those deals.
C.M.

Permalink to Comment

51. Anonymous on September 9, 2011 4:44 PM writes...

Nature 454, p.682, and the underlying Amyotrophic Lateral Sclerosis 2008, iFirst Article, 1–12 are an interesting case study.

50 studies showing effects in a transgenic mouse; and when you come along and do the studies properly, none of the treatments work. How does that work?

The trouble is, there are just so many examples that are similar.

Permalink to Comment
