
In the Pipeline


July 21, 2008

Backtracking, Necessary and Unnecessary


Posted by Derek

One of the things that no one realizes about research (until they’ve done some) is how much time can be spent going back over things. Right now I’m fighting some experiments that should be working, have worked in the past, but have (for some reason) decided not to work at the moment. Irritating stuff. There’s a reason buried in there somewhere, and when I find it things will be that much more robust in the future, but I’d hoped that they were that solid already.

And across the hall, a check is going on of some screening hits. When you get a pile of fresh high-throughput screening data, including some fine-looking starting compounds for a new project, what do you do with it? Well, if you have some experience, the first thing you do is order up fresh samples of all the things you could possibly be interested in, and check every single one of them to make sure that they actually are what they say on the label. Don’t start any major efforts until this is finished.

In fact, you should order up solid samples from the archives along with some of the DMSO stock solution that they used in the screening assay. They might not be the same, not any more. False negatives and false positives are waiting in your data set, depend on it: compounds that should have hit, but didn’t because they decomposed in solution, and compounds that (sad to say) did hit only because they decomposed in solution. You’ll probably never know about the first group, and you can waste large amounts of time on the second unless you check them now.
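If it helps to see the bookkeeping spelled out, here's a minimal sketch of that confirmation step in Python. Everything in it is made up for illustration, not any real screening platform's output: the compound IDs, the boolean "hit" calls from the original DMSO stock versus a fresh solid sample, and the triage lists.

```python
# A toy sketch of the retest triage, not any screening platform's API.
# "stock_hit" is the call from the original DMSO stock solution;
# "fresh_hit" is the call after retesting a fresh solid sample.

hits = [
    # (compound_id, stock_hit, fresh_hit) -- all values hypothetical
    ("CMP-001", True,  True),   # confirmed: a real starting point
    ("CMP-002", True,  False),  # likely false positive: stock decomposed
    ("CMP-003", False, True),   # the false negative you usually never see,
                                # since inactives are rarely retested
]

confirmed = [cid for cid, stock, fresh in hits if stock and fresh]
suspect   = [cid for cid, stock, fresh in hits if stock and not fresh]

print("Start chemistry on:", confirmed)             # ['CMP-001']
print("Check before wasting months on:", suspect)   # ['CMP-002']
```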

Getting a project going, then, can seem like trying to get a dozen nine-year-olds into a van for a long trip. Someone’s always popping out again, having forgotten something, which reminds someone else, and your scheduled departure time arrives with everyone running in circles around the driveway.

But nine-year-olds can eventually be corralled, as can the variables in most scientific projects. Not always, though. Where you don't want to be is the situation people had with the early vacuum-tube computers. Vacuum tubes have not-insignificant failure rates. So if you have, say, twenty thousand of the little gizmos in your ENIAC or whatever, doing the math on mean time between failures shows you that the thing can run for maybe forty-five minutes before blowing a tube (unless you take heroic measures). And the more vacuum tubes you have, the worse the problem gets: make your computer big enough, and it'll blow right after you throw the switch, every time.
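The arithmetic is simple enough to spell out. If tube failures are independent and roughly exponential, failure rates add, so the system MTBF is about the per-tube MTBF divided by the number of tubes. The per-tube figure in this sketch is hypothetical, back-calculated so that twenty thousand tubes matches the forty-five minutes above:

```python
# Back-of-the-envelope MTBF for a tube machine, assuming independent,
# exponentially distributed tube lifetimes (so failure rates add and
# system MTBF ~= per-tube MTBF / number of tubes).
# The per-tube MTBF is a hypothetical figure chosen to reproduce the
# "20,000 tubes, ~45 minutes" example in the post.

TUBE_MTBF_HOURS = 15_000.0  # assumed, not a measured ENIAC number

for n_tubes in (2_000, 20_000, 200_000):
    system_mtbf_min = TUBE_MTBF_HOURS / n_tubes * 60
    print(f"{n_tubes:>7,} tubes -> system MTBF ~ {system_mtbf_min:.1f} minutes")
```

Which is the quantitative version of "big enough, and it blows right after you throw the switch": ten times the tubes, one-tenth the uptime.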

So that’s the other thing you have to watch when troubleshooting: try to make sure that your problems aren’t built into the very structure of what you’re trying to do. In med-chem projects, look out for statements like “we have all the activity we need, now we just need to get past the blood-brain barrier”. Sometimes there’s a way out of those tight spots, but too often the properties that (for example) could get your compound into the brain are just flat incompatible with the ones that gave you that activity in your assay. You’d have been better off approaching that combination the other way around, and better off realizing that months ago. . .

Comments (8) | Category: Life in the Drug Labs


COMMENTS

1. HelicalZz on July 21, 2008 10:44 AM writes...

I especially hate when the assay just gets started and one of the compounds needs to use the restroom.


2. Dlib on July 21, 2008 11:49 AM writes...

What's also scary is that you don't know which HTS assay technology will give the truest answer. Different technologies give different results on the same system.


3. Hap on July 21, 2008 6:01 PM writes...

AC,

You owe me a new monitor and a new irony meter.


4. Fries with that? on July 21, 2008 8:45 PM writes...

AC,
OK, I'm waiting for your blog to appear, under your real name. Where can I find it?


5. Anonymous BMS Researcher on July 21, 2008 10:19 PM writes...


This also applies in purely computational areas where I find I need to redo some analysis much more frequently than I would prefer. At least in my case this does not consume any reagents or other consumables -- except of course for the electric power needed to (1) run the computers and (2) air-condition my office. But it does consume lots of my time, which probably costs the company more than all the CPU cycles I consume.


6. John D. on July 21, 2008 10:23 PM writes...

Nothing to do with chemistry, but I read that vacuum-tube computers were actually *more* reliable than the relay machines they replaced. When a tube failed, it *stayed* failed. Relays, on the other hand, might be 95% reliable on average, but that's the average reliability for *each* relay--each one could occasionally fail to flip and then continue normally afterward. In other words, you couldn't tell the output was wrong just by looking at the machine.

The relay-based "Mark I" had to have several runs of an *identical* problem before they trusted the output. With a tube-based computer, if it made it all the way to the end without something blowing, you knew you could trust the results. (Or at least it wouldn't be a hardware issue.)


7. Anonymous BMS Researcher on July 22, 2008 5:04 AM writes...

John D. on July 21, 2008 10:23 PM wrote
...
> The relay-based "Mark I" had to have several
> runs of an *identical* problem before they
> trusted the output. With a tube-based computer,
> if it made it all the way to the end without
> something blowing, you knew you could trust
> the results. (Or at least it wouldn't be a
> hardware issue.)
...

On display at the Smithsonian is a moth that once caused a "bug" by getting stuck in a relay of the Harvard Mark II.

As for the vacuum tube machines, in his Memoirs of a Computer Pioneer, EDSAC team leader Maurice Wilkes describes the moment "hesitating on the stairs" when he realized how much of his time in future would be spent "fixing my own mistakes." In other words, he was the FIRST programmer with access to hardware that was more reliable than his own code. A very large fraction of my time is spent debugging...


8. daen on July 22, 2008 7:47 AM writes...

To paraphrase: human ignorance is a gas; it will always find new areas to pervade. Especially when it's anything to do with computers. So relays are replaced by vacuum tubes, giving increased reliability, and vacuum tubes are replaced by transistors ... and so on. And once you've moved your hardware MTBFs into the thousands or tens of thousands of hours, you're left with the Next Great Unknown, which is of course software development. Fred Brooks, in his must-read essay "No Silver Bullet", argues that the easy-to-solve problems in software engineering - the so-called accidental complexity - have already been solved, and that we're up against essential complexity: the hard stuff at the root of all non-trivial computer programs, which requires much applied human thought and caffeine. Which is why we software engineers spend so much time with our noses buried in debugging tools.

