About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly. Twitter: Dereklowe


In the Pipeline


September 23, 2004

Measure for Mismeasure

Posted by Derek

As we start to slide into fall, companies around the industry start to slide into that slough known as Performance Reviews. Depending on your calendar, this can start hitting you any time from now until February or so - it's particularly joyful when your company changes its system and you have to rank people based on, say, seven months of performance.

Researchers are a nightmare to evaluate at any time of the year. Here's something I wrote on my old site, Lagniappe, which I thought might be relevant:

". . .I should really mention one of the things that managers in research organizations would most love to measure: their employees. How good are they? How productive are they? How do they rank, from one to thirty-eight?

The problem is, there's no good way to measure any of this, not that it stops anyone from trying. Performance reviews are a notorious sinkhole for any industry, of course - ever heard of a company where people say that their system works? But it's even harder to do for research employees, because of the dice-rolling feast/famine nature of the work.

Here are a few questions that come up regularly: Who's more valuable - the person who has the idea, or the person who reduces it to practice? What if several people had the idea at about the same time? What if the person who made the best compound in the project did it more or less by accident? What if they did it just because someone else told them to? What should be rated more highly - producing a long list of inactive compounds, or a short list of really good ones? What about someone who does really fine work on a project that disappears due to unexpected toxicity? What's more worthy of a high rating - producing new compounds, or figuring out a crucial step to make enough of the ones you already have?

And so on, and so on. So, how do you rank people? By the number of compounds they produce? That biases it, at best, toward people who (for whatever reason) ended up with a chemical series that was easier to ring variations on. At worst, it tilts the rankings toward people who deliberately banged out piles of easy-to-make compounds, even though they knew that they were unlikely to be worth anything.

OK, how about ranking everyone by the activity of the compounds they made? Well, that biases it toward people who are lucky, not to get too delicate about it. At best, it can reward someone who made some of their own luck, by sticking with a good idea. But it can also reward someone who tripped over a gold nugget on their way to pick up some more lumps of asphalt.

Ranking people by what everyone else thinks of them? That can bias it toward those with outgoing personalities. People on large projects who get more exposure will tend to come out better, too, as will people whose labs are on the way to the cafeteria.

Research is just plain hard to measure, and doing it on a regular, timed basis just exacerbates the problem. We spend long periods in this business being extremely wrong before suddenly being extremely right - try adjusting for that! As far as I've been able to see, any system you use will need exceptions, corrections, qualifications - just the kind of thing that numerical ranking was designed to avoid."
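The metric problem above can be made concrete with a toy sketch. This is purely illustrative: the chemists and their numbers are made up, and no real ranking system is this simple. The point is just that sorting the same people by compound count versus by active compounds produces different orderings, so the choice of metric is itself a judgment call.

```python
# Hypothetical data: three chemists, with made-up output stats.
# "compounds" = total compounds made; "active" = ones that showed activity.
chemists = {
    "A": {"compounds": 40, "active": 1},   # prolific, easy-to-vary series
    "B": {"compounds": 12, "active": 5},   # fewer compounds, better hits
    "C": {"compounds": 25, "active": 2},
}

# Rank by sheer productivity (compound count), highest first.
by_count = sorted(chemists, key=lambda c: chemists[c]["compounds"], reverse=True)

# Rank by hits (active compounds), highest first.
by_hits = sorted(chemists, key=lambda c: chemists[c]["active"], reverse=True)

print(by_count)  # ['A', 'C', 'B']
print(by_hits)   # ['B', 'C', 'A']
```

Same people, same work, opposite rankings at the top and bottom - which is exactly the ambiguity the post describes, before you even get to luck, timing, or who sits near the cafeteria.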

Comments (4) + TrackBacks (0) | Category: Business and Markets | Life in the Drug Labs


1. qetzal on September 23, 2004 11:42 PM writes...

Does big pharma inflict "SMART" objectives on you as well? Or is that something just us poor biotech slobs have to endure?

For the uninitiated, SMART objectives are:

Specific, Measurable, Achievable, Relevant, and Timely.

In other words, you have to say exactly what you plan to accomplish over the next 3, 6, or 12 months (depending on your company's cycle). It has to be something that can be measured. (Did you make 37 compounds for animal testing? Or only 33?). Of course it should be achievable (assuming you're Superman). It has to be relevant, which is always tough at a small biotech, where the company's whole strategy can change three times a quarter. And of course timely. When, exactly, will you be finished with those seven major projects you're supposed to complete?

My favorite part of the whole exercise is when, at review time, you get to sit down and write a dissertation on why you never even worked on 3/4 of the things you originally put down, but it wasn't your fault, because of the 20 unexpected things that came along, so here's what you really worked on, and here's why you deserve a good rating for doing stuff totally unrelated to what you originally said.

Sarcasm aside, I actually do think there's value in the process - mainly in forcing ourselves (or being forced) to think in advance about what we'd like to do and why, and then later looking back on what we really did, and considering whether it was time well spent.

But it's never much fun.


2. Chad Orzel on September 24, 2004 8:19 AM writes...

My favorite evaluation method was the one used at the government lab where I did my Ph.D. thesis work. All employees were asked to write out a year's worth of goals ("Complete measurement of spin-polarized collision rates, submit paper to Physical Review"), and then evaluated on how well they met those goals.

Which is a fine system, but it was made even better by the fact that, due to some quirk of government timekeeping, the "goals" for a given year had to be submitted about a month before the evaluations were done. So when the time came around, you would just write down a list of everything you'd done in the past year, plus one or two things you hoped to accomplish in the next month, and then give yourself a gold star a month later for meeting all your goals.

This, of course, did nothing to make people cynical.


3. The Novice Chemist on September 24, 2004 9:57 AM writes...

Has anyone heard about the big company that has the little blue pill? I've heard that company has a really strange performance review system; here in grad school, we heard that they rank everyone into three categories (top third, middle third, and bottom third), and if you get "bottom third" more than a couple of times in a row, you're out.

Is that true?


4. SRC on September 24, 2004 12:35 PM writes...


That's what GE does, I understand, and it's the bottom 10%.

The problem with all of these review systems is fundamental: they are attempting to use a "paint-by-the-numbers" algorithm in place of judgment.

People are reluctant to make straight-up judgments about other people: first, because most people lack judgment in the first place; second, because they don't like looking someone in the face and rendering an honest assessment; and third, because they fear legal problems if they can't back up an adverse review with numerical data.

So we get "how many compounds?" in industry, or "how many papers?" in academia. It's a lot easier than thinking, and a lot more defensible, too, even if trivial to game.

The extreme example of this "metric" approach has to be the apocryphal story of the Soviet nail factory. When judged by numbers, they produced pins; when judged by weight, they produced railroad spikes.



