November 7, 2002
Measure for Measure
As a follow-up to my post about over-quantification, I should really mention one of the things that managers in research organizations would most love to measure: their employees. How good are they? How productive are they? How do they rank, from one to thirty-eight?
The problem is, there's no good way to measure any of this, not that that stops anyone from trying. Performance reviews are a notorious sinkhole in any industry, of course - ever heard of a company where people say that their system works? But it's even harder to do for research employees, because of the dice-rolling feast/famine nature of the work.
Here are a few questions that come up regularly: Who's more valuable - the person who has the idea, or the person who reduces it to practice? What if several people had the idea at about the same time? What if the person who made the best compound in the project did it more or less by accident? What if they did it just because someone else told them to? What should be rated more highly - producing a long list of inactive compounds, or a short list of really good ones? What about someone who does really fine work on a project that disappears due to unexpected toxicity? What's more worthy of a high rating - producing new compounds, or figuring out a crucial step to make enough of the ones you already have?
And so on, and so on. So, how do you rank people? By the number of compounds they produce? That biases it, at best, toward people who (for whatever reason) ended up with a chemical series that was easier to ring variations on. At worst, it tilts the rankings toward people who deliberately banged out piles of easy-to-make compounds, even though they knew that they were unlikely to be worth anything.
OK, how about ranking everyone by the activity of the compounds they made? Well, that biases it toward people who are lucky, not to get too delicate about it. At best, it can reward someone who made some of their own luck, by sticking with a good idea. But it can also reward someone who tripped over a gold nugget on their way to pick up some more lumps of asphalt.
Ranking people by what everyone else thinks of them? That can bias it toward those with outgoing personalities. People on large projects who get more exposure will tend to come out better, too, as will people whose labs are on the way to the cafeteria.
Research is just plain hard to measure, and doing it on a regular, timed basis just exacerbates the problem. We spend long periods in this business being extremely wrong before suddenly being extremely right - try adjusting for that! As far as I've been able to see, any system you use will need exceptions, corrections, qualifications - just the kind of thing that numerical ranking was designed to avoid.