Corante

About this Author
[Photo: College chemistry, 1983]

[Photo: Derek Lowe, The 2002 Model]

[Photo: After 10 years of blogging. . .]

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

October 14, 2008

Impact Factors: Can We Pretend That They Don't Exist?

Posted by Derek

Science has been writing on and off about scientific publishing, which naturally leads to a discussion of the ways that publication records are evaluated. Fortunately, I haven’t had to deal with this sort of thing myself, but if the reports are accurate, the whole “impact factor” business seems to be well out of control.

Impact factors, for those who haven’t had to worry about them, are an attempt to measure how good different journals are by how often papers in them are cited. The rankings that result are fairly well correlated with the way people have “good” journals ranked in their heads, although review publications get over-ranked by a straight citation count. There have been all sorts of refinements introduced, but the basic principle is the same: to quantify the publication list in someone’s c.v.
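
In concrete terms, the standard two-year impact factor for a journal in year Y is the number of citations received that year by items the journal published in years Y-1 and Y-2, divided by the number of citable items it published in those two years. A minimal sketch of that arithmetic (in Python, with journal names and counts invented purely for illustration):

    # Two-year journal impact factor:
    # IF(Y) = citations in year Y to items published in Y-1 and Y-2,
    #         divided by the number of citable items published in Y-1 and Y-2.
    # All of the numbers below are made up for illustration only.

    def impact_factor(citations_to_window, citable_items_in_window):
        """Return the two-year impact factor, or None if there were no citable items."""
        if citable_items_in_window == 0:
            return None
        return citations_to_window / citable_items_in_window

    # Hypothetical 2008 calculation (publication window: 2006-2007)
    journals = {
        "Journal A": {"citations_2008": 12000, "items_2006_2007": 1500},
        "Journal B": {"citations_2008": 900, "items_2006_2007": 400},
    }

    for name, data in journals.items():
        score = impact_factor(data["citations_2008"], data["items_2006_2007"])
        print(f"{name}: {score:.2f}")  # Journal A: 8.00, Journal B: 2.25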

And that’s how it’s used in tenure evaluations. There are all sorts of tales of needing at least so-and-so many papers in journals of such-and-such impact factor and above. And in the cases where such things aren’t flatly written down, they’re widely felt to be calculated quietly behind closed doors. As you’d imagine, not everyone thinks that this is a good thing. One of the letters that came in to Science this time, from Abner Notkins of NIH, says that:

“. . .many scientists are now more concerned about building high-impact factor bibliographies than their science.

The adverse effects of the impact factor culture must be reversed before more damage is done to the orderly process of scientific discovery. Although there may be no way of stopping computer-generated evaluation of journals and published papers, the scientific community certainly can control its use. . .each institution should make it clear, in a written statement, that it will not use the impact factor or the like to evaluate the contributions and accomplishments of its staff. Second, the heads of laboratories should prepare similar written statements and in addition discuss in depth with their fellows the importance of solid step-by-step science. Third, the editors of journals published by professional societies, joined by as many other journal editors as are willing, should indicate that they will not advertise, massage, or even state the impact factor score of their respective journals. By means such as these, it might be possible to put science back on the right track.”

Strong stuff, and to some extent I agree with it. The thing is, there’s nothing wrong per se with publishing in good journals. Aiming your research high is a good thing, as long as good publications are the by-product and not the entire goal. Now, I think that the advertising of impact factors by journals is irritating, especially when they trumpet things down to the second decimal place. But I think that a statement that impact factors will not be considered for academic evaluations would be useless. After all, these numbers just put a quantitative coat of paint on a process that everyone was engaged in anyway. Papers in Science, Nature, and the like already counted for a lot more on a publication list than did papers in many other journals, and saying that you’re not going to use someone’s numerical rating won’t change that. Every scientist in every field has an idea of which journals are harder to publish in (and publish more high-impact work); getting a paper into one of them will always count for more.

As it should. We have to remember what the opposite situation looks like. Everyone’s seen publication lists with page after page of low-quality stuff that’s been turned out for quantity, not quality. Communication after communication in high-acceptance-rate journals, obscure conference proceedings, every poster session noted – you know the sort of thing. It’s supposed to look impressive (why list all this stuff, otherwise?) but ends up looking pathetic. We don’t want to end up rewarding this kind of thing.

So what to do? Perhaps a realistic compromise: tell junior faculty and staff that their publication records will be a part of their evaluations, of course. But tell them that they’re not the most important part, and that a short publication list can be balanced out by other factors (and a long one balanced out in the other direction, too!). Someone who’s doing really good work, but who declines to slice it up into publishable bits, or whose research is just not on a schedule for lots of publications no matter what, should know that they’ll be evaluated with these things in mind. Likewise, someone who runs every single experiment to slot into the next manuscript had better also be running the ones that they’d set up even if journals didn’t exist, and we all still communicated by handwritten letters. Good science is still good science, whether it’s published (or even if it’s published!) in Science or not.

Comments (13) + TrackBacks (0) | Category: The Scientific Literature


COMMENTS

1. Tot. Syn. on October 14, 2008 7:45 AM writes...

I know several academics who have a policy of "if it ain't JACS or Angewandte, it ain't published". This, of course, results in a pretty neat publications table, as long as the results keep coming in. However, there are always projects (and therefore students) that don't quite meet the mark, so the students either have to move on to another project, or settle for a lack of publications. And that sucks if you're trying to get a post-doc or a job.

The other problem this selectivity creates is that whilst the 'excellent' and 'really good' science gets published, the 'fairly good' to 'reasonable' doesn't. And that's a pretty rubbish situation.

Ever tried to do a reaction that didn't work? Ever wonder if anyone else tried it before you? As my PhD supervisor used to say to me - 'The journal of failed chemistry would be pretty useful'...

2. RB Woodweird on October 14, 2008 7:53 AM writes...

Great idea, but if you are a graduate student or postdoc, where are you going to go? To the group where publication is not the top priority or the group which cranks them out?

It's like owning electric cars. Who will be the first to drive their 500-pound composite kite on the freeway? Not it.

3. Hap on October 14, 2008 9:32 AM writes...

The NIH/NSF might have more control over judgments by impact factor by lessening the effect of impact factor on their funding decisions. Considering the ravenous desires of schools for funding, schools are likely to judge young faculty by whatever makes the schools the most money, and if they think impact factors are the key to obtaining grants, then it will be the key to tenure decisions, as well.

People want something that makes their judgment calls appear quantitative - having a number is sometimes the end of thought and justification (no matter what assumptions went into generating the number). If the use of a number is good (outcomes based on the number correlate well with desired outcomes), then it can encourage better behaviors; if bad, its use can encourage bad ones. If impact factor is flawed, then it would be good to understand how it is flawed and to try to come up with a better measure.

I think there's a dropped italic tag at the end of the quote.

4. Hap on October 14, 2008 10:04 AM writes...

There was a dropped italic tag.

5. Great Molecular Crapshoot on October 14, 2008 3:07 PM writes...

Impact factors always remind me of Henry Kissinger's rationale for the bitchiness of academic politics: There's so little at stake.

6. Great Molecular Crapshoot on October 14, 2008 3:23 PM writes...

I do take account of journals when assessing publications on CVs. However, I also look at the number of authors listed for the article and whether the author I'm checking out is the corresponding author. I generally disregard oral presentations and have been known to cross them out if the CV mixes them in with peer-reviewed stuff. While the standard of science in high impact factor journals tends to be higher, you can still find some real howlers (check the Crapshoot if you don't believe me) without having to work too hard. I think it would help if journals linked post-publication critiques to the articles.

7. InfMP on October 14, 2008 6:43 PM writes...

I totally agree with these opinions; good science still needs to get published even if it isn't world-changing. An example that comes to mind is two recent J. Org. Chem. papers from David Collum, who studied how BuLi interacts with TMEDA, and why deprotonation of picoline can only lead to 50% yield.
Incredibly executed, well-thought-out, and very intensive studies that aren't flashy JACS articles.

8. Anonymous BMS Researcher on October 14, 2008 7:36 PM writes...


A proposal I saw somewhere (I forget where) is to ask the applicant to list the five publications on which he or she would like to be evaluated, then have the evaluators read those five.

9. Andrej A. on October 15, 2008 4:46 AM writes...

I think the root of the problem is that impact factor-based evaluation has replaced careful, thoughtful judgment (if such ever existed...) of the content of (mostly published, of course) research. As one 'Nature' editor answered when asked why the "Publish or perish" motto is turning into "Nature or death":
"Our journal is not responsible for all those lazy panel members who don't bother to print out a single paper and read more than an abstract of it."
Essentially, the question is "Who should review applications, and at what cost?". As long as it is mostly peer reviewers, even paid ones, it means it's done by people who are all too often tired, overloaded, missing deadlines, etc.
To this, you can add an extra problem when a reasonably compact faculty is trying to fill a TT position with a person from a trendy field (stem cells, systems biology) - most likely all of the current members would be only tangentially familiar with the area of the candidate's accomplishments - and thus naturally lean on a publication index.
What can be done? I have no answer...

10. srp on October 15, 2008 6:42 PM writes...

It is amazing to me how far academics will go to avoid READING and CONSIDERING the work of their colleagues. They will cook up all kinds of complex but bogus citation metrics, stare fixedly between the lines of recommendation letters in hopes of telepathically forcing a secret meaning to emerge, gossip for hours about a candidate--anything but try to understand and evaluate a colleague's research on its own terms.

11. A nonie mouse on October 15, 2008 11:16 PM writes...

A recent paper relating "Buyer's remorse" to impact factors has a bizarre, but interesting twist to all of this:

PLoS Medicine

12. The Next Phil Baran on October 28, 2008 12:55 PM writes...

sounds like an ad populum fallacy to me. just cause you have more doesn't mean it's better. it's not a popularity contest, it's about objectivity. impact factors be damned

13. Jonadab the Unsightly One on October 30, 2008 5:50 PM writes...

Even ignoring the numbers and going by a journal's reputation, some journals tend to have a stronger reputation than they deserve. Nature springs immediately to mind here, inspiring in some people the kind of awe that probably ought to be reserved for stone tablets carved by the finger of God. But, for all that reverence people bestow on it, if your research is in drug discovery, would Nature ever really be the best choice? Wouldn't a more focused journal dedicated to medicine or chemistry pretty much always be, objectively speaking (and ignoring what it does for your resume), a better venue? Going the other direction, I'm sure there must be journals out there that deserve more consideration than they often get.

I'm also not at all sure that the number of times a publication is cited is a very good measure of its impact, much less its relevance or quality or merit. To use a non-chemistry example, Søren Kierkegaard is cited with a frequency out of all proportion to any sober evaluation of his work's actual importance.

I don't think there's ever going to be any getting around this basic reality: the things people are most impressed by aren't necessarily always a perfect match for the things that have the greatest merit. That's pretty much just a standard fact of life, part of living in the world and dealing with people. It's so obvious, it hardly needs to be stated, and yet somehow it still manages to catch us by surprise from time to time.

