Science has been running pieces on and off about scientific publishing, which naturally leads to the question of how publication records are evaluated. Fortunately, I haven’t had to deal with this sort of thing myself, but if the reports are accurate, the whole “impact factor” business seems to be well out of control.
Impact factors, for those who haven’t had to worry about them, are an attempt to measure how good different journals are by how often the papers in them are cited. The rankings that result correlate fairly well with the way people already have “good” journals ranked in their heads, although review publications get over-ranked by a straight citation count. All sorts of refinements have been introduced, but the end use is the same: to put a number on the publication list in someone’s c.v.
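(The standard calculation, for the record, is simple arithmetic: a journal’s impact factor for a given year is the number of citations that year to its papers from the previous two years, divided by the number of citable items it published over those two years. So a journal that published 200 papers across 2003 and 2004, and whose papers drew 500 citations in 2005, would sport a 2005 impact factor of 500/200 = 2.5.)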
And that’s how it’s used in tenure evaluations. There are all sorts of tales of candidates needing at least so-and-so many papers in journals of such-and-such impact factor and above. And where such things aren’t flatly written down, they’re widely felt to be calculated quietly behind closed doors. As you’d imagine, not everyone thinks that this is a good thing. One of the letters that came in to Science this time, from Abner Notkins of NIH, says:
“. . .many scientists are now more concerned about building high-impact factor bibliographies than their science.
The adverse effects of the impact factor culture must be reversed before more damage is done to the orderly process of scientific discovery. Although there may be no way of stopping computer-generated evaluation of journals and published papers, the scientific community certainly can control its use. . .each institution should make it clear, in a written statement, that it will not use the impact factor or the like to evaluate the contributions and accomplishments of its staff. Second, the heads of laboratories should prepare similar written statements and in addition discuss in depth with their fellows the importance of solid step-by-step science. Third, the editors of journals published by professional societies, joined by as many other journal editors as are willing, should indicate that they will not advertise, massage, or even state the impact factor score of their respective journals. By means such as these, it might be possible to put science back on the right track.”
Strong stuff, and to some extent I agree with it. The thing is, there’s nothing wrong per se with publishing in good journals. Aiming your research high is a good thing, as long as good publications are the by-product and not the entire goal. Now, I find the advertising of impact factors by journals irritating, especially when they trumpet things down to the second decimal place. But I think that a statement that impact factors will not be considered in academic evaluations would be useless. After all, these numbers just put a quantitative coat of paint on a process that everyone was engaged in anyway. Papers in Science, Nature, and the like already counted for a lot more on a publication list than papers in many other journals did, and saying that you’re not going to attach a numerical rating to the process won’t change that. Every scientist in every field has an idea of which journals are harder to publish in (and publish more high-impact work); getting a paper into one of them will always count for more.
As it should. We have to remember what the opposite situation looks like. Everyone’s seen publication lists with page after page of low-quality stuff that’s been turned out for quantity, not quality. Communication after communication in high-acceptance-rate journals, obscure conference proceedings, every poster session noted – you know the sort of thing. It’s supposed to look impressive (why list all this stuff, otherwise?) but ends up looking pathetic. We don’t want to end up rewarding this kind of thing.
So what to do? Perhaps a realistic compromise: tell junior faculty and staff that their publication records will be a part of their evaluations, of course. But tell them that they’re not the most important part, and that a short publication list can be balanced out by other factors (and a long one balanced out in the other direction, too!). Someone who’s doing really good work, but who declines to slice it up into publishable bits, or whose research is just not on a schedule for lots of publications no matter what, should know that they’ll be evaluated with these things in mind. Likewise, someone who runs every single experiment to slot into the next manuscript had better also be running the ones that they’d set up even if journals didn’t exist and we all still communicated by handwritten letters. Good science is still good science, whether it’s published (or even if it’s published!) in Science or not.