That's what this article at the Chronicle of Higher Education could be called. Instead it's headlined "We Must Stop the Avalanche of Low-Quality Research". Which still gets the point across. Here you have it:
While brilliant and progressive research continues apace here and there, the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs. Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.
As a result, instead of contributing to knowledge in various disciplines, the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed. Even if read, many articles that are not cited by anyone would seem to contain little useful information. . .
If anything, this underestimates things. Right next to the never-cited papers are the grievously undercited ones, most of whose citations come courtesy of later papers published by the same damn lab. One rung further out of the pit are the mutual admiration societies, where a few people cite each other, but no one else cares very much. And then, finally, you reach a level that has some apparent scientific oxygen in it.
The authors of this article are mostly concerned about the effect this has on academia, since all these papers have to be reviewed by somebody. Meanwhile, libraries find themselves straining to subscribe to all the journals, and working scientists find the literature harder and harder to cover effectively. So why do all these papers get written? One hardly has to ask:
The surest guarantee of integrity, peer review, falls under a debilitating crush of findings, for peer review can handle only so much material without breaking down. More isn't better. At some point, quality gives way to quantity.
Academic publication has passed that point in most, if not all, disciplines—in some fields by a long shot. For example, Physica A publishes some 3,000 pages each year. Why? Senior physics professors have well-financed labs with five to 10 Ph.D.-student researchers. Since the latter increasingly need more publications to compete for academic jobs, the number of published pages keeps climbing. . .
We can also lay off some blame onto the scientific publishers, who have responded to market conditions by starting new journals as quickly as they can manage to launch them. And while some good-quality journals have been launched in the past few years, there have been a bunch of losers, too - and never forget, each good new journal soaks up more of the worthwhile papers, drawing the ever-expanding pool of mediocre stuff (and worse) up into print behind it by capillary action. You have to fill those pages somehow!
If this problem is driven largely by academia, that's where the solution will have to come from, too. The authors suggest several fixes: (1) limit job applications and tenure reviews to the top five or six papers that a person has to offer; (2) prorate publication records by the quality of the journals that the papers appeared in; (3) adopt length restrictions in printed journals, with the rest of the information to be had digitally.
I don't think that those are bad ideas at all - but the problem is, they're already more or less in effect. People should already know which journals are the better ones, and look askance at a publication record full of barking, arf-ing papers from the dog pound. Already, the best papers on a person's list count the most. And as for the size of printed journals, well. . .there are some journals that I read all the time whose printed versions I haven't seen in years.
No, these ideas are worthy, but they don't get to the real problem. It's not like all the crappy papers are coming from younger faculty who are bucking for tenure, you know. Plenty more are emitted by well-entrenched groups who just generate things that no one ever really wants to read. I think we've made it too possible for people to have whole scientific careers of complete mediocrity. I mean, what do you do, as a chemist, when you see another paper where someone found a reagent to dehydrate a primary amide to a nitrile? Did you read it? Of course not. Will you ever come back to it and use it? Not too likely, considering that there are eight hundred and sixty reagents that will already do that for you. We get complaints all the time about me-too drugs, but the me-too reaction problem is a real beast.
Now, I realize that by using the word "mediocrity" I'm in danger of confusing the issue. The abilities of scientists are distributed across a wide range - I doubt if it's a true normal distribution, but there are certainly people who are better and worse at this job. But I'm complaining on the absolute scale, rather than the relative scale. I know that there's always going to be a middle mass of scientific papers, from a middle mass of scientists: I just wish that the whole literature was of higher quality overall. A chunk of what now goes into the mid-tier journals should really be filling up the bottom-tier ones, and most of the stuff that goes into those shouldn't be getting done in the first place.
I suppose what bothers me is the number of people who aren't working up to their potential (although I'm not always in the best position to make that argument myself!). Too many academic groups seem to me to work on problems that are beneath them. I know that limits in money and facilities keep some people from working on interesting things, but that's rare, compared to the number who'd just plain rather do something more predictable. And write predictable papers about it. Which no one reads.