In the Pipeline


June 24, 2010

All Those Worthless Papers

Posted by Derek

That's what this article at the Chronicle of Higher Education could be called. Instead it's headlined "We Must Stop the Avalanche of Low-Quality Research". Which still gets the point across. Here you have it:

While brilliant and progressive research continues apace here and there, the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs. Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.

As a result, instead of contributing to knowledge in various disciplines, the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed. Even if read, many articles that are not cited by anyone would seem to contain little useful information. . .

If anything, this underestimates things. Right next to the never-cited papers are the grievously undercited ones, most of whose referrals come courtesy of later papers published by the same damn lab. One rung further out of the pit are a few mutual admiration societies, where a few people cite each other, but no one else cares very much. And then, finally, you reach a level that has some apparent scientific oxygen in it.

The authors of this article are mostly concerned about the effect this has on academia, since all these papers have to be reviewed by somebody. Meanwhile, libraries find themselves straining to subscribe to all the journals, and working scientists find the literature harder and harder to effectively cover. So why do all these papers get written? One hardly has to ask:

The surest guarantee of integrity, peer review, falls under a debilitating crush of findings, for peer review can handle only so much material without breaking down. More isn't better. At some point, quality gives way to quantity.

Academic publication has passed that point in most, if not all, disciplines—in some fields by a long shot. For example, Physica A publishes some 3,000 pages each year. Why? Senior physics professors have well-financed labs with five to 10 Ph.D.-student researchers. Since the latter increasingly need more publications to compete for academic jobs, the number of published pages keeps climbing. . .

We can also lay off some blame onto the scientific publishers, who have responded to market conditions by starting new journals as quickly as they can manage to launch them. And while there have been good quality journals launched in the past few years, there have been a bunch of losers, too - and never forget, the advent of a good journal will soak up more of the worthwhile papers, lifting up the ever-expanding pool of mediocre stuff (and worse) by capillary action. You have to fill those pages somehow!

If this problem is driven largely by academia, that's where the solution will have to come from, too. The authors suggest several fixes: (1) Limit job applications and tenure reviews to the top five or six papers that a person has to offer. (2) Prorate publication records by the quality of the journals that the papers appeared in. (3) Adopt length restrictions in printed journals, with the rest of the information to be had digitally.

I don't think that those are bad ideas at all - but the problem is, they're already more or less in effect. People should already know which journals are the better ones, and look askance at a publication record full of barking, arf-ing papers from the dog pound. Already, the best papers on a person's list count the most. And as for the size of printed journals, well. . .there are some journals that I read all the time whose printed versions I haven't seen in years.

No, these ideas are worthy, but they don't get to the real problem. It's not like all the crappy papers are coming from younger faculty who are bucking for tenure, you know. Plenty more are emitted by well-entrenched groups who just generate things that no one ever really wants to read. I think we've made it too possible for people to have whole scientific careers of complete mediocrity. I mean, what do you do, as a chemist, when you see another paper where someone found a reagent to dehydrate a primary amide to a nitrile? Did you read it? Of course not. Will you ever come back to it and use it? Not too likely, considering that there are eight hundred and sixty reagents that will already do that for you. We get complaints all the time about me-too drugs, but the me-too reaction problem is a real beast.

Now, I realize that by using the word "mediocrity" I'm in danger of confusing the issue. The abilities of scientists are distributed across a wide range - I doubt if it's a true normal distribution, but there are certainly people who are better and worse at this job. But I'm complaining on the absolute scale, rather than the relative scale. I know that there's always going to be a middle mass of scientific papers, from a middle mass of scientists: I just wish that the whole literature was of higher quality overall. A chunk of what now goes into the mid-tier journals should really be filling up the bottom-tier ones, and most of the stuff that goes into those shouldn't be getting done in the first place.

I suppose what bothers me is the number of people who aren't working up to their potential (although I don't always have the best position to argue that from myself!). Too many academic groups seem to me to work on problems that are beneath them. I know that limits in money and facilities keep some people from working on interesting things, but that's rare, compared to the number who'd just plain rather do something more predictable. And write predictable papers about it. Which no one reads.

Comments (40) + TrackBacks (0) | Category: The Scientific Literature | Who Discovers and Why


COMMENTS

1. Anon on June 24, 2010 8:34 AM writes...

I find this to be a fascinating topic and one that I end up discussing with my peers. There are some more comments on this topic by Peter Lawrence from Cambridge here:
http://www.int-res.com/articles/esep2008/8/e008p009.pdf

"Already, the best papers on a person's list count the most." But why do we need a giant list of publications if everyone already knows which publications are crappy ones? Perhaps if we limited job/tenure applications to 3 (or 4, or 5 whatever just pick a small number) and tell the applicant they can ONLY discuss what actually appears in those 3 papers then there would be pressure to actually write in-depth papers that contribute substantially to science rather than releasing results piecemeal in order to up one's publication count. Maybe not the magic bullet solution, but seems like it would help provide motivation for doing science the "correct" way.

What's interesting is that nobody questions what happens to the students who are working for a boss who actually does only write up quality publications for a finished/polished project. We all claim science should be done for science itself and not for one's publication record, yet we still judge job applicants by their publication record. What if the applicant worked on a difficult project and made significant progress, but was unable to put the finishing touches on it before it was time to graduate and move on? If the boss doesn't publish it, since it's not fully finished yet, then the student who did honest, hard work can't compete with peers who worked for the publication machines.

I don't know what the solution is, but I think we all know what the problem is. Perhaps an easy place to start is to stop stigmatizing those without a lengthy publication list and let the work they did speak for itself. In other words, stop the attitude of no publications = no job interview.


2. Duct Tape on June 24, 2010 8:36 AM writes...

The points regarding academic potential and 'fluff' within the publishing models are well taken. Nice post.

However, from someone in industry: there needs to be some form of repository to collect the what-seems-to-be-useless-now work. I've run into various situations through the years where some obscure abstract from a 1960 Russian paper has been priceless. Since we can't predict the future, the seemingly mundane can't be fully appraised.

Maybe limiting the for-profit groups and scaling up PLOS?


3. RB Woodweird on June 24, 2010 8:40 AM writes...

This is predicted by The First Law of Metridynamics, which states:

The observed metric will improve.

So when the system told chemists that one of the most important metrics which would determine advancement was the number of publications, the number of publications was of course going to increase.

(The alert reader will recall also The Second Law of Metridynamics, which states:

The sum of all metrics in a closed system is a constant.)


4. Greg Hlatky on June 24, 2010 8:50 AM writes...

"...redundant, inconsequential, and outright poor research..."

Hey, that's me you're talking about!


5. fungus on June 24, 2010 8:58 AM writes...

Woodweird - there's a name for it, it's called "Goodhart's Law".

As soon as something is measured, it changes.

http://en.wikipedia.org/wiki/Goodhart's_law


6. dWj on June 24, 2010 9:34 AM writes...

Partly on Duct Tape's comment: I think a lot of these results would be of some value if stuck as a single entry in a table in the CRC book. The problem is one of organization of results, such that some theorist who thinks that seeing as many known ways to "dehydrate a primary amide to a nitrile" as possible will help construct some theory of molecular dynamics can find that in a useful form, while people to whom it's of no value don't have to sort through it. The ontological studies that determine the enthalpy of formation of 2-methylpentane are less highly regarded than they once were, and don't make a great scientist, but they can be worth their cost if performed by someone who isn't going to be publishing in Science anyway. These things don't need to be taking up space in great journals -- they maybe shouldn't all be in Physica A -- but they should be collected somewhere. What we need isn't fewer publications; what we need is a librarian.
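
[A minimal sketch, in Python, of the kind of "librarian" dWj is describing, assuming a simple registry keyed by transformation so that every known method for a given conversion can be pulled up at once. The transformations, reagents, and references below are invented placeholders, not real citations.]

    from collections import defaultdict

    # Toy "librarian": file each published result under the transformation
    # it reports, so a reader can retrieve every known method at once.
    registry = defaultdict(list)

    def add_entry(transformation, reagent, reference):
        """Record one literature result under its transformation."""
        registry[transformation].append((reagent, reference))

    # Invented placeholder entries, not real citations.
    add_entry("primary amide -> nitrile", "reagent A", "Hypothetical J., 2008")
    add_entry("primary amide -> nitrile", "reagent B", "Placeholder Lett., 2005")
    add_entry("alcohol -> aldehyde", "reagent C", "Imaginary Synth., 2001")

    # The query a theorist would run: every known way to do one conversion.
    for reagent, ref in registry["primary amide -> nitrile"]:
        print(reagent, "-", ref)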


7. Tok on June 24, 2010 9:38 AM writes...

Anon - I'd feel sorry for the first grad students through your system and their 10-15 year degree times. We're already at, what, six years now?


8. Anon on June 24, 2010 10:18 AM writes...

Re: Tok

I think you have misunderstood what I meant; perhaps I should have phrased my thoughts better. I'm not arguing that we should keep students longer (in fact, I believe anything longer than 5 yrs for a PhD is way too long). You shouldn't have to feel bad for my non-existent students, because if we eliminated the "who has the most publications" game, then grad students wouldn't need 10-15 yrs. They'd do the normal 5 yrs and go on their merry way with the science they have accomplished, and we'd judge them based on that science. Then, when job time comes, they will present their research to the hiring committee and stand or fall based on their science rather than on some silly number (of publications). We've all seen people who pad themselves up with meaningless publications. How are people who don't pad themselves supposed to compete if the game is set up from the beginning to favor those with the publications? (Again, we've all seen at least a few job postings that list a publication record as a requirement.)

Grad students aren't stupid. Many realize that the goal of doing great science is often in conflict with the goal of getting ahead in their chemical careers. Risky/difficult projects may not pay off, which naturally leads to fewer publications. Once a grad student sees this, they don't have much motivation not to play the publication game, for fear of their own survival as a scientist. That leads to mediocre research being cranked out.

I want to see great research being done, just like Derek, and I want to see folks (grad students included) be unafraid of tackling a difficult project. Having lots of papers doesn't automatically mean you did lots of mediocre science. But if we continue to reward only the people who have lots of papers, rather than the substance of their actual research, then we get the current problem of way too many meaningless papers flooding the journals.


9. john on June 24, 2010 10:18 AM writes...

One of the greatest problems I've seen, and specifically in synthetic chemistry above all (I'm a synthetic chemist, so don't take it personally), is that many students, postdocs, and young faculty are measured not by their impact but by how much they work. My boss told me that getting a degree isn't about intellect or originality; it's about how much work you do.

This stinks of the labor theory of value, and in a competitive system (such as what we claim to have) it removes so much of the competition. It's not a competition to come up with good ideas or to create something new and useful; look at the famous Carreira letter that goes around the net all the time. It says nothing about expectations for new and original ideas, innovation, etc. It says: this is how much you have to work to be successful. When will we stop with this terrible idea and actually start asking our "best and brightest" to use the whole of their abilities instead of just their work ethic?


10. Allanon on June 24, 2010 10:23 AM writes...

Very interesting topic. Many days I'll agree with Greg Hlatky; that one hits close to home. Like the line that the world didn't need another X to be found, you needed to find it.

I agree there's a problem, but I'm not sure cited-by counts are a useful measure. You mentioned several examples where papers could be cited but still be worthless. The flip side, mentioned by Duct Tape above, is where papers are not cited within some sliding window of time, but are enormously worthwhile. Consider, as an example, Tsiolkovsky's 1903 paper on rocket travel. The rest of the world was riding on horseback. Steam engines were high tech. Orville and Wilbur Wright were just pushing off a powered glider. Meanwhile, Tsiolkovsky had calculated the speed needed to orbit the Earth and determined that multi-stage rockets, ideally burning hydrogen with oxygen or fluorine, would be needed. Probably the first opportunities for it to be cited (but it wasn't) came in 1919 and 1923, when Goddard and Oberth published papers along the same lines.



11. RB Woodweird on June 24, 2010 10:30 AM writes...

fungus - You are the epitome of the alert reader. The First Law of Metridynamics is intimately related to Goodhart's Law and to Campbell's Law, and also to the Hawthorne Effect. The particular application of the FLoM, and more importantly the SLoM, is in a closed corporate environment.


12. Will on June 24, 2010 10:55 AM writes...

Sticking with organic chemistry, as that's my experience:

The world will (hopefully) continue to need PhD-level synthetic chemists. To get the PhD in synthetic chemistry, you need to (theoretically) solve some problem.

I think the NIH/NSF is the real culprit here - PIs apply for grants to do total synthesis/methodology, the grants are funded, and the work is done and then published somewhere. Maybe we're getting to a time when most total synthesis/methodology projects aren't that necessary.

But as long as the grants for the same-old, same-old are funded, the same-old work will be done. If the NIH started saying "we don't want to pay professor x to demonstrate his [4+2+5+7+1] rhenium-mediated carbonylation/homologation/dehalo-aromatisation reaction on the aspidosperma family of natural products" and instead funded truly innovative ideas [1], then the publication of dreck would dry up. The problem is that a lot of groups, both big and small, are probably incapable of shifting gears.

Given the trend in industrial research, maybe the market for synthetic chemists is oversaturated, and a reduction in the number of groups training chemists is in order anyway.

[1] Not that I have any truly innovative ideas myself.


13. JOHN on June 24, 2010 11:11 AM writes...

I would say that about 90% of everything I've ever done remains unpublished, and much of that work was facilitated by obscure synthetic methods published in 3rd tier journals.

So never cited is not the same as never used.


14. Rowman on June 24, 2010 11:39 AM writes...

I remember in my postdoc days when I used to be highly critical of published work that was only interesting or useful to the people who did the work. Then I got my own lab and started projects that all had the noblest intentions of doing something really significant. Some were successful, some were utter failures, but most fell someplace in between. Many of my students are not aiming for the top of the scientific jobs pyramid, but are looking for a decent career. A publication, even one that will do relatively little to help my career, sometimes has given those students a significant advantage in getting the job they want. It is difficult not to at least try to publish what they did even if it won't really do much for me.

I am not sure exactly how to fix the problem, though. Perhaps some sort of giant, searchable electronic repository for research-project summary abstracts, like some of the poster databases that are starting to pop up, combined with a major constriction of journal size, would help. The bigger problem, speaking from an employer's point of view, is that it is still hard to judge whether a person with a freshly awarded graduate degree in chemistry can actually *do* anything. Publications, however meager, are still the best metric.


15. d on June 24, 2010 11:41 AM writes...

This post reminds me of a related comment by Seth Godin:

(http://sethgodin.typepad.com/seths_blog/2010/06/this-better-work.html)

" 'This better work'... is probably the opposite of, 'this might work.' 'This better work,' is the thinking of safety, of proven, of beyond blame. 'This might work,' on the other hand, is the thinking of art, innovation and insight.
If you spend all day working on stuff that better work, you back yourself into a corner, because you'll never have the space or resources to throw some 'might' stuff into the mix. On the other hand, if you spend all your time on stuff that might work, you'll never need to dream up something that better work, because your art will have paid off long ago. "


16. RM on June 24, 2010 12:35 PM writes...

I think everyone should read some of the old papers (e.g., the very early 1900s) and come to the agreement that things were better then. No inflated introductions (you shouldn't need to "sell" your paper), no massive reference sections (no, it's not worth mentioning a paper just because a reviewer's friend did something somewhat similar), honest and non-aggrandizing discussions and conclusions (actual admission of flaws, without concern that they'll affect publishability! Complete absence of claims that your work on an obscure limpet will somehow lead to a cure for cancer!).

One of the problems, as most people are indicating, is that we're stuck in a Red Queen's race for publications. "You need a publication to stand out" becomes "You need *several* publications to stand out" becomes "You need *several* publications in *top-tier* journals", etc.

I don't think we should discourage publishing, though, even for stuff that seems "useless". The work was done, the results were obtained. If it's methodologically sound, it should be published - no sense in throwing that effort away. (I agree that people shouldn't set out to do "useless" research, but we can't always predict where we're going - sometimes "useless" results are all you have to show.)

I think the unmanageability of the literature is due not to too many research articles, but to *too few* review articles. We may not need another paper talking about how to dehydrate a primary amide to a nitrile, but be honest - have you seen a *really good* review article on the subject? One that's well written and comprehensive, and makes you go "Wow, this paper is a must-read for anyone interested in X"?

"with the rest of the information to be had digitally" - I *strongly* disagree with this statement. Online supporting material is already a dumping ground for un-edited, un-reviewed dreck. I would like *more* peer review of supp. mat., and advocating sticking stuff there to lighten the load on reviewers is a step backwards.

dWj@6: "a lot of these results would be of some value if stuck as a single entry in a table in the CRC book" - You can't just have a number in a table; you need a reference to back it up. How was that number obtained? What techniques were employed? What sort of background corrections were made? How was the compound purified from the known oxidation product? And so on. Sure, you can gut the introduction, discussion/conclusions, and most of the results section, but leave the methods section.


17. John Spevacek on June 24, 2010 1:14 PM writes...

RM,

I thought you were going to scoop me; you came close, but I can still add some thoughts.

I was also going to suggest reading significantly older literature (your comments on the differences in style are dead-on), but I've found that much of what was written back then could also be of the same low quality seen today. I'd say the signal-to-noise ratio is still the same; it's just that the amplitude is so much greater, and, being slowly evolving humans, our ability to take in the higher amplitude has not kept pace.


18. GreedyCynicalSelf-Interested on June 24, 2010 1:47 PM writes...

There's too much money going into science. The government should not be funding non-military science. If all non-military science were privately funded, there would be fewer scientists, fewer mediocre papers, and less need for people to claw at each other to advance themselves, like crabs trying to escape from a bucket.

Stop funding junk and you won't have any in this country!


19. Jim on June 24, 2010 1:57 PM writes...

Maybe you all can help me out with a problem I'm having. I am currently a 4th-year grad student, and my boss wants me to put out a paper on some results I got more than a year ago that we didn't think were important. One reason they weren't considered important is that the work mimics previous work done in 2004, which hasn't been cited much (around 10 times, and even fewer when you consider that most of those citations are from the same group). The hope was to use some of the compounds synthesized for a new reaction; however, that has failed. I did manage to get an X-ray structure of a similar compound that they were unable to crystallize, and it matches their calculations.

My boss says that I need to publish to graduate, but I don't see this as contributing to the scientific community in any way. My labmate (a 2nd-year grad) has contributed and really wants his name on a paper (any paper), so he has just written a manuscript. Should I ask that my name be removed, or moved to second author, or just accept that this is the state of science right now, and that as a grad student, if I make a stand, I'll just get run over?


20. Hap on June 24, 2010 2:11 PM writes...

#18: That sounds sort of like suicide as a solution to the problem of a constant fear of dying. I guess it's a solution though....


21. Mr. Gunn on June 24, 2010 2:27 PM writes...

The problem of the rising cost of library subscriptions isn't the fault of all the scientists sending in papers. It's demonstrably the fault of the publishers, who've forced libraries into package deals, then shuffled the contents of the packages around so that each package contains a few good journals and a bunch of crap.

The solution to this is clear (as the first couple of commenters note): open access and better post-publication peer review. Use the PLoS ONE model, where author-side charges defray the cost of publishing (just barely; PLoS ONE only charges $1350) and manuscripts are reviewed for accuracy but not "subjective perceived impact". This frees up lots of resources while taking the burden off the libraries. The crap will still sink.


22. Jamie on June 24, 2010 2:43 PM writes...

One beneficial aspect of low-tier publications, at least in the life sciences, is that text-mining applications can use their results to populate databases on protein interactions and gene ontologies. These databases require a lot of not-so-interesting experimental results to be useful to the community at large.
Maybe we could increase the efficiency of the low-tier publishing groups by developing programs that identify scarcely populated areas of the databases and by pointing those groups in that direction.
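
[A minimal sketch of this gap-finding idea, assuming a flat table of text-mined protein interactions; the protein names, pairs, and coverage threshold below are all invented for illustration.]

    from collections import Counter

    # Toy interaction table of the sort a text-miner might populate
    # from low-tier papers; every protein name and pair here is invented.
    interactions = [
        ("PROT_A", "PROT_B"), ("PROT_A", "PROT_C"),
        ("PROT_A", "PROT_D"), ("PROT_E", "PROT_F"),
    ]

    # Count appearances per protein; rarely seen proteins mark the
    # scarcely populated regions worth steering new work toward.
    coverage = Counter(p for pair in interactions for p in pair)

    THRESHOLD = 2  # arbitrary cutoff for "scarcely populated"
    sparse = sorted(p for p, n in coverage.items() if n < THRESHOLD)
    print("Under-covered proteins:", sparse)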


23. Vader on June 24, 2010 3:01 PM writes...

Derek, what you're calling a bug, academia calls a feature. Don't look to academia for solutions.


24. Larry on June 24, 2010 3:03 PM writes...

In my government post (biomedical research), part of my annual performance review is a list of publications (from the previous 2 yrs) on which I am a co-author, along with each journal's impact factor. In other words, there is an incentive to aim high and to publish fewer papers of the smallest-publishable-unit variety.


25. CMCguy on June 24, 2010 3:04 PM writes...

Although it may seem contradictory, I think both #12 Will and #16 RM make valid points. Academics have to study things that will ultimately help them get funded, and most grant agencies don't seem willing to strongly support more speculative ventures. This creates incremental advances from a focus on the probable, rather than potentially more far-reaching progress on the possible (echoes of #15, d). Of course, the latter carries a high risk of failures, which would be frustrating for all parties: the researchers, the funding sources, and the taxpayers/public.

At the same time, attempting to assign scores for the value of a publication and to restrict what gets into the literature could be foolish, because what may be junk to one person may be a treasure to another. I believe there is still much we don't know, or can't adequately control, and even classical techniques can require improvements for particular applications. I appreciate that it can be difficult and frustrating to get through the clutter, but I would hope that over time one can develop the ability to discern and pull out what is needed for a plan or problem.


26. hn on June 24, 2010 3:33 PM writes...

As funding and jobs grow scarcer, both scientists and funding agencies/employers are growing more risk averse. As budgets continue to be cut in industry, we'll see less innovation too. Better to work hard and predictably crank out stuff than to risk getting no publishable results. Reliable is better than creative. Of course, some stars are reliably creative. Nevertheless, our overall scientific culture has changed.


27. #26 on June 24, 2010 3:38 PM writes...

#19: I wish I had your problem; I have the opposite. I have synthetic work that is clearly (IMHO) up to par for publication as is, but my advisor says the project isn't finished yet. I agree there's more that can be done, but I really need a publication to move on, and what he wants from me may not be possible in the time I have left. I would say publish it and get out of chemistry, like I am. One honest grad student can't change the system. Sad, but true.


28. Bjoern Brembs on June 25, 2010 12:03 AM writes...

You wrote:
> Too many academic groups seem to me to work on
> problems that are beneath them.

WTF? You know what will save humanity in 100 years? You know that some discovery you consider beneath you will remain completely irrelevant for as long as anybody can access the result? What kind of astrologist or prophet do you think you are? Einstein famously didn't think much of quantum mechanics, and maybe he even thought it was beneath certain people to study it and that they should rather do other things. You think you know science better than Einstein knew his physics? More likely, your hubris is light-years beyond Einstein's, and he would never have thought quantum mechanics was beneath anyone. Only people who vastly overestimate themselves would ever think that a scientific discovery, no matter how small and irrelevant at the time of its discovery, would necessarily remain that way. My bet: you've misclassified yourself in your caste system of good, mediocre, and bad science by at least two castes.


29. Donough on June 25, 2010 6:13 AM writes...

The problem for me is the lack of realism in academic papers. Quite often I refer to membrane papers, and quite often I find comments suggesting that this product is "the best" or "better than the literature reviewed." This gives the product a sense of viability.


30. Sticky on June 25, 2010 6:20 AM writes...

I am a chemist, and sometimes I feel (and this could be a terrible idea) that brief communications in synthesis (and in other fields too) should be stopped. You often see communications that could easily be made into well-written full papers, easier to understand and giving a better view of the authors' contribution to science. However, the trend is the opposite: every journal encourages brief communications (you even have them from J Med Chem now). People publish some results, and their full-length papers never surface. Stopping this would also discourage the many scientists looking for short and quick success, and it would help the mediocre category read full papers and get better equipped.


31. Maks on June 25, 2010 7:47 AM writes...

Stop wasting other people's time. When you send a paper out for review that is complete crap, you are wasting a lot of other people's time. Seriously reviewing a paper takes at least several hours that could be used to do real science. It's especially annoying to reject a paper and then get it back again from another journal. If your paper is not suited for Nature (be honest with your paper and yourself), then don't send it there. If your paper is lacking crucial experiments, do them first and then send it; stop wasting the reviewers' time. People complain about peer review, but on the other hand we are expected to review papers for free, and there is simply too much junk out there that wants to be published.


32. processchemist on June 25, 2010 7:59 AM writes...

"given the trend in industrial research, maybe the market of synthetic chemists is oversaturated, and a reduction of the number of groups training chemists is in order anyway"

In my experience, the oversaturation of chemists on the market matches a lower preparation of the younger ones. I'm talking about synthetic chemistry.
I've seen some awful examples in the industrial field. Obviously, a young chemist maybe never dealt with grignard reagents in his experience. But he MUST know that grignards must be handled in an air and moisture free enviroment.
I'm also aware that if you work on the usual scale of most academic research, probably you never crystallized anything: but you SHOULD know how to perform a generic crystallization (and I'm not talking about picking the right polymorph with the correct particle size distribution obtaining a processable slurry).
Let's not talk about azeotropes and distillations...


33. bmaher.sciwriter on June 25, 2010 12:15 PM writes...

Our metrics special at Nature seems to bear out some of these ideas. Most dept. heads and deans we spoke to say they care more about quality than quantity, but that produces its own traps. If you're interested: http://www.nature.com/news/2010/100616/full/465860a.html
HHMI and some other institutions say they practice what Peter Lawrence preaches, relying on the quality of a handful of publications. This should be regardless of the journal where they were published; otherwise, that can alter research practices, too.


34. anon on June 25, 2010 8:59 PM writes...

I am always amazed by how many times the same data can be published.


35. fan on June 26, 2010 8:32 AM writes...

I agree


36. ben on June 26, 2010 10:28 AM writes...

I think that one problem, particularly for early-career scientists, is that the output from high-risk projects by bright minds resembles the output from the real idiots/hacks/lazy. I think this increases the willingness to produce predictable results that at least indicate that one has a good work ethic. It's easy for incompetence (in managing a lab, etc.) to be mistaken for someone attacking a particularly intractable problem.

The best strategy I've seen (i.e., working on the difficult problem but keeping an eye on publishable alternatives/low-hanging side projects) usually produces mediocre results in the short term (unless one gets unusually lucky). This at least separates potential-plus-work-ethic from the absolute bottom of the barrel.

I have difficulty envisioning how else to separate out the dregs without removing the visionaries. This might be more specific to biology, though.


37. Kaleberg on June 26, 2010 3:31 PM writes...

I think the emphasis on citation is rather silly. In computer science, at least, seminal papers are rarely cited. Citations usually reflect current popularity of both the subject matter and its current popularizers, not some metric of the paper's actual, eventual worth. In fact, one source of great ideas is to look for uncited papers, as this indicates prematurity.

When you are searching for a particular result, every false hit is a worthless paper. This might be a reason to start using a better search engine.


38. Will Hunting on June 27, 2010 7:26 AM writes...

An interesting counterpoint to this post:

http://blogs.nature.com/ue19877e8/2010/06/22/in-which-we-stand-on-the-shoulders-of-midgets


39. Mr. Gunn on June 27, 2010 4:59 PM writes...

An ethic of "fail faster and try again" would stimulate innovation in academic research. There's a reason even the good VC people tend to encourage this. Unfortunately, this can never happen if evaluation is based on the impact factor. It's demonstrably not correlated with research success and it siphons millions of dollars away from universities while siphoning millions of hours of free labor in the form of peer review.


40. mad on June 28, 2010 4:24 PM writes...

The "Market" will self correct and money will eventualy go to the most productive. Why is eveyone worried about others work? If its garbage they wont go far. Most know how to filter what we read..whats the problem again?

Also #19 publish you need to write...you wont collapse the foundations of scince just because you didnt re-invent matter.


