Corante

About this Author
College chemistry, 1983

Derek Lowe The 2002 Model

After 10 years of blogging...

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe

Chemistry and Drug Data: Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline


September 8, 2011

Publishing, Perishing, Buying and Selling


Posted by Derek

Here's another article in the Guardian that makes some very good points about the way we judge scientific productivity by published papers. My favorite line of all: "To have "written" 800 papers is regarded as something to boast about rather than being rather shameful." I couldn't have put it better, and I couldn't agree more. And this part is just as good:

Not long ago, Imperial College's medicine department were told that their "productivity" target for publications was to "publish three papers per annum including one in a prestigious journal with an impact factor of at least five." The effect of instructions like that is to reduce the quality of science and to demoralise the victims of this sort of mismanagement.

The only people who benefit from the intense pressure to publish are those in the publishing industry.

Working in industry feels like more of a luxury than ever when I hear about such things. We have our own idiotic targets, to be sure - but the ones that really count are hard to argue with: drugs that people will pay us money for. Our customers (patients, insurance companies, what have you) don't care a bit about our welfare, and they have no interest in keeping our good will. But they pay us money anyway, if we have something to offer that's worthwhile. There's nothing like a market to really get you down to reality.

Comments (26) + TrackBacks (0) | Category: Academia (vs. Industry) | The Scientific Literature


COMMENTS

1. HK on September 8, 2011 10:49 AM writes...

"It's well known that small research groups give better value than big ones, so that should be the rule."

Is that true, or was that a bit of sarcasm that I missed?

Permalink to Comment

2. leftscienceawhileago on September 8, 2011 10:59 AM writes...

Hear hear!

Permalink to Comment

3. TJMC on September 8, 2011 11:09 AM writes...

Derek - I wholeheartedly agree with your point that the best way to measure/guide how we perform Pharma R&D is how well society values the results. Lots of time and possible paths to get there, but hopefully we keep that end goal in our "line of sight".

Permalink to Comment

4. Screening for ideas on September 8, 2011 11:18 AM writes...

This begs the question: is the number of interesting new discoveries proportional to the number of groups or the number of people? It is a little like screening compounds. Do you want more scaffolds or more analogs? I think most would agree that more scaffolds are better. Is that true of research productivity?

Assuming a fixed funding pool,

If the former, that would argue for more groups and necessarily smaller sizes of groups.

If the latter, that would argue for bigger groups and necessarily fewer of them. Presumably there is also some efficiency gained in a large group but efficiently trolling through a heavily mined space may not be an advantage.

Permalink to Comment

5. TJMC on September 8, 2011 11:45 AM writes...

#4 Screen - Great question, but there is one more variable: where can technology best leverage the benefits of each model? For instance, larger groups could benefit from novel screening tools like datamining with semantics, etc. On the other hand, more and smaller groups could take that tech (which may cost a lot less than larger groups), and overcome the advantages of traditional scale in those larger groups.

On another avenue, could ("really hot area") collaboration tools improve the performance of decentralized and more diversified groups? Improve the innovation of large homogeneous groups?

In other words, the traditional answers from organizational models can get turned on their head by unexpected innovations, or by the possible success of some that are already in use. Look at what steam power, assembly lines or the internet did to so many other business models.

Permalink to Comment


7. MTK on September 8, 2011 12:20 PM writes...

The libertarian/free market person in me says that if the scientific community were truly a free market where the customer pays for the product, then the best and most efficient model would eventually reveal itself.

The dummy in me, however, doesn't quite know what the product is or who the consumer is. When the remuneration is in the form of future grant funds or tenure handed out by folks who may or may not be the consumer, then economic principles are pretty much out the window.

Anyway, if we want people to publish good work, not just any work, then instead of publications being the measure, wouldn't citations of those publications (excluding reviews) be a better measure?

Permalink to Comment
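[An aside on the citation-based measure suggested above: the h-index is one widely used version of it, defined as the largest h such that h of one's papers each have at least h citations. A minimal sketch, with made-up citation counts for two hypothetical labs publishing the same number of papers:]

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar at its rank
        else:
            break
    return h

# Hypothetical citation counts: same paper count, very different impact.
prolific = [3, 2, 2, 1, 1, 1, 0, 0, 0, 0]      # many papers, thinly cited
selective = [40, 25, 12, 9, 6, 0, 0, 0, 0, 0]  # fewer cited papers, cited heavily

print(h_index(prolific))   # 2
print(h_index(selective))  # 5
```

As the example shows, the metric rewards a body of well-cited work over sheer paper count, though (as later comments note) any single number invites its own gaming.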

8. SteveM on September 8, 2011 12:22 PM writes...

“Academic politics are so vicious precisely because the stakes are so small.”

Permalink to Comment

9. In Vivo Veritas on September 8, 2011 12:43 PM writes...

Imperial College London?
This guy must be exceeding expectations:
Steve Bloom (gut hormone guy).
1572 papers since 1969.
Wow.

Now, any of you big pharma types spend any time or effort trying to replicate his results or develop his hypotheses?

Permalink to Comment

10. biologist on September 8, 2011 1:04 PM writes...

To #4 (is number of real discoveries proportional to number of groups or number of scientists):

Unfortunately, neither. There have been tens of thousands of scientists and thousands of groups working on cancer, and the result was less than anyone expected. I think it is intellectual saturation: no matter how many people work on a question, ideas come in a certain pattern and each wrong hypothesis has to be falsified. That takes time. In hot areas, major advances are often published in 3, 4, or 5 articles in parallel (e.g. HIV). While three might be good for replication, articles #4 and #5 are not really needed.

Permalink to Comment

11. drug_hunter on September 8, 2011 1:42 PM writes...

Let's start with a journal that many or most of us probably know well -- J. Med. Chem. I'd guess that

Permalink to Comment

12. Curious Wavefunction on September 8, 2011 2:19 PM writes...

"It's well known that small research groups give better value than big ones, so that should be the rule."

Is that true, or was that a bit of sarcasm that I missed?

It's not sarcasm, it's true. Small, intensely focused groups are usually much better at producing ideas and encouraging free thinking without the bureaucracy and official sanctioning. One of the casualties of "big science" is the gradual hemorrhaging of such groups from the science world.

Just think of the top twenty scientific discoveries of the twentieth century. How many do you think were made by big groups? (Think "Watson and Crick")

Permalink to Comment

13. Hap on September 8, 2011 3:04 PM writes...

Markets are good for things people need or want to have now and that can be measured and compared easily. The outcomes of research are unpredictable and unquantifiable in most cases. Research isn't (or shouldn't always be) directed at short-term goals whose progress can be easily evaluated, and it may sometimes tell people things they don't want to hear. That seems like the kind of task markets would be really bad at managing.

The problem of publication counts as a measure of scientific worth probably stems more from grant requirements than from publishing itself - universities and granters want to know whether they've hired the right people or given the right people money, and want readily evaluated measures to tell them. Impact factors and publication counts are easy to measure - hence they become benchmarks, even if they don't mean anything.

My advisor had lots of papers, and while I'm sure they weren't all great, many of them were useful. I don't see why having lots of papers should be embarrassing, unless none of them (or almost none of them) are any good.

Permalink to Comment

14. Anonymous on September 8, 2011 3:08 PM writes...

Wavefunction - the human genome sequencing is a pretty big advancement, and so were nuclear power and putting a man on the moon. Although the last one is arguably an engineering rather than a scientific achievement, I'd say it's fair to argue that big groups can make contributions too. Some things just can't be accomplished by a small group. Just look at drug discovery in general! Now, that's not to say that small groups aren't important as well; in fact, the NIH intramural program is modeled almost entirely on highly productive small groups that collaborate with each other. Just trying to play devil's advocate here...

Permalink to Comment

15. Curious Wavefunction on September 8, 2011 3:55 PM writes...

Anon, yes, big groups have their place and can of course play a role in certain cases (think LHC). But as with everything else you can do too much with big science. A couple of years ago Bob Weinberg from MIT, who is a cancer pioneer, wrote a great article in Cell arguing that the importance of big groups and consortiums has been overestimated in cancer research and that the creative give-and-take of ideas endemic to small groups is being stifled, leading to a dearth of good ideas in basic cancer research. The fact is that science flourishes best when bureaucracy is kept to a minimum, something that's inherently hard to do in a big group. That does not mean that big groups will automatically stagnate, only that you will have to go to extra lengths to make sure that the strings are loosened. But as far as I can tell this does not often happen, and groups like the MRC, which produced a dozen Nobel Prize winners through minimal interference with scientific work, are rare. Ditto for big pharmaceutical companies. I think we had a post-war era of big science that produced some valuable results. But the pendulum has swung to the other extreme and we again need to push small-scale creative science.

Permalink to Comment

16. Curious Wavefunction on September 8, 2011 3:57 PM writes...

It's also interesting that you mention nuclear power. The only nuclear reactor that actually made money for its manufacturers was the TRIGA, a model that was designed by a small group of very smart people including Freeman Dyson and Edward Teller (this is nicely recounted in the chapter titled "Little Red Schoolhouse" in Dyson's book "Disturbing the Universe").

Permalink to Comment

17. Anonymous on September 8, 2011 4:41 PM writes...

I didn't know that - thanks! I was really referring to the Manhattan project, which was one of the models for and big successes of "big science" in the 20th century, in my opinion. I'll be sure to check out the book.

I should mention that I pretty much agree with you on the small group model -- just didn't want to discount some of the truly amazing discoveries made by bigger organizations.

I think in terms of synthetic chemistry, what I hope this would mean is an end of the huge empires of synthetic chemists with groups of 30 students/postdocs, and an increase in the number of smaller, more nimble and creative groups. The groups I've been in have generally worked best at a smaller size.

I think there's a critical mass for innovation that's somewhere around 3-7 people in a group, where you have good institutional memory and core knowledge, but the group is small enough that the PI can truly focus on the projects that are ongoing in the lab. But that's just my 2 cents and what do I know...

Permalink to Comment

18. TJMC on September 8, 2011 4:45 PM writes...

#15 CWF - I think we tend to paint these issues with too broad a brush. Big groups can succeed and innovate, and small ones can languish. Some of it is the talent there, some on leadership, some on structure and goals/process...

It seems to me that the process of innovation, cross-pollinating ideas and problems, etc., CAN work in either size group. It just takes enlightened management or a determination to collaborate despite management. And I have seen it work both ways –in large as well as small groups.

Before some say that “you cannot change (be free) until leadership leads or changes” (something I have heard far, far too many times in the past 30 years), I would look to what some are calling the “Arab Spring” effect. Time, distance and collaboration no longer have to follow the traditional paths that we have experienced (which is what most other commenters are referring to).

Permalink to Comment

19. GC on September 8, 2011 4:51 PM writes...

This isn't much different from counting lines of code (LOC) in software development, where people were scored by the number of lines of code they wrote.

It took management decades to learn the difference between tons of assembly-line, churned-out buggy shit that dies the first time it sees production data, and the good stuff that might be only a couple hundred lines and did take 2 or 3 weeks to write, but has no bugs, covers all the corner cases, and isn't brittle.

Permalink to Comment

20. Stephen on September 8, 2011 7:41 PM writes...

The proposal from Imperial seems quite modest. If you have a group of, say, 5, then 3 papers a year, including one good one, is about par for a chemist. Even I would be worried if I wasn't getting that.

Those that have many hundreds of papers are probably in big collaborations (where the effective number of students/collaborators can be up to a hundred).

Scientists may believe that if they weren't measured so much they would be more productive. I suspect not. Good scientists will produce good work no matter what. It is only the mediocre ones that fill up the journals with garbage. The key is to use the right metric to analyze performance.

Permalink to Comment

21. drug_hunter on September 8, 2011 8:05 PM writes...

Let's start with a journal most of us know pretty well -- J. Med. Chem. I'd guess that less than 10% of the articles in JMC are worth reading...

Permalink to Comment

22. Bobby Shaftoe on September 8, 2011 8:14 PM writes...

@15: Wavefunction, a great comment. The only thing you did wrong (other than writing "consortiums" rather than "consortia") was to omit the reference to the Weinberg paper. I was curious enough to find it at Cell 2006, 126, 9. I'm familiar with some of Weinberg's stuff, and it is difficult to dispute that he is a beast of a cancer researcher. He makes a very compelling argument in this essay about why our funding balance needs to be reconsidered to reasonably support both "innovative" small groups and "reduction to practice" large groups. It should be recognized that these are two extremes and the usual applicants lie somewhere between on the spectrum. Unfortunately, the pendulum currently resides closer to the data-producing, metrics-meeting, turn-the-cranking large group side of things....

Permalink to Comment

23. mike on September 9, 2011 10:01 AM writes...

@GC - we got the same thing in medicinal chemistry. It was amazing how many more reactions were run when "number of reactions" was the major criterion used to measure productivity. And when the measure was "number of compounds registered", that number went through the roof. But they were mostly useless reactions, and useless compounds: tons of easy-to-make, meaningless analogs thrown in to pad the numbers.

As a friend of mine keeps pointing out, "you get what you measure."

Permalink to Comment

24. MIMD on September 9, 2011 6:52 PM writes...

Derek,

in blogging as you do (and as I do) for years, you've probably written 100 x the amount that the average academic under publication pressure does.

And what you write is of far more value, for the most part.

Permalink to Comment

25. Elvesier on September 10, 2011 1:17 PM writes...

Just announced: Elvesier's list of new chemistry journals for 2012. Looks awesome.

Permalink to Comment

26. Sili on September 10, 2011 4:50 PM writes...

Ah, Colquhoun. It did indeed sound like him.

Incidentally, he's someone who doesn't have much trust in the free market. His paean to the NHS is touching.

There's nothing like a market to really get you down to reality.
Which is why the Republican Party is so beholden to the ideas that Global Warming and the theory of evolution are Liberal conspiracies?

Frankly, I don't like the fact that all research should be subject to the whims of people whose attention cannot see beyond the next financial quarter or at best the next election cycle.

CERN is not perfect, but at least it's there. Unlike the Superconducting Supercollider.

And what market will ever make it profitable to cure malaria or prevent the spread of HIV?

Permalink to Comment


RELATED ENTRIES
Gitcher SF5 Groups Right Here
Changing A Broken Science System
One and Done
The Latest Protein-Protein Compounds
Professor Fukuyama's Solvent Peaks
Novartis Gets Out of RNAi
Total Synthesis in Flow
Sweet Reason Lands On Its Face