Corante

About this Author
(Photo: college chemistry, 1983)

(Photo: Derek Lowe, the 2002 model)

(Photo: after 10 years of blogging. . .)

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter: Dereklowe

Chemistry and Drug Data: Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline


February 20, 2014

The NIH Takes a Look At How the Money's Spent


Posted by Derek

The NIH is starting to wonder what bang-for-the-buck it gets for its grant money. That's a tricky question at best - some research takes a while to make an impact, and the way that discoveries can interact is hard to predict. And how do you measure impact, by the way? These are all worthy questions, but here, apparently, is how things are being approached:

Michael Lauer's job at the National Institutes of Health (NIH) is to fund the best cardiology research and to disseminate the results rapidly to other scientists, physicians, and the public. But NIH's peer-review system, which relies on an army of unpaid volunteer scientists to prioritize grant proposals, may be making it harder to achieve that goal. Two recent studies by Lauer, who heads the Division of Cardiovascular Sciences at NIH's National Heart, Lung, and Blood Institute (NHLBI) in Bethesda, Maryland, raise some disturbing questions about a system used to distribute billions of dollars of federal funds each year.

Lauer recently analyzed the citation record of papers generated by nearly 1500 grants awarded by NHLBI to individual investigators between 2001 and 2008. He was shocked by the results, which appeared online last month in Circulation Research: The funded projects with the poorest priority scores from reviewers garnered just as many citations and publications as those with the best scores. That was the case even though low-scoring researchers had been given less money than their top-rated peers.

I understand that citations and publications are measurable, while most other ways to gauge importance aren't. But that doesn't mean that they're any good, and I worry that the system is biased enough already towards making these the coin of the realm. This sort of thing worries me, too:

Still, (Richard) Nakamura is always looking for fresh ways to assess the performance of study sections. At the December meeting of the CSR advisory council, for example, he and Tabak described one recent attempt that examined citation rates of publications generated from research funded by each panel. Those panels with rates higher than the norm—represented by the impact factor of the leading journal in that field—were labeled "hot," while panels with low scores were labeled "cold."

"If it's true that hotter science is that which beats the journals' impact factors, then you could distribute more money to the hot committees than the cold committees," Nakamura explains. "But that's only if you believe that. Major corporations have tried to predict what type of science will yield strong results—and we're all still waiting for IBM to create a machine that can do research with the highest payoff," he adds with tongue in cheek.

"I still believe that scientists ultimately beat metrics or machines. But there are serious challenges to that position. And the question is how to do the research that will show one approach is better than another."

I'm glad that he doesn't seem to be taking this approach completely seriously, but others may. If only impact factors and citation rates were real things that advanced human knowledge, instead of games played by publishers and authors!
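For concreteness, the "hot"/"cold" labeling that Nakamura describes comes down to a comparison along these lines. This is only an illustrative sketch: the panel names, citation rates, and the single field-wide impact factor are all made up, and in practice each panel would be compared against the leading journal of its own field.

```python
# Hypothetical sketch of the "hot"/"cold" panel labeling described above:
# a panel is "hot" if the publications from its funded grants are cited
# more, on average, than the impact factor of the field's leading journal.
FIELD_LEADING_JOURNAL_IF = 9.0   # assumed impact-factor benchmark

panels = {
    "Panel A": 11.2,   # mean citations per paper from this panel's grants
    "Panel B": 6.4,
    "Panel C": 9.3,
}

for name, rate in panels.items():
    label = "hot" if rate > FIELD_LEADING_JOURNAL_IF else "cold"
    print(f"{name}: {rate:.1f} citations/paper -> {label}")
```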

Comments (34) + TrackBacks (0) | Category: The Scientific Literature | Who Discovers and Why


COMMENTS

1. NJBiologist on February 20, 2014 10:52 AM writes...

I don't have access to the article right now, so this may be a moot point--but has either Lauer or Nakamura considered the possibility that the funding rate is so low that essentially everything that gets funded is a good proposal? If a study section funds two good projects, I wouldn't expect to find a difference in citations (or any other quality measure) between the two.

Permalink to Comment

2. they get zombies on February 20, 2014 10:53 AM writes...

at least according to this redditor:

http://www.reddit.com/r/Bitching/comments/1ycze0/it_was_the_year_2015/

Permalink to Comment

3. wgc on February 20, 2014 10:53 AM writes...

--The funded projects with the poorest priority scores from reviewers garnered just as many citations and publications as those with the best scores. That was the case even though low-scoring researchers had been given less money than their top-rated peers.--

There are two ways to interpret this:

1) As researchers and NIH have long contended, competition for grant funding is so intense that there are many more worthwhile grants than there is money for them. So the lower-ranked funded grants are just as good (within a margin of error) as the higher-ranked grants. This is the conclusion drawn by folks advocating more funding for NIH.

2) If the lower-ranked grants did indeed get less funding, then the NIH could get a better return on its money by funding more grants at lower award amounts for the same total research spending. This is an argument against funding so many big grants and large projects, a trend that is accelerating with the push by the NIH director and many clinicians to fund translational research projects. There are a lot of individual researchers who favor this, which is not inconsistent with more overall money for research at NIH.

Permalink to Comment

4. Gerry Atrickseeker on February 20, 2014 11:19 AM writes...

Every scientist who is worth his/her salt can recount experiences where an NIH study section trashed a grant on a topic that later turned out to be quite important. It has been pointed out many times that NIH study sections are inherently conservative and tend to reflect the existing consensus in a field. Major breakthroughs, however, come from science that disrupts consensus. This has been understood ever since Thomas Kuhn and The Structure of Scientific Revolutions. The issue is how to identify breakthrough science and differentiate it from misguided error. To give the NIH some credit, in recent years it has implemented a number of granting programs that try to stress innovation (e.g. the Pioneer awards) as well as funding for beginning investigators who may have some new ideas.

Almost the converse situation prevails in publication where the ilk of Nature, CELL and other high profile journals try to focus exclusively on ‘hot’ topics. Over time some of the designated hot areas prove to be not so hot and dwindle away, but nonetheless the ‘hot’ articles will have generated high impact factors. No wonder there is a discrepancy between NIH funding and publication impact!

Although there is no need to obsess about the grant/publication disparity, it is clear that the NIH peer review system could use some renovation (as well as more money to dispense). The titles and functions of study sections still primarily reflect a disease/organ system specific orientation. However, current biomedical science is developing information and insights that cut across traditional boundaries. If the study section system were more reflective of the thrust of current research there would probably be fewer misfires on grant funding.
http://scienceforthefuture.blogspot.com/

Permalink to Comment

5. DCRogers on February 20, 2014 11:25 AM writes...

As a Bayesian, I say always look at the priors.

In this case, the results are compatible with a pool of applications that has many more good projects than can be funded -- the job of the review is to weed out the duds.

In this case, one would not need or expect the review process to show any useful prioritization among the 'good' projects: yet it could have been fully successful.

This is further complicated if the funding is supposed to go for cutting-edge research -- which implies many worthwhile projects will still fail. Equating project failure (or its cousin, lack of citation) with funding failure will simply lead to overly cautious funding of 'safe' research -- a contradiction in terms, with results guaranteed to be both cite-able and non-earthshaking.

Permalink to Comment
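Comments 1 and 5 are making a range-restriction argument, and a small simulation shows how it plays out. The numbers below (application pool size, score noise, citation model, payline) are entirely made up for illustration; the point is only that even informative review scores can show essentially no score/outcome correlation once you look at the funded slice alone.

```python
# Hypothetical range-restriction sketch: review scores track "true quality"
# with noise, citations follow quality, and only the top ~12% get funded.
import numpy as np

rng = np.random.default_rng(0)
n_apps = 5000                                   # made-up application pool
true_quality = rng.normal(size=n_apps)
score = true_quality + rng.normal(scale=1.0, size=n_apps)       # noisy review
citations = 50 * np.exp(0.3 * true_quality + rng.normal(scale=0.5, size=n_apps))

funded = score > np.percentile(score, 88)       # harsh payline, ~12% funded

print("score vs. citations, all applicants:",
      round(np.corrcoef(score, citations)[0, 1], 2))
print("score vs. citations, funded only:   ",
      round(np.corrcoef(score[funded], citations[funded])[0, 1], 2))
# The full-pool correlation is clearly positive; within the funded slice
# it shrinks toward zero, even though the scores were doing their job.
```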

6. BG on February 20, 2014 12:02 PM writes...

I haven't heard anything positive about how these study sections are conducted. Mainly, I hear that there are people in there voting on grants outside of their field, so they give them bad scores because they don't understand them. Supposedly, there is a lot of aggression, too.

Permalink to Comment

7. Vaudaux on February 20, 2014 12:21 PM writes...

Consistent with #1, 2, 4: I have sat on many study sections. The impression I have is that the only proposals funded these days are those categorized by reviewers in the meetings as having impact scores of either 1 (Exceptionally strong with essentially no weaknesses), 2 (Extremely strong with negligible weaknesses) or occasionally 3 (Very strong with only some minor weaknesses).

There is no reason to expect that, within this narrow range of scores, the scores would correlate with the number of resulting publications or the citations they receive.

Permalink to Comment

8. Anonymous on February 20, 2014 12:30 PM writes...

Doesn't this finding just show that:

1. Nobody really has any clue what will deliver great impact
2. You can deliver just as much impact with a much smaller budget as with a larger one.

So on that basis, just give everyone a small budget, so that everyone collectively can deliver more impact overall.

Permalink to Comment

9. MoMo on February 20, 2014 12:41 PM writes...

It also shows the sorry state and caliber of the reviewers. The study sections are filled with 3rd to 6th rate scientists and the NIH granting system has turned into welfare for the educated and degreed.

Permalink to Comment

10. Anonymous on February 20, 2014 12:42 PM writes...

If the grants that receive a larger amount of money do not result in either a larger number of publications or higher impact publications, then what is the point of the larger award? Are these labs just inefficient with the funds? It would be interesting to see some sort of metric for productivity versus total funding.

Permalink to Comment

11. Bernard Munos on February 20, 2014 1:38 PM writes...

There is clearly an issue in how grant money is being distributed. In short, too much money goes to fund safe research instead of breakthrough research. That problem, however, is not of NIH's making. Just the opposite, it has resisted NIH's attempts to correct it.

Evidence of conservatism is not hard to find. Try getting a grant to study an unvalidated target, or to explore a tantalizing new hypothesis. Your probability of success drops significantly, and even more so if you are a young PhD without an extensive publication record. The result is that we keep studying the same 50 kinases, and ignore the other 450; or keep investigating the same hypotheses (e.g., beta-amyloid for Alzheimer's) and underfund other worthwhile ideas.

NIH is well aware of the dangers of keeping to our zones of comfort, and has tried to change that through pioneer grants, new investigator awards, and more recently NCATS, among other attempts. This being Washington, however, changing established practices is never easy, and often met with end runs that can become ugly. Reallocating billions of dollars from safe science to breakthrough science creates winners and losers, and it is easy for losers to charge (through their favorite congressmen) that the changes will recklessly squander taxpayer money on unproven science that is likely to fail. The resulting debate is usually enough to stall the changes.

Yet, the point of science is to explore uncharted territory. This is where innovation comes from. "Validated targets" typically yield one or two drugs, which means that, by the time they are validated, it is time to move on and fund something else. We often read laments that research is tougher because the low-hanging fruit has been picked. Not true. We are simply not searching where those fruits are, and are limiting ourselves to areas that have been over-harvested.

From the taxpayers' standpoint, the reckless behavior is not to spend money on unproven science, but to spend it on established science that is well understood. For grant decisions, the question that matters is not whether a proposal is backed by abundant prior research, but whether, if successful, it will change therapy. Transformational research is what enriches us as a society. I would argue that grants that do not meet that standard should be defunded, and the money reallocated to investigators keen to investigate more promising, if unproven, areas.

Permalink to Comment

12. Lu on February 20, 2014 1:40 PM writes...

9. Anonymous on February 20, 2014 12:42 PM writes...
what is the point of the larger award? Are these labs just inefficient with the funds?

A smaller award usually means that graduate students have to teach classes to support themselves and do science on nights and weekends.

Permalink to Comment

13. gwern on February 20, 2014 2:07 PM writes...

> I understand that citations and publications are measurable, while most other ways to gauge importance aren't. But that doesn't mean that they're any good, and I worry that the system is biased enough already towards making these the coin of the realm.

What's nice about something like this is that the contradiction involves two things we love to hate: if the citations & publications are a meaningful measure of importance, then this result condemns the review ratings; but if we go the other direction, then it refutes the use of citations/publications.... They can't *both* be right.

Permalink to Comment

14. dearieme on February 20, 2014 2:59 PM writes...

I remember applying for two grants at the same time. One was for the investigation of an original and bloody good idea; the other was for a routine study that one of my postgraduate students thought he'd like to pursue as a postdoc. The first was turned down, the second funded. I found out that one problem with the first was that they couldn't even decide which committee to pass it to; the second was nodded through because (unknown to me) they had made a policy decision to favour that area.

Such are the lunacies of the scientific bureaucracies.

Permalink to Comment

15. SteveM on February 20, 2014 3:52 PM writes...

Re: "The funded projects with the poorest priority scores from reviewers garnered just as many citations and publications as those with the best scores"

Adding to #1 NJBiologist, using ordinal rankings for any purpose other than ordinal ranking is statistically pathological - but pervasive.

And as suggested by others, who knows what the grant money is used for? I had a co-op job in an analytical lab of the Philadelphia Water Department when I was in college. One day my boss came to my desk with an armful of lab equipment catalogs. He told me to buy $15K worth of stuff before Friday, because that's what was left from an EPA grant. Government money is considered use-it-or-lose-it from grant to grant, so the last thing a PI does is leave even a penny unspent.

The delivered stuff sat in a storage room with no apparent purpose.

Permalink to Comment

16. Emjeff on February 20, 2014 4:07 PM writes...

The bigger question is why an organization with a $30 billion budget only dispenses $10 billion in grants. What are we getting for that $20 billion? I would guess not much.

Permalink to Comment

17. synorgchem on February 20, 2014 5:06 PM writes...

I know of many NIH grants that go to blatant frauds; even when the agency is alerted to the sham in explicit detail, the money still flows.
It's obvious they simply can't attract the talent necessary to see through the smoke-and-mirrors shows.

Permalink to Comment

18. MoMo on February 20, 2014 5:07 PM writes...

One other issue: the NIH measure should be in PATENTS and COMMERCIALIZED PRODUCTS, not in how many times an article has been viewed or cited.

But that's what we have now from the NIH. It's an antiquated, welfare-based system that keeps scientists out of unemployment, as judged by lackluster and third-rate scientists.

Permalink to Comment

19. Anonymous on February 20, 2014 6:30 PM writes...

@13
It's a known paradox: a metric can be good and correlative as long as it is not used as a quota or target of sorts.

@18
That would bias the treatment of fundamental research, though depending on the implementation, it might cut for or against it.
Also, there can be a significant (and varying) delay between a publication and any commercial products.

Generally, science tries to promote itself. In reality, technological progress and the improvement of life are difficult to predict and achieve: wanted, but sometimes unintended.

Permalink to Comment

20. fluorogrol on February 21, 2014 3:48 AM writes...

@MoMo: You do realise that the applied research resulting in patents and commercialisation is made possible by fundamental understanding, synthetic chemistry, etc., right?

Permalink to Comment

21. johnC on February 21, 2014 11:57 AM writes...

#18: How many patent applications are filed by a university or hospital depends upon the availability of general funds. MIT files a lot more patents than many outstanding public universities do. Taking patents into account for NIH funding would only further entrench the rich-get-richer dynamic.

Permalink to Comment

22. a. nonymaus on February 21, 2014 12:13 PM writes...

Public-funded research should not be held captive to a temporary monopoly. The entire purpose of a patent is that the discovery is disclosed in return for the monopoly. If the research is done on a public grant given with the stipulation that the results are disclosed via publication, a patent on that research is blatant double-dipping. If industry wants discoveries that are under patent, they need to hire some scientists and do their own damn research.

Permalink to Comment

23. anon on February 21, 2014 2:20 PM writes...

The point that is being missed here reflects the current poor funding level.

Funding levels are such that only the top 12-15% (or less) of grants are receiving funding.

The message that I take from this article is that it is difficult to differentiate between the 90th and 99th percentiles in terms of success potential. I believe that more grants deserve funding than there are funds available.

Another confounding factor is that program officials can select "lower"-scoring grants on hot topics that failed to earn top marks (often due to factors unrelated to the science, such as grantsmanship, the ability to describe broader impact, etc.). Therefore, are we actually comparing highly successful, "cherry-picked" lower-scoring grants against the higher-scored grants?

Permalink to Comment

24. long time pharma on February 21, 2014 2:25 PM writes...

@13 I think they can both be right.

We get more XXX / $ on the lower-scored grants. Well, the metrics are not distinguishing anything, as others have said. But worse than that, the metrics themselves may be misguided.

Pharma has gone through this: more candidates only correlate poorly with more INDs and those only correlate poorly with NDAs.

The problem here is that the NIH does not really believe its mission. In other words, the political forces are not aligned with the practice.

Many researchers see $$$ payoffs down the road for themselves (personally or as funding) and the university tech transfer offices see the same. When this is the focus don't be surprised if innovation looks like it is just a way to start a company. There is nothing wrong with starting companies or discovering drugs, but is that really the NIH goal or even role? Is that what breakthrough science consists of? I fear that this ethos is what is implicitly driving funding decisions.

I would hope that taxpayer funded research is not just about seeing a near term return on investment.

I argue that the NIH should be about generating important new knowledge that *might* lead to companies and products. The way to improve NIH knowledge generation is not to pick the biggest XXX/$, but rather to fund a diverse enough set of proposals that something like 20% totally fail and another 20% fail but turn up something unexpected.

If you aren't failing you aren't trying.

Permalink to Comment

25. lynn on February 21, 2014 2:57 PM writes...

#23 anon is right. With such low funding levels, there is very little difference in quality among the funded grants, and this calculation is meaningless. Also, as I am sitting in an NIH study section right now, I take exception to all this bad-mouthing. The critical parameter is generally the quality of the approach. If there is an innovative idea that is approached incorrectly, it will get a lower score. And peer reviewers have no control over funding; the institutes make those decisions. And the quality of the panels seems pretty darn good to me (if I do say so myself).

Permalink to Comment

26. gwern on February 21, 2014 3:07 PM writes...

"Percentile Ranking and Citation Impact of a Large Cohort of NHLBI-Funded Cardiovascular R01 Grants" http://circres.ahajournals.org/content/early/2014/01/09/CIRCRESAHA.114.302656.abstract

"Methods and Results: We identified 1492 investigator-initiated de novo R01 grant applications that were funded between 2001 and 2008, and followed their progress for linked publications and citations to those publications. Our co-primary endpoints were citations received per million dollars of funding, citations obtained within 2-years of publication, and 2-year citations for each grant's maximally cited paper. In 7654 grant-years of funding that generated $3004 million of total NIH awards, the portfolio yielded 16,793 publications that appeared between 2001 and 2012 (median per grant 8, 25th and 75th percentiles 4 and 14, range 0 - 123), which received 2,224,255 citations (median per grant 1048, 25th and 75th percentiles 492 and 1,932, range 0 - 16,295). We found no association between percentile ranking and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early investigator status, human versus non-human focus, and institutional funding. An exploratory machine-learning analysis suggested that grants with the very best percentile rankings did yield more maximally cited papers.

Conclusions: In a large cohort of NHLBI-funded cardiovascular R01 grants, we were unable to find a monotonic association between better percentile ranking and higher scientific impact as assessed by citation metrics."

Permalink to Comment
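This is not the paper's actual statistical model, but a sketch of the kind of association check that conclusion describes: test whether a funded grant's percentile ranking tracks its citation yield. The grant count matches the abstract; the percentile rankings and citations-per-million values below are stand-in data, not the study's.

```python
# Sketch only: rank-correlation check of percentile ranking vs. citation
# yield for funded R01s. All input values are hypothetical stand-ins.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_grants = 1492                                          # matches the abstract
percentile_rank = rng.uniform(1, 20, size=n_grants)      # assumed payline range
citations_per_million = rng.lognormal(mean=6.0, sigma=1.0, size=n_grants)

rho, p_value = spearmanr(percentile_rank, citations_per_million)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
# With these independent stand-ins, rho is near zero -- the same qualitative
# "no association" picture the abstract reports for the real grants.
```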

27. Jonathan on February 21, 2014 3:27 PM writes...

@emjeff what on earth are you talking about? In 2013, more than $20 billion was spent on extramural research grants, nearly $3 billion was spent on research contracts to companies, and $3 billion was spent on intramural research.

It's not like that's some kind of secret, either.

http://officeofbudget.od.nih.gov/pdfs/FY13/FY%202013%20Full-Year%20NIH%20Mechanism%20Table%20Posting%20.pdf

Permalink to Comment

28. jbosch on February 22, 2014 12:06 AM writes...

@25
"The critical parameter is generally the quality of the approach. If there is an innovative idea that is approached incorrectly, it will get a lower score."

Well, but what if the reviewer is not in a position to judge whether the approach the PI picked for that project is suitable? I've seen enough comments of the type "use this approach instead; don't rely on SPR, it's too difficult to control," and so on.

Or, even better, getting criticized for using a standard technique, in this case X-ray crystallography. Shouldn't that be a strength rather than a weakness? X-ray crystallography has been kicking around for 100 years now, and I would call it a pretty solid technique for investigating proteins at atomic resolution.

Permalink to Comment

29. jbosch on February 22, 2014 12:09 AM writes...

And by the way this article comes to mind:
www.ncbi.nlm.nih.gov/pmc/articles/PMC3446280/

Permalink to Comment

30. Anonymous BMS Researcher on February 22, 2014 12:27 AM writes...

I have long thought that trying to distinguish 97th %ile applications from 85th %ile applications is preposterous. Therefore the only intellectually honest approach is to ask each study section the simple question: which of these proposals would you fund IF DOLLARS WERE SUFFICIENT TO FUND ALL WORTHY PROPOSALS? Then pick entirely at RANDOM from the worthy proposals. Anything other than random selection after an initial quality filter is cognitively invalid.

When politicians object to the random selection, the reply would be, "until funding improves, this is the only honest approach, do not blame us for the problem you created."

Permalink to Comment
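A minimal sketch of the "quality filter, then lottery" procedure comment 30 proposes; the proposal structure, costs, and budget below are hypothetical, and the only real content is the two-step logic (a worthiness screen, then random draws until the money runs out).

```python
# Hypothetical sketch of "quality filter, then lottery" funding.
import random

def lottery_fund(proposals, budget, seed=0):
    """proposals: list of dicts with 'id', 'fundable' (bool), and 'cost'."""
    worthy = [p for p in proposals if p["fundable"]]   # study section's only call
    random.Random(seed).shuffle(worthy)                # blind, random ordering
    awarded, spent = [], 0.0
    for p in worthy:
        if spent + p["cost"] <= budget:
            awarded.append(p["id"])
            spent += p["cost"]
    return awarded, spent

# Example: 12 proposals, 9 judged worthy, money for about half of them.
pool = [{"id": i, "fundable": i % 4 != 0, "cost": 1.5} for i in range(12)]
print(lottery_fund(pool, budget=7.0))
```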

31. DLIB on February 22, 2014 2:05 AM writes...

@29...Spot on!!

The quality of the review process is horrendous!!!

Here is an example from the NIH website on peer review that demonstrates the abysmal way in which scientific review is conducted… I hope that this is a tongue-in-cheek illustration, because it's pretty bad!!

http://public.csr.nih.gov/Pages/default.aspx

At 3:34 the name of the investigator is revealed.

At 4:55 the lead reviewer discusses the strengths of the “ENVIRONMENT” in which he says that the place he works at has all the necessary people and equipment to accomplish the work

At 5:06 the lead reviewer discusses the weaknesses of the “ENVIRONMENT”, in which he says that the available equipment to run Percoll gradients (which is a simple centrifuge, by the way, making the criticism an absolute joke!!!) and his access to fluorescence microscopy are “…detailed but not adequately described…”. That last statement, I guess, relates to his card-key access or something, since this type of microscopy is everywhere in bioscience!!

Permalink to Comment

32. Wowchemisti on February 22, 2014 5:57 AM writes...

I'm a prof with NIH funding, and you are missing a lot. First, some universities charge higher indirects, tuition, etc. So the dollar amount funded may not correlate very well if the expensive places (i.e., the Harvards, which need to give students and postdocs higher pay) have different distributions than the lower-cost, successful proposals from smaller universities. On that note, the second issue: internal politics. Normally, if you get a big cut from NIH, your university helps you out by charging less tuition, giving money back, and offering other soft support. This could be a huge factor. Why do you think NSF researchers get by on $60k/yr and have something like 5 grad students? Third, a single study section doesn't fund one type of experiment. Experiments have different costs, say mechanism-of-action work with a ton of Westerns compared to synthesis. Whatever's hot will move the relative impact scores and distributions. This further complicates the issue. In all, it's way too complex to try to rationalize.

Permalink to Comment

33. Sili on February 22, 2014 4:34 PM writes...

Why not do an RCT? The budget is big enough that they could easily set, say, 10% aside for a control group, to be distributed randomly. Give it a few years before removing the blinding and see if review makes any difference in outcome.

Permalink to Comment
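And a sketch of how such a trial might be read out a few years later, using entirely hypothetical citation outcomes for a peer-review-selected arm and a 10% lottery arm. A nonparametric test like Mann-Whitney is one reasonable choice for skewed citation counts, though it is only an assumption here that citations would be the agreed-upon endpoint.

```python
# Hypothetical read-out of the proposed RCT: compare a citation outcome
# between the peer-review arm and the randomly-allocated control arm.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
peer_reviewed = rng.lognormal(mean=6.9, sigma=1.0, size=900)   # citations/grant
lottery_arm   = rng.lognormal(mean=6.9, sigma=1.0, size=100)   # the 10% control

stat, p = mannwhitneyu(peer_reviewed, lottery_arm, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.3f}")
# If the two arms look the same (large p, similar medians), peer review is
# not adding detectable value over the lottery for this particular outcome.
```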

34. Fred on February 25, 2014 1:30 PM writes...

Monopoly funding is also a problem. In another branch of science, federal government grant funds used to be widely spread around. Various lab heads and others had pots of money to award. As the funders themselves were from more varied backgrounds and working environments, more diverse projects were funded and someone with somewhat oddball ideas had a better chance of talking someone into funding.

This was discontinued as the administrative overhead was judged to be too large when compared to the cost of administering larger grants from a central office. As one would expect, uniformity then prevailed and the field suffered.

We might be better off leaving the money in the private sector.

Permalink to Comment

