Improved Success Rate and Other Funding Trends in Fiscal Year 2014

The Consolidated and Further Continuing Appropriations Act, 2015, provides funding for the Federal Government through September 30, 2015. NIGMS has a Fiscal Year 2015 appropriation of $2.372 billion, which is $13 million, or 0.5%, higher than its Fiscal Year 2014 level.

As I explained in an earlier post, we made a number of adjustments to our portfolio and funding policies last fiscal year in order to bolster our support for investigator-initiated research. Partly because of these changes, the success rate for research project grants (RPGs)—which are primarily R01s—was 25 percent in Fiscal Year 2014, 5 percentage points higher than in Fiscal Year 2013. We predicted that without these funding policy changes, the success rate would have remained flat at 20 percent.

Figure 1 shows the number of RPG applications we received and funded, as well as the corresponding success rates, for Fiscal Years 2002-2014.

Figure 1. Number of competing RPG applications assigned to NIGMS (blue line with diamonds, left axis) and number funded (red line with squares, left axis) for Fiscal Years 2002-2014. The success rate (number of applications funded divided by the total number of applications) is shown in the green line with triangles, right axis. Data: Tony Moore.
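
For readers who want to reproduce a success rate series like the one in Figure 1, here is a minimal Python sketch. The application and award counts are placeholders chosen only to match the rates quoted above, not the actual NIGMS numbers.

```python
# Success rate = competing applications funded / competing applications received.
# The counts below are illustrative placeholders, not actual NIGMS data.
applications = {2013: 3800, 2014: 3600}  # hypothetical RPG applications received
awards       = {2013:  760, 2014:  900}  # hypothetical RPG awards made

for fy in sorted(applications):
    rate = awards[fy] / applications[fy]
    print(f"FY{fy}: success rate = {rate:.1%}")
```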

Moving forward, it will be important to employ strategies that will enable us to at least maintain this success rate. In keeping with this goal, we recently released a financial management plan (no longer available) that continues many of the funding policies we instituted last year. As funds from the retirement of the Protein Structure Initiative come back into the investigator-initiated RPG pool, we’ll be working to ensure that they support a sustained improvement in success rate rather than create a 1-year spike followed by a return to lower rates.

Figures 2 and 3 show data for funding versus the percentile scores of the R01 applications we received. People frequently ask me what NIGMS’ percentile cutoff or “payline” is, but it should be clear from these figures that we do not use a strict percentile score criterion for making funding decisions. Rather, we take a variety of factors into account in addition to the score, including the amount of other support already available to the researcher; the priority of the research area for the Institute’s mission; and the importance of maintaining a broad and diverse portfolio of research topics, approaches and investigators.

Figure 2. Percentage of competing R01 applications funded by NIGMS as a function of percentile scores for Fiscal Years 2010-2014. For Fiscal Year 2014, the success rate for R01 applications was 25.7 percent, and the midpoint of the funding curve was at approximately the 22nd percentile. See more details about the data analysis for Figure 2. Data: Jim Deatherage.
Figure 3. Number of competing R01 applications (solid black bars) assigned to NIGMS and number funded (striped red bars) in Fiscal Year 2014 as a function of percentile scores. See more details about the data analysis for Figure 3. Data: Jim Deatherage.

It’s too early to say what the success rate will be for Fiscal Year 2015 because it can be influenced by a number of factors, as I described last year. However, we’re hopeful that by continuing to adjust our priorities and policies to focus on supporting a broad and diverse portfolio of investigators, we can reverse the trend of falling success rates seen in recent years.

More on My Shared Responsibility Post

Thanks for all of the comments and discussion on my last post. There were many good points and ideas brought up, and these will be very useful as we consider additional policy changes at NIGMS and NIH. I hope these conversations will continue outside of NIH as well.

Several people asked about the current distribution of funding among NIGMS principal investigators (PIs). Here are a few relevant statistics:

  • In terms of the NIH research funding of NIGMS grantees in Fiscal Year 2013, 5 percent of the PIs held 25 percent of this group’s total NIH direct costs, and 20 percent of the PIs held half of it. A similar pattern held NIH-wide. (A minimal sketch of how such cumulative-share figures can be computed appears after the figure below.)
  • NIGMS PIs who had over $500,000 in total NIH direct costs held approximately $400 million in NIGMS funding.
  • The figure below shows the distribution of total NIH direct costs for NIGMS-supported investigators as well as the average number of NIH research grants held by PIs in each range.
Figure 1. The distribution of NIGMS investigators’ total NIH direct costs for research in Fiscal Year 2013 (blue bars, left axis). The number below each bar represents the top of the direct cost range for that bin. The average number of NIH research grants held by PIs in each group is also shown (red line with squares, right axis). The direct costs bin ranges were chosen so that the first four bins each included 20 percent of NIGMS investigators.
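
As noted in the first bullet above, concentration figures of this kind come from sorting PIs by their total direct costs and accumulating shares. Below is a minimal sketch of that calculation; the direct-cost values are randomly generated stand-ins for a right-skewed funding distribution, not the actual NIGMS data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical total NIH direct costs per NIGMS PI (a log-normal draw is only a
# stand-in for a right-skewed funding distribution; these are not real figures).
direct_costs = rng.lognormal(mean=12.3, sigma=0.6, size=3000)

costs = np.sort(direct_costs)[::-1]           # largest portfolios first
cum_share = np.cumsum(costs) / costs.sum()    # cumulative share of all direct costs
pi_fraction = np.arange(1, costs.size + 1) / costs.size

for target in (0.25, 0.50):
    # smallest fraction of PIs whose combined funding reaches the target share
    frac = pi_fraction[np.searchsorted(cum_share, target)]
    print(f"{frac:.1%} of PIs hold {target:.0%} of total direct costs")
```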

With regard to changes NIH might make to help re-optimize the biomedical research ecosystem, NIH Director Francis Collins recently formed two NIH-wide working groups to develop possible new policies and programs related to some of the issues that I highlighted in my blog post and that were discussed in the subsequent comments. The first group, chaired by NIH Deputy Director for Extramural Research Sally Rockey, will explore ways to decrease the age at which investigators reach independence in research. The second, chaired by me, will look at developing more efficient and sustainable funding policies. Once these committees have made their recommendations, Sally plans to set up a group to consider the question of NIH support for faculty salaries.

As I mentioned in my post, we at NIGMS have been working for some time on these issues. We’ll be discussing additional changes and ideas with the community in the coming weeks and months on this blog and in other forums, including our upcoming Advisory Council meeting.

A Shared Responsibility

The doubling of the NIH budget between 1998 and 2003 affected nearly every part of the biomedical research enterprise. The strategies we use to support research, the manner in which scientists conduct research, the ways in which researchers are evaluated and rewarded, and the organization of research institutions were all influenced by the large, sustained increases in funding during the doubling period.

Despite the fact that the budget doubling ended more than a decade ago, the biomedical research enterprise has not re-equilibrated to function optimally under the current circumstances. As has been pointed out by others (e.g., Ioannidis, 2011; Vale, 2012; Bourne, 2013; Alberts et al., 2014), the old models for supporting, evaluating, rewarding and organizing research are not well suited to today’s realities. Talented and productive investigators at all levels are struggling to keep their labs open (see Figure 1 below, Figure 3 in my previous post on factors affecting success rates and Figure 3 in Sally Rockey’s 2012 post on application numbers). Trainees are apprehensive about pursuing careers in research (Polka and Krukenberg, 2014). Study sections are discouraged by the fact that most of the excellent applications they review won’t be funded and by the difficulty of trying to prioritize among them. And the nation’s academic institutions and funding agencies struggle to find new financial models to continue to support research and graduate education. If we do not retool the system to become more efficient and sustainable, we will be doing a disservice to the country by depriving it of scientific advances that would have led to improvements in health and prosperity.

Re-optimizing the biomedical research enterprise will require significant changes in every part of the system. For example, despite prescient, early warnings from Bruce Alberts (1985) about the dangers of confusing the number of grants and the size of one’s research group with success, large labs and big budgets have come to be viewed by many researchers and institutions as key indicators of scientific achievement. However, when basic research labs get too big, a number of inefficiencies arise. Much of the problem is one of bandwidth: one person can effectively supervise, mentor and train only a limited number of people. Furthermore, the larger a lab gets, the more time the principal investigator must devote to writing grants and performing administrative tasks, further reducing the time available for actually doing science.

Although certain kinds of research projects—particularly those with an applied outcome, such as clinical trials—can require large teams, a 2010 analysis by NIGMS and a number of subsequent studies of other funding systems (Fortin and Currie, 2013; Gallo et al., 2014) have shown that, on average, large budgets do not give us the best returns on our investments in basic science. In addition, because it is impossible to know in advance where the next breakthroughs will arise, having a broad and diverse research portfolio should maximize the number of important discoveries that emerge from the science we support (Lauer, 2014).

These and other lines of evidence indicate that funding smaller, more efficient research groups will increase the net impact of fundamental biomedical research: valuable scientific output per taxpayer dollar invested. But to achieve this increase, we must all be willing to share the responsibility and focus on efficiency as much as we have always focused on efficacy. In the current zero-sum funding environment, the tradeoffs are stark: If one investigator gets a third R01, it means that another productive scientist loses his only grant or a promising new investigator can’t get her lab off the ground. Which outcome should we choose?

My main motivation for writing this post is to ask the biomedical research community to think carefully about these issues. Researchers should ask: Can I do my work more efficiently? What size does my lab need to be? How much funding do I really need? How do I define success? What can I do to help the research enterprise thrive?

Academic institutions should ask: How should we evaluate, reward and support researchers? What changes can we make to enhance the efficiency and sustainability of the research enterprise?

And journals, professional societies and private funding organizations should examine the roles they can play in helping to rewire the unproductive incentive systems that encourage researchers to focus on getting more funding than they actually need.

We at NIGMS are working hard to find ways to address the challenges currently facing fundamental biomedical research. As just one example, our MIRA program aims to create a more efficient, stable, flexible and productive research funding mechanism. If it is successful, the program could become the Institute’s primary means of funding individual investigators and could help transform how we support fundamental biomedical research. But reshaping the system will require everyone involved to share the responsibility. We owe it to the next generation of researchers and to the American public.

Figure 1. The number of NIGMS principal investigators (PIs) without NIH R01 funding has increased over time. All NIGMS PIs are shown by the purple Xs (left axis). NIGMS PIs who were funded in each fiscal year are represented by the orange circles (left axis). PIs who had no NIH funding in a given fiscal year but had funding from NIGMS within the previous 8 years and were still actively applying for funding within the previous 4 years are shown by the green triangles (left axis); these unfunded PIs have made up an increasingly large percentage of all NIGMS PIs over the past decade (blue squares; right axis). Definitions: “PI” includes both contact PIs and PIs on multi-PI awards. This analysis includes only R01, R37 and R29 (“R01 equivalent”) grants and PIs. Other kinds of NIH grant support are not counted. An “NIGMS PI” is defined as a current or former NIGMS R01 PI who was either funded by NIGMS in the fiscal year shown or who was not NIH-funded in the fiscal year shown but was funded by NIGMS within the previous 8 years and applied for NIGMS funding within the previous 4 years. The latter criterion indicates that these PIs were still seeking funding for a substantial period of time after termination of their last NIH grant. Note that PIs who had lost NIGMS support but had active R01 support from another NIH institute or center are not counted as “NIGMS PIs” because they were still funded in that fiscal year. Also not counted as “NIGMS PIs” are inactive PIs, defined as PIs who were funded by NIGMS in the previous 8 years but who did not apply for NIGMS funding in the previous 4 years. Data analysis was performed by Lisa Dunbar and Jim Deatherage.

UPDATE: For additional details, read More on My Shared Responsibility Post.

Productivity Metrics and Peer Review Scores, Continued

In a previous post, I described some initial results from an analysis of the relationships between a range of productivity metrics and peer review scores. The analysis revealed that these productivity metrics do correlate to some extent with peer review scores but that substantial variation occurs across the population of grants.

Here, I explore these relationships in more detail. To facilitate this analysis, I separated the awards into new (Type 1) and competing renewal (Type 2) grants. Some parameters for these two classes are shown in Table 1.

Table 1. Selected parameters for the population of Type 1 (new) and Type 2 (competing renewal) grants funded in Fiscal Year 2006: average numbers of publications, citations and highly cited publications (defined as those in the top 10% of time-corrected citations for all research publications).

For context, the Fiscal Year 2006 success rate was 26%, and the midpoint on the funding curve was near the 20th percentile.

To better visualize trends in the productivity metrics data in light of the large amounts of variability, I calculated running averages over sets of 100 grants separately for the Type 1 and Type 2 groups of grants, shown in Figures 1-3 below.
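
One way to implement these running averages is to order the funded grants by percentile score and average each metric over a sliding window of 100 grants, separately for each grant type. The sketch below assumes a pandas DataFrame with hypothetical column names (percentile, publications, grant_type) and simulated values; the smoothing actually used for the figures may differ in detail.

```python
import numpy as np
import pandas as pd

def running_average(df, metric, window=100):
    """Sliding-window mean of a metric over grants ordered by percentile score."""
    d = df.sort_values("percentile")
    return pd.DataFrame({
        "mean_percentile": d["percentile"].rolling(window).mean(),
        f"mean_{metric}": d[metric].rolling(window).mean(),
    }).dropna()

# Simulated stand-ins for the FY2006 grant pool (not real data).
rng = np.random.default_rng(2)
grants = pd.DataFrame({
    "grant_type": rng.choice(["Type 1", "Type 2"], size=789),
    "percentile": rng.uniform(0.1, 43.4, size=789),
    "publications": rng.poisson(7, size=789),
})

for gtype, group in grants.groupby("grant_type"):
    curve = running_average(group, "publications")
    print(gtype, curve.head(3), sep="\n")
```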

Figure 1. Running averages for the number of publications over sets of 100 grants funded in Fiscal Year 2006 for Type 1 (new, solid line) and Type 2 (competing renewal, dotted line) grants as a function of the average percentile for that set of 100 grants.

Figure 2. Running averages for the number of citations over sets of 100 grants funded in Fiscal Year 2006 for Type 1 (new, solid line) and Type 2 (competing renewal, dotted line) grants as a function of the average percentile for that set of 100 grants.

Figure 3. Running averages for the number of highly cited publications over sets of 100 grants funded in Fiscal Year 2006 for Type 1 (new, solid line) and Type 2 (competing renewal, dotted line) grants as a function of the average percentile for that set of 100 grants.

These graphs show somewhat different behavior for Type 1 and Type 2 grants. For Type 1 grants, the curves are relatively flat: each metric decreases slightly from the lowest (best) percentile scores, reaches a minimum near the 12th percentile, and then rises somewhat. For Type 2 grants, the curves are steeper and somewhat more monotonic.

Note that the curves for the number of highly cited publications for Type 1 and Type 2 grants are nearly superimposable above the 7th percentile. If this metric truly reflects high scientific impact, then the observations that new grants are comparable to competing renewals and that the level of highly cited publications extends through the full range of percentile scores reinforce the need to continue to support new ideas and new investigators.

While these graphs shed light on some of the underlying trends in the productivity metrics and the large amount of variability that is observed, one should be appropriately cautious in interpreting these data given the imperfections in the metrics; the fact that the data reflect only a single year; and the many legitimate sources of variability, such as differences between fields and publishing styles.

Productivity Metrics and Peer Review Scores

A key question regarding the NIH peer review system relates to how well peer review scores predict subsequent scientific output. Answering this question is a challenge, of course, since meaningful scientific output is difficult to measure and evolves over time–in some cases, a long time. However, by linking application peer review scores to publications citing support from the funded grants, it is possible to perform some relevant analyses.

The analysis I discuss below reveals that peer review scores do predict trends in productivity in a manner that is statistically different from random ordering. That said, there is a substantial level of variation in productivity metrics among grants with similar peer review scores and, indeed, across the full distribution of funded grants.

I analyzed 789 R01 grants that NIGMS competitively funded during Fiscal Year 2006. This pool represents all funded R01 applications that received both a priority score and a percentile score during peer review. There were 357 new (Type 1) grants and 432 competing renewal (Type 2) grants, with a median direct cost of $195,000. The percentile scores for these applications ranged from 0.1 through 43.4, with 93% of the applications having scores lower than 20. Figure 1 shows the percentile score distribution.

Figure 1. Cumulative number of NIGMS R01 grants in Fiscal Year 2006 as a function of percentile score.

These grants were linked (primarily by citation in publications) to a total of 6,554 publications that appeared between October 2006 and September 2010 (Fiscal Years 2007-2010). Those publications had been cited 79,295 times as of April 2011. The median number of publications per grant was 7, with an interquartile range of 4-11. The median number of citations per grant was 73, with an interquartile range of 26-156.

The numbers of publications and citations represent the simplest available metrics of productivity. More refined metrics include the number of research (as opposed to review) publications, the number of citations that are not self-citations, the number of citations corrected for typical time dependence (since more recent publications have not had as much time to be cited as older publications), and the number of highly cited publications (which I defined as the top 10% of all publications in a given set). Of course, the metrics are not independent of one another. Table 1 shows these metrics and the correlation coefficients between them.

Table 1. Correlation coefficients between nine metrics of productivity.

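As a rough illustration of how a set of metrics like this and their pairwise correlations might be assembled, the sketch below builds a few of the measures named above (publication counts, total citations, time-corrected citations, and a "highly cited" count) from a hypothetical publication-level table. The column names are invented, and the simple per-year normalization is only an assumed stand-in for the time correction actually used.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical publication-level records linked to grants (not real data).
pubs = pd.DataFrame({
    "grant_id": rng.integers(0, 789, size=6554),
    "year": rng.integers(2007, 2011, size=6554),
    "citations": rng.negative_binomial(2, 0.15, size=6554),
})

# Simple stand-in for a time correction: scale each paper's citations by the
# average for papers from the same year, so older papers are not favored.
pubs["citations_tc"] = pubs["citations"] / pubs.groupby("year")["citations"].transform("mean")
pubs["highly_cited"] = pubs["citations_tc"] >= pubs["citations_tc"].quantile(0.90)

metrics = pubs.groupby("grant_id").agg(
    publications=("citations", "size"),
    citations=("citations", "sum"),
    citations_tc=("citations_tc", "sum"),
    highly_cited=("highly_cited", "sum"),
)

print(metrics.corr().round(2))  # pairwise correlation coefficients between metrics
```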

How do these metrics relate to percentile scores? Figures 2-4 show three distributions.

Figure 2. Distribution of the number of publications as a function of percentile score. The inset shows a histogram of the number of grants as a function of the number of publications.

Figure 3. Distribution of the number of citations as a function of percentile score. The inset shows a histogram of the number of grants as a function of the number of citations.

Figure 4. Distribution of the number of highly cited publications as a function of percentile score. Highly cited publications are defined as those in the top 10% of all research publications in terms of the total number of citations corrected for the observed average time dependence of citations.

As could be anticipated, there is substantial scatter across each distribution. However, as could also be anticipated, each of these metrics has a negative correlation coefficient with the percentile score, with higher productivity metrics corresponding to lower percentile scores, as shown in Table 2.

Table 2. Correlation coefficients between the grant percentile score and nine metrics of productivity.

Do these distributions reflect statistically significant relationships? This can be addressed by using a Lorenz curve to plot the cumulative fraction of a given metric as a function of the cumulative fraction of grants, ordered by their percentile scores. Figure 5 shows the Lorenz curve for citations.

Figure 5. Cumulative fraction of citations as a function of the cumulative fraction of grants, ordered by percentile score. The shaded area is related to the excess fraction of citations associated with more highly rated grants.

The tendency of the Lorenz curve to reflect a non-uniform distribution can be measured by the Gini coefficient, which corresponds to twice the shaded area in Figure 5. For citations, the Gini coefficient has a value of 0.096. Based on simulations, this coefficient is 3.5 standard deviations above that for a random distribution of citations as a function of percentile score. Thus, the relationship between citations and percentile score for the distribution is highly statistically significant, even though the grant-to-grant variation within a narrow range of percentile scores is quite substantial. Table 3 shows the Gini coefficients for all of the productivity metrics.

Table 3. Gini coefficients for nine metrics of productivity. The number of standard deviations above the mean, as determined by simulations, is shown in parentheses below each coefficient.

Of these metrics, overall citations show the most statistically significant Gini coefficient, whereas highly cited publications show one of the least significant Gini coefficients. As shown in Figure 4, the distribution of highly cited publications is relatively even across the entire percentile score range.
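
For concreteness, here is one way a Gini coefficient of this kind, and its significance relative to random ordering, could be computed. The citation counts and percentile scores below are simulated placeholders, and the permutation scheme is only an assumed stand-in for the simulations described above.

```python
import numpy as np

rng = np.random.default_rng(4)

def gini_vs_order(values, scores):
    """Twice the area between the cumulative-share curve (grants ordered by
    percentile score, best scores first) and the diagonal of equal shares."""
    v = np.asarray(values, dtype=float)[np.argsort(scores)]
    cum_grants = np.arange(1, v.size + 1) / v.size
    cum_metric = np.cumsum(v) / v.sum()
    return 2.0 * np.mean(cum_metric - cum_grants)  # Riemann-sum approximation of the area

# Simulated placeholders for per-grant citation counts and percentile scores.
citations = rng.negative_binomial(2, 0.02, size=789).astype(float)
percentiles = rng.uniform(0.1, 43.4, size=789)

observed = gini_vs_order(citations, percentiles)

# Null distribution: shuffle the citation counts relative to the score ordering.
null = np.array([gini_vs_order(rng.permutation(citations), percentiles)
                 for _ in range(2000)])
z = (observed - null.mean()) / null.std()
print(f"Gini = {observed:.3f} ({z:.1f} SD from the random-ordering mean)")
```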

Fiscal Year 2010 R01 Funding Outcomes and Estimates for Fiscal Year 2011

Fiscal Year 2010 ended on September 30, 2010. We have now analyzed the overall results for R01 grants, shown in Figures 1-3.

Figure 1. Competing R01 applications reviewed (open rectangles) and funded (solid bars) in Fiscal Year 2010.

Figure 2. NIGMS competing R01 funding curves for Fiscal Years 2006-2010. The thicker curve (black) corresponds to grants made in Fiscal Year 2010. The success rate for R01 applications was 27%, and the midpoint of the funding curve was at approximately the 21st percentile. These parameters are comparable to those for Fiscal Year 2009, excluding awards made with funds from the American Recovery and Reinvestment Act.

The total NIGMS expenditures (including both direct and indirect costs) for R01 grants are shown in Figure 3 for Fiscal Year 1996 through Fiscal Year 2010.

Figure 3. Overall NIGMS expenditures on R01 grants (competing and noncompeting, including supplements) in Fiscal Years 1995-2010. The dotted line shows the impact of awards (including supplements) made with Recovery Act funds. Results are in actual dollars with no correction for inflation.

What do we anticipate for the current fiscal year (Fiscal Year 2011)? At this point, no appropriation bill has passed and we are operating under a continuing resolution through March 4, 2011, that funds NIH at Fiscal Year 2010 levels. Because we do not know the final appropriation level, we are not able at this time to estimate reliably the number of competing grants that we will be able to support. We can, however, estimate the number of research project grant applications in the success rate base (correcting for applications that are reviewed twice in the same fiscal year). We predict that this number will be approximately 3,875, an increase of 17% over Fiscal Year 2010.
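
The correction mentioned above amounts to counting each application only once per fiscal year, even if it was reviewed twice. A minimal sketch, with hypothetical column names and records:

```python
import pandas as pd

# Hypothetical review records: an application reviewed twice in the same fiscal
# year appears on two rows but should count once in the success rate base.
reviews = pd.DataFrame({
    "fiscal_year": [2010, 2010, 2011, 2011, 2011],
    "appl_id": ["A1", "A1", "A2", "A3", "A3"],
})

base = (reviews.drop_duplicates(["fiscal_year", "appl_id"])
               .groupby("fiscal_year")["appl_id"]
               .count())
print(base)  # applications in the success rate base, per fiscal year
```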

UPDATE: The original post accidentally included a histogram from a previous year. The post now includes the correct Fiscal Year 2010 figure.

Another Look at Measuring the Scientific Output and Impact of NIGMS Grants

In a recent post, I described initial steps toward analyzing the research output of NIGMS R01 and P01 grants. The post stimulated considerable discussion in the scientific community and, most recently, a Nature news article.

In my earlier post, I noted two major observations. First, the output (measured by the number of publications from 2007 through mid-2010 that could be linked to all NIH Fiscal Year 2006 grants from a given investigator) did not increase linearly with increased total annual direct cost support, but rather appeared to reach a plateau. Second, there were considerable ranges in output at all levels of funding.

These observations are even more apparent in the new plot below, which removes the binning in displaying the points corresponding to individual investigators.

A plot of the number of grant-linked publications from 2007 to mid-2010 for 2,938 investigators who held at least one NIGMS R01 or P01 grant in Fiscal Year 2006 as a function of the total annual direct cost for those grants. For this data set, the overall correlation coefficient between the number of publications and the total annual direct cost is 0.14.

Measuring the Scientific Output and Impact of NIGMS Grants

A frequent topic of discussion at our Advisory Council meetings—and across NIH—is how to measure scientific output in ways that effectively capture scientific impact. We have been working on such issues with staff of the Division of Information Services in the NIH Office of Extramural Research. As a result of their efforts, as well as those of several individual institutes, we now have tools that link publications to the grants that funded them.

Using these tools, we have compiled three types of data on the pool of investigators who held at least one NIGMS grant in Fiscal Year 2006. We determined each investigator’s total NIH R01 or P01 funding for that year. We also calculated the total number of publications linked to these grants from 2007 to mid-2010 and the average impact factor for the journals in which these papers appeared. We used impact factors in place of citations because the time dependence of citations makes them significantly more complicated to use.
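
A rough sketch of how the three per-investigator quantities could be assembled from a publication-to-grant linkage table is shown below. The column names (pi_id, direct_costs, journal_if) are hypothetical, and the actual linkage tools handle many complications this ignores.

```python
import pandas as pd

# Hypothetical FY2006 grant records and linked 2007-mid-2010 publications.
grants = pd.DataFrame({
    "pi_id": ["P1", "P1", "P2", "P3"],
    "direct_costs": [250_000, 180_000, 220_000, 600_000],
})
pubs = pd.DataFrame({
    "pi_id": ["P1", "P1", "P2", "P3", "P3", "P3"],
    "journal_if": [8.2, 4.1, 5.5, 12.0, 3.3, 6.7],
})

per_pi = pd.DataFrame({
    "total_direct_costs": grants.groupby("pi_id")["direct_costs"].sum(),
    "n_publications": pubs.groupby("pi_id").size(),
    "avg_impact_factor": pubs.groupby("pi_id")["journal_if"].mean(),
})
print(per_pi)
```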

I presented some of the results of our analysis of this data at last week’s Advisory Council meeting. Here are the distributions for the three parameters for the 2,938 investigators in the sample set:

Histograms showing the distributions of total annual direct costs, number of publications linked to those grants from 2007 to mid-2010 and average impact factor for the publication journals for 2,938 investigators who held at least one NIGMS R01 or P01 grant in Fiscal Year 2006.

For this population, the median annual total direct cost was $220,000, the median number of grant-linked publications was six and the median journal average impact factor was 5.5.

A plot of the median number of grant-linked publications and median journal average impact factors versus grant total annual direct costs is shown below.

A plot of the median number of grant-linked publications from 2007 to mid-2010 (red circles) and median average impact factor for journals in which these papers were published (blue squares) for 2,938 investigators who held at least one NIGMS R01 or P01 grant in Fiscal Year 2006. The shaded bars show the interquartile ranges for the number of grant-linked publications (longer red bars) and journal average impact factors (shorter blue bars). The medians are for bins, with the number of investigators in each bin shown below the bars.

This plot reveals several important points. The ranges in the number of publications and average impact factors within each total annual direct cost bin are quite large. This partly reflects variations in investigator productivity as measured by these parameters, but it also reflects variations in publication patterns among fields and other factors.
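
One way to produce binned medians and interquartile ranges like those plotted above is sketched below. The bin edges and the per-investigator values are placeholders, not the ones used for the figure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Hypothetical per-investigator totals (see the earlier sketch for how these
# could be assembled from grant and publication records).
per_pi = pd.DataFrame({
    "total_direct_costs": rng.lognormal(12.3, 0.6, size=2938),
    "n_publications": rng.poisson(6, size=2938),
})

# Placeholder bin edges in dollars; the actual figure used different bins.
edges = [0, 150_000, 250_000, 400_000, 700_000, 1_000_000, np.inf]
per_pi["cost_bin"] = pd.cut(per_pi["total_direct_costs"], edges)

summary = per_pi.groupby("cost_bin", observed=True)["n_publications"].agg(
    median="median",
    q25=lambda s: s.quantile(0.25),
    q75=lambda s: s.quantile(0.75),
    n="size",
)
print(summary)
```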

Nonetheless, clear trends are evident in the averages for the binned groups, with both parameters increasing with total annual direct costs until they peak at around $700,000. These observations provide support for our previously developed policy on the support of research in well-funded laboratories. This policy helps us use Institute resources as optimally as possible in supporting the overall biomedical research enterprise.

This is a preliminary analysis, and the results should be viewed with some skepticism given the metrics used, the challenges of capturing publications associated with particular grants, the lack of inclusion of funding from non-NIH sources and other considerations. Even with these caveats, the analysis does provide some insight into the NIGMS grant portfolio and indicates some of the questions that can be addressed with the new tools that NIH is developing.