Tag: Funding Outcomes

P01 Outcomes Analysis

9 comments

As part of our program assessment process, we have analyzed NIGMS program project (P01) grants to improve our understanding of how their outcomes compare with those of other mechanisms.

The most recent NIGMS funding opportunity announcement for P01s states that individual projects “must be clearly interrelated and synergistic so that the research ideas, efforts, and outcomes of the program as a whole will offer a distinct advantage over pursuing the individual projects separately.” From this perspective, we sought to address three major questions:

  • Do P01s achieve synergies above and beyond a collection of separate grants?
  • How do the results from P01s compare with those from R01s?
  • Do certain fields of science need P01s more than others?

To address these questions, we analyzed the outcomes of P01 grants using several different metrics and compared these outcomes to those of two comparator groups: single-principal investigator (PI) R01s and multiple-PI R01s. Since a P01 can be considered a collection of single-PI R01s plus one or more cores, we chose single-PI R01s as one comparator group. Because a major facet of P01s is their focus on collaborative approaches to science, we also wanted to compare their outcomes to those of another collaboration-focused research grant: the multiple-PI R01. While structurally different from P01s, multiple-PI R01s allow for a comparison between two competing models of funding team science within the NIGMS portfolio.


First Awards Issued in MIRA Pilot Program

5 comments

We have begun making grant awards resulting from responses to RFA-GM-16-002 (R35), the Maximizing Investigators’ Research Award (MIRA) pilot program. Out of the 179 applications we received, we have so far authorized 123 awards. The median yearly direct cost for these grants is $399,842, and the mean is $405,884. For comparison, the median yearly direct cost for an NIGMS R01 in Fiscal Year 2015 was $210,000, and the mean was $237,254. On average, the budgets of these MIRAs to established investigators were reduced by 12% relative to the investigators’ recent NIGMS funding history. As described in the funding opportunity announcement (FOA), the budget reductions were in exchange for the benefits of the program: a 5-year award instead of the standard 4-year one, increased flexibility to follow new research directions, increased funding stability and decreased administrative burden. We will use the funds freed up through this trade-off to support other investigators and improve the distribution of NIGMS funding. It will take time for the full benefits of the program to individual investigators and the research community to become clear.

You can find more information about these awards on NIH RePORTER by entering RFA-GM-16-002 in the FOA field; however, the record of funded grants will not be complete until the end of Fiscal Year 2016. Because merging an investigator’s previous funding into a single award presents a variety of complications, in some cases the first-year budget of the MIRA is lower than the eventual funding level will be. This is frequently the case when the principal investigator (PI) was part of multi-PI grants that will be allowed to end before all NIGMS funding for the investigator is put on the MIRA or when the PI had already received funds from NIGMS in the current fiscal year.

We will begin making awards for the MIRA for new and early stage investigators (RFA-GM-16-003) after the May 19-20 meeting of the National Advisory General Medical Sciences Council.

You can find additional information about the program on our MIRA web page.

Application and Funding Trends

8 comments

The Consolidated Appropriations Act, 2016, provides funding for the Federal Government through September 30, 2016. NIGMS has a Fiscal Year 2016 appropriation of $2.512 billion, which is $140 million, or 5.9%, higher than it was in Fiscal Year 2015. With this opportunity to expand NIGMS support for fundamental biomedical research comes a responsibility to make carefully considered investments with taxpayer funds.

Application Trends

One of the most commonly cited metrics when discussing grants is success rate, calculated as the number of applications funded divided by the number of applications received. As shown in Figure 1, the success rate for NIGMS research project grants (RPGs) increased from 24.8% in Fiscal Year 2014 to 29.6% in Fiscal Year 2015. This was due to an increase in the number of funded competing RPGs as well as a decline in the number of competing RPG applications. In contrast, in Fiscal Year 2013, applications increased while awards decreased, leading to a notable decrease in success rate. Overall, we have seen a decrease in RPG applications over the last 2 years, a trend warranting additional investigation.
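
As a back-of-the-envelope illustration of this calculation, here is a minimal Python sketch; the application and award counts are placeholders chosen only to reproduce the quoted rates, not official NIGMS tallies:

```python
# Success rate = applications funded / applications received.
# Counts below are illustrative placeholders, not official NIGMS data.

def success_rate(funded: int, received: int) -> float:
    return funded / received

# Hypothetical counts consistent with the quoted FY2014 and FY2015 rates
fy2014 = success_rate(funded=880, received=3548)   # ~24.8%
fy2015 = success_rate(funded=955, received=3226)   # ~29.6%
print(f"FY2014: {fy2014:.1%}  FY2015: {fy2015:.1%}")
```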

Figure 1. Number of NIGMS Competing RPG Applications, Funded Competing RPGs and Success Rates for RPGs, Fiscal Years 2004-2015. NIGMS RPG applications (blue circles, dashed line; left axis) decreased from Fiscal Years 2014 to 2015 to a 5-year low. Meanwhile, NIGMS-funded RPGs (green squares, solid line; left axis) increased in Fiscal Year 2015 to a level not seen since Fiscal Year 2007. As a result, the NIGMS RPG success rate (gray triangles, dotted line; right axis) was the second highest it has been in the past decade.

MIRA Status and Future Plans

29 comments

Now that we have completed the review process for Maximizing Investigators’ Research Award (MIRA) applications from the first eligible cohort of established investigators, I would like to update you on the program’s status and plans for its future. I shared this information with our Advisory Council at its recent meeting in January.


My update on the MIRA program at the January 2016 Advisory Council meeting begins at 26:06.

The first funding opportunity announcement (FOA) we issued (RFA-GM-16-002) was for established investigators who had either two NIGMS R01s or one NIGMS R01 for more than $400,000 in direct costs. In either case, one grant had to be expiring in 2016 or 2017. Out of the 710 investigators who could have met these criteria, 179 submitted applications, corresponding to 25% of the eligible pool.

Among the eligible investigators, 80% were male and 20% were female. This ratio was unchanged among those who applied, as were the percentages across racial and ethnic groups (Figure 1). Thus, although the demographics of the group of investigators eligible for this first FOA were skewed in several ways, the skewing was not exacerbated among those who chose to apply.

Improved Success Rate and Other Funding Trends in Fiscal Year 2014

10 comments

The Consolidated and Further Continuing Appropriations Act, 2015, provides funding for the Federal Government through September 30, 2015. NIGMS has a Fiscal Year 2015 appropriation of $2.372 billion, which is $13 million, or 0.5%, higher than it was in Fiscal Year 2014.

As I explained in an earlier post, we made a number of adjustments to our portfolio and funding policies last fiscal year in order to bolster our support for investigator-initiated research. Partly because of these changes, the success rate for research project grants (RPGs)—which are primarily R01s—was 25 percent in Fiscal Year 2014. This is 5 percentage points higher than it was in Fiscal Year 2013. Had we not made the funding policy changes, we predicted that the success rate would have remained flat at 20 percent.

Figure 1 shows the number of RPG applications we received and funded, as well as the corresponding success rates, for Fiscal Years 2002-2014.

Figure 1. Number of competing RPG applications assigned to NIGMS (blue line with diamonds, left axis) and number funded (red line with squares, left axis) for Fiscal Years 2002-2014. The success rate (number of applications funded divided by the total number of applications) is shown in the green line with triangles, right axis. Data: Tony Moore.

Moving forward, it will be important to employ strategies that will enable us to at least maintain this success rate. In keeping with this goal, we recently released a financial management plan (no longer available) that continues many of the funding policies we instituted last year. As funds from the retirement of the Protein Structure Initiative come back into the investigator-initiated RPG pool, we’ll be working to ensure that they support a sustained improvement in success rate rather than create a 1-year spike followed by a return to lower rates.

Figures 2 and 3 show data for funding versus the percentile scores of the R01 applications we received. People frequently ask me what NIGMS’ percentile cutoff or “payline” is, but it should be clear from these figures that we do not use a strict percentile score criterion for making funding decisions. Rather, we take a variety of factors into account in addition to the score, including the amount of other support already available to the researcher; the priority of the research area for the Institute’s mission; and the importance of maintaining a broad and diverse portfolio of research topics, approaches and investigators.

Figure 2. Percentage of competing R01 applications funded by NIGMS as a function of percentile scores for Fiscal Years 2010-2014. For Fiscal Year 2014, the success rate for R01 applications was 25.7 percent, and the midpoint of the funding curve was at approximately the 22nd percentile. See more details about the data analysis for Figure 2. Data: Jim Deatherage.
Figure 3. Number of competing R01 applications (solid black bars) assigned to NIGMS and number funded (striped red bars) in Fiscal Year 2014 as a function of percentile scores. See more details about the data analysis for Figure 3. Data: Jim Deatherage.

It’s too early to say what the success rate will be for Fiscal Year 2015 because it can be influenced by a number of factors, as I described last year. However, we’re hopeful that by continuing to adjust our priorities and policies to focus on supporting a broad and diverse portfolio of investigators, we can reverse the trend of falling success rates seen in recent years.

More on My Shared Responsibility Post

4 comments

Thanks for all of the comments and discussion on my last post. There were many good points and ideas brought up, and these will be very useful as we consider additional policy changes at NIGMS and NIH. I hope these conversations will continue outside of NIH as well.

Several people asked about the current distribution of funding among NIGMS principal investigators (PIs). Here are a few relevant statistics:

  • Among NIGMS grantees in Fiscal Year 2013, 5 percent of the PIs held 25 percent of this group’s total NIH direct costs, and 20 percent of the PIs held half of it. A similar pattern held NIH-wide.
  • NIGMS PIs who had over $500,000 in total NIH direct costs held approximately $400 million in NIGMS funding.
  • The figure below shows the distribution of total NIH direct costs for NIGMS-supported investigators as well as the average number of NIH research grants held by PIs in each range.
Figure 1. The distribution of NIGMS investigators’ total NIH direct costs for research in Fiscal Year 2013 (blue bars, left axis). The number below each bar represents the top of the direct cost range for that bin. The average number of NIH research grants held by PIs in each group is also shown (red line with squares, right axis). The direct costs bin ranges were chosen so that the first four bins each included 20 percent of NIGMS investigators.

With regard to changes NIH might make to help re-optimize the biomedical research ecosystem, NIH Director Francis Collins recently formed two NIH-wide working groups to develop possible new policies and programs related to some of the issues that I highlighted in my blog post and that were discussed in the subsequent comments. The first group, chaired by NIH Deputy Director for Extramural Research Sally Rockey, will explore ways to decrease the age at which investigators reach independence in research. The second, chaired by me, will look at developing more efficient and sustainable funding policies. Once these committees have made their recommendations, Sally plans to set up a group to consider the question of NIH support for faculty salaries.

As I mentioned in my post, we at NIGMS have been working for some time on these issues. We’ll be discussing additional changes and ideas with the community in the coming weeks and months on this blog and in other forums, including our upcoming Advisory Council meeting.

A Shared Responsibility

63 comments

The doubling of the NIH budget between 1998 and 2003 affected nearly every part of the biomedical research enterprise. The strategies we use to support research, the manner in which scientists conduct research, the ways in which researchers are evaluated and rewarded, and the organization of research institutions were all influenced by the large, sustained increases in funding during the doubling period.

Despite the fact that the budget doubling ended more than a decade ago, the biomedical research enterprise has not re-equilibrated to function optimally under the current circumstances. As has been pointed out by others (e.g., Ioannidis, 2011; Vale, 2012; Bourne, 2013; Alberts et al., 2014), the old models for supporting, evaluating, rewarding and organizing research are not well suited to today’s realities. Talented and productive investigators at all levels are struggling to keep their labs open (see Figure 1 below, Figure 3 in my previous post on factors affecting success rates and Figure 3 in Sally Rockey’s 2012 post on application numbers). Trainees are apprehensive about pursuing careers in research (Polka and Krukenberg, 2014). Study sections are discouraged by the fact that most of the excellent applications they review won’t be funded and by the difficulty of trying to prioritize among them. And the nation’s academic institutions and funding agencies struggle to find new financial models to continue to support research and graduate education. If we do not retool the system to become more efficient and sustainable, we will be doing a disservice to the country by depriving it of scientific advances that would have led to improvements in health and prosperity.

Re-optimizing the biomedical research enterprise will require significant changes in every part of the system. For example, despite prescient, early warnings from Bruce Alberts (1985) about the dangers of confusing the number of grants and the size of one’s research group with success, large labs and big budgets have come to be viewed by many researchers and institutions as key indicators of scientific achievement. However, when basic research labs get too big, a number of inefficiencies arise. Much of the problem is one of bandwidth: One person can effectively supervise, mentor and train only a limited number of people. Furthermore, the larger a lab gets, the more time the principal investigator must devote to writing grants and performing administrative tasks, further reducing the time available for actually doing science.

Although certain kinds of research projects—particularly those with an applied outcome, such as clinical trials—can require large teams, a 2010 analysis by NIGMS and a number of subsequent studies of other funding systems (Fortin and Currie, 2013; Gallo et al., 2014) have shown that, on average, large budgets do not give us the best returns on our investments in basic science. In addition, because it is impossible to know in advance where the next breakthroughs will arise, having a broad and diverse research portfolio should maximize the number of important discoveries that emerge from the science we support (Lauer, 2014).

These and other lines of evidence indicate that funding smaller, more efficient research groups will increase the net impact of fundamental biomedical research: valuable scientific output per taxpayer dollar invested. But to achieve this increase, we must all be willing to share the responsibility and focus on efficiency as much as we have always focused on efficacy. In the current zero-sum funding environment, the tradeoffs are stark: If one investigator gets a third R01, it means that another productive scientist loses his only grant or a promising new investigator can’t get her lab off the ground. Which outcome should we choose?

My main motivation for writing this post is to ask the biomedical research community to think carefully about these issues. Researchers should ask: Can I do my work more efficiently? What size does my lab need to be? How much funding do I really need? How do I define success? What can I do to help the research enterprise thrive?

Academic institutions should ask: How should we evaluate, reward and support researchers? What changes can we make to enhance the efficiency and sustainability of the research enterprise?

And journals, professional societies and private funding organizations should examine the roles they can play in helping to rewire the unproductive incentive systems that encourage researchers to focus on getting more funding than they actually need.

We at NIGMS are working hard to find ways to address the challenges currently facing fundamental biomedical research. As just one example, our MIRA program aims to create a more efficient, stable, flexible and productive research funding mechanism. If it is successful, the program could become the Institute’s primary means of funding individual investigators and could help transform how we support fundamental biomedical research. But reshaping the system will require everyone involved to share the responsibility. We owe it to the next generation of researchers and to the American public.

Figure 1. The number of NIGMS principal investigators (PIs) without NIH R01 funding has increased over time. All NIGMS PIs are shown by the purple Xs (left axis). NIGMS PIs who were funded in each fiscal year are represented by the orange circles (left axis). PIs who had no NIH funding in a given fiscal year but had funding from NIGMS within the previous 8 years and were still actively applying for funding within the previous 4 years are shown by the green triangles (left axis); these unfunded PIs have made up an increasingly large percentage of all NIGMS PIs over the past decade (blue squares; right axis). Definitions: “PI” includes both contact PIs and PIs on multi-PI awards. This analysis includes only R01, R37 and R29 (“R01 equivalent”) grants and PIs. Other kinds of NIH grant support are not counted. An “NIGMS PI” is defined as a current or former NIGMS R01 PI who was either funded by NIGMS in the fiscal year shown or who was not NIH-funded in the fiscal year shown but was funded by NIGMS within the previous 8 years and applied for NIGMS funding within the previous 4 years. The latter criterion indicates that these PIs were still seeking funding for a substantial period of time after termination of their last NIH grant. Note that PIs who had lost NIGMS support but had active R01 support from another NIH institute or center are not counted as “NIGMS PIs” because they were still funded in that fiscal year. Also not counted as “NIGMS PIs” are inactive PIs, defined as PIs who were funded by NIGMS in the previous 8 years but who did not apply for NIGMS funding in the previous 4 years. Data analysis was performed by Lisa Dunbar and Jim Deatherage.

UPDATE: For additional details, read More on My Shared Responsibility Post.

Productivity Metrics and Peer Review Scores, Continued

11 comments

In a previous post, I described some initial results from an analysis of the relationships between a range of productivity metrics and peer review scores. The analysis revealed that these productivity metrics do correlate to some extent with peer review scores but that substantial variation occurs across the population of grants.

Here, I explore these relationships in more detail. To facilitate this analysis, I separated the awards into new (Type 1) and competing renewal (Type 2) grants. Some parameters for these two classes are shown in Table 1.

Table 1. Selected parameters for the population of Type 1 (new) and Type 2 (competing renewal) grants funded in Fiscal Year 2006: average numbers of publications, citations and highly cited citations (defined as those being in the top 10% of time-corrected citations for all research publications).

For context, the Fiscal Year 2006 success rate was 26%, and the midpoint on the funding curve was near the 20th percentile.

To better visualize trends in the productivity metrics data in light of the large amounts of variability, I calculated running averages over sets of 100 grants separately for the Type 1 and Type 2 groups of grants, shown in Figures 1-3 below.
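
For readers who want to reproduce this kind of smoothing, here is a minimal sketch, assuming the grants are available as arrays of percentile scores and publication counts (the data below are simulated, and the windowing details may differ from the original analysis):

```python
# Running average over sets of 100 grants, ordered by percentile score.
# Data here are simulated placeholders for the FY2006 Type 1 grants.
import numpy as np

def running_average(percentiles, values, window=100):
    """Sort grants by percentile score, then average the metric (and the
    scores themselves) over a sliding window of `window` grants."""
    order = np.argsort(percentiles)
    p = np.asarray(percentiles, dtype=float)[order]
    v = np.asarray(values, dtype=float)[order]
    kernel = np.ones(window) / window
    # mode="valid" keeps only windows that fit entirely inside the data
    return (np.convolve(p, kernel, mode="valid"),
            np.convolve(v, kernel, mode="valid"))

rng = np.random.default_rng(0)
pct = rng.uniform(0.1, 43.4, size=357)   # illustrative percentile scores
pubs = rng.poisson(7, size=357)          # illustrative publication counts
avg_pct, avg_pubs = running_average(pct, pubs)
```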

Figure 1. Running averages for the number of publications over sets of 100 grants funded in Fiscal Year 2006 for Type 1 (new, solid line) and Type 2 (competing renewal, dotted line) grants as a function of the average percentile for that set of 100 grants.

Figure 2. Running averages for the number of citations over sets of 100 grants funded in Fiscal Year 2006 for Type 1 (new, solid line) and Type 2 (competing renewal, dotted line) grants as a function of the average percentile for that set of 100 grants.

Figure 3. Running averages for the number of highly cited publications over sets of 100 grants funded in Fiscal Year 2006 for Type 1 (new, solid line) and Type 2 (competing renewal, dotted line) grants as a function of the average percentile for that set of 100 grants.

These graphs show somewhat different behavior for Type 1 and Type 2 grants. For Type 1 grants, the curves are relatively flat, with a small decrease in each metric from the lowest (best) percentile scores that reaches a minimum near the 12th percentile and then increases somewhat. For Type 2 grants, the curves are steeper and somewhat more monotonic.

Note that the curves for the number of highly cited publications for Type 1 and Type 2 grants are nearly superimposable above the 7th percentile. If this metric truly reflects high scientific impact, then the observations that new grants are comparable to competing renewals and that the level of highly cited publications extends through the full range of percentile scores reinforce the need to continue to support new ideas and new investigators.

While these graphs shed light on some of the underlying trends in the productivity metrics and the large amount of variability that is observed, one should be appropriately cautious in interpreting these data given the imperfections in the metrics; the fact that the data reflect only a single year; and the many legitimate sources of variability, such as differences between fields and publishing styles.

Productivity Metrics and Peer Review Scores

18 comments

A key question regarding the NIH peer review system relates to how well peer review scores predict subsequent scientific output. Answering this question is a challenge, of course, since meaningful scientific output is difficult to measure and evolves over time, in some cases over a long time. However, by linking application peer review scores to publications citing support from the funded grants, it is possible to perform some relevant analyses.

The analysis I discuss below reveals that peer review scores do predict trends in productivity in a manner that is statistically different from random ordering. That said, there is a substantial level of variation in productivity metrics among grants with similar peer review scores and, indeed, across the full distribution of funded grants.

I analyzed 789 R01 grants that NIGMS competitively funded during Fiscal Year 2006. This pool represents all funded R01 applications that received both a priority score and a percentile score during peer review. There were 357 new (Type 1) grants and 432 competing renewal (Type 2) grants, with a median direct cost of $195,000. The percentile scores for these applications ranged from 0.1 through 43.4, with 93% of the applications having scores lower than 20. Figure 1 shows the percentile score distribution.

Figure 1. Cumulative number of NIGMS R01 grants in Fiscal Year 2006 as a function of percentile score.

These grants were linked (primarily by citation in publications) to a total of 6,554 publications that appeared between October 2006 and September 2010 (Fiscal Years 2007-2010). Those publications had been cited 79,295 times as of April 2011. The median number of publications per grant was 7, with an interquartile range of 4-11. The median number of citations per grant was 73, with an interquartile range of 26-156.

The numbers of publications and citations represent the simplest available metrics of productivity. More refined metrics include the number of research (as opposed to review) publications, the number of citations that are not self-citations, the number of citations corrected for typical time dependence (since more recent publications have not had as much time to be cited as older publications), and the number of highly cited publications (which I defined as the top 10% of all publications in a given set). Of course, the metrics are not independent of one another. Table 1 shows these metrics and the correlation coefficients between them.

Table 1. Correlation coefficients between nine metrics of productivity.
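
As a sketch of how such a correlation matrix might be assembled, assuming one row per grant and one column per metric (the column names and simulated values below are illustrative, not the actual data set):

```python
# Pairwise correlation coefficients between productivity metrics (cf. Table 1).
# Simulated data; the real analysis used nine metrics over 789 grants.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 789
pubs = rng.poisson(7, n)
grants = pd.DataFrame({
    "publications": pubs,
    "research_publications": rng.binomial(pubs, 0.85),  # excludes reviews
    "citations": rng.poisson(10 * (pubs + 1)),          # more papers, more citations
    "highly_cited": rng.binomial(pubs, 0.10),           # ~top 10% of publications
})
print(grants.corr().round(2))   # Pearson correlations between every pair
```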

How do these metrics relate to percentile scores? Figures 2-4 show three distributions.

Figure 2. Distribution of the number of publications as a function of percentile score. The inset shows a histogram of the number of grants as a function of the number of publications.

Figure 3. Distribution of the number of citations as a function of percentile score. The inset shows a histogram of the number of grants as a function of the number of citations.

Figure 4. Distribution of the number of highly cited publications as a function of percentile score. Highly cited publications are defined as those in the top 10% of all research publications in terms of the total number of citations corrected for the observed average time dependence of citations.
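
One plausible way to implement the time correction described in this caption, assuming each publication carries a year and a raw citation count (an illustrative reconstruction; the exact correction used in the analysis may differ):

```python
# Flag "highly cited" publications: top 10% of citations after correcting
# for publication-year effects. Simulated data; illustrative method only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
pubs = pd.DataFrame({
    "year": rng.integers(2006, 2011, size=6554),   # FY2007-FY2010 window
    "citations": rng.poisson(12, size=6554),
})
# Normalize by the average citation count for each publication year, so
# older papers are not favored simply for having had more time to be cited.
year_mean = pubs.groupby("year")["citations"].transform("mean")
pubs["corrected"] = pubs["citations"] / year_mean
pubs["highly_cited"] = pubs["corrected"] >= pubs["corrected"].quantile(0.90)
```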

As could be anticipated, there is substantial scatter across each distribution. However, as could also be anticipated, each of these metrics has a negative correlation coefficient with the percentile score, with higher productivity metrics corresponding to lower percentile scores, as shown in Table 2.

Table 2. Correlation coefficients between the grant percentile score and nine metrics of productivity.

Do these distributions reflect statistically significant relationships? This can be addressed through the use of a Lorenz curve to plot the cumulative fraction of a given metric as a function of the cumulative fraction of grants, ordered by their percentile scores. Figure 5 shows the Lorenz curve for citations.

Figure 5. Cumulative fraction of citations as a function of the cumulative fraction of grants, ordered by percentile score. The shaded area is related to the excess fraction of citations associated with more highly rated grants.

The tendency of the Lorenz curve to reflect a non-uniform distribution can be measured by the Gini coefficient. This corresponds to twice the shaded area in Figure 5. For citations, the Gini coefficient has a value of 0.096. Based on simulations, this coefficient is 3.5 standard deviations above that for a random distribution of citations as a function of percentile score. Thus, the relationship between citations and percentile score for the distribution is highly statistically significant, even if the grant-to-grant variation within a narrow range of percentile scores is quite substantial. Table 3 shows the Gini coefficients for all of the productivity metrics.

Table 3. Gini coefficients for nine metrics of productivity. The number of standard deviations above the mean, as determined by simulations, is shown in parentheses below each coefficient.
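
A minimal sketch of how a Gini coefficient of this kind, and its significance relative to random orderings, might be computed (simulated data; the post does not specify the exact simulation procedure):

```python
# Lorenz-curve Gini coefficient for a metric versus percentile score, with a
# permutation test against random orderings (cf. Figure 5 and Table 3).
import numpy as np

def gini_vs_score(scores, metric):
    """Order grants by percentile score (best first) and return twice the
    average gap between the cumulative-metric curve and the diagonal."""
    order = np.argsort(scores)
    cum = np.cumsum(np.asarray(metric, dtype=float)[order])
    lorenz = cum / cum[-1]                            # cumulative metric fraction
    x = np.arange(1, len(lorenz) + 1) / len(lorenz)   # cumulative grant fraction
    return 2 * float(np.mean(lorenz - x))             # ~ twice the shaded area

rng = np.random.default_rng(3)
scores = rng.uniform(0.1, 43.4, 789)   # illustrative percentile scores
cites = rng.poisson(100, 789)          # illustrative citation counts
observed = gini_vs_score(scores, cites)

# Null distribution: shuffle scores to break the score-citation pairing
null = [gini_vs_score(rng.permutation(scores), cites) for _ in range(1000)]
z = (observed - np.mean(null)) / np.std(null)
print(f"Gini = {observed:.3f} ({z:.1f} SD from random ordering)")
```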

Of these metrics, overall citations show the most statistically significant Gini coefficient, whereas highly cited publications show one of the least significant Gini coefficients. As shown in Figure 4, the distribution of highly cited publications is relatively even across the entire percentile score range.

Fiscal Year 2010 R01 Funding Outcomes and Estimates for Fiscal Year 2011

17 comments

Fiscal Year 2010 ended on September 30, 2010. We have now analyzed the overall results for R01 grants, shown in Figures 1-3.

Figure 1. Competing R01 applications reviewed (open rectangles) and funded (solid bars) in Fiscal Year 2010.

Figure 2. NIGMS competing R01 funding curves for Fiscal Years 2006-2010. The thicker curve (black) corresponds to grants made in Fiscal Year 2010. The success rate for R01 applications was 27%, and the midpoint of the funding curve was at approximately the 21st percentile. These parameters are comparable to those for Fiscal Year 2009, excluding awards made with funds from the American Recovery and Reinvestment Act.

The total NIGMS expenditures (including both direct and indirect costs) for R01 grants are shown in Figure 3 for Fiscal Year 1996 through Fiscal Year 2010.

Figure 3. Overall NIGMS expenditures on R01 grants (competing and noncompeting, including supplements) in Fiscal Years 1995-2010. The dotted line shows the impact of awards (including supplements) made with Recovery Act funds. Results are in actual dollars with no correction for inflation.

What do we anticipate for the current fiscal year (Fiscal Year 2011)? At this point, no appropriation bill has passed and we are operating under a continuing resolution through March 4, 2011, that funds NIH at Fiscal Year 2010 levels. Because we do not know the final appropriation level, we are not able at this time to estimate reliably the number of competing grants that we will be able to support. We can, however, estimate the number of research project grant applications in the success rate base (correcting for applications that are reviewed twice in the same fiscal year). We predict that this number will be approximately 3,875, an increase of 17% over Fiscal Year 2010.
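
For illustration, this correction simply means each unique application is counted once in the success rate base even if it was reviewed twice in the same fiscal year; a hypothetical sketch (the application identifiers are invented):

```python
# Count each application once in the success rate base, even when a
# resubmission of the same project was reviewed in the same fiscal year.
# Identifiers below are hypothetical.
reviews = [
    ("GM00001", "A0"),   # original submission
    ("GM00001", "A1"),   # resubmission, reviewed in the same fiscal year
    ("GM00002", "A0"),
]
unique_projects = {project for project, amendment in reviews}
print(f"Success rate base: {len(unique_projects)} applications")
```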

UPDATE: The original post accidentally included a histogram from a previous year. The post now includes the correct Fiscal Year 2010 figure.