Fiscal Year 2011 R01 Funding Outcomes and Estimates for Fiscal Year 2012

Fiscal Year 2011 ended on September 30, 2011. As in previous years, we have analyzed the funding results (including percentiles and success rates) for R01 grants, shown in Figures 1-5. Thanks to Jim Deatherage for preparing these data.

Figure 1. Competing R01 applications reviewed (open rectangles) and funded (solid bars) in Fiscal Year 2011.

Figure 2. NIGMS competing R01 funding curves for Fiscal Years 2007-2011. For Fiscal Year 2011, the success rate for R01 applications was 24%, and the midpoint of the funding curve was at approximately the 19th percentile.

Although the number of competing R01 awards funded by NIGMS in Fiscal Year 2011 was nearly identical to the numbers in the previous two years (Figure 3), the success rate declined in 2011. This decline reflected a sharp increase in the number of competing applications we received in 2011 (Figure 4), combined with a decrease in total R01 funding due to an NIH-wide budget reduction. For more discussion, read posts from Sally Rockey of the NIH Office of Extramural Research on NIH-wide success rates and the factors influencing them.

Figure 3. Number of R01 and R37 grants (competing and noncompeting) funded in Fiscal Years 1997-2011.

Figure 4. Number of competing R01 applications (including revisions) received during Fiscal Years 1998-2012.

Below are the total NIGMS expenditures (including both direct and indirect costs) for R01 and R37 grants for Fiscal Year 1995 through Fiscal Year 2011.

Figure 5. The upper curve shows the overall NIGMS expenditures on R01 and R37 grants (competing and noncompeting, including supplements) in Fiscal Years 1995-2011. The lower curve (right vertical axis) shows the median direct costs of NIGMS R01 grants. Results are in actual dollars with no correction for inflation.

In Fiscal Year 2012, we have received about 3,700 competing R01 grant applications (including revisions) and anticipate funding about 850 of these with the Fiscal Year 2012 appropriation. We expect the R01 success rate to be between 24% and 25%.
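The estimate above may look puzzling at first: 850 awards out of roughly 3,700 applications is about 23%, not 24-25%. The gap comes from how NIH counts applications: an original application and its revision reviewed in the same fiscal year count only once in the success rate denominator. A minimal sketch of that counting convention (the project IDs and numbers below are illustrative, not actual NIGMS data):

```python
# Illustrative sketch of NIH's success rate counting convention:
# an application and its same-fiscal-year revision (A1) count as
# ONE application in the denominator. Project IDs here are made up.

def success_rate(applications, awards):
    """applications: (project_id, suffix) tuples reviewed this FY;
    awards: set of project IDs funded this FY."""
    # Deduplicate: an A0 and its A1 in the same FY count once.
    unique_projects = {project_id for project_id, _ in applications}
    return len(awards & unique_projects) / len(unique_projects)

# Toy example: 6 applications reviewed, but only 5 distinct projects,
# because GM001 was reviewed twice (A0, then A1) in the same fiscal
# year. Two projects were funded.
reviewed = [("GM001", "A0"), ("GM001", "A1"), ("GM002", "A0"),
            ("GM003", "A0"), ("GM004", "A0"), ("GM005", "A0")]
funded = {"GM001", "GM004"}

print(f"{success_rate(reviewed, funded):.0%}")  # 2 funded / 5 unique = 40%
```

Because revisions shrink the denominator, the reported success rate sits slightly above the raw awards-to-applications ratio.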

Because the competing application numbers and the total R01 budgets for Fiscal Years 2011 and 2012 are similar, the funding trends will likely be comparable.

13 Replies to “Fiscal Year 2011 R01 Funding Outcomes and Estimates for Fiscal Year 2012”

  1. For Figure 2, my understanding is that the percentile/funding line is for a particular submission, whereas the success rate cited (e.g., 24% for 2011) is higher because it includes those applications that were resubmitted within the calendar year and funded as A1s. Is this correct?

    1. NIGMS has a longstanding policy of asking our Advisory Council to approve the funding of any grant that would result in a PI having more than $750,000 in direct costs to his or her lab: http://www.nigms.nih.gov/Research/Application/Pages/NAGMSCouncilGuidelines.aspx. Some of the skipped applications in Figure 1 were due to this policy. However, other common reasons for skipping over an application are scientific overlap with a PI’s other projects, programmatic balance and comments in the summary statement that indicate concerns not reflected in the score.

  2. I’m struck by the growth of the difference between total costs and direct costs over the last 15 years. Schools are getting far too much in indirect costs relative to what the investigator is getting to actually do the science. When you factor in the amount of salary coverage the schools require from your direct costs, there is very little left over to do the actual science. When is NIH going to address the rate of growth in indirect costs relative to the stagnation and inflationary decrease in direct costs to the investigators who actually do the work?

  3. Competing renewal applications peaked in 2007 and have been declining pretty steadily since. I wonder why that is.

    1. This is a consequence of the “NIH doubling,” which ended in 2003. In FY 2003, NIGMS funded 1,106 new and competing research grants. This fell to 978 in FY 2004 and 963 in FY 2005. Since these are primarily four-year grants, most come back as competing renewals four years later, hence the peak in 2007.

      1. This may be the case, but the no-A2 policy is very likely to put downward pressure on Type 2s: a sustained effect, and possibly an extinction trend.

  4. It would also be valuable to know whether, among the competing grants, renewals were more successful than initial R01 applications. Figure 3 in the recent Feedback Loop would seem to indicate that they were about the same. Is this true?

    1. Competing renewals fare *much* better on peer review than new grants. You can see this in the second graph of this post. It is not clear to what extent this reflects differences in the standards of peer review applied to Type 1 and Type 2 grants, versus the self-selection of PIs only choosing to submit Type 2 grants when they were extremely productive in the prior funding period. It is surely a combination, but the relative contributions are hard to tease out.

  5. At the risk of seeming uninformed, I note that Figure 1 stops at the 40th percentile. Obviously no grant that scores worse than that will be funded. However, my understanding is that grants with overall impact scores worse than a certain value (maybe 5?) are not discussed at study section. Are those “triaged” grants counted in the calculations to estimate success rate? Or is a grant only counted if it undergoes “peer review” (is discussed at study section)? I just wanted to be sure I understand how the success rates are calculated and was a bit confused by the semantics.

    1. Several years ago, the Center for Scientific Review (CSR) began the practice of having study sections not discuss applications to which reviewers, prior to the meeting, assigned poor impact scores. This policy is reasonable in view of constrained budgets and the limited likelihood that poorer applications will be funded. It also has the advantage of making the study section meetings more efficient and cost-effective. As always, however, any reviewer may request that an application be discussed, despite its initial score.

      Applications with scores poorer than the 40th percentile are rarely funded. For an explanation of success rates and how they are calculated, please see http://www.nigms.nih.gov/Research/Application/Pages/SuccessRateFAQs.aspx.
