CSR’s Percentiling Recalibration


The Center for Scientific Review (CSR) has recently adjusted the base that a number of permanent study sections and special emphasis panels (SEPs) use to calculate percentiles for certain application types. Applications that were reviewed in a SEP meeting for the May 2013 Council round will be assigned a percentile ranking calculated from the new base. In some cases, revised summary statements will be issued for those applications missing percentile scores or with an incorrect percentile. The percentile ranking will also appear in the eRA Commons account of the principal investigator(s).

We at NIGMS are often asked how percentile rankings factor into our funding decisions. We do not rely solely on a percentile cutoff; we consider a number of additional factors, including an investigator’s career stage, a laboratory’s other research funding, the balance of the NIGMS research portfolio, and other programmatic priorities. For more information, see http://www.nigms.nih.gov/Research/Application/Pages/SuccessRateFAQs.aspx.

12 Replies to “CSR’s Percentiling Recalibration”

  1. “will be assigned a percentile ranking calculated from the new base”

    When will this new assignment take place?

    1. The new CSR base was calculated as of March 8. Summary statements released from SEPs before that date will receive a new percentile and will be re-released.

  2. Very glad to see this news. The review score can sometimes be confusing. For example, one application received a final score of 50, yet the score from the three assigned reviewers was 32, which is quite different. The difference apparently arose during the discussion session (other panel members found more flaws, or the three reviewers changed their minds). In general, the summary statement does not explain how the panel determined the final score. Would it be possible for the study section to include that information in the summary statement?

    1. The FAQs in the blog post Why Overall Impact Scores Are Not the Average of Criterion Scores, especially the last one, cover this topic.

  3. The announcement from CSR is confusing. They say that bases will be recalculated “for a number of permanent study sections and Special Emphasis Panels (SEPs)”. But then they say that new percentiles will be calculated “for applications that are reviewed in SEPs this round and should be percentiled”.

    So is it correct that no recalculation of percentiles occurred for grants reviewed in standing study sections? Is it correct that this was nothing more than a recalibration of the “all-CSR” base for non-recurring SEPs?

    1. The new CSR percentiling is for R01s only at this time. For standing study sections where the new percentiling caused a significant shift in the score-percentile alignment, the new schema was used to avoid disadvantaging PIs. If the alignment was not altered very much, the usual percentiling base (the current round plus the two previous rounds) was used. More than half of the standing study sections used the new percentiling this round. The same practice applies to recurring SEPs, which have their own bases: where the score-percentile alignment is discrepant, the new percentiling is used; where the alignment is not significantly altered, they will use the current and two previous rounds as their base (see the illustrative sketch following the replies below).

      All SEPs that use the CSR percentile base will be using the recalibrated CSR base, which includes only the current round.

  4. I note that in the PowerPoint presentation about this recalibration, it is emphasized that a rationale for “spreading the scores” is that study section members should not want Program to be making the selections of what is funded. This seems a tacit admission of what we all know: despite repeated warnings that reviewers are not making funding decisions, and the mandate that we never use the “f” word, the reviewers are in fact very much making those decisions. Indeed, the 2 or 3 assigned reviewers pretty much make that decision within minutes of perusing the grant, and the rest is just justification for it. That is why so many of the bulleted remarks in the critique are vague and useless. Rarely is a reasoned decision discussed by the study section as a whole; most often, the outcome is driven by the wisdom/perspective/bias of 1 or 2 people.

    It is really time that the NIH admitted this, especially when funding is so tight. The failure to allow at least 2 revisions has led to so much anxiety, lost productivity and careers, and just plain injustice. Not to mention that it is the antithesis of science: “I am right, you are wrong, and I am not much interested in what you have to say about it.”

    Finally, perhaps the reason score compression exists is not a failure of the reviewers to do their job properly, but rather their instinctive understanding that they cannot distinguish among many of the applications in a given range, and that such distinctions are arbitrary: give it a 3 and it has a chance, if not this round then in revision; a 4 makes it very difficult; and a 5 is usually the end of its chances. This latest fine-tuning of the process will create confusion: different criteria and “calibration” will be used by reviewers at the same review session, and even at different times within that session. I do not understand how it will be determined whether a change in the distribution is significant enough to need adjustment. But it sure seems likely that someone will be screwed and someone will benefit, and it will not be very rational.

  5. If the rationale for these changes is to take the decision away from Program staff, then have the study section rank the grants at the end of the meeting. We used to do this at NSF, and it was a great way to recalibrate at the end. It did not add much time to the meeting, and I felt better about the fairness of that process than I have about any of the more than 15 NIH study sections I have participated in since.

  6. I understand that the recalibration is for R01 percentiling, but were impact scores for K awards affected as well?

  7. At Council, how are the percentiles from standing study sections compared with those from SEPs? At least intuitively, many SEPs have more senior investigators, so the competition is usually tougher; in other words, the 20th percentile in a SEP might actually be much harder to achieve than the 15th in a standing section. How is this accounted for?

  8. The percentile indicates the relative ranking of a specific grant application with respect to other applications reviewed in a particular study section. Council members are asked to provide oversight of the initial review process. For NIGMS research applications, most SEP percentiles are calculated by reference to the overall CSR voting patterns (146 of 158 applications this round) while most study sections have their own percentile base. However, these are roughly comparable, so Council typically weighs other considerations (e.g., career status) rather than which percentile base was used.
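
For readers curious about the arithmetic, here is a minimal sketch of how a percentile can be derived from a ranking base, following the descriptions in replies 3 and 8 above. The midpoint-rank formula and the example scores are illustrative assumptions, not CSR’s published algorithm.

```python
def percentile(score, base_scores):
    """Illustrative only: rank an application's overall impact score
    against a base of scores (NIH impact scores run 10-90; lower is
    better). The midpoint-rank convention here is an assumption, not
    CSR's published algorithm."""
    n = len(base_scores)
    better = sum(1 for s in base_scores if s < score)  # strictly lower scores
    ties = sum(1 for s in base_scores if s == score)   # ties count half
    return round(100.0 * (better + 0.5 * ties) / n)

# Hypothetical score sets. A study section's own base pools the current
# round with the two previous rounds; the recalibrated CSR-wide base
# uses only the current round, per reply 3 above.
current_round = [20, 25, 31, 38, 40, 47, 52, 60]
two_prior_rounds = [15, 18, 22, 28, 30, 35, 41, 50, 55, 62]

section_base = current_round + two_prior_rounds  # three-round base
csr_base = current_round                         # current round only

print(percentile(31, section_base))  # 42 against the pooled base
print(percentile(31, csr_base))      # 31 against the current-round base
```

The same score lands at a different percentile depending on the base it is ranked against, which is why recalibrating the base can shift the score-percentile alignment enough to matter.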
