Wanted: Input on Consideration of Sex as a Biological Variable in Biomedical Research

Although researchers have made major progress in achieving a balance between male and female subjects in human studies—women now account for roughly half of the participants in NIH-funded clinical trials—a similar balance has not been achieved in preclinical research involving animals and cells. To inform the development of policies that address this issue, NIH has issued a request for information (RFI) on the consideration of sex as a biological variable in biomedical research.

As NIH Deputy Director for Extramural Research Sally Rockey wrote in a recent blog post announcing the RFI, “Sex is a critical variable when trying to understand the biological and behavioral systems that fundamentally shape human health.” Appropriate representation of both sexes in the animals and cells used in research is also relevant to NIGMS and NIH efforts to enhance scientific rigor and data reproducibility.

And in her own blog post about the RFI, NIH Associate Director for Research on Women’s Health Janine Clayton said that while many scientists have already expressed support for the policy change, she has also heard from many sources that it needs “to be carefully implemented, as a true benefit to science—and not become a trivial, bureaucratic box to check.” She noted that comments in response to the RFI will guide NIH in creating “meaningful change in a deliberate and thoughtful way.”

Since NIGMS supports a significant amount of basic biomedical science that utilizes animal models and cells, we encourage our grantees to submit their input on this topic by the October 13 deadline.

UPDATE: The deadline for submitting input has been extended to October 24.

Funding Opportunity to Create Training Modules to Enhance Data Reproducibility

In February, we asked for input on training activities relevant to enhancing data reproducibility, which has become a very serious issue for both basic and clinical research. The responses revealed that there is substantial variation in the training occurring at institutions. One reason is that “best practices” training in skills that influence data reproducibility appears to be largely passed down from generation to generation of scientists working in the laboratory.

To increase the likelihood that researchers generate reproducible, unbiased and properly validated results, NIGMS and nine additional NIH components have issued a funding opportunity announcement to develop, pilot and disseminate training modules to enhance data reproducibility. Appropriate areas for the modules include experimental design, laboratory practices, analysis and reporting of results, and/or the influence of cultural factors such as confirmation bias in hypothesis testing or the scientific rewards system. The modules should be creative, engaging, readily accessible online at no cost and easily incorporated into research training programs for the intended audience, which includes graduate students, postdoctoral fellows and beginning faculty.

The application deadline is November 20, 2014, with letters of intent due by October 20, 2014. Applicants may request up to $150,000 in total costs to cover the entire award period. For more details, read the FAQs.

Hypothesis Overdrive?

Historically, this blog has focused on “news you can use,” but in the spirit of two-way communication, for this post I thought I would try something that might generate more discussion. I’m sharing my thoughts on an issue I’ve been contemplating a lot: the hazards of overly hypothesis-driven science.

When I was a member of one study section, I often saw grant applications that began, “The overarching hypothesis of this application is….” Frequently, these applications were from junior investigators who, I suspect, had been counseled that what study sections want is hypothesis-driven science. In fact, one can even find this advice in articles about grantsmanship.

Despite these beliefs about “what study sections want,” such applications often received unfavorable reviews because the panel felt that if the “overarching hypothesis” turned out to be wrong, the only thing that would be learned is that the hypothesis was wrong. Knowing how a biological system doesn’t work is certainly useful, but most basic research study sections expect that a grant will tell us more about how biological systems do work, regardless of the outcomes of the proposed experiments. Rather than praising these applications for being hypothesis-driven, the study section often criticized them for being overly hypothesis-driven.

Many people besides me have worried about an almost dogmatic emphasis on hypothesis-driven science as the gold standard for biomedical research (e.g., see Jewett, 2005; Beard and Kushmerick, 2009; Glass, 2014). But the issue here is even deeper than just grantsmanship, and I think it is also relevant to recent concerns over the reproducibility of scientific data and the correctness of conclusions drawn from those data. It is too easy for us to become enamored with our hypotheses, a phenomenon that has been called confirmation bias. Data that support an exciting, novel hypothesis will likely appear in a “high-impact” journal and lead to recognition in the field. This creates an incentive to show that the hypothesis is correct and a disincentive to prove it wrong. Focusing on a single hypothesis also produces tunnel vision, making it harder to see other possible explanations for the data and sometimes leading us to ignore anomalies that might actually be the key to a genuine breakthrough.

In a 1964 paper, John Platt codified an alternative to the standard conception of the scientific method, which he named strong inference. In strong inference, scientists always produce multiple hypotheses that could explain their data and then design experiments that will distinguish among these alternative hypotheses. The advantage, at least in principle, is that it forces us to consider different explanations for our results at every stage, minimizing confirmation bias and tunnel vision.
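To make the contrast with single-hypothesis thinking concrete, here is a minimal, purely illustrative sketch of the strong-inference mindset applied to data analysis. Everything in it—the dose-response measurements, the three candidate models, the noise level—is hypothetical and chosen only for illustration; the point is simply that several explanations are scored against the same observations, so the question becomes which explanation the data favor rather than whether one favored hypothesis is confirmed.

```python
# Toy illustration of strong inference: instead of testing a single
# "overarching hypothesis," keep several candidate models on the table
# and ask how well each one explains the same observations.
# All models and numbers below are hypothetical, for illustration only.

import math

# Hypothetical measurements: (dose, observed response)
observations = [(0.0, 0.1), (1.0, 0.9), (2.0, 1.7), (4.0, 3.2), (8.0, 4.1)]

# Three competing hypotheses about how response depends on dose.
def linear_model(dose):       # H1: response rises linearly with dose
    return 0.5 * dose

def saturating_model(dose):   # H2: response saturates at high dose
    return 5.0 * dose / (dose + 2.0)

def threshold_model(dose):    # H3: no response below a dose threshold
    return 0.0 if dose < 2.0 else 2.0

hypotheses = {
    "H1: linear": linear_model,
    "H2: saturating": saturating_model,
    "H3: threshold": threshold_model,
}

def log_likelihood(model, data, sigma=0.5):
    """Gaussian log-likelihood of the data under a model (fixed noise sigma)."""
    return sum(
        -0.5 * ((obs - model(dose)) / sigma) ** 2
        - math.log(sigma * math.sqrt(2 * math.pi))
        for dose, obs in data
    )

# Score every hypothesis against the same data, rather than asking only
# whether one favored hypothesis is "confirmed."
scores = {name: log_likelihood(model, observations) for name, model in hypotheses.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:16s} log-likelihood = {score:7.2f}")
```

Running this prints a ranked list of the three hypotheses; in real strong inference, of course, the payoff comes from then designing the next experiment specifically to separate whichever candidates remain close.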

Another way of addressing the hazards of hypothesis-driven science is to shift toward a paradigm of question-driven science. In question-driven science, the focus is on answering questions: How does this system work? What does this protein do? Why does this mutation produce this phenotype? Putting questions ahead of hypotheses makes getting the answer the goal, rather than “proving” a particular idea. A scientific approach that puts questions first and includes multiple models to explain our observations offers significant benefits for fundamental biomedical research.

In order to make progress, it may sometimes be necessary to start with experiments designed to give us information and leads—Who are the players? or What happens when we change this?—before we can develop any models or hypotheses at all. This kind of work is often maligned as “fishing expeditions” and criticized for not being hypothesis-driven, but history has shown us just how important it can be for producing clues that eventually lead to breakthroughs. For example, genetic screens for mutations affecting development in C. elegans set the stage for the discovery of microRNA-mediated regulation of gene expression.

Is it time to stop talking about hypothesis-driven science and to focus instead on question-driven science? Hypotheses and models are important intermediates in the scientific process, but should they be in the driver’s seat? Let me know what you think.

Give Input on Training Activities Relevant to Data Reproducibility

Data reproducibility is getting a lot of attention in the scientific community, and NIH is among those seeking to address the issue. At NIGMS, one area we’re focusing on is the needs and opportunities for training in areas relevant to improving data reproducibility in biomedical research. We just issued a request for information to gather input on activities that already exist or are planned, as well as on crucial needs that an NIGMS program should address.

I strongly encourage you and your colleagues to submit comments by the February 28 deadline. The information will assist us in shaping a program of small grants to support the design and development of exportable training modules tailored for graduate students, postdoctoral students and beginning investigators.

UPDATE: NIGMS and additional NIH components have issued the Training Modules to Enhance Data Reproducibility (R25) funding opportunity announcement. The application deadline is November 20.