Hypothesis Overdrive?

Historically, this blog has focused on “news you can use,” but in the spirit of two-way communication, for this post I thought I would try something that might generate more discussion. I’m sharing my thoughts on an issue I’ve been contemplating a lot: the hazards of overly hypothesis-driven science.

When I was a member of one study section, I often saw grant applications that began, “The overarching hypothesis of this application is….” Frequently, these applications were from junior investigators who, I suspect, had been counseled that what study sections want is hypothesis-driven science. In fact, one can even find this advice in articles about grantsmanship.

Despite these beliefs about “what study sections want,” such applications often received unfavorable reviews because the panel felt that if the “overarching hypothesis” turned out to be wrong, the only thing that would be learned is that the hypothesis was wrong. Knowing how a biological system doesn’t work is certainly useful, but most basic research study sections expect that a grant will tell us more about how biological systems do work, regardless of the outcomes of the proposed experiments. Rather than praising these applications for being hypothesis-driven, the study section often criticized them for being overly hypothesis-driven.

Many people besides me have worried about an almost dogmatic emphasis on hypothesis-driven science as the gold standard for biomedical research (e.g., see Jewett, 2005; Beard and Kushmerick, 2009; Glass, 2014). But the issue here is even deeper than just grantsmanship, and I think it is also relevant to recent concerns over the reproducibility of scientific data and the correctness of conclusions drawn from those data. It is too easy for us to become enamored with our hypotheses, a phenomenon that has been called confirmation bias. Data that support an exciting, novel hypothesis will likely appear in a “high-impact” journal and lead to recognition in the field. This creates an incentive to show that the hypothesis is correct and a disincentive to proving it wrong. Focusing on a single hypothesis also produces tunnel vision, making it harder to see other possible explanations for the data and sometimes leading us to ignore anomalies that might actually be the key to a genuine breakthrough.

In a 1964 paper, John Platt codified an alternative approach to the standard conception of the scientific method, which he named strong inference. In strong inference, scientists always produce multiple hypotheses that will explain their data and then design experiments that will distinguish among these alternative hypotheses. The advantage, at least in principle, is that it forces us to consider different explanations for our results at every stage, minimizing confirmation bias and tunnel vision.

Another way of addressing the hazards of hypothesis-driven science is to shift toward a paradigm of question-driven science. In question-driven science, the focus is on answering questions: How does this system work? What does this protein do? Why does this mutation produce this phenotype? By putting questions ahead of hypotheses, getting the answer becomes the goal rather than “proving” a particular idea. A scientific approach that puts questions first and includes multiple models to explain our observations offers significant benefits for fundamental biomedical research.

In order to make progress, it may sometimes be necessary to start with experiments designed to give us information and leads—Who are the players? or What happens when we change this?—before we can develop any models or hypotheses at all. This kind of work is often maligned as “fishing expeditions” and criticized for not being hypothesis-driven, but history has shown us just how important it can be for producing clues that eventually lead to breakthroughs. For example, genetic screens for mutations affecting development in C. elegans set the stage for the discovery of microRNA-mediated regulation of gene expression.

Is it time to stop talking about hypothesis-driven science and to focus instead on question-driven science? Hypotheses and models are important intermediates in the scientific process, but should they be in the driver’s seat? Let me know what you think.

59 Replies to “Hypothesis Overdrive?”

  1. Perhaps it could be framed as “goal”-driven science. For instance, one could say, “My goal is to understand the function of the MafP1 signalling system in mammalian development.” Then you are not bound by the questions you can think to ask. You can just keep asking and answering questions until you reach your goal.

  2. I think your point is well taken, but it has two problems, one theoretical and the other practical. I would imagine that most grant proposals pertaining to basic research are inspired by the applicant’s desire to know “how X works”. To answer such a question you have to have some ideas worth testing, both to organize your research and to convince reviewers of the likelihood that you are approaching the question in a sensible way. These are working hypotheses. They need not be so specific that you couldn’t use broadly based approaches, such as unbiased genetic screens, megasequencing of appropriate genomes, or broadly based analyses of gene expression in cells growing in different conditions. The alternative is to simply gather data without any preconceived notion of what kind of information you are looking for and without any sense of what data would potentially be useful. That’s the theory. In practice, most review groups that I have been part of or that have reviewed my proposals are predisposed against research that is not hypothesis-driven. Therefore, even if your assessment is correct from a theoretical point of view, how will you convince reviewers to drop their own preconceived notions of what should be fundable? Changing the mindset of reviewers would only work if NIH established firm guidelines about what can and cannot be criticized. Doing so would alienate most reviewers.

  3. I agree. “Hypothesis-driven” research has value. But methods development, pathway elucidation, mechanism elucidation, and structure determination also have value…and typically have higher significance, higher impact, higher citation half-life, and higher reliability.

    1. Sadly, many comments here confirm the widespread science illiteracy that is nowadays so prominently on display in peer-review panels. Of course we need (implicit or explicit) hypotheses. A hypothesis is just another way of framing a rigorous formal scientific question. Hence the distinction between question-driven and hypothesis-driven science is moot. At least this shows that Jon still appreciates true hypotheses in the true meaning of the word. Jon’s disdain is understandable because so many investigators use the term hypothesis to refer to approaches that do not deserve that label.

      If Galileo or Darwin had not had hypotheses, we would still be documenting star positions night by night, or collecting specimens of dead or fossilized animals, describing their anatomy in some mechanistic manner, and cataloguing them – without understanding what we are doing.
      I truly miss truly hypothesis-driven proposals. The majority of proposals that get funded are about technologies, new pathways, new molecules, new big data – anything with a material or concrete objective, but rather thoughtlessly superficial. We are dangerously entering an era in which everyone constructs devices, collects data, and correlates data points, but no one connects the dots – like Darwin did. For the latter, you need hypotheses (or questions – which is the same). The former are all needed, of course, but not sufficient. Yet the vast majority of NIH RFAs are about new technologies, platforms, and tools, as if piling up data equaled understanding. Even the BD2K (Big Data to Knowledge) initiative ironically fails to promote asking questions and formulating hypotheses. NIH is killing good traditional science.
      Unfortunately, many young investigators no longer know what a hypothesis is. Of course one can frame every goal as a hypothesis: “The overarching hypothesis is that inhibiting XYZ can benefit patients with ABC…”, “The hypothesis is that we can model this process using that approach…” This dilutes the notion of a good, solid hypothesis.

      These are not hypotheses: the former example is an ad hoc assumption – perhaps meaningful, perhaps not; the latter is a goal squeezed into the format of a hypothesis – and both are often used to fake a hypothesis.

      Thus, before engaging in the discussion about hypothesis-driven science, one had better learn what a formal hypothesis is: a testable proposition, generated from empirical facts using logical reasoning.

      1. Well put…the difference between a good hypothesis and a good question is really just semantic. The equivalent of confirmation bias for the latter would simply be asking the wrong question. I’ll also point out that a hypothesis has real value for the reader. I personally find it much easier to understand a system if it is set up as a series of clear hypotheses….maybe that is just me.

      2. I concur. It’s a mistake to throw out Platt’s ideas about Strong Inference. Even though they may be an ideal, they’re well worth shooting for. The problem with the “overarching hypothesis” described in the initial post is that it’s often devised post hoc rather than as a starting point. Furthermore, Glass, Firestein, and others completely misconstrue Newton’s “hypotheses non fingo” remark – he meant only that he had no idea how gravitational attraction worked, and would postpone the search for a causal explanation until he had described it mathematically. He had no problem constructing hypotheses in his work on light (as described in Opticks).
        I urge students to read Platt’s 1964 classic, and I often re-read it with them – it is a helpful refresher to consult before writing a grant application. There should be foreseeable alternative outcomes to each experiment, and they should seem equally plausible in advance of performing the experiment. The alternative outcomes should take you down different paths and change the way you think about the system or phenomenon you are studying. If this is the case, you are clearly posing an important question. If not, you’re just describing, not answering anything – and the bar for funding should be higher. I can imagine that the applicant may have some technical advance that s/he needs funding to develop, but as a reviewer I will give the edge to the person with the hypothesis every time.

  4. Jon, you are asking an excellent question. Even my kids in elementary school already learn how important hypotheses are for the scientific endeavor, but should they be the main standard by which grant applications are evaluated? As a naturally somewhat technique-driven biophysicist I can certainly appreciate the idea of exploring a system by all technical means possible with less regard to probing one specific model (hypothesis). Ultimately, however, the biggest question is what drives scientific discovery the fastest, and here a central hypothesis (that goes beyond a self-evident fig leaf of a hypothesis such as “I hypothesize that my technique will reveal something important”, which is sometimes seen in the applications of newcomers and is inevitably downgraded in the review) has some added benefit as it will force me to think through my research at a level deep enough to come up with (at least) one such model that I can critically use as a test bed for my experiments (a benefit that persists even if I later have to modify my favorite hypothesis). Yet, thinking outside that “box” of a single guiding model/hypothesis is equally important to avoid the confirmation bias you describe. So perhaps there would be a way to balance between “hypothesis” and “goal” driven research by explicitly encouraging PIs to cover alternative models sufficiently in the “pitfalls and alternative routes/hypotheses” section that has become commonplace (or mandate such a section in some form).
    Thanks for engaging in this important discussion! Nils

  5. My comment is that as long as the majority of study section members agree that this is the way we should express our scientific aims, we are stuck with the “fishing expedition” criticism. What we actually do is ask questions and get answers, in the way you describe, whether or not those answers agree with our preconceived thoughts about what the outcome may be. Still, outside of “big science” with special funding, like the genome project or the Broad-funded efforts, there has to be a hypothesis that you start with or one would not start at all. Even with “big science,” the hypothesis is that the new information will lead to new hypotheses.

  6. I agree completely with you. If I want to find out if my system does A or B after introducing a certain mutation, why should I have to express a bias toward one of the possible outcomes? However, and that is a big HOWEVER, based on my experience a significant portion of your reviewers insist on hypotheses. Unless that changes, what shall I do?

  7. Jon,

    Good points, but I’d take things a step further. My former mentor Lee Hood even talks about “hypothesis-limited science,” and of course he promotes discovery-based methods (such as large-scale ‘omics approaches or “systems biology”). Personally, I think the “scientific method” as it’s typically presented is actually a far cry from reality. For example, the scientific method is often presented as starting with a testable hypothesis from which experiments are generated, and so on. My claim is that for all science, hypotheses are generated from data (one’s own previous data, the data of others, etc.) and then refined or eliminated with further data. Many researchers (and clinicians in particular) seem to be uncomfortable with funding discovery-based work – whether that be genomics studies or clinical observations – in the absence of a hypothesis. Such studies get referred to as “fishing expeditions”. As a result, observational science is downgraded and often the data necessary to generate new hypotheses are never created.

    This perception plays out in other ways to limit science. For example, it is often the case that large repositories of clinical data are tightly held by those who collected the data. In order to access these repositories, it is usually necessary to go through some gatekeeper (often the statistician on the team). Often the requirement is that you have to propose a hypothesis to test with the data, and then only the data necessary to test that hypothesis are sliced out. Such an approach prevents exploratory analyses that may find novel relationships between variables within the data and ultimately limits our understanding.

  8. Rather than reeling from the current mindless over-emphasis on gathering large data sets with no obvious end in sight, as is currently rewarded in so many ways by the NIH and its leadership, this fellow recoils from hypothesis-driven science, which somehow seems overly intellectual to him. How very sad to see such mindlessness extend its reach deeper and deeper into the body of science.

  9. Thank you, thank you, Jon. I feel that with students and postdocs, I often have to correct for overzealous science fair teachers who trained students to always think in terms of the hypothesis they are proposing and whether the data support it. I think this is a very dangerous approach, for exactly the reasons you state. I tell students that if you do propose a new “model,” then the thing to do is to try to shoot down your own model before someone else does. It is when people think they have to “support their hypothesis” that we get closer to the slippery slope of only seeing data that are supportive. This (and many, many other forces) contributes to the irreproducibility issue that you refer to. If we can get rid of the “support or disprove the hypothesis” mode we will ALL get ahead. This is what I tell students and postdocs around me. (Though I still was not able to make inroads into the middle school science fair last month, where my daughter was judged on whether she stated the hypothesis and then, at the end, whether she said it was proved or disproved.) We can move away from the science fair model, and actually do some science. Your comments are most welcome.

  10. I tend to agree with you. However, a primary problem with proposal writing is that reviewers and study sections do NOT seem to agree on the relative importance of hypothesis-driven, question-driven and unbiased exploratory research. I may write a proposal involving question-based language or a “fishing expedition” that seems necessary, but some reviewers might want to see me test an “overarching hypothesis”. How in the world are applicants supposed to anticipate how study sections will receive their approaches unless NIH comes up with a set of guiding principles that reviewers are always reminded of?

  11. Jon,
    Kudos for addressing this topic. As a study section member, I too saw many grants deliberately framed around hypotheses, as if applicants had been coached that “this is how you have to do it.” This approach is not necessarily bad, but many of these proposals didn’t do well, for the reasons you mention. On the other hand, I also saw panel members criticize proposals for “lack of a clear hypothesis,” which I often found irrelevant to the merit of the proposal. I hope your commentary will help us as a community focus more on the intrinsic merit of a proposal and less on whether it fits a particular format.

    Dan Leahy

    1. I agree with you Dan, and I think this is particularly true of postdoc and predoc proposals that are too often set aside by reviewers because they don’t contain a hypothesis statement.

  12. Hypotheses can be very effective organizing and motivating devices for planning and stimulating experimentation. I don’t think there is anything wrong with hypothesis-driven research. The problem is, as you say, bias in interpreting the results. I have witnessed entire labs so wedded to a hypothesis that they are blind to contradictory results. The hypothesis-driven investigator should be most excited when data inconsistent with the hypothesis appear. That is precisely when something new can be learned and the hypothesis replaced with one closer to the truth. This is a little like your competing-hypotheses idea, except that the competing hypothesis is generated in response to the new finding and is therefore more informed.

  13. Jon, I totally agree with you. We enter into grants with preconceived hypotheses that might be totally wrong. But if we discover our hypothesis was incorrect, we feel as if we have failed, even if the data have led us in a completely new and exciting direction. I think it is time to stop this practice; it is destructive to science and, I think, leads people to false conclusions, because they are sure that if their hypothesis is incorrect, they will not get funded again. This is certainly not the right set of incentives nor the best way to propel science forward.

  14. Dear Jon

    I believe you (and your study section colleagues) are fundamentally mistaken. How the biological system (or any other system for that matter) “really” works, we shall never know. Theories cannot empirically be justified or proven. (This was already known to Hume.)

    However, experiments can tell us which of our hypotheses or theories are incorrect or false. Scientific knowledge, therefore, only grows by the refutation of theoretical possibilities (hypotheses) and the replacement of earlier theories by better ones. All scientific knowledge is theoretical. The collection of “data” in the absence of theories (or hypotheses) does not make a contribution to our knowledge whatsoever – but an empirically refuted hypothesis does!

    Science thus understood also provides a safeguard against what you call “tunnel vision”; for experiments are understood as attempts to refute or test hypotheses, and not to prove them.

    I do recommend, specifically, chapters 1 (Science: Conjectures and Refutations) and 10 (Truth, rationality and the growth of scientific knowledge) in Karl Popper’s “Conjectures and Refutations” for a lucid discussion.

  15. I’ve seen the same problem in study section, but from my perspective it involves two issues. First, the “hypothesis” in these grants is often not hypothetical at all, but only phenomenology dressed up as hypothesis; a hypothesis is a universal statement that cannot be directly verified by observation, so the famous “the sky is blue” is not a hypothesis at all. Many grants present this kind of phenomenological, observational science as hypothetical. I believe it is driven by the desire to be “hypothesis driven”. Second, your point that a hypothesis might be proven incorrect is a problem only if the experimenter has no other hypotheses; the aim should be to provide a set of hypothetical explanations of the phenomenon under study, so that if one is proven wrong the experimentalist will have actually made progress toward understanding the problem. This comment may raise the ire of those who oppose Karl Popper’s view of science, and I will simply point out that Popper, improperly in my view, minimized the importance of phenomenological science in his work. Most of what we scientists do is phenomenological, and my feeling is that the trend toward “hypothesis driven” science unfairly minimizes the importance of this kind of work.

  16. Finally, someone who understands the dangers of an over-reliance on hypothesis-based science. I’ve argued about this since I began as an assistant professor. If you can form a hypothesis, you already know a lot. The most interesting things are those about which you know absolutely nothing, and for which it is impossible to come up with a reasonable hypothesis. One needs to gather information first. I call this exploratory science. Some people call it fishing. The danger is if you fish where there are no fish, or if you don’t know when you’ve got a fish. I think a balance is needed between exploration and formulating and testing models (after all, hypotheses are just models).

    I’ve finally gotten to the point in my research where I’ve been able to develop very detailed, and well justified, (but not proven) models. The problem is, the reviewers tend to think that my arguments are so airtight that actually testing the models is “incremental” (the kiss of death). However, they are not incremental at all. If the designed experiments work, the model would be soundly established – a major breakthrough. But the experiments may not give the expected results (they rarely do), and I’m far from convinced that everything will work. So that means we will either need a new model, or the old one needs to be revised. I’m fine either way – I just want to know how it works. But, somehow, reviewers don’t get this. I’ve got clear hypothesis-based questions and they are convinced I already know the answer, but in fact, I don’t. So why the heck do they always ask for hypotheses, and when they get them, they feel that there is not much else to be learned? Sounds like they really want exploration, but they’ve been conditioned to look for hypotheses…In other words, they really want balance…

  17. Thank you for this blog. I completely agree. People often get wedded to their hypothesis and try to prove it right (as opposed to wrong). I think if we can emphasize question driven science to an equal or greater extent than hypothesis driven science, we will in the long run do our students and science a favor.

  18. I strongly support tempering hypothesis-driven research with a healthy dose of intelligent investigation of a question. Our goal should be to discover the answer, but it is arrogance to think we can guess the answer before we begin…unless so much is known about a system that further investigation is no longer worthwhile.

  19. Some of the most productive and interesting conversations begin with a proposed question, because when people engage in a thought exercise through inquiry, their minds are receptive to broad possibilities, and thus the best answer can be objectively determined. In contrast, uncomfortable discussions are those one-sided lectures during which one party or person attempts to prove a point. A conversation may end in vain because, as mentioned in Glass’s insightful opinion piece, “only a single negative example is required to disprove the hypothesis.”

    To avoid dead ends: how might professors of graduate education encourage the more liberal question/model practice? Specific examples of current publications that utilize this method (whether knowingly or not) will certainly help dismantle hypothesis-generating machines.

  20. EXACTLY RIGHT. Maybe you could shoot this piece over to the folks at CSR for distribution to all study section participants. Should be required reading. Nice piece.

  21. This is the most clearheaded corrective that I have heard in a long time to the advice now given to investigators from all sides, by “faculty development” administrators, “deans of research”, and a growing professional grant-writing advice industry, about the need to structure grants around formal hypotheses. Many of us were attracted to careers in life science because we were excited to exercise our creativity in pursuit of a deeper understanding of how biological systems work. Asking good open-ended questions, using everything at one’s disposal to address them, and adopting an open-minded stance when observing and describing experimental outcomes and interpreting their meaning has always been the most fruitful approach to making timely progress toward understanding how things work and making new discoveries. All good scientists understand this intuitively. They also understand that the most challenging and often most insight-laden moments involve figuring out “what just happened?” when an experimental outcome falls into the class of anticipated alternative outcomes known as “none of the above”. Such moments can take one’s investigation in previously unanticipated directions. Asking good questions whose answers rule out a class of explanations and narrow the focus of our next questions and next experiments is key to making progress toward understanding. But the insistence on confining the trajectory of this activity before the fact, in rigidly hypothesis-centered grants, often tends to ossify the dynamic and frankly unpredictable process by which science proceeds, a process that is no less powerful for its inherent unpredictability. Unfortunately, a related consequence of “hypothesis-centered” thinking is the view that one must complete every aim originally proposed in one’s grant in order to have any hope of renewing it, since “that is what you were funded to do”. The task of judging which are the most promising grants is no easy business. But investigators who feel compelled to frame their grants as entirely hypothesis-centered, regardless of how mature their understanding of the phenomenon under investigation, may run a greater risk of confining their creative inclinations unnecessarily than those who organize their thinking around good open-ended questions and experimental design. After all, discovering the answer to an important question is the most deeply satisfying result that a scientist can hope for.

  22. Your comments are spot on. Most scientists, I believe, would not bias their interpretation of results to confirm their hypothesis. That would create a tangled web. But the need to couch our research in terms of a focused hypothesis is artificial. Often we have to make something up, not believing it to be the truth. We don’t know the truth; why should we be forced to guess? Instead, describing a number of reasonable outcomes (a fishing expedition) is far more intellectually honest. We could then describe the data needed to distinguish between them, and as you mention, this removes a source of possible bias that could otherwise creep in.

  23. I agree. The question should drive the research. The hypotheses should be more like tools to focus experiments and always, always in the context of alternative hypotheses.

  24. I learned about hypothesis-driven science at university. In my Ph.D. it was stressed that good science is dependent on a good question. When we submitted a paper which was data-driven/hypothesis-generating it got slaughtered before peer review because of this.

    What I ultimately learned doing a Ph.D. is that approaches and methods don’t really matter when producing science (they may matter when raising grants). If you can use a hammer instead of the latest microscope, do it (and save money while you’re at it). The same seems to go for the way you drive research. If you have a good hypothesis, test it. If you have a good question, try to answer it. If you can infer interesting models from hypothesis-free data, create them and prove them right. If you manage to explain a tiny bit of how nature works, you have succeeded. On the other hand, I think one can become enamoured with a flawed hypothesis as well as with a faulty model or the incorrect answer to a question. The ways to cloud the mind seem countless.

    In the end, no matter what, all approaches are hard, because science is. At least to me.

  25. Unfortunately, study sections are split on this issue. I believe a question-driven approach is indeed superior, but when funding hovers around the 10th percentile the review process is really not about hypotheses or questions but about marketing. Unfortunately, China will overtake the USA unless this problem is fixed in the near future. Even well-funded people are tired of the grant process in the USA and are leaving.

  26. Jon, you make an excellent point. I and the cell biologists I know or knew (including those no longer with us) grew out of the tradition of asking questions. The word ‘hypothesis’ was a term seemingly necessary for grant applications. ‘Question’ wins over ‘hypothesis’ in Nature (81,096 vs. 34,720 articles, commentaries, etc.), but surprisingly not in J Cell Biol (8,138 vs. 8,336) or J Biol Chem (50,301 vs. 59,325), although the counts are similar.

  27. Respectfully, there are two important points that I would make in response to this column: The first is that the objective of hypothesis testing is properly intended to *disconfirm* the hypothesis under scrutiny. A scientist must make every reasonable effort to do so. That is the mechanism that ameliorates confirmation bias. And, in my humble opinion, the scientific community is not doing this with vigor because the price of disconfirmation is often lack of funding. The second point that I would respectfully make is that it is much more difficult to set up an objective hypothesis testing experiment than to mine data.

  28. Bravo! I think you are saying out loud what most of us have been thinking ever since hypothesis-driven research became “necessary” to get grants funded. Certainly you have to have hypotheses, but you also have to recognize in your design the possibility, even probability, of surprises. All of the really cool things my operation has found in 40 years have been surprises, found by keeping open to the unanticipated.

  29. Although the use of a “hypothesis-based” presentation style is certainly over-used, long experience has also shown that some forms of enquiry just are not well-supported by the study-section mechanism of grant review. One of the approaches described in the lead-in article by Dr. Lorsch was genetic screens, and I challenge any of you to tell me about grants based on new screens that were funded, unless the screen had already been done, and a preliminary analysis of the outcome proved that interesting mutations could be obtained. This means that anyone relying on forward genetics (which can still be the best method of finding true biological connections, as opposed to just testing ‘likely candidate genes’, which never gets you beyond what you already suspect is true) is better off cheating the system, by spending grant money designated for a previously funded project to run a first screen and get preliminary results.

    Past track record of success using genetics is rarely taken as evidence that the PI can design a screen that will work. As the emphasis at NIH swings ever more toward translational science and mammalian/human-centered experimentation, and the support for genetically tractable model organism research becomes restricted, we run the risk of losing valuable stock centers and resources that have been built up over many years, as well as the novel insights their use can give us.

  30. Very insightful comments. While hypotheses or “working models” can help with experimental design, a totally rigid, hypothesis-driven approach can indeed be very limiting. The best experiments are those that significantly enhance understanding of the biological system being studied regardless of the outcome. I also support the idea that well-designed exploratory studies may be required in order to generate testable models. Or, as a colleague once told me, “if you want to catch fish, you need to go fishing!”

  31. Thank you, Jon, for bringing this up and for starting this discussion. I encourage those who teach grant-writing classes to graduate students and postdocs to incorporate these points. I also encourage SROs to remind their panels that hypotheses shouldn’t rule over everything else. After all, where DO hypotheses come from, if not from asking open-ended questions?

  32. I could not agree more. I tell my students that a hypothesis is simply the favored answer to a scientific question, and that by forming a hypothesis you bias yourself. In my view, you should find and justify an interesting scientific question, think of all possible answers, and then design experiments to distinguish between them. This is a British view of the world (my original nationality), but my students have had to make up hypotheses for their preliminary written exams to satisfy their thesis committees. If they don’t start with a hypothesis, they often get negative overarching reviews from my colleagues (“Where is the hypothesis focusing the experiments???”) and have to revise the proposal accordingly. Luckily, study sections don’t take that view (or at least the ones my grants are reviewed by), and I have been very successful by posing and justifying scientific questions. I did have a junior colleague who thought she could not put in an R01 application because there was no obvious hypothesis driving the science, as the research was very much moving into the unknown. When I told her I had never written a hypothesis-containing grant and my proposals had been funded, she wrote and submitted her R01 application. I am glad to report that she was funded on the first submission and went on to reveal some really groundbreaking science.

    So please let us get away from hypothesis driven research, and promote exciting scientific inquiry instead.

  33. I see a problem with the review of the strongly identified hypothesis-test type of grant proposal that hasn’t been raised here. It is that once the hypothesis has been stated, the reviewer feels free to oppose the hypothesis and use that as a reason to oppose the grant. Frequently, applications will go down in flames simply because the reviewer disagrees with the predictions of the applicant, when it should be obvious to all that the only way to find out who is correct is to do the experiments. It is this factor, countering predictions about the outcome of studies that have not yet been conducted, that is the most detrimental.

    On a tactical level, for those who are newer to the game, I suggest that you be very upfront and identify experiments or entire Aims as such. Use language such as the “exploratory Aim” or the “hypothesis-generating experiment”. It doesn’t always work because the reviewer can always think the exploratory direction is a waste of time. But it sure does blunt the “where’s the hypothesis” criticism.

  34. Thank you for starting this conversation, Jon. Even at the prelim level, too many students fixate on developing a hypothesis without considering if the question they are asking is worth answering in the first place.

  35. Question-driven science is similar to curiosity-driven science. Curiosity should be one of the important characteristics of scientists.

  36. I feel that hypothesis-driven applications do not serve the best interests of either science or the taxpayers who support such science. There is nothing wrong with hypothesis testing but, too often, funding depends not on the quality of the science, but on whether or not the hypothesis is likely to be supported. It’s all well and good to say that science advances by refuting hypotheses, but try starting a grant with a statement such as “I fully expect my hypothesis to be wrong” and see how far it gets. I would love to see the emphasis changed to a question-driven, flow-chart style of application. Presently, each step in the grant is dependent on the previous steps working. How many have seen the comment “It’s nice, but if Aim 1 doesn’t work, then there is no reason to continue”? This approach constrains the science to only what is largely known (i.e., incremental science), or is very safe (i.e., already done), but is not really amenable to new discovery. Addressing alternative outcomes often becomes somewhat superficial – “I need to say something, but since I really don’t think the experiment will give me anything other than what I predict, I’ll make up a strawman”. With a question-driven approach, each step in the study is dependent on the outcome of the previous step. This approach not only would more accurately reflect the true discovery process, but would produce more thoughtful and comprehensive grants, since multiple alternative directions would have to be fully developed.

  37. A breath of fresh air. Having written grants for 25 years, I think we do overemphasize well-articulated hypotheses and cringe when diametrically opposite results are possible by tinkering with a biologic process. I learned that first hand just recently because I ventured to suggest in my hypothesis that two completely opposite outcomes were possible – the reviewer’s rap was that “the hypothesis was uncertain”! I like the idea of question-driven science with alternate “working” hypotheses/models.

  38. I have had a poster on my office wall for some time now: it says “F#$% the hypotheses, let’s just discover something!” I have also shared with my students the beauty of the just-for-the-hell-of-it approach, where one does something pretty much on the spur of the moment – and then destroys any evidence that it didn’t work afterwards.

    I do, of course, also teach how to generate post facto hypotheses to explain their results B-)

  39. I think one of the big drawbacks of the traditional hypothesis is that it requires a yes-no answer, which can lead to bias. The difference between using hypotheses and question-driven science is that question-driven science does not require a yes-no answer.
    In my freshman biology class, we covered question-driven science, and the question had to be a) testable and b) likely to give a conclusive answer. I think yes-no hypotheses were an attempt to get conclusive answers. However, you do not have to have a yes-no question to get a conclusive answer. For example, the question “what did John have for dinner?” can be answered conclusively by asking John.
    As for “fishing expeditions”, those are attempts to collect data/observations to ask new questions, but they seldom give conclusive answers. When I was trained to write grants, I was taught that having one “fishing” experiment was fine because the data gathered can open up great avenues of research. However, the meat of your grant should focus on a testable question that will hopefully provide a conclusive answer.

  40. These are good words of wisdom from the director of NIGMS. But can these words reach the Center for Scientific Review? Debating how to formulate scientific inquiry in a productive manner is an interesting subject when you are in your lab trying to do your work and move knowledge forward. It is a different matter when proposed research projects are being judged by a study section following the guiding principles from the CSR.

  41. For some research projects, designing a research plan is like choosing among the fishing paths surrounding a lake: they will all lead to the lake – some slightly better and quicker than others, but they all get you to a decent fishing spot. Therefore it is not worth standing around, bringing out the maps, and discussing which path to take: just take one, start walking, and get a line in the water.

  42. I think it is wonderful that this discussion has been initiated. In my view, a balanced approach toward scientific discovery is critical to foster healthy developments in science. While hypothesis-driven approaches have certainly proven valuable in the past, one only has to look at how major scientific discoveries – from Darwin to Watson and Crick – were initiated to realize the value and power of other approaches to scientific discovery. I do hope that the NIH will take a more balanced approach in the future and will work toward instituting appropriate and seamless guidance for reviewers, including consideration of justified “fishing expeditions”. It would be a shame if creative individuals capable of important discoveries were excluded just because they are unable to frame their ideas in a hypothesis-directed manner.

  43. This editorial is fantastic. Hypothesis overdrive is pandemic in research. As a member of numerous young investigator/postdoctoral selection committees, I find the myopic consequences of hypothesis overdrive extremely prevalent in most applications. In addition, this editorial should be mandatory reading for all members of study sections. It is sad how many genetic screens have been trashed because of the “fishing expedition” criticism.

  44. There are several ways in which one can frame a research project and setting up an hypothesis is one of them. This is usually done by carefully reading the literature, doing some experiments based on what you’ve read, plus some new ideas of your own, and then coming up with a scheme (hypothesis?) on how things are working that you want to test (your specific aims). So, I don’t find this hypothesis-driven approach unusual, and, in fact, it is most likely to be the way most of us set up a research problem and a grant application.

    What I do find disturbing (depressing?) is that with the decrease in grant funding for research there has been a major increase in the flow of articles on “How To Write a Better Grant for the NIH,” in this case from a Director of GM, no less, which I find particularly egregious. Most of us, even young people, do not require this type of advice and are pretty well seasoned in the art of writing grant proposals; right now, writing a better grant proposal is not our principal problem. There are plenty of good, fundable research proposals being submitted. There is, quite simply, not enough money to fund even some of the best ones, and the number of researchers submitting these proposals continues to increase. Short of increasing funding for research, which is not going to happen soon, this means some drastic changes in the way we fund and do research, especially in universities and medical schools. In part this means cutting down on the number of graduate and postdoctoral students we are training for research careers, i.e., reducing the number of people in our laboratories (try that idea out on some of your colleagues with large labs!), and it also means that universities and medical schools are going to have to contribute more of their own funds to support research, including paying faculty salaries or at least a higher percentage of those salaries. The days of having two NIH grants, one to do the research and the other to pay one’s salary, are almost over.

    And, finally, a word about peer review panels and what they want in a grant proposal. I have sat on enough of these panels in my career to know that they are all different and that they are changing all the time, depending on their personnel, how broad an area they have to cover, and how large the community is that applies to that panel. Some may like hypothesis-driven work and some may not, and I don’t think it’s important to even have such a discussion. And some like to toss around phrases like “fishing expedition” (I go fishing to catch fish!), “too ambitious,” and “translatable research,” all unhelpful phrases for a grant writer. But when funding levels fall below 10% of total applicants to a panel, one can question whether such a panel meeting is really necessary in the first place. Let each of the panel members pick one grant out of their pile and phone in the results. The current state of peer review is another story entirely, but it does require reorganization in light of current funding.

    It is too bad that a world-famous and thriving research community has fallen on such hard times. The country should be able to support better and, indeed, deserves better.

  45. Thanks to everyone who responded to this post. I thought the discussion was extremely interesting and useful, and I am looking forward to continued dialogues in the Feedback Loop forum.

    I suspect that no post on an NIH blog has ever elicited so many references to David Hume and Karl Popper! For those of you interested in this line of discussion, I recommend reading the lucid and concise essay “A Brief History of the Hypothesis” by David Glass and Ned Hall.

  46. One of my favorite Einstein quotes says it all: “If we knew what it was we were doing, it wouldn’t be called research, would it?”

    One of my recent NIH proposals that could lead to very important discoveries was trashed by a reviewer who gave only the one-line comment “fishing expedition”. However, as someone commented earlier, the only way to catch a fish is by going on a “fishing expedition”.

    We need to promote research to discover the TRUTH, rather than just to prove one’s thought or hypothesis, even though it could be dead wrong.

  47. Your comments are absolutely on target! Part of the problem is our grant-writing courses for graduate students [and the emphasis that the NIH T32s apparently put on this, or at least the T32 Directors at our institution]. If they don’t use the word “hypothesis,” then the student is downgraded. The obvious trap then is that the student tries to prove the hypothesis, and if they can’t prove it they assume they have failed in the research. I see this much more in grants from junior faculty than I used to [but have also seen it in grant reviews, including on my own grant: “the hypothesis is not clear” if the word hypothesis is not used by the PI].

    My own hypothesis has always been “I want to know how this works”, and the grant is to do everything I can think of to figure it out.

  48. I totally agree! However, as a young investigator I am writing small portions of larger grants with more senior investigators and I am being forced to construct my grants in a hypothesis driven manner. They seem to take the NIH’s “The Grant Application Writer’s Workbook” as gospel, and this workbook hammers away at the need to do hypothesis driven research. I quote a section that is italicized for emphasis “most reviewers of NIH grant proposals demand hypothesis-driven research”. Based on my last round of reviews from a grant which I wrote in terms of questions and ideas (but not hypotheses) I would also agree with this statement.

  49. This is a great and useful discussion — but, of course, not a new one.

    Darwin himself repeatedly confronted the dichotomy between gathering data and testing hypotheses. Ayala has a good brief essay on the tension as faced by Darwin here

    I also heartily recommend Ghiselin’s superb book, The Triumph of the Darwinian Method.
