Everything Is a Hypothesis If You Are Brave Enough

I often hear the phrase “I was testing hypotheses” during discussions of research processes. Our backlog consists of hypotheses, and research reports often begin with “X out of Y hypotheses were confirmed.” Everyone talks about hypotheses: refining, approving, selecting, or revisiting them. Research activities revolve entirely around hypotheses. Hypotheses dominate the thoughts of both managers and researchers. Even the proverbial toaster tells us, “A product is a hypothesis, and 9 out of 10 products fail.”

The concept of a hypothesis has become so commonplace that no one questions, “What is a hypothesis?” You might think, “Isn’t it obvious? A hypothesis is a proposition.” But the obvious often turns out to be the most difficult to fully understand. Let’s dive into this sea of trivialities and try to answer the question, “What exactly is a hypothesis?”

So, What Is a Hypothesis?

At first glance, it seems crystal clear: “A hypothesis is a proposition.” But for it to function correctly, some clarifications are required.

First, a hypothesis is a proposition that can be falsified or confirmed. This is crucial. There must be no third option. Boolean logic is an essential characteristic of a hypothesis. The goal of any research is to obtain new information. Statements like “The hypothesis is neither confirmed nor refuted” are useless. If we can neither confirm nor refute a hypothesis, we’ve likely wasted time testing it.

“But what about partially confirmed or partially refuted hypotheses?” you might argue. Unfortunately, partial confirmation is usually a misleading label. Most often it points to either an unaccounted-for factor or a flawed conclusion. For instance, take the hypothesis “Office workers experience Problem X.” During interviews, only office managers reported the issue. Is the hypothesis confirmed? A meticulous researcher might say, “The hypothesis is partially confirmed,” and they’d have a point.

This conclusion adds nuance and extra information. However, I often encounter statements like “Less than half of respondents experienced this problem, so the hypothesis is partially confirmed” or “None of the respondents experienced this problem, so we can neither confirm nor refute the hypothesis.” In the first case, the hypothesis is in fact confirmed: the problem exists among office workers. The interviews tell us nothing about its prevalence, but they do establish that the problem exists. In the second case, we are being overly optimistic. Yes, it’s possible that none of the respondents happened to encounter the problem, or that we didn’t ask the right questions. But from the product’s perspective, it’s better to classify the hypothesis as refuted and focus on finding more widespread problems. Otherwise, we risk chasing Russell’s Teapot forever.

The second important aspect is that a hypothesis must be testable. Every hypothesis must be operationalized; a hypothesis that cannot be tested is useless. This brings us to complexity: in practice, an untestable hypothesis is usually one that is too complex to test, and complexity means expense. For example, “The product addresses Problem X” can be tested by having respondents use the product to solve the problem. However, “The product effectively addresses Problem X” is far more complicated. Defining effectiveness, finding a way to measure it, and comparing those metrics with competing products rapidly inflates the research budget. In a world of cheap and quick studies, this spells certain doom. When formulating a hypothesis, we must weigh the cost and feasibility of testing it. Otherwise, we waste time and gain no new information.

Another pitfall is the fit between hypotheses and research methods. Each method carries its own set of assumptions. Surveys, for instance, assume respondents can form an opinion about the question; usability testing assumes respondents may make mistakes or fail tasks. These implicit assumptions, often taken for granted, add to the complexity of a hypothesis. Neglecting them can produce a string of useless studies, such as a survey asking which font, serif or sans-serif, is better for a website.

A novice researcher might default to the simplest approach: “Let the users decide.” But users may not know or care about the difference between serif and sans-serif fonts. Users aren’t stupid; the distinction simply doesn’t matter to them. User reflection has its limits, and we should respect them. Trying to cram into a study what doesn’t belong there means combining the incompatible.

Take, for example, trying to assess purchase likelihood during a usability test. Surveys are much better suited to this, and pilot sales are better still. Usability tests are a poor fit: the sample is small and qualitative, and the respondent is already “spoiled” by knowledge of the product. But adding one extra hypothesis can’t hurt, right? In the end, we risk ending up with an illusion of knowledge.

If you’re still reading, I assume you’ve realized by now that hypotheses are delicate instruments. A novice researcher or product manager could easily shoot themselves in the foot without noticing. And we’re not even done.

The Transformation of Hypotheses’ Meaning

When working with stakeholders, we often ask them to propose their own hypotheses. This transforms the hypothesis from an operationalization tool to a communication tool. We start conversations with hypotheses, using them as briefs. “What’s wrong with that?” you ask. It increases flexibility, reduces redundant goal-setting steps, and accelerates the process. Plus, hypotheses add granularity: you can gather them in a basket and test them one by one.

However, this approach introduces several pitfalls. Hypotheses require constant moderation, and stakeholders need ongoing education. And that’s just the tip of the iceberg. Using hypotheses as a communication tool layers additional meaning onto them. Hypotheses can now be taken personally, express doubt, or even praise or criticize an idea.

For example, usability test hypotheses often focus on identifying usability issues. This is implicit in the study’s purpose. As such, hypotheses should target problems. Crucially, proving the absence of problems is exceedingly difficult. The optimal approach involves hypotheses like “The respondent won’t find Button X.” By formulating such hypotheses, we pre-identify weak points, as testing every interface element would be prohibitively expensive. However, presenting these hypotheses can unintentionally communicate negative feedback to designers, which some may take personally.

The modern startup culture’s mantra, “A product is a hypothesis,” further distorts the concept. Essentially, it means we shouldn’t overly trust a product’s success. Yet people often interpret this metaphor literally, resulting in a Frankenstein’s monster called the “product hypothesis.” Such hypotheses often retain none of the defining characteristics of a true hypothesis.

Let’s examine the process of testing such a hypothesis. The phrase “A product is a hypothesis” encapsulates five layers:

  1. We assume there’s a need.
  2. We assume this need is widespread.
  3. We assume the product addresses this need.
  4. We assume the target audience will learn about the product.
  5. Finally, we hypothesize, “People will buy the product in sufficient quantities to cover our costs.”

However, we usually simplify the process and focus solely on the last step. Simplification is useful, but it comes at the cost of understanding how much research depth a successful product launch actually requires.

On the one hand, this aligns with the philosophy: “Create as many MVPs as possible, and one will succeed.” But as practice shows, the need for research doesn’t disappear. We all hover somewhere in the middle, striving to conduct research as if we’re not conducting research. This paradox leads to a curious consequence: the complexity of testable hypotheses keeps decreasing.

So, Do We Need Hypotheses?

We need hypotheses as a research lens. They focus our efforts on specific aspects. A hypothesis is a tool for operationalization, helping us move from abstract concepts to concrete metrics, actions, and respondent statements. It’s indispensable but should remain a researcher’s tool. Misusing hypotheses as communication tools or metaphors diminishes their value as instruments. A skilled researcher knows when hypotheses are unnecessary—such as in exploratory studies or theory development. Modern mixed-method strategies often involve studies whose sole purpose is hypothesis generation. When hypotheses become briefs, we risk falling into a vicious cycle: generating hypotheses requires having hypotheses.

What Should We Do Instead?

In my view, we often overlook the power of research questions. A good example of a research question is: “What challenges do accountants face?” Research questions are often overshadowed by goals and tasks, relegated to the background. But a well-formulated question is often enough to draft a preliminary research plan or even a script…

Rozum Sergey
UX researcher, sociologist