Qualitative insights, gathered through user research, help designers understand why products succeed or fail. These insights can shape many aspects of a product. However, when delivery teams launch new or updated products, they often face quantitative questions too, such as:
- What can we do to help reduce the percentage of people who encounter an error with our product?
- Will adding an example in the hint text help increase the number of people who enter the correct input in an open text field?
Combining qualitative and quantitative research helps teams generate more valuable insights, ultimately informing product improvements.
A practical way for teams to gather quantitative data is through small, data-driven experiments. These iterative updates can enable teams to break down the effort of solving a problem. They also engage full teams and complement qualitative findings. This toolkit offers an incremental approach for designers, product managers, and engineers to make evidence-based updates that focus on impact, manage risk, and optimize resources, without requiring advanced data science skills.
This toolkit can help you:
- Implement a lean process to test product improvement ideas and enable swift adjustments based on results
- Align on process and goals for a product update
- Build team momentum by achieving several small wins
- Build stakeholder trust by implementing small, data-backed product updates while demonstrating potential for scaling
Implementing the small-scale approach
Step 1: Determining if this approach is right for your team
This small-scale approach isn’t always the right solution. Before diving into this research method, it’s important to determine whether a small-scale approach can help you.
The small-scale approach can be most useful when:
- A team has gathered quantitative and qualitative data and needs a structured process to turn those insights into actionable product improvements.
- A team has a data-backed idea for a product update but wants to verify that it will deliver the intended impact before rolling it out.
- A team wants to develop a product update but doesn’t have enough data to understand how users might interact with it.
- A team has multiple promising ideas and wants to prioritize their efforts by testing the impacts of small product updates.
- A team wants to test a product update for an application that’s relatively linear. In other words, it’s helpful to be able to track experiment results in a cause-effect manner.
The small-scale approach may not be helpful when:
- A team is about to launch a new, robust product update with defined and clear requirements.
  - Note that teams can apply the small-scale approach to improve a feature after it has been launched and the team has gathered initial user research feedback and data tracking results.
- A team wants to conduct experiments with services that have many variables or dependencies, making it difficult to track cause and effect.
Step 2: Defining the problem
When embarking on a product update, it’s likely you’re making that change to address a problem or problems. Start by describing the problem and how it impacts your goals, users, and product.
Generating a list of assumptions
Create a list of assumptions for why this problem is occurring. Data, user feedback, or patterns from previous iterations of this or other products can inform your assumptions.
Once you’ve created your list, work with your team and other stakeholders to determine which assumptions to test. When deciding which assumptions to test, you might consider:
- Supporting evidence from quantitative data, such as funnel data and event-based metrics that help identify user pain points, bottlenecks, or drop-off points (see the sketch after this list).
- Supporting evidence from qualitative data, such as learnings from user research.
- Product and program goals, and whether your stakeholders are interested in testing certain assumptions.
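If your team is comfortable with a bit of scripting, here's a minimal sketch of how you might compute drop-off between funnel steps from event counts. The step names and numbers are hypothetical placeholders; substitute whatever your analytics tool reports.

```python
# A minimal sketch of computing drop-off between funnel steps from event counts.
# The step names and counts are hypothetical placeholders.

funnel_counts = {
    "viewed_form": 10_000,
    "started_form": 6_200,
    "submitted_form": 4_100,
    "received_confirmation": 3_900,
}

steps = list(funnel_counts.items())
for (prev_step, prev_count), (step, count) in zip(steps, steps[1:]):
    drop_off = 1 - count / prev_count
    print(f"{prev_step} -> {step}: {drop_off:.1%} drop-off")
```

Even a quick pass like this can point to the step where an assumption is worth testing first.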
Generating a hypothesis
Before you can test your assumptions, write out a hypothesis for how you can achieve your desired outcome. For example, “If we do X, we’ll be able to achieve Y.”
Writing a hypothesis can help shape a product update by clarifying its intended outcome. Teams can also use hypotheses as a starting point to develop user stories, promoting human-centered design.
Ultimately, your experiment can prove or disprove your hypothesis. Disproving a hypothesis isn’t failure; it’s a natural and valuable part of the testing process that can help teams find the right solution.
Step 3: Defining the product update
Now that you’ve established a hypothesis, begin to brainstorm the details of the product update that might address the problem you identified. These can range from updates that require more effort to those that require little effort. You might verbally describe these updates, write them down, or map them out.
Don’t limit yourself to low-effort updates during this exploration. Although you’ll likely prioritize low-effort updates for small-scale experiments, more complex ideas may inspire you. You also might be able to simplify complex updates as you narrow down your options.
As a team, begin to prioritize your product updates. For small-scale experiments, the goal is to select options that require the least amount of effort while providing high value, like addressing a key issue or answering an important question.
Here are some considerations that can help you prioritize product updates:
- Can you start small to gather early insights and test more broadly later?
- Is your update technically feasible? Does it require a low level of effort to implement?
- Will your update improve the user experience?
- Is this update solely within your team’s control?
- Can you limit the area of the update? For example, you might target a specific part of the user flow that has the highest drop-off rate.
- How will your team’s capacity, resources, time, and competing priorities affect your ability to test the update?
Based on our experience, we recommend avoiding product updates that:
- Introduce new problems to solve
- May lead to complex downstream effects
- Have a lot of dependencies
- May require a lengthy security or approval process
Once you’ve prioritized an update, your team’s designers can focus on making higher-fidelity mockups, while engineers can refine technical details.
Step 4: Defining and conducting the experiment
Identifying key questions
When defining your experiment, focus on what you need to measure and learn. A great way to start is by identifying both immediate (leading) indicators and longer-term (lagging) indicators. For example, the percentage of users who click to start an application could serve as a leading indicator of engagement, while application completion rate would be a lagging indicator that reflects overall success. Including both types of metrics can help monitor early signs of impact and assess long-term effectiveness.
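As a rough illustration, here's a minimal sketch of computing one leading and one lagging indicator from hypothetical daily event counts; the event names and numbers are placeholders for whatever your product actually logs.

```python
# A minimal sketch of one leading and one lagging indicator from hypothetical counts.

daily_events = {
    "landing_page_views": 5_000,
    "application_starts": 1_250,      # source of the leading indicator
    "application_completions": 700,   # source of the lagging indicator
}

start_rate = daily_events["application_starts"] / daily_events["landing_page_views"]
completion_rate = daily_events["application_completions"] / daily_events["application_starts"]

print(f"Leading indicator - start rate: {start_rate:.1%}")
print(f"Lagging indicator - completion rate: {completion_rate:.1%}")
```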
Next, define the questions that you want to answer. These questions will help determine whether your product update and hypothesis hold true. For example, a key question might be: “With this product update, did we increase the number of people who successfully entered information in the correct format?”
In identifying key questions, you might ask yourself:
- How will you define the success or failure of the product update overall?
- How will you define success metrics and counter-metrics? Counter-metrics are metrics that help monitor unintended consequences of your product update (see the sketch after this list).
- How will you measure impact?
- How will the new metrics, collected once you've launched your update, compare to metrics you already have?
- Are there other factors that could affect the data? If so, how much of an impact could they have?
- Which user types do you want to target?
- What is the scope of the experiment?
- What are potential risks? Are there unintended consequences you want to avoid?
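To make the success metric and counter-metric question concrete, here's a minimal sketch with hypothetical numbers: the success metric is the share of users who enter correctly formatted input, and the counter-metric is median time on the page, which could reveal that the update slows people down even if accuracy improves.

```python
# A minimal sketch pairing a success metric with a counter-metric.
# All numbers are hypothetical placeholders.

users_submitting = 2_000
users_correct_format = 1_720
median_seconds_on_page_before = 42
median_seconds_on_page_after = 55

success_metric = users_correct_format / users_submitting
counter_metric_change = median_seconds_on_page_after - median_seconds_on_page_before

print(f"Success metric - correct-format rate: {success_metric:.1%}")
print(f"Counter-metric - change in median time on page: {counter_metric_change:+d} seconds")
```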
Defining the testing method
There are several different ways to test your product update, such as releasing the update and monitoring its impact, running a survey, A/B testing, remote usability testing, or first-click tests. In our experience, gathering data from a brief experiment and comparing it to pre-test results has been useful. When deciding on your testing method, think about how you want to conduct your experiment and for how long. Remember, the goal is to test the smallest update that can have the biggest impact.
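If you do compare experiment results to pre-test results, a simple two-proportion z-test can give a rough sense of whether the difference is likely due to chance. This is a minimal sketch with hypothetical counts, not a substitute for a full analysis plan.

```python
# A minimal sketch of comparing a pre-test completion rate with the rate observed
# during a brief experiment, using a two-proportion z-test. Counts are hypothetical.

from math import sqrt, erf

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return the z statistic and two-sided p-value for the difference in rates."""
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (successes_b / n_b - successes_a / n_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Pre-test period: 480 completions out of 1,600 users; experiment: 560 out of 1,600.
z, p = two_proportion_z_test(480, 1_600, 560, 1_600)
print(f"z = {z:.2f}, two-sided p-value = {p:.3f}")
```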
When defining your testing method, you might consider:
- How long you will run the test.
- When the best time to run the experiment is. This helps ensure your experiment takes place in a “controlled” environment that's not affected by external factors such as seasonality or new features being released in parallel.
- How you will ensure that the results are due to your update.
- Your team’s capacity.
- Security and approval constraints, if applicable to your project.
- How to mitigate risk. For example, you might make your experiment temporary or limit user traffic (see the sketch after this list).
- Which factors may contribute to higher or lower confidence in your experiment.
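One common way to limit user traffic is deterministic bucketing: hash each user ID to a stable bucket and show the update only to a small percentage of users. This is a minimal sketch; the experiment name, user IDs, and the 10% rollout figure are hypothetical.

```python
# A minimal sketch of limiting an experiment's exposure with deterministic bucketing.
# Each user ID hashes to a stable bucket, so a user sees the same variant on every visit.

import hashlib

ROLLOUT_PERCENT = 10  # show the update to roughly 10% of users (hypothetical figure)

def in_experiment(user_id: str, experiment_name: str = "hint-text-example") -> bool:
    digest = hashlib.sha256(f"{experiment_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < ROLLOUT_PERCENT

print(in_experiment("user-12345"))  # True for roughly 10% of user IDs
```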
Conducting the experiment
Using the testing method you selected, launch your experiment and begin to gather initial results and data. The execution of your experiment will vary based on the chosen testing method. For instance, A/B testing may require coordination with your engineering team to enable a temporary feature to run on a website. To conduct a survey, you'll need to select a suitable survey tool or platform and manage its distribution to your target audience. It's advisable to thoroughly research your selected testing method for specific details on how to run the experiment.
However, at a high level, you will generally need to:
- Coordinate as a team to prepare the feature change, materials, or test environment.
- Schedule the experiment to go live.
- Once the experiment is ongoing, ensure that you’re successfully collecting results and data.
Analyzing results
Work with your team to analyze results according to the experiment’s established timeline. Here are some questions that may help your analysis:
- Did the update help address the problem you identified?
- Do the experiment results help prove or disprove your hypothesis?
- What new insights are you learning from the results? Are there unexpected results? Are there new areas to explore or new hypotheses?
Based on the results of your experiment, decide whether to continue, modify, or discontinue the experiment. When making this decision, assess whether the timeline and experiment need adjustments. For example, is your timeline appropriate given how many people use your product? Have you collected enough data?
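To gauge whether you've collected enough data, a rough sample-size estimate can help. This minimal sketch assumes a hypothetical baseline completion rate and expected lift, and uses standard values for 95% confidence and 80% power.

```python
# A minimal sketch of estimating the users needed per group to detect a given lift
# in a completion rate at 95% confidence and 80% power. Rates are hypothetical.

baseline_rate = 0.30          # completion rate before the update
expected_rate = 0.35          # completion rate you hope to see after the update
z_alpha, z_beta = 1.96, 0.84  # 95% confidence (two-sided), 80% power

variance = baseline_rate * (1 - baseline_rate) + expected_rate * (1 - expected_rate)
n_per_group = ((z_alpha + z_beta) ** 2 * variance) / (expected_rate - baseline_rate) ** 2

print(f"Roughly {n_per_group:.0f} users per group")  # about 1,370 in this example
```

If your product's traffic makes that number unrealistic within your timeline, that's a signal to extend the experiment, target a larger lift, or adjust the scope.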
Testing new hypotheses and iterating
After conducting the experiment, you may find that the update produced a positive result. If so, consider ways to expand the update’s footprint or refine it in a future iteration.
It’s also possible that your update doesn’t produce any significant change, or that it produces a change opposite to what you intended. In that case, your team might consider iterating on the update to see if it creates positive results, or removing the update altogether.
Regardless of whether your update led to its intended effect, we recommend conducting follow-up experiments to validate or further explore the findings from your initial test. This can help build a more comprehensive understanding of user behavior and preferences, ultimately helping you create more human-centered products.
Conclusion
Small-scale, iterative experiments are a great way for teams to quickly gather quantitative data on a product update. To successfully stand up small-scale experiments, teams should first determine if it’s the right approach given their circumstances. They can then align by defining the problem they’re trying to solve, defining the product update, and deciding how to conduct the experiment. Finally, teams can conduct the experiment to gather quantitative data, analyze the data, and then use those insights to conduct more small-scale experiments. This iterative process can ultimately help teams build more human-centered products and services.
Special thanks to Harlan Weber and Kelli Ho, who contributed to this article.
Written by
Designer/researcher
Product manager