User research reports: talking about the research approach without the yawns

Tips on how to set context and explain the research approach in a user research report without putting your audience to sleep.

Maryia Rusanava
Mar 31, 2023

As part of a course on qualitative analysis and synthesis, I presented my recommendations on the basic elements to include in a research report. One member of the audience commented that spending too much time on methodology would make for a ‘dry’ presentation.

While I agree with the “too much” aspect of it (too much of anything is not good), I always include a slide that helps explain the origins of the data used for the report. I view this part as a crucial component of context-setting and always place it at the beginning of the report, not in the appendix, for the following reasons:

  • It helps familiarize people who were not part of the project with key details, and remind those who were
  • It helps future users to assess whether the insights are applicable to the problem they are investigating
  • It helps explain why the results came out as they did and set correct expectations
  • It helps us get ahead of some of the questions the audience might have
  • It can help us prevent some instances of misuse of research data
  • It alerts the audience to the limitations of the study and helps you weave in those limitations later in the presentation where necessary

I should mention that I try to optimize a single report for both a live presentation and asynchronous viewing, for an audience that consists of colleagues with a direct stake in the results, people with a tangential interest, as well as anyone who might ‘stumble’ on this report months or even years later. I’m also lucky to work in an environment where my colleagues take great interest in user research and don’t hesitate to ask questions.

Below you’ll find the five elements I include in my reports, with examples of past projects where presenting those details early mattered a great deal.

1. Methodology

A market research survey and a small-sample usability study will generate very different types of data. First of all, qual and quant are best at answering different questions: quant is great for questions that start with “how many”, “how much” and “how often”; qual excels at “how” and “why”.
Highlighting the methodology will help explain why we don’t have certain data points or why descriptive data is lacking numbers. For those who have at least some understanding of research, the mere mention of the methodology will help set the right expectations. For those who don’t, exposure to this information can help educate them on the correct uses and limitations of research data collected via different methods.
Of course, the right amount of detail is key. There is no need to get into methodological weeds unless there is a nugget in there that can materially affect how the data is perceived. Do your audience a favour and simplify.

— For a user journey report, I highlighted not only the interviews I had conducted specifically for the project but also a few other qualitative and quantitative sources I triangulated with, to inspire confidence in the results.

— For a concept test, it was important to highlight that participants evaluated a high-fidelity but static prototype, which explains why the report had minimal usability insights.

2. Number of participants

A skilled researcher can judge how reliable the results are by comparing the sample size with best practices. But what business do stakeholders have knowing this information?

I think stakeholders instinctively understand that confidence in the results rises with the number of research participants. A solid sample size can help lend credibility to the findings. On the flip side, any report that confidently recommends drastic changes to company strategy or a major redesign based on a tiny sample size should be taken with a grain of salt.

A small sample size can also help the researcher defend why, after speaking with only a few people, they are not reporting on a niche behaviour or cannot conclude that a particular observation does NOT occur.

The size of the sample is especially important in quantitative studies. There are many articles out there laden with statistical formulas that boil down to one thing — how confident we can be that the results are valid.
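As a rough illustration (the numbers here are hypothetical, not drawn from any of the projects mentioned): for a simple random sample, the margin of error at a 95% confidence level is approximately 1.96 × √(p(1−p)/n). If 50% of 100 respondents pick an answer, the margin of error is roughly ±10 percentage points; with 1,000 respondents it shrinks to about ±3. You don’t need to show the formula to stakeholders, but knowing the ballpark helps explain why small subsamples can’t support confident claims.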

— In a recent survey, my colleague and I decided not to present results filtered by a specific segment that became the focus for the company after the survey had been fielded. With a small sample size for this segment, we could not confidently report that the results did not occur by chance.

— A test of the low-fi prototype with only 5 participants produced some inconclusive results as well as new questions. However, the testing strategy was envisioned as continuous discovery: no major redesign decisions can be made after just one round of testing.

3. Participant profiles

Who the data is coming from is just as important. Did we hear directly from customers or from employees interacting with customers? How representative are the participants of our target population? Did we manage to recruit a broad representation of people, or did we underrepresent or omit certain segments? Did we test with non-users or with existing users who already have habits around using our product? If multiple profiles were included, how many of each did we hear from?

I include the minimum amount of detail pertinent to how the participants were recruited and screened. For example, I highlight whether the participants were users or non-users but skip their gender and country of residence, unless those dimensions were relevant units of analysis.

— For one survey, it was important to highlight that respondents were current users — one should expect that their responses were coloured at least to some degree by their experience with our product.

— In another qualitative study, I could not draw any conclusions as to whether discoverability was the culprit behind low feature adoption. After all, I spoke only with users who had used the feature — they had obviously discovered it.

4. Field dates

Foundational studies tend to age gracefully as people’s preferences, habits and expectations change gradually. Evaluative studies have a shorter shelf life. A usability study from two years ago may not be relevant anymore if the design has changed significantly since.

I always include both field dates and the date the report is finalized. The latter helps future readers determine how old the report is, and the former may prove essential in light of internal company or even world events. Feature launches, marketing campaigns, PR crises, competitors’ moves, and pricing changes can punctuate the research timeline. Data collected before or after a particular event can differ in meaningful ways.

— One of my older studies had field dates that coincided with the start of the COVID-19 pandemic, which scuttled our recruitment efforts and forced us to change methodology from in-home visits to phone interviews. As a result of the uncertainty this global crisis precipitated, our field dates had to stretch considerably.

— In another study on low feature adoption, I highlighted both the feature release date and the field dates. This helped put in perspective how much time users had had to discover, interact with and develop a habit around the feature.

5. Link to the research plan and research instruments

For those interested in digging a bit deeper, I include the links to the documents that contain the research plan (with objectives, key questions and recruitment criteria), screener, moderator guide, test script or survey questionnaire. The information in them may be too detailed for the presentation but having access on demand allows your teammates to self-serve (just remember to set broad access permissions!).

Examples of past work are a great place to start for anyone learning about research or trying out a new methodology, especially in organizations onboarding new researchers or pursuing research democratization.

While you can’t anticipate every question and need, I recommend including these details in each report as a standard practice. This part of the report does not need to be long. I often squeeze all of these details into a single slide that takes no more than 2 minutes to present.
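As an illustration, a context slide for a hypothetical usability study (all details below are made up) might read:

  • Methodology: moderated remote usability test of a static high-fidelity prototype
  • Participants: 6 current users and 2 non-users, recruited via an external panel
  • Field dates: March 6–17, 2023; report finalized March 24, 2023
  • Links: research plan, screener, moderator guide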

If your team or organization has done a good job of making past research accessible and promoting reuse, a little information on the research approach can help future readers assess whether they are looking at apples or oranges: in other words, whether your particular findings apply to their situation.

Writing clear research reports is also part of being a good teammate and ensuring that your colleagues have a good experience with research.

Written by Maryia Rusanava
UX researcher connecting the dots