Developing the Practice of Release User Testing

Mary Knight
6 min read · Jul 19, 2021

2018–2021

Context

This practice was developed over the course of three years, with many improvements along the way based on feedback and suggestions from co-workers across the team.

My Role

Design Lead

Challenge

How might we consistently and effectively understand how our users are experiencing newly released features?

Process

The process of conducting user testing is not new and has been well documented. If you’re thinking this is another article about how to conduct user testing, think again. However, if you’re interested in the process we created around release testing — the who, how, where, and when — keep reading. To be clear, though, the actual testing steps we took were as follows:

  • Define the objective
  • Write interview questions
  • Test
  • Listen and take notes
  • Synthesize
  • Share
  • Act: make improvements
  • Repeat: did the fixes improve the user experience?

In the beginning, when I was the only designer, simply conducting the research myself was enough. However, as time went on, more priorities arose, more teammates joined, and the design team itself grew. Suddenly, this testing that had been so simple became more complex. I started questioning each piece.

  • Who is responsible for release testing and at what cadence?
  • Which is better — moderated or unmoderated release testing?
  • Where and how to document research notes and insights?
  • Where to share and with whom?
  • How to advocate for needed improvements?

Over the course of three years, I tweaked the process continually. Below I will outline the answers to the above questions and more.

Responsibility and Cadence of Release Testing

Moving from a team of one to a team of four designers meant reevaluating the way we worked. It no longer made sense for me to do all of our release testing on my own. I sat down with the team and we made a plan.

As Design Lead, I would still be accountable for making sure the work got done. As a team, we would rotate responsibility for release testing every two weeks, in step with the app release schedule. This meant each of us did release testing about every two months instead of me doing it every two weeks.
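For illustration, here is a minimal sketch of what that rotation amounts to, written in Python. The designer names, start date, and strict two-week spacing are hypothetical stand-ins; it simply shows how a four-person rotation on a biweekly release cycle lands each person on release testing roughly every eight weeks:

```python
from datetime import date, timedelta

# Hypothetical rotation: four designers taking turns on a biweekly release cycle.
designers = ["Designer A", "Designer B", "Designer C", "Designer D"]
first_release = date(2020, 1, 6)  # assumed start date, for illustration only

def release_testing_schedule(num_releases):
    """Yield (release_date, designer) pairs, rotating the assignment every two weeks."""
    for i in range(num_releases):
        release_date = first_release + timedelta(weeks=2 * i)
        yield release_date, designers[i % len(designers)]

for release_date, designer in release_testing_schedule(8):
    print(release_date.isoformat(), designer)
```

With four designers in the loop, each person’s turn comes back around after four releases, which works out to the “about every two months” cadence described above.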

Moderated vs. Unmoderated Release Testing

To be honest, consistently recruiting users of our app for interviews was a constant struggle. Towards the end of my time at Meal Hero, we developed a pipeline of interested users by creating a way to sign up for testing inside the app. In hindsight, I wish we had done that a lot sooner. In the beginning, we emailed users about testing opportunities, got little response, and even fewer people actually showed up at their scheduled time.

Ideally, we wanted release testing done within the week new features shipped, so that any major bugs or UX issues that weren’t caught in QA could be fixed before the next release. For this reason, we relied heavily on unmoderated testing, specifically through Usertesting.com and the panel of users it makes available. This also relieved our design team of the extra time and effort it takes to schedule interviews and arrange incentives for moderated tests.

Usertesting.com video screenshot from testing

Documentation of Research Notes and Insights

When I first started doing user research at Meal Hero, I tried out a number of answers to the question “Where do we document research notes and insights?” Below are a few:

  • Physical boards and post-its
  • Word documents
  • Color-coded Excel sheets
  • Google Slides

All of these solutions had issues. Our increasingly remote team could not see our physical boards, and to preserve those insights they had to be documented digitally anyway. No one had time for double work, especially not a solo product designer. With Word docs and Excel sheets, it was difficult to spot patterns and improvements between releases, and no one wanted to click through a 200-page Google Slides deck. This is where Miro came in. Miro provided everything I was looking for:

  • Digital space all teammates could easily access and comment on
  • Lots of space for ongoing release testing
  • Easily arrangeable stickies that can be moved and grouped to show patterns across releases
  • A place where research notes and insights can live side by side, keeping everything together

All hail Miro! But in all seriousness, Miro became an integral part of our work. It’s the platform we leaned on to document our work, brainstorm, and collaborate. When the Covid-19 pandemic hit and a lot of businesses struggled to move to a remote environment, our team was just fine.

Here’s how we set up our boards:

  • Updates since last release testing
  • Synthesis
  • Notes from testing with links to the Usertesting.com videos

Synthesis, notes, and links to the videos are documented together for easy access and reference

Here’s what our release testing file looked like:

Having all of the release notes and insights in one place made it easy to see patterns over time.

Sharing Release Testing Bugs and Insights

So now we had our notes and insights and had found a good place to document them, but all of that was for nothing if we didn’t share the insights. Again, I tried many different avenues:

  • General team meetings
  • Various Slack channels
  • Product and design team meetings

General team meetings seemed like a good solution for a short time while we were small (under 20 employees), but as we grew it became clear that the feedback from release testing was usually too granular to share in such a broad setting.

I also tried sharing top-level insights in various Slack channels. The problem was that we didn’t have one channel that both our product and tech teams used, and the information didn’t fit neatly in one or the other.

We would typically review release testing insights in our design chapter meetings. But then the insights and needed improvements didn’t reach the other team members who needed to be aware of them.

Eventually, our Head of Data Science approached me about working together to create a place where both qualitative and quantitative insights could be shared. The Slack channel #userinsights was created, and we worked together to encourage both the product and tech teams to join this one channel for all major insights about our users.

After much experimentation, here’s how we shared insights:

  • Debrief on testing and insights at our weekly Design Team meeting
  • Post top-level insights, a link to the Miro board documentation, and one or two Usertesting.com video links to the #userinsights Slack channel
  • If anything major arose, alert the appropriate PM immediately and share at the weekly team meeting

Advocating for Change Based on Release Testing

I would say that, above everything else, this was the most important step. If we were testing, documenting, and sharing but not making any improvements, then what was the point? As a team, we went through many iterations to determine how to respond to these insights. What we learned collectively is that not all insights should be treated equally or with the same urgency. Below is the system we used to guide the next steps after release testing (a rough sketch of this decision logic in code follows the list):

  • Major bug: Speak to the PM directly to consult
  • Minor bug: File a bug report, then slack the PM to make them aware
  • UX issue: Consult with that Pod’s PM on priority and get this work into that Pod’s backlog
  • Positive Feedback: Share broadly
  • Negative Feedback: Share broadly and consult: have we heard this feedback before? If no, wait to gather more data. If yes, how many times? Should we escalate?
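As an illustration only, the triage guide above can be expressed as a simple decision function. This is a sketch of the same logic, not code we actually ran; the function name, the input strings, and the “times heard before” counter are hypothetical:

```python
def next_step(finding: str, times_heard_before: int = 0) -> str:
    """Map a release-testing finding to the follow-up action from the triage guide."""
    if finding == "major bug":
        return "Speak to the PM directly to consult"
    if finding == "minor bug":
        return "File a bug report, then Slack the PM to make them aware"
    if finding == "ux issue":
        return "Consult the pod's PM on priority and get it into that pod's backlog"
    if finding == "positive feedback":
        return "Share broadly"
    if finding == "negative feedback":
        if times_heard_before == 0:
            return "Share broadly and wait to gather more data"
        return f"Share broadly and consider escalating (heard {times_heard_before} time(s) before)"
    raise ValueError(f"Unknown finding type: {finding}")

print(next_step("minor bug"))
print(next_step("negative feedback", times_heard_before=2))
```

The point of spelling it out this way is that most findings had an obvious, repeatable next step; only repeated negative feedback required real judgment about escalation.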

Examples of bugs found in release testing:

Feedback about a major bug (left) vs. a screenshot of a minor bug — recipe titles over five lines distort the recipe card (right)

Conclusion

In the beginning, I often felt like I had to shoulder all of the work and be the voice of our users in every conversation. That is a heavy burden to carry. After many years, I’ve realized my role is to be accountable for the work getting done, to share our users’ needs, and to empower my teammates to be the voice of our users.

This looked like clearly communicating and sharing insights from testing, documenting them in an easy-to-find, accessible place, and providing the evidence — in this case, the Usertesting.com videos — so that teammates could hear and learn from our users directly. When I heard my teammates reference release testing and our users’ needs, I knew I had done my job well. I didn’t need to be in the room; I needed our users to be at the center of the conversation.


Mary Knight

I’m a Product Design Leader and Researcher working at the intersection of social impact and technology.