Converting Web Traffic to App Installs

Mary Knight
11 min read · Jul 19, 2021

July-September 2020

Context

When this work began, we had a web product and a mobile app that shared some features but differed in others. Our web traffic was significantly cheaper to acquire, but our mobile users were more likely to retain, so we started wondering how we might convert our web traffic into mobile app installs.

My Role

Design Lead, Project Manager

Project Team

Growth Pod — Arti Mansbach (Growth Marketing), Jason Reeder (Engineering Lead), Mercy Muchai (Engineer), Lucy Hunt (Content Marketing), Product Manager

Challenge

How might we motivate Meal Hero web users to download our native app?

Process

At Meal Hero, we started every project by completing a Discovery Kickoff Document with the whole team. This aligned everyone from the very beginning and set expectations accordingly. Below is an abbreviated version of the document we came up with together.

Discovery Kickoff

Purpose

  • To lower the cost of acquiring native app users and improve their retention
  • To understand how effective web-to-native-app acquisition is

In scope

  • Mobile & web app
  • Content
  • Messaging
  • Adjust ad campaigns

Out of scope

  • Marketing site (blog)
  • Other web apps, landing pages

Outcome

  • As a home chef, I build the habit of nurturing my family with home-cooked meals using Meal Hero.

Success metrics

  • 5% of Meal Hero web users download the Meal Hero native app in the last 2 weeks of Q3

Since there wasn’t an official project manager assigned to our team and I was already leading and scheduling the various design thinking sessions, I took on this role. I have always found it helpful, both for myself and for communicating with others, to lay out what a project’s timeline and deadlines would look like. In the case of this project, our goal to get 5% of our Meal Hero web users to download the native app was also our OKR, so it was imperative that we ship on time in order to run an experiment and measure the impact before the end of the quarter.

Current Funnel

As mentioned earlier, our web and app products were similar in some ways and different in others. Since we were attempting to convert our web users into app users, I was interested in understanding what differences they would notice when moving from the web to the app. Was the content different? Was the branding different? Was the messaging different? As it turns out, the answer to all three questions was yes.

We didn’t end up starting here because, as we got into ideation, we thought other hypotheses had higher potential for impact, but I still found it helpful to visualize the journey our users would take.

Thoughtful Execution Framework

In June 2020, just before starting this project, I read about the Thoughtful Execution Framework that came out of Spotify, and I was fascinated. Here’s how Annina Koskinen describes it in her article “From Gut to Plan: The Thoughtful Execution Framework”:

Thoughtful Execution invites you to leverage data and insights in a way that leads to identifying multiple problems or opportunities that could be solved, and advocates for going wide in hypothesis generation and design explorations before zooming into a single solution.

Previously, most of our ideation happened in the solution phase, but as a growth team I felt we should be exploring many options throughout the entire product development process. So I proposed the framework to the team, and they agreed to give it a try.

We started by gathering data and insights. Since this was a new project, we had little data of our own (a different team had previously worked on this briefly), so we looked to the industry to establish a baseline and to understand how other teams were executing on this type of conversion funnel.

Then we turned those insights into opportunity statements, using the ‘How might we…’ (HMW) phrasing.

At this point, we discussed which of these opportunities could best help us reach our goal and make the biggest impact, and chose two:

  • HMW utilize smart banners to drive installs of the native app?
  • HMW limit functionality of the web to encourage app installs?

Next, we wrote hypotheses for each opportunity, starting with the smart banner opportunity.

Opportunity #1 — Smart Banner

After ideating on hypotheses, since no one on my team had experience with smart banners, we decided to do more research before going into solutions or even choosing a hypothesis to move forward with. Our PM came back with a few options:

  1. Build our own smart banner
  2. Use Safari’s built-in Smart App Banners
  3. Buy a tool that would give us this functionality (Branch)

After much discussion, we landed on building our own. We already had a piece of code similar to a smart banner that we could reuse and implement quickly. This would allow us to get a baseline for the install rate.
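For reference, option 2, Safari’s built-in Smart App Banner, only requires an apple-itunes-app meta tag in the page head, but it appears exclusively in Safari on iOS, which is part of why building our own was attractive. Below is a rough sketch of what a self-built banner can look like; the element names, copy, and store link are placeholders for illustration, not our production code.

```typescript
// Illustrative sketch of a self-built "smart banner" for a mobile web page.
// Names, copy, and the store URL are placeholders, not Meal Hero's production code.

const APP_STORE_URL = 'https://example.com/get-the-app'; // placeholder store/deep link

function showSmartBanner(): void {
  // Don't show the banner again if the visitor already dismissed it this session.
  if (sessionStorage.getItem('smartBannerDismissed') === 'true') return;

  const banner = document.createElement('div');
  banner.className = 'smart-banner';
  banner.innerHTML = `
    <span class="smart-banner__copy">Cook smarter with the app</span>
    <a class="smart-banner__cta" href="${APP_STORE_URL}">Get the app</a>
    <button class="smart-banner__dismiss" aria-label="Dismiss">&times;</button>
  `;

  // Dismissing hides the banner and remembers the choice for the session.
  banner.querySelector('.smart-banner__dismiss')?.addEventListener('click', () => {
    sessionStorage.setItem('smartBannerDismissed', 'true');
    banner.remove();
  });

  document.body.prepend(banner);
}

showSmartBanner();
```

The important pieces are the deep link to the store and an obvious dismiss, a detail that became very relevant in the modal experiments described below.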

We released our banner on the search results page, a page that 90% of our traffic would see. The result: a 1% install rate. Even though that seems like a really low percentage, it was encouraging. The team that had previously worked on this had seen a 0.2% install rate, so this was a drastic improvement, and it was only the first solution we had tried.

We knew there were other experiments we could run to improve the smart banner experience, but we thought the other opportunity we had chosen would probably have a bigger impact, so we went back to our Thoughtful Execution Framework to ideate on hypotheses for the second opportunity.

Opportunity #2 - Limit Functionality

After generating hypotheses, I facilitated an ideation session for solutions. After years of facilitating such sessions, I had realized that while our designers were very capable and often preferred sketching, that may not be the format in which my other teammates felt most confident bringing their ideas to the table. So I encouraged everyone to ideate in the format that best suited them — sketches, quick digital drawings, images, etc.

After ideating silently, we came back together to share our ideas. As the design lead, I appreciated the opportunity to understand what everyone wanted to communicate. Then, after the ideation, I sorted through all of the ideas and synthesized them.

I realized we had two types of ideas:

  • UI solutions (what should the solution look like and how will the user interact with it?)
  • Triggers (when and where should the UI solutions show up?)

After weighing complexity and impact with the PM, we decided the next thing we wanted to test was a half-page modal. I made a quick wireframe and then scheduled and facilitated a brainstorming session with my Content Marketing teammate to discuss messaging.

Next, I added the messaging to the wireframes and finalized the design.

All of this ideation kicked off a series of experiments, which I’ll outline below.

NOTE: For each experiment, I worked with engineering during implementation to answer any questions that might arise. When the engineers were done, they would ask for my design review.

Design Review is a process I co-created with Jason Reeder (Engineering Lead) early in our time at the company: at the same time engineers ask for a review from other engineers, they also ask a designer to review the work and give feedback on its visual and interaction aspects. In this way, designers are able to give feedback during the implementation process rather than at the end, when engineers feel it’s complete.

Experiment #1 — Banner vs. Modal

Hypotheses

  1. Displaying the modal will have a higher click-through rate (CTR) than displaying the banner
  2. Displaying the banner/modal on the Recipe Detail page will have a higher CTR than displaying it on the Search Results page

Participants — Mobile web users landing on the Search Results page

Design

  • Control: Banner on the Search Results page
  • Variant A: Banner on Recipe Detail page
  • Variant B: Modal on Recipe Detail page
  • Variant C: Modal on Search Results after scrolling
Experiment #1 Variants
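Our experimentation tooling handled the actual traffic split, but for readers unfamiliar with multi-arm tests, here is a minimal sketch of how sessions can be assigned deterministically to one of the four arms. The hash function and names are illustrative assumptions, not how our tooling works internally.

```typescript
// Illustrative sketch of sticky, deterministic assignment across the four arms
// of Experiment #1. Our real experiments ran through dedicated A/B testing
// tooling; this is only to show the idea.

type Arm = 'control' | 'variantA' | 'variantB' | 'variantC';

const ARMS: Arm[] = ['control', 'variantA', 'variantB', 'variantC'];

// Simple FNV-1a string hash; any stable hash works for bucketing.
function hash(input: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

// The same visitor always lands in the same arm, so their experience stays
// consistent across page views for the life of the experiment.
function assignArm(visitorId: string, experimentId: string): Arm {
  const bucket = hash(`${experimentId}:${visitorId}`) % ARMS.length;
  return ARMS[bucket];
}

// Example: assignArm('visitor-123', 'banner-vs-modal') returns one of the four arms.
```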

Results
Note: During this experiment, we could only measure installs from Android; this was fixed in later experiments.

Analysis completed by Sivan Aldor Noiman & Michelle Huang

What this data doesn’t show is the sudden influx of feedback from our web users about the modal.

Feedback pulled from Hotjar

In short, a lot of users were unhappy because they couldn’t figure out how to dismiss the modal, and they perceived it as blocking the web experience completely. This was a case of ‘move fast and break things’. I knew the modal needed a more obvious dismiss with an ‘x’ button, but the decision was made to ship without it and add it later.

With the data we had from the experiment and the feedback from our users, we decided to move forward with the modal on the Recipe Detail page and to add an ‘x’ button to it before shipping the next experiment. Even though it had a slightly lower install rate than the modal on the Search Results page, the difference was not statistically significant, and we reasoned that users who clicked on a recipe were more engaged and therefore more likely to install our native app.

Experiment #2 — Modal copy test

Hypothesis
By showing in-context copy about the recipe on the Recipe Detail page, more home chefs (HCs) will click to download

Participants — Mobile web users landing on the Recipe Detail page

Design

  • Control: Modal from experiment #1
  • Variant A: Control page copy with ‘Free’ copy CTA
  • Variant B: Recipe copy with control copy CTA
  • Variant C: Recipe copy with ‘Free’ copy CTA
Experiment #2 Variants

Results
There were more than 18,000 user sessions in the experiment. Variant C was the only variant to show statistical significance, with a conversion rate of 4% compared to the control’s 2.5%.
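As a rough sanity check on why that difference clears the significance bar, assuming the 18,000+ sessions were split roughly evenly across the four arms (about 4,500 per arm, which is my assumption rather than a reported number), a two-proportion z-test on 4% vs. 2.5% gives a z-score around 4:

```typescript
// Rough two-proportion z-test, only to illustrate why 4% vs. 2.5% is
// statistically significant at this sample size. The ~4,500 sessions per arm
// is an assumption; the real analysis was done in our experimentation tooling.

function twoProportionZ(p1: number, n1: number, p2: number, n2: number): number {
  const pooled = (p1 * n1 + p2 * n2) / (n1 + n2);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2));
  return (p2 - p1) / se;
}

// Control ≈ 2.5%, Variant C ≈ 4%, roughly 4,500 sessions in each arm.
const z = twoProportionZ(0.025, 4500, 0.04, 4500);
console.log(z.toFixed(2)); // ≈ 4.01, far beyond the ~1.96 needed for p < 0.05
```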

Experiment results from Optimize — Variant C/3 shows a statistically significant result

Experiment #3 — Modal vs Action Sheet

Hypothesis
If users are given an option of how to view recipe content, more HCs will install the native app

Participants — Mobile web users landing on the Recipe Detail page

Design

  • Control: Winning modal copy from experiment #2
  • Variant A: Action sheet with ‘x’ button
  • Variant B: Action sheet without ‘x’ button
Experiment #3 Variants

Results

  1. The action sheet variants had a higher click-through rate, and install rates remained relatively the same.
  2. The action sheet variant with no ‘x’ had a higher click rate.

Conclusions

We never reached our goal of a 5% conversion rate from the web to the native app. I think we could have gotten there with more experiments and more time. However, the install rate wasn’t the only metric we were interested in.

After running these three experiments, we paused to look at more data and reexamine our original purpose for this work:

To lower the cost of acquiring native app users and improve their retention

We were basing the experiments on install rate because that was a readily available data point. At this point, though, we had enough users converting that we could also look at their retention rate compared to app users coming from other sources, as well as the cost of user acquisition.

What we saw was that while web traffic itself was cheaper, given the small percentage of users converting to the app, it was neither a reliable source nor a cheaper source of app users than other direct ad-to-app marketing sources.

Looking at retention proved to be difficult. We had multiple tools that were not giving us the same retention rates, which was frustrating. However, when we compared the retention rate of converted web users to that of other app users, we saw slightly higher retention and initially felt excited about that. But again, the number of users coming from the web was so small compared to the number coming from other sources that we could not confidently compare these retention rates or claim statistical significance.

After these analyses, the decision was made to pause this work for the time being and focus on other, more pressing priorities.

Final Thoughtful Execution Framework with Learnings

Key Learnings

  1. The Thoughtful Execution Framework enabled us to do our work even better. Using the framework was extremely helpful in getting our team to go wide in exploration, AND it became much easier to explain to other people what our team was focused on and why. When showing other teams and leadership the framework, we could easily explain our rationale for prioritization and work. I often found myself saying, ‘Yes, that was one of the options we considered, but this hypothesis is based on a more compelling data point, so we’re exploring it first.’ The Thoughtful Execution Framework became an integral part of our process, and we used it on all new projects going forward.
  2. Clearly document and wrap up every experiment before moving on. We did this work over a period of many months, partly due to competing priorities but also because our experiments had to run for at least two weeks in order to reach a statistically significant result. This made it difficult to continually pick the work back up; it seemed like we spent a week realigning the team every time. This was a valuable learning experience for me. After this project, I made sure we debriefed as a team after experiment results came in, so we could align on the results, document them, and discuss any next steps we wanted to take.
  3. Ship minimally valuable user experiences, not just minimally viable ones. In the first experiment, we shipped a less-than-desirable user experience in which the half-page modal didn’t have a clear dismiss action. We quickly learned our users were really upset, and we ended up with unrealistic experiment results because we had shipped an experience that, once we saw how displeased our users were, no one wanted to keep.


Mary Knight

I’m a Product Design Leader and Researcher working at the intersection of social impact and technology.