Case study: How to use network quality as a proxy for measuring mobile performance

As Steve Souders recently wrote, measuring mobile performance is hard. There are a number of reasons for this. Here are just a few:

  • The same event can report completely different timings on different browsers.
  • Different measurement tools can report completely different results for the same page.
  • And more central to the problem, no one can even agree on what metrics we should be optimizing for.

In my recent presentations at Velocity EU and Velocity China, I presented some real-world case studies that show the relationship between mobile performance and business key performance indicators such as revenue, conversions, and bounce rate. I was able to get a lot of my data thanks to a couple of Strangeloop customers who were as curious as I was. These customers were willing to segment a portion of their mobile traffic, serve those visitors unoptimized, slower pages, and then compare the results against the optimized pages.

But this kind of experimentation isn’t always possible. How do you measure the impact of mobile performance changes when slowing down pages isn’t an option? Or what if you want to see the impact of page slowdowns that exceed the one second or so my customers were willing to allow?

I’m a big fan of using proxies to identify performance trends, as you may recall if you read my post on how to use browser type and network connection as proxies, via Google Analytics, to see the relationship between site speed and revenue. (We’ve since turned this post into an 11-minute video.) So we here at Strangeloop decided to investigate possible proxies for mobile performance.

Methodology

We set our sights on network quality as a viable proxy. Using real user monitoring (RUM), we gathered data on bounce rate and performance for iPad, Android, and iPhone users of a single e-commerce site, which had already been optimized. Using beacons, we also gathered data about each user’s latency and bandwidth.
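
For context, here is a minimal sketch of the kind of beacon that could collect this data in a modern browser. It is not Strangeloop’s actual beacon: the /beacon endpoint, the field names, and the use of the Navigation Timing and Network Information APIs (the latter is only exposed by some browsers) are assumptions for illustration.

```typescript
// Minimal RUM beacon sketch (illustrative only; not Strangeloop's implementation).
// Collects the page load time plus whatever network-quality hints the browser
// exposes, then posts them to a hypothetical /beacon endpoint.
window.addEventListener("load", () => {
  // Wait one tick so loadEventEnd has been recorded.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
    const loadTimeMs = nav ? nav.loadEventEnd - nav.startTime : undefined;

    // Network Information API: estimated downlink (Mbps) and round-trip time (ms).
    // Not available in every browser, so these fields may be undefined.
    const conn = (navigator as any).connection;

    const payload = {
      page: location.pathname,
      loadTimeMs,
      downlinkMbps: conn?.downlink,   // estimated bandwidth
      rttMs: conn?.rtt,               // estimated latency
      userAgent: navigator.userAgent, // used later to split iPad / iPhone / Android traffic
    };

    // sendBeacon is more likely than a normal request to survive the user leaving the page.
    navigator.sendBeacon("/beacon", JSON.stringify(payload));
  }, 0);
});
```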

We took this vast data set and divided it into cohorts based on network quality: 250 Kbps (300ms+ latency), 500 Kbps (200-300ms latency), 1 Mbps (150-250ms latency), and so on in 0.5 Mbps increments all the way up to 5 Mbps. In other words, we tracked how bounce rate changed from users on a really crappy, modem-like connection all the way up to users on a really fast connection.
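
If you want to run the same kind of analysis on your own RUM data, the bucketing step might look something like the sketch below. The record shape and helper names are assumptions, and the study’s 250 Kbps cohort is folded into the lowest 0.5 Mbps bucket here to keep the example short.

```typescript
// Rough cohort split by network quality (a simplification of the study's exact buckets).
interface BeaconRecord {
  downlinkMbps: number; // estimated bandwidth from the beacon
  loadTimeMs: number;   // measured page load time
  bounced: boolean;     // true if the visit was a single-page bounce
}

// Round to the nearest 0.5 Mbps step, clamped to a 0.5-5 Mbps range.
function cohortKey(downlinkMbps: number): number {
  const clamped = Math.min(Math.max(downlinkMbps, 0.5), 5);
  return Math.round(clamped * 2) / 2;
}

// Compute bounce rate and average load time per network-quality cohort.
function summarizeByCohort(records: BeaconRecord[]) {
  const buckets = new Map<number, { n: number; bounces: number; totalLoadMs: number }>();
  for (const r of records) {
    const key = cohortKey(r.downlinkMbps);
    const b = buckets.get(key) ?? { n: 0, bounces: 0, totalLoadMs: 0 };
    b.n += 1;
    b.bounces += r.bounced ? 1 : 0;
    b.totalLoadMs += r.loadTimeMs;
    buckets.set(key, b);
  }
  return [...buckets.entries()]
    .sort(([a], [b]) => a - b)
    .map(([cohortMbps, b]) => ({
      cohortMbps,
      bounceRate: b.bounces / b.n,
      avgLoadTimeMs: b.totalLoadMs / b.n,
    }));
}
```

Plotting bounceRate against avgLoadTimeMs for each cohort, split by device type, gives you the kind of “dots moving down and to the left” view described in the findings below.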

Findings

What happens to bounce rate and the average performance across these groups as network speed improves? As this animated graph shows, the dots for each group — iPad, Android, and iPhone — all start moving down and to the left:

In other words, as network connection speed improves, performance improves, and as performance improves, bounce rate improves. It doesn’t matter which line you follow: the trend is clear and consistent. This tells us that performance matters across the entire spectrum of users.

It’s also interesting to note how much more strongly iPad users respond to improvements in speed. When things are really slow, iPad users bounce at about the same rate as Android and iPhone users. But as network speed improves, iPad users tend to stay, and their bounce rate gets dramatically lower: around 5%, compared to 8% for iPhone users and 11% for Android users.

This is especially interesting given what we now know about conversion rates for iPad owners versus other mobile device owners. (Over the Black Friday weekend, shoppers using iPads converted at a much higher rate than other mobile consumers: 4.6% versus 2.8% for users of all other mobile devices.) Clearly, keeping your iPad traffic happy should be a priority.

Takeaway

As I mentioned to the crowd at Velocity China yesterday, this case study might be boring to hardcore performance geeks because it doesn’t have code snippets, but it’s important because it justifies investing in the code snippets they want to write. It’s also a really helpful way to present stats to a site owner and tell them: measuring performance optimization isn’t about 500 ms increments, or even 1 s increments. When we look across the entire scope of performance, we can see bounce rates going from 24% all the way down to 5% as network quality gets better.

