Progressive vs. regressive enhancement: Where and how do you draw the line?

As Steve Souders has written on his blog, “There are too many pages that are blank while several hundred kB of JavaScript is downloaded, parsed, and executed so that the page can be created using the DOM.” To address this problem, our community has broadly adopted the mantra of progressive enhancement: deliver the page as HTML so it renders quickly, then enhance it with JavaScript.

At one extreme, we know that blank is bad. At the other extreme, we have agreed for the most part that quick rendering, even if this means initially sacrificing a bit of functionality, is good. What has always interested me is the area between these two extremes.

I wrote a post almost a year ago about how Symantec was showing its visitors the wrong content first on a key landing page. An eyetracking study performed by usability expert Jakob Nielsen found that delaying the load time of a critical page element resulted in that element being virtually ignored by the user when it finally loaded.

How should front-end optimization affect the visual rendering of a page?

To this day, I think this question remains unanswered. Until recently, there’s been a general assumption that web content optimization should never, ever, ever-to-the-power-of-infinity change the visual feel of a website. We have always worked under the proviso that we may make pages faster and rearrange when their functionality loads, but we can’t change their look and feel.

But in Strangeloop’s quest to make our clients the most money possible, we continuously question and test every assumption, no matter how heretical. Last week, we had an interesting experience with a customer that I want to share as a classic example of how questioning basic assumptions might lead to making our clients more money.

Performing optimization heresy: How we did it

In our R&D lab, we are always rapidly building and deploying new features. We recently invested in a whole slew of sexy and cool progressive enhancement features that allow us to automate how a page loads visually. This includes the ability to automatically affect what an object looks like, as well as where and when items load.

Using the Strangeloop Site Optimizer and our multivariate segmentation platform, we built a number of home page variations, each of which applied different progressive enhancement techniques. The fastest variation ended up including, among other things, the deferral of the main nav bar. By deferring the nav bar, we were able to reduce start render and load time by roughly 20% above and beyond all of the other progressive techniques we had already applied.

However, deferring the nav bar had a slight effect on the user experience: when the bar did appear, it nudged the rest of the page objects down by about 30 pixels. To my eyes, it looked very much like the way this Apple landing page loads.
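Strangeloop’s actual implementation isn’t public, but the general technique can be sketched as follows. This is a minimal, hypothetical example: the nav markup ships inside an inert template element and is injected only after the window `load` event, which is what produces the roughly 30-pixel nudge described above (the element ids and structure are my own invention).

```javascript
// Hypothetical sketch of deferred nav-bar loading (not Strangeloop's actual code).
// The page ships the nav markup in an inert template so it doesn't block render:
//
//   <div id="nav-slot"></div>
//   <script type="text/template" id="nav-template"><ul>...</ul></script>
//
function injectDeferredNav(doc) {
  var template = doc.getElementById('nav-template');
  var slot = doc.getElementById('nav-slot');
  if (template && slot) {
    // Injecting the nav after first render is what nudges the
    // rest of the page down when the bar finally appears.
    slot.innerHTML = template.textContent;
  }
}

// In the browser, wait until everything else has rendered:
if (typeof window !== 'undefined') {
  window.addEventListener('load', function () {
    injectDeferredNav(document);
  });
}
```

The trade-off is exactly the one described above: the initial render is faster because the nav markup doesn’t participate in it, but nothing reserves the bar’s vertical space, so the page shifts when it arrives.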

Everyone involved in these experiments had their own take on whether this was a good thing or a bad thing:

  • Some thought that it wasn’t great, but that the 20% performance gain made it acceptable.
  • Some thought that it was actually an improvement, because it called attention to the nav bar, making it more likely that people would click.
  • Some didn’t even notice the difference.

But the prevailing opinion was that deferring the nav bar compromised the aesthetics of the design and the overall user experience. In the end, the final decision was to forgo the speed gains and leave the nav bar as it was.

All of this decision making was done in the absence of any real-world testing, at the client’s behest. The version that went live included a number of our other new progressive enhancement features, all of which correlated strongly with improved KPIs.

When we have access to so many great tools, why do we still rely on hunches?

This example highlights the intersection between business, design, and performance optimization. At the end of the day, all three groups serve the same business objective: make more money.  (I desperately wanted to create a dorky Venn diagram to illustrate this. Please take a moment to appropriately admire my restraint.) With this measurable goal in mind, why then are people in all three of these spheres so quick to state hunches as facts?

Not only do our hunches get in the way of good testing, but the test tools themselves can be misleading.

To further complicate things, in our experiment we found that the client’s backbone Gomez scores took a bit of a hit from the new enhancements, even though, as I’ve already mentioned, KPIs were universally positive. Relying solely on the backbone scores would have been a huge mistake.

Informed hunches used to be acceptable back when we were all relatively less sophisticated, but there’s no need, and no excuse, for them now. We have tools that let us perform real-world A/B and multivariate tests. We have agile development manifestos. We have automation tools that let us produce new HTML at the click of a button. With these tools and philosophies, all we have to do is pit different iterations against each other and let the results speak for themselves.

The best solution: Ask a simple question

Ideally, in the above scenario, the question of what is best would have been resolved with an MVT that asked one simple question: which version earned more revenue?
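As a toy illustration of that question, picking a winner is just a per-visitor revenue comparison across variations. The figures below are invented for illustration, not data from the experiment described above:

```javascript
// Toy multivariate-test readout: which variation earned more revenue per visitor?
// All figures are invented for illustration.
function revenuePerVisitor(variation) {
  return variation.revenue / variation.visitors;
}

function pickWinner(variations) {
  return variations.reduce(function (best, v) {
    return revenuePerVisitor(v) > revenuePerVisitor(best) ? v : best;
  });
}

var results = [
  { name: 'control',          visitors: 10000, revenue: 52000 },
  { name: 'deferred-nav-bar', visitors: 10000, revenue: 56500 }
];

console.log(pickWinner(results).name); // prints "deferred-nav-bar"
```

In practice you’d also want a significance test before declaring a winner, but the business question itself really is this simple.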

When it comes to design and development, how does your organization make decisions?
