Browser innovation and the 14 rules for faster loading websites: Revisiting Steve’s work (part 2)

A few months back, I posted part 1 of a three-part series to answer the question:

“Aren’t many of the web performance rules described by Steve Souders in 2007 already outdated or made obsolete by browser innovation?”

Even though my company was founded on the premise that Steve’s rules are our bible, I still place a great deal of value on testing core assumptions. Fortunately, testing the first five rules revealed that they have stood the test of time and are as valid today as they ever were.

Today (finally), I want to look at the next five rules, 6 through 10.

Methodology

1. As with my previous post, I used these test cases for the performance rules, which Steve created five years ago and which are still available. Using the Wayback Machine, I timestamped them to October 2007.

2. Next, I ran each test case on Internet Explorer 6 on WebPagetest to get a sense of what Steve would have seen for performance rules 6 through 10. I did three runs of each test and used the median result.

3. Then I ran each test case on Chrome 19 to see what impact, if any, the rules have on a modern browser. Again, I performed three runs of each test and recorded the median result.

4. Using the above approach, I was able to see and compare the before-and-after results for each rule for both browsers, and then calculate the benefit. You can see the detailed test results, along with links to the tests, in this spreadsheet.
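For anyone who wants to reproduce steps 2 and 3, WebPagetest also exposes a public REST API that can kick off runs like these. Here’s a minimal sketch, assuming a made-up test URL and a placeholder API key:

    // A rough sketch, not the exact setup used here: kick off a
    // three-run WebPagetest test through its public REST API.
    // 'YOUR_API_KEY' and the test-case URL are placeholders.
    fetch('https://www.webpagetest.org/runtest.php' +
          '?url=' + encodeURIComponent('http://example.com/test-case.html') +
          '&runs=3' +       // three runs; WebPagetest reports the median
          '&f=json' +       // ask for a JSON response
          '&k=YOUR_API_KEY')
      .then(function (res) { return res.json(); })
      .then(function (result) {
        // the response includes the test ID and result URLs
        // (exact field names may vary)
        console.log(result.data);
      });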

Important: The goal here is not to look at how fast Chrome 19 is versus Internet Explorer 6. We have looked at this in the past. This time I want to look at the relative benefit of each performance rule.

Findings

Here’s a high-level look at the results:

Rule 6: Move scripts to the bottom of the page

What this rule means: It’s better to move scripts from the top to as low in the page as possible. One reason is to enable progressive rendering, but another is to achieve greater download parallelization.

With scripts, progressive rendering is blocked for all content below the script. Moving scripts as low in the page as possible means there’s more content above the script that is rendered sooner. The second problem caused by scripts is blocking parallel downloads. While a script is downloading, the browser won’t start any other downloads, even on different hostnames. [From High Performance Web Sites]
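As a minimal illustration of the two variants tested below (the script name is made up), the markup looks roughly like this:

    <!-- Before: a script in the head blocks rendering of everything below it -->
    <head>
      <script src="main.js"></script>
    </head>

    <!-- After: the same script at the bottom, just before </body>,
         so the page content above it can render first -->
    <body>
      <p>Page content renders before the script downloads...</p>
      <script src="main.js"></script>
    </body>

    <!-- Alternative (the "deferred scripts" row in the table below):
         the defer attribute downloads the script without blocking and
         runs it after the document has been parsed -->
    <script defer src="main.js"></script>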

Testing the rule: Steve experimented with placing scripts at various points in a page to see their performance impact. Using the methodology above, I was able to measure the performance benefit (defined as the improvement in document complete time) of applying two of these techniques in IE6 and Chrome 19:

Rule                    Benefit in IE6    Benefit in Chrome 19
Scripts at the bottom   0.13%             18%
Deferred scripts        31%               18%

Rule 7: Avoid CSS expressions

What this rule means: CSS expressions are an older technique, supported in Internet Explorer until IE8, that let you set CSS properties dynamically. The problem with CSS expressions was that they were evaluated far more often than most people realized. As Steve pointed out in 2007, moving the mouse around the page can easily generate more than 10,000 evaluations.
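For anyone who never saw one, here is the classic example used in the original rule write-up: a CSS expression that alternates the background color every hour. The catch is that it gets re-evaluated on events like mouse moves, scrolls, and resizes, not just once an hour:

    /* IE-only: the expression is re-evaluated far more often than expected */
    background-color: expression( (new Date()).getHours() % 2 ? "#B8D4FF" : "#F08A00" );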

This technique has fallen out of practice, largely because of findings like Steve’s. Nobody with any common sense uses CSS expressions anymore, so the rule is no longer relevant.

There may, however, be a modern version of this same issue. I’ve heard reports about similar JavaScript performance problems caused by easy-to-use JS libraries such as jQuery. As with CSS expressions, it’s very easy to hook functions up to JavaScript events like mouse movement and scrolling, triggering tens of thousands of calculations. I’m not sure how often this actually happens out in the real world, but it’s something to be aware of.
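To make that concrete, here’s a sketch (the element and the 50 ms figure are made up): an unthrottled jQuery mousemove handler that does work on every event, followed by a simple throttle that caps how often the work runs:

    // Anti-pattern: fires on every mousemove, easily thousands of
    // times while the user sweeps the cursor across the page
    $(document).on('mousemove', function (e) {
      $('#tooltip').css({ left: e.pageX, top: e.pageY }); // hypothetical element
    });

    // Mitigation: a hand-rolled throttle that runs the work at most
    // once every 50 ms (utility libraries offer equivalents)
    var lastRun = 0;
    $(document).on('mousemove', function (e) {
      var now = new Date().getTime();
      if (now - lastRun < 50) { return; }
      lastRun = now;
      $('#tooltip').css({ left: e.pageX, top: e.pageY });
    });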

Testing the rule: N/A

Rule 8: Make JavaScript and CSS external

What this rule means: If users on your site have multiple page views per session and many of your pages re-use the same scripts and stylesheets, you could potentially benefit from cached external files.

Pages that have few (perhaps only one) page view per session may find that inlining JavaScript and CSS results in faster end-user response times.

For front pages that are typically the first of many page views, there are techniques that leverage the reduction of HTTP requests that inlining provides, as well as the caching benefits achieved through using external files. One such technique is to inline JavaScript and CSS in the front page, but dynamically download the external files after the page has finished loading. Subsequent pages would reference the external files that should already be in the browser’s cache. [From High Performance Web Sites]
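A minimal sketch of that last technique, assuming made-up file names: the front page inlines its JavaScript and CSS, then downloads the external copies after onload so they are sitting in the cache for subsequent pages:

    <script>
    // After the page finishes loading, quietly fetch the external copies
    // of the inlined JS and CSS so later pages get cache hits.
    window.onload = function () {
      setTimeout(function () {
        var js = document.createElement('script');
        js.src = '/static/site.js';    // hypothetical file name
        document.body.appendChild(js);

        var css = document.createElement('link');
        css.rel = 'stylesheet';
        css.href = '/static/site.css'; // hypothetical file name
        document.getElementsByTagName('head')[0].appendChild(css);
      }, 0);
    };
    </script>

One caveat worth noting: the re-downloaded script executes a second time, so the code in it has to be safe to run twice (or be fetched by some other means, such as a hidden iframe).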

Testing the rule: Below are the performance gains (defined as the improvement in document complete time in the repeat view) for both browsers:

Rule                            Benefit in IE6    Benefit in Chrome 19
Cacheable external JS and CSS   37%               40%
Post-onload download            42%               45%
Dynamic inlining                38%               44%

Rule 9: Reduce DNS lookups

What this rule means: There’s no test case for this rule, since it’s a given that fewer DNS lookups are a good thing, but I still want to review it here to talk about why they matter.

A DNS lookup happens when the browser resolves the hostname of a requested object to an IP address. Think of this as asking the “phone book” of the internet to find someone’s phone number using their name. DNS lookups are cached for better performance: on caching servers maintained by your ISP or LAN, in your operating system’s DNS cache, and in your browser’s DNS cache.

Steve offers this advice: Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.

It’s important to know that some newer browsers (Chrome, at least) also do DNS prefetching.
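Pages can also request prefetching explicitly with a dns-prefetch hint (the hostname here is a placeholder):

    <!-- Ask the browser to resolve a hostname before any request needs it -->
    <link rel="dns-prefetch" href="//static.example.com">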

Testing the rule: N/A

Rule 10: Minify JavaScript

What this rule means: Minification is the practice of removing unnecessary characters — such as comments and white space — from code. This improves JavaScript response time because the size of the downloaded file is reduced. Minification is fairly straightforward, unlike its sister practice: obfuscation.

Like minification, obfuscation removes comments and white space, but it also munges the code — converting function and variable names into smaller strings, thereby making the code more compact as well as harder to read. While munging wasn’t originally conceived of as a performance practice, it was found that it can help performance because it reduces the code size beyond what is achieved by minification. (Warning: Obfuscation can lead to bugs, unlike minification, which is relatively problem free.)

In addition to minifying external scripts, you can also minify inlined script blocks. Even if you’re already gzipping your scripts, minifying them will still reduce the size by at least 5%.
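As a trivial before-and-after (the function is made up), here is what minification and obfuscation each do to the same code:

    // Original
    function calculateTotal(price, taxRate) {
      // add sales tax to the price
      var total = price + (price * taxRate);
      return total;
    }

    // Minified: comments and whitespace stripped, behavior unchanged
    function calculateTotal(price,taxRate){var total=price+(price*taxRate);return total}

    // Obfuscated: local names munged as well, shrinking it further
    function calculateTotal(a,b){var c=a+(a*b);return c}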

Testing the rule: Below are the performance gains for both minification and obfuscation of different-sized files:

Rule                      Benefit in IE6    Benefit in Chrome 19
Small script minified     14%               16%
Small script obfuscated   14%               15%
Large script minified     22%               21%
Large script obfuscated   23%               21%

Conclusion

Once again, it’s reassuring to see how many of the performance rules have stood the test of time. I was really struck by how similar the results were for many of the test cases. I had expected that the rules would still be relevant, but to be perfectly honest, I had also expected that browser evolution would have resulted in more disparity between the results.

It was also interesting to see how a rule — avoid CSS expressions — has become so entrenched over time that it’s no longer an issue.

This has been a really interesting exercise so far. I’m looking forward to part 3.
