16 Mar 2011
It was great to see above-the-fold time (AFT) on the agenda at yesterday’s Velocity Online. It’s been pretty widely acknowledged, here and on other blogs, that load time and doc complete time don’t fully cut it as measurement numbers. Our industry needs a user-centered approach to measuring page performance, one that tells site owners when visitors are able to see and interact with a significant amount of page content. AFT is a promising concept.
Coming up with a universal algorithm for such a nuanced measurement — an algorithm that can be applied to any site to consistently and accurately measure performance — is a gargantuan undertaking. Pat Meenan and the rest of the gang at Google who are working on WebPagetest deserve huge respect for being the first to tackle this hairy challenge.
Defining “Above the Fold”
In yesterday’s session on above-the-fold time, led by Google’s Jake Brutlag, AFT was defined as the moment when content “above the fold” (aka “what you see in your browser window”) stops changing and reaches its final state.
Put in its simplest terms, WebPagetest’s AFT algorithm performs the following calculations:
- Differentiates between content and ads by classifying static pixels as content and dynamic pixels as ads
- Identifies static pixels as those that change fewer than 5 times before a defined cutoff point
- Determines AFT as the moment of the last change (within the above-mentioned cutoff) to a static pixel
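The steps above can be sketched in a few lines of code. This is a hypothetical illustration, not WebPagetest's actual implementation: it assumes the screen capture has already been reduced to a series of timestamped pixel arrays, and it uses the change threshold of 5 mentioned in the talk.

```javascript
// Sketch of the AFT heuristic described above (illustrative, not
// WebPagetest's real code). Each "frame" is a snapshot of the
// above-the-fold pixels at a point in time.
const CHANGE_THRESHOLD = 5; // pixels changing this often are "dynamic" (ads)

function computeAft(frames) {
  // frames: [{ time: number, pixels: number[] }, ...], sorted by time
  const n = frames[0].pixels.length;
  const changeCounts = new Array(n).fill(0);
  const lastChangeTime = new Array(n).fill(0);

  // Count how often each pixel changes, and when it last changed.
  for (let f = 1; f < frames.length; f++) {
    const prev = frames[f - 1].pixels;
    const curr = frames[f].pixels;
    for (let p = 0; p < n; p++) {
      if (curr[p] !== prev[p]) {
        changeCounts[p] += 1;
        lastChangeTime[p] = frames[f].time;
      }
    }
  }

  // AFT = last change among static (content) pixels only; dynamic
  // pixels (ads, animations) are excluded from the measurement.
  let aft = 0;
  for (let p = 0; p < n; p++) {
    if (changeCounts[p] < CHANGE_THRESHOLD) {
      aft = Math.max(aft, lastChangeTime[p]);
    }
  }
  return aft;
}
```

A pixel that flips on every frame (say, a rotating ad banner) racks up a change count past the threshold and is ignored, while a pixel that paints once and settles counts toward the final AFT time.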
Testing AFT on real live sites
I was interested in seeing how WebPagetest’s new AFT option performs, so I ran tests on the top 20 Alexa-ranked retail sites.*
| Website | Load time (s) | Above the fold (s) | Difference (%) |
| --- | --- | --- | --- |
| Barnes & Noble | 6.496 | n/a | n/a |
Interpreting the results
As Jake pointed out in yesterday’s session, WebPagetest’s AFT measurement often corresponds approximately with onload/document complete time or fully loaded time. You can see this in the page tests for Overstock.com and Bodybuilding.com.
In contrast, I’ve highlighted all the instances where AFT exceeds load time. While in many cases the difference isn’t huge, in some it is considerable.
Macys.com, for example, has a load time of 6.760s and an AFT of 16.8s. If you take a look at the waterfall and filmstrip view for Macys, you can see one reason for this discrepancy: an animated graphic in the main promo banner, which rotates every 3 seconds.
The test results for Amazon also showed a big gap: a 2.701s load time and a 5.8s AFT. Looking at the waterfall and filmstrip reveals that an animated graphic is, again, one of the culprits.
Some optimization techniques will give inflated AFT measurements
I applaud the folks at WebPagetest for taking this brave first step in tackling above-the-fold time. The main problem that I see with the current iteration of the AFT algorithm is that it doesn’t fully take into account two effective performance optimization techniques that are growing in popularity:
- Deferral – An example of this is deferring objects like the Facebook “like” button to load last. It’s a great optimization technique because the button is a piece of third-party code that can slow down a page, and it’s not crucial to how a page functions. However, because it can appear above the fold, it can result in an AFT measurement that is out of line with how a visitor actually perceives the usability of a page’s content.
- Progressive rendering — Developers are being encouraged to use this technique, which delivers fast, low-quality images right away and then replaces them with high-quality images at the end of the page load. Like deferral, this technique focuses on how a user perceives a page’s performance, and like deferral, it can produce a deceptively slow AFT measurement.
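To make the deferral pattern concrete, here is a minimal sketch of how a third-party widget might be deferred until after the window load event. The script URL and function name are illustrative, not any specific vendor's API. Under the current AFT heuristic, the pixels this widget eventually paints above the fold would push the measurement out, even though the visitor already had a fully usable page.

```javascript
// Hypothetical deferral sketch: inject a third-party widget script
// only after the window "load" event, so it never blocks the initial
// render. The URL below is a placeholder, not a real widget.
function deferScript(src) {
  window.addEventListener("load", function () {
    const s = document.createElement("script");
    s.src = src;
    s.async = true; // don't block parsing even once injected
    document.body.appendChild(s);
  });
}

// Usage: deferScript("https://example.com/third-party-widget.js");
```

The trade-off is exactly the one described above: the page is visually complete and interactive well before the widget arrives, but the late-arriving pixels count against AFT.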
In Jake’s presentation wrap-up, he was very clear about the current limitations of AFT:
- Only applicable to lab settings
- Does not reflect user-perceived latency based on functionality
- AFT heuristics always need further iteration
In other words, while AFT is useful as a visual validation of other metrics, it is not yet a substitute for understanding functional readiness — how users actually see and use a page.
*Tests conducted on IE9, via the WebPagetest server in Dulles, VA.