In my travels, I’m seeing a strange dichotomy when it comes to performance:
- On one hand, site owners are pouring massive amounts of money and energy into site development and delivery.
- On the other hand, these same site owners are ignoring the out-of-control proliferation of third-party scripts on their sites.
Why the disconnect? More often than not, there is no rationale. Most site owners are either unaware of this issue, or they don't realize how serious it is.
If the issue of third-party performance is new to you, this post will help you:
1. Understand the impact of all those harmless-looking little snippets of code.
2. Regain control over rogue third-party content.
Background: Third-party scripts are everywhere. Some are fine. Most aren’t.
A few months ago, the folks at New Relic did some digging into the most popular third-party APIs used by the 200,000+ applications the company monitors and took a look at which ones performed the best. No surprise, these are all familiar names:
- Amazon Web Services (response time: 432ms)
- Twitter (response time: 832ms)
- Facebook (response time: 918ms)
- PayPal (response time: 1.788s)
This is good news for site owners… assuming these are the only four scripts you're running. But third-party content (analytics, ads, trackers, social sharing widgets, and so on) is on the rise.
If you want to scare yourself, run a simulation using WebPagetest to see how your site would perform if one of its third-party providers went down. For instance, here's a simulation I ran showing what would happen to Staples.com if its third-party providers went down:
In this case, the main culprit is Omniture, which stalls the site for a full 30 seconds when it goes down. Note that about 50% of top ecommerce sites use Omniture. (I say this not to pick on Omniture, but to point out that if they ever go down, they’re going to take down a lot of sites with them.)
Outages are inevitable. As site owners and managers, our job is to figure out a strategy to mitigate these outages.
I dream of a world where all third-party providers offer clear service level agreements to users.
In an ideal world, all third-party providers would offer a clear SLA that, at the very least:
- Expresses their annual uptime guarantee as a percentage (ideally, as close to 100% as possible).
- Describes the process for reimbursing site owners (if site owners are paying for the service provided by the script) if uptime drops below the SLA guarantee.
Right now, most third-party providers don’t offer real-time monitoring of their scripts, nor do they offer meaningful service level agreements (SLAs). My hope is that, as site owners become more educated about the importance of page speed, they’re going to start demanding properly optimized scripts, as well as better monitoring, reporting, and accountability.
In the meantime, what can site owners do to take control of third-party performance?
Until recently, adding third-party code to your site meant giving up a lot of control over how your pages load. Not anymore (sort of). Here are four options at your disposal:
1. Deferral
In simplest terms, deferral is a front-end optimization technique that delays the execution of non-critical scripts until the rest of the page has loaded and rendered in the browser.
Pro: It’s a relatively easy fix.
Con: Deferral won’t work for all content. If your site hosts ads, your advertisers won’t be happy to have their ads show up last. Save deferral for third-party scripts like analytics beacons, tracking pixels, and social widgets.
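If you want to see what deferral looks like in practice, here's a bare-bones sketch. The helper name and beacon URL are mine for illustration, not any vendor's:

```javascript
// A minimal deferral sketch: inject the third-party script only once the
// page has fully loaded and rendered, so it can't compete with critical
// content. The beacon URL below is a placeholder.
function deferScript(src, doc) {
  var d = doc || document;       // accept a document object to ease testing
  var s = d.createElement('script');
  s.src = src;
  d.body.appendChild(s);
  return s;
}

// Browser-only wiring: wait for the window load event, then inject.
if (typeof window !== 'undefined') {
  window.addEventListener('load', function () {
    deferScript('//stats.example.com/beacon.js');
  });
}
```

Because the beacon is injected from the load handler, it can't delay anything the visitor actually sees.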
2. Asynchronous loading
With asynchronous loading, third-party scripts load in parallel with the crucial page content. Async code can be tricky to program, which is all the more reason why it’s been gratifying to note its increasing rate of adoption among third-party providers. All the social buttons on this blog are async versions. (You can read about why I removed the non-async StumbleUpon button.)
Pro: Lets you display ads and other business-critical third-party content without blocking primary content.
Con: A slow third-party script will prevent the onLoad event from firing. This post will give you a detailed understanding of how the onLoad event works. But the short answer is that a page's onLoad event determines its load time as measured by performance measurement tools. Too many delayed onLoad events will skew your results, and if you're tracking thousands of pages over extended periods of time, those skewed results make it a pain to pinpoint other potential performance problems.
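For the curious, the injection pattern behind most async snippets boils down to something like the sketch below. The function name and widget URL are placeholders of mine, not a real provider's code:

```javascript
// Sketch of the async-injection pattern: create a <script> element with its
// async flag set, so the file downloads in parallel with page content and
// never blocks HTML parsing.
function loadAsync(src, doc) {
  var s = doc.createElement('script');
  s.async = true;        // download in parallel; execute when ready
  s.src = src;
  var first = doc.getElementsByTagName('script')[0];
  first.parentNode.insertBefore(s, first);   // insert before the first script tag
  return s;
}

if (typeof document !== 'undefined') {
  loadAsync('//widget.example.com/button.js', document);
}
```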
3. Third-party timing and script killing
Also known as “tag management”, this technique involves establishing an allotted time for scripts to load. Then, if a script fails to load in that time, it’s either killed or deferred. Strangeloop has been an early pioneer of this technique in both our desktop and mobile FEO solutions, and it’s been gratifying to see it catching on.
Pro: Gives site owners the most control over third-party content.
Con: Doesn't lend itself to hand-coding; it's best performed by an automated system.
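To make the idea concrete, here's a toy sketch of the timing logic. Real tag-management systems do far more, and the function names here are my own assumptions, not any product's API:

```javascript
// Give a third-party script a fixed loading budget. If the budget runs out
// first, report 'killed' so the caller can skip or defer the script; if the
// script finishes in time, report 'loaded'. loadFn is whatever actually
// fetches the script; it calls its callback when done.
function loadWithBudget(loadFn, budgetMs, done) {
  var finished = false;
  var timer = setTimeout(function () {
    if (!finished) { finished = true; done('killed'); }   // budget exhausted
  }, budgetMs);
  loadFn(function () {
    if (!finished) { finished = true; clearTimeout(timer); done('loaded'); }
  });
}
```

Whichever side wins the race, the other side is ignored, so a stalled provider can never hold the page hostage for more than its budget.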
4. Just say no
Ask yourself if you really need that widget. Perform a cost-benefit analysis of what a proposed new third-party tool offers your site and determine if it’s really worth the performance hit. Because make no mistake about it: there’s always a performance hit. And too many hits, no matter how small, add up.
Pro: Status quo is easy to maintain.
Con: Can be incredibly difficult to resist the siren song of the latest widget du jour.
And don't forget to audit your third-party scripts. A regular audit should identify:
- All the third-party scripts your site is running, and on which pages they appear.
- Which of the above performance practices (deferral, asynchronous loading, timing/killing) each script follows.
- Does the third-party provider offer a service level agreement? If so, what are the terms?
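As a starting point for that first item, you can pull a rough script inventory right from the browser console. This little helper is my own quick-and-dirty sketch, not a product, and the host check is deliberately crude:

```javascript
// List every external <script src> on the page that points at a host other
// than your own. Run from the browser console of a live page.
function thirdPartyScripts(doc, ownHost) {
  var out = [];
  var scripts = doc.getElementsByTagName('script');
  for (var i = 0; i < scripts.length; i++) {
    var src = scripts[i].src || '';
    // Inline scripts have no src; a substring match is a crude host test.
    if (src && src.indexOf(ownHost) === -1) out.push(src);
  }
  return out;
}

if (typeof document !== 'undefined') {
  console.log(thirdPartyScripts(document, location.hostname));
}
```

Run it on a few key pages and you may be surprised how long the list is.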
What's your experience with third-party snippets? Which ones are worth the performance hit? Which ones should send us running for the hills?