Are your website’s performance goals audacious enough?

“We want you to be able to flick from one page to another as quickly as you can flick a page on a book. So we’re really aiming very, very high here… at something like 100 milliseconds.”

~Urs Hölzle, Senior VP Operations, Google

Urs said this at Velocity this past June (you can hear it at 3:45 of this video), and it’s resonated with me ever since.

I talk a lot about the business value of performance and why there’s no such thing as fast enough. But to my knowledge, Google is the only big company out there that has truly internalized this philosophy. In fact, I was down in Mountain View at the Googleplex a few weeks ago, and I heard on more than a few occasions the 100 ms goal spoken about in real terms — things like “This project will help us get closer to the goal.”

I’m not trying to imply that the folks at Google are the only people who care about performance. If you’re reading this blog, you clearly care. But the folks at Google are the only people I’ve met who have a clearly stated performance goal: all pages on their sites will load in 100 ms or less. In fact, they take it one step further and have the audacity to try and get all pages on the world wide web down to 100 ms.

This is a fascinating contrast to the rest of our industry, which tends to focus on benchmarks as indicators of success:

  • The average website in Keynote’s “Business Top 40” takes 2.34 seconds to load.
  • The average web page loads in 4.9 seconds.
  • The average Fortune 500 website takes about 7 seconds.

So, do you consider your site successful if it loads in less than 7 seconds? Less than 4.9? 2.34? How do you decide where to take aim? Or do you take the attitude that you’ll just sort of chip away at load time when you have time, and content yourself with the knowledge that everyone else has the same attitude?

Benchmarks are useful tools. They give us a sense of our place in the world, and how we perform relative to others. They’re a good starting point for self-analysis. The problem with benchmarks is when they tempt us to focus solely on how we compare to others, who may be as flawed as we are. Google has wisely stepped outside the benchmark arena and has created a goal that has nothing to do with their competitors and everything to do with the actual people who use their products.

Google’s decision to aim at 100 ms makes sense from a human factors perspective. 100 ms gives us the illusion of instantaneous response. 100 ms makes a web experience feel real.

Is 100 ms possible? No, not right now. Will it be possible? Yes, most definitely, largely through the efforts of people who didn’t settle for benchmarks.

My challenge to you is to ignore benchmarks. Create an audacious goal for your site. Evangelize it in your organization. And take aim.

18 thoughts on “Are your website’s performance goals audacious enough?”

  1. Hi Joshua,

    I like a BHAG as much as the next guy… but 100ms?

    It takes me (from the UK) 179ms just to ping http://www.webperformancetoday.com.

    64 bytes from 76.77.72.178: time=169 ms

    The fastest download speed (OnLoad event) was 2.34s in Firefox 3.6. The initial render was 1.798s.

    IE6 onloadEvent = 4.182s
    IE7 onloadEvent = 5.114s
    IE8 onloadEvent = 3.178s
    FF2 onloadEvent = 4.410s
    FF3 onloadEvent = 4.121s
    FF3.6 onloadEvent = 2.340s

    So whilst 100ms might be nice, John Rauser’s Velocity talk would seem to indicate that, even with fibre all the way, the speed of light dictates a cross-continental RTT of 37ms (http://www.stevesouders.com/blog/2010/07/13/velocity-tcp-and-the-lower-bound-of-web-performance/).

    Getting across the Atlantic would double that… let’s call it 70ms.

    Even if all the other content is edged via CDN so that it’s London-local, is 30ms realistic to generate and deliver the core HTML page (given that we regularly see data start times of 200ms for common CMS systems)?

    Wouldn’t we be better setting some community-driven benchmarks for realistic performance NOW?

    Initial render in 750ms? OnloadEvent in under 2sec? Page complete in under 2.5?

    What do you think?

    cheers,
    Steve
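
    A back-of-the-envelope version of the arithmetic above, sketched in JavaScript. The distances, the routing overhead and the helper function are illustrative assumptions rather than figures from John Rauser’s talk; light in fibre is taken to travel at roughly two-thirds of its vacuum speed, i.e. about 200 km per millisecond.

    // Rough lower bound on round-trip time imposed by the speed of light in fibre.
    const KM_PER_MS_IN_FIBRE = 200; // ~200,000 km/s

    function minRoundTripMs(distanceKm, routeOverhead = 1.3) {
      // Out and back, inflated a little because real routes aren't straight lines.
      return (2 * distanceKm * routeOverhead) / KM_PER_MS_IN_FIBRE;
    }

    console.log(minRoundTripMs(4000)); // cross-continental US: ~52 ms
    console.log(minRoundTripMs(5600)); // London to the US east coast: ~73 ms
    console.log(minRoundTripMs(100));  // London to a London-local edge: ~1.3 ms

    // Whatever is left of a 100 ms budget after the round trip is all the time
    // available to generate the HTML and start rendering it.
    console.log(100 - minRoundTripMs(5600)); // ~27 ms left when crossing the Atlantic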

  2. I think 100ms is achievable even now. Load a blank HTML page from memory on a server that is right next to you (or, I should say, load AJAX data from a comet/WebSockets server next to you over an already-open connection) and you’ll see a response in under 100ms.

    Moreover, load data from your local cache and it’ll be fast too.

    We shouldn’t limit ourselves by distance or by existing technology – it’s only a question of how we get from here to there (distance is easy – use local distribution servers; DNS is easy – just use IPs; fast backends are easy; open connections are easy; and so on).

    I believe that with enough money and time for research on the remaining components, it’s all achievable already.

    Sergey the idealist ;)
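
    As a minimal sketch of the “already open connection” point: once a WebSocket to a nearby server is established, a round trip pays no DNS lookup and no TCP handshake, so timing a single echo message shows the raw distance cost. The endpoint URL here is a placeholder, not a real service.

    // Browser-side sketch: measure one round trip over an already-open connection.
    const socket = new WebSocket('wss://echo.example.com'); // placeholder endpoint

    socket.onopen = () => {
      const sentAt = performance.now();
      socket.onmessage = () => {
        console.log('round trip on a warm connection:',
          (performance.now() - sentAt).toFixed(1), 'ms');
      };
      socket.send('ping');
    };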

  3. It’s actually even possible without resorting to IPs, but it takes quite an infrastructure to be able to do it. It’s great to see Google investing in it and not just for themselves. 100ms is a BHAG but it IS achievable, even with dynamic content:

    - DNS – use Anycast, long TTLs and make sure your DNS infrastructure is very close to the end users (technically the end user’s DNS server, but hey, there’s Google DNS now)

    - Socket connect and request times – again, your content needs to be right at the edge and close to the users. As John illustrated in his talk, there’s no improving the speed of light. These costs can potentially be avoided with long-lived connections to the edge.

    - Number of round trips – Google’s investing a lot of work here as well; with SPDY and tweaks to TCP Slow Start you may eventually be able to deliver an entire page/user content in one round trip.

    If everything comes together, even for dynamic content you’re talking 3 round trips for the worst case and one if you’re already connected. You have last-mile latencies to worry about (so they also have 1Gbps FTTH they are going to experiment with), but at that point you just need to have edge nodes within ~30ms of your users (or 100ms if you’re willing to take a hit on the first connection).

    You probably want to leave a little time in there for the browser itself to parse, render, etc., but they are investing heavily there as well.

    It seems ludicrous on the surface to have a 100ms target when most would be thrilled to get under 1-2 seconds, but if you look at where they are investing it might actually not be that far off.
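
    A tiny sketch of the round-trip budget described above, with the counts and latencies taken as illustrative assumptions: a cold visit pays roughly three round trips (DNS lookup, TCP connect, then the HTTP request itself), while a warm, already-connected visit pays one.

    // Rough delivery-time estimate from round trips alone (server time ignored).
    function deliveryMs(edgeRttMs, alreadyConnected) {
      const roundTrips = alreadyConnected ? 1 : 3; // DNS + connect + request, or just request
      return roundTrips * edgeRttMs;
    }

    console.log(deliveryMs(30, false)); // 90 ms  – cold start, edge ~30 ms away
    console.log(deliveryMs(30, true));  // 30 ms  – warm connection, edge ~30 ms away
    console.log(deliveryMs(100, true)); // 100 ms – warm connection, edge ~100 ms away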

  4. Well, Google have some work to do. We’ve been running tests using the Android browser to give us HTTP traffic performance reports. Testing their mobile home page, it took over 12 seconds to load a 12k page. What’s also fascinating is that the browser downloads items sequentially. (Not very good for performance.)

    As for the SPDY idea: it’s great, but upgrading 205 million web servers to use it is going to be tough. Also, those new headers would not be recognized by any current web server.

    As far as the whole performance metric goes, IMO people are missing the bigger picture. What you need to understand is how to deliver the very best content dependent upon the type and location of the connecting device.

    If you want to make the web go faster then you have to get involved with the browser; server-side fixes will only take you so far.

    Cheers,

    Peter
    5o9 Inc

  5. Pingback: Shop.org Performance Index: Should you care how you rank? — Web Performance Today

  6. Pingback: TechCrunch: The slowest tech blog, or one of the fastest? Turns out, it’s both. — Web Performance Today

  7. “there’s no improving the speed of light”

    No. But there are ways to work around the speed of light. I think it was Brian Suda (optional.is/required) who made an analogy to Mars expeditions: when standing on the surface of Mars, you will have a 20-minute delay in any communication with Earth. Add 20 minutes more for the response to come back. For a first communication, there is just no way around that. But for any following communication, bandwidth can be used to negotiate the problem.

    Let’s say that the astronauts ask for help diagnosing some kind of problem. They then receive a checklist to follow. While they follow the checklist, all possible responses for all possible results are sent ahead from Earth. What response is actually given to the astronauts will be determined by their own computers on Mars, depending on their checklist responses.
    These responses are then sent back to Earth, so that the computers on Earth will be able to narrow down the tree of information sent to the sub-tree that the astronauts have traversed so far.

    This is analogous to prefetching, and has close resemblances to caching, CDNs and more.
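
    A small sketch of that prefetch idea in JavaScript: fetch the likely next responses while the user is still busy, so the follow-up request is answered from a local copy instead of waiting out another round trip. The URLs are placeholders.

    // Send likely responses ahead of need; answer later requests locally when possible.
    const sentAhead = new Map();

    function prefetch(urls) {
      for (const url of urls) {
        sentAhead.set(url, fetch(url).then((response) => response.text()));
      }
    }

    function get(url) {
      // Resolved from the prefetched copy when available, otherwise a normal fetch.
      return sentAhead.get(url) || fetch(url).then((response) => response.text());
    }

    prefetch(['/step-2.html', '/step-3.html']); // speculative, while the user reads step 1
    // later, when the user actually moves on:
    get('/step-2.html').then((html) => console.log(html.length, 'bytes, no extra round trip'));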

  8. Peter – I don’t think anyone is neglecting the browser. In fact, Google created Chrome precisely to address your concerns – they made a browser based around speed. They are attacking every step of the process: DNS, bandwidth, TCP stack, browser, image compression algorithms, etc. If we can make substantial gains in each area, then the 100ms goal starts to look a lot more realistic. The mobile environment may be behind the desktop when it comes to performance, but it will catch up eventually.

  9. Pingback: Recent Reading (JS Natives Duke it Out, Regexp in jQuery, Performance, a New Image Format?) » HTML + CSS + JavaScript » Blog Archive

  10. I wrote a blog post about the different loading speeds of the current browsers.

    I tested with a script from Ryan Witt that I have installed on a server in Germany (located at hetzner.de)… URL is http://faster.1click.at if someone is interested.

    It is a client side jQuery script that can compare e.g. http://www.facebook.com against http://www.google.com (your own bandwidth is used, not the webserver’s bandwidth).

    Google Chrome was fastest (from a Spanish 20MB/1MB cable connection) with 89ms for a site I have developed.

    The post’s URL is http://www.cookielessdomain.com/blog/2010/10/the-loading-speed-of-the-different-browsers/

    When I code a site I always try to achieve a Google Page Speed score of 90+: I work with cookieless domains, combine/minify CSS/JS, and use more techniques besides.

    Not everybody is doing that; I wish everyone would…
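
    Not the script linked above, but a rough sketch of the same kind of client-side measurement: time how long a single resource takes to arrive over the visitor’s own connection, with a cache-buster so the browser cache doesn’t flatter the result. The image URL is a placeholder.

    // Time one resource fetch from the visitor's side of the network.
    function timeResourceMs(url) {
      return new Promise((resolve, reject) => {
        const img = new Image();
        const start = performance.now();
        img.onload = () => resolve(performance.now() - start);
        img.onerror = reject;
        // Append a cache-buster so we measure the network, not the local cache.
        img.src = url + (url.includes('?') ? '&' : '?') + 'cb=' + Date.now();
      });
    }

    timeResourceMs('https://www.example.com/logo.png')
      .then((ms) => console.log('fetched in', ms.toFixed(0), 'ms'));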

  11. Pingback: The quest for the holy grail of website speed: 2-second page load times — Web Performance Today

  12. Pingback: Is web performance optimization a “green” issue? — Web Performance Today

  13. Pingback: Case study: PrintingForLess.com proves that mortal companies are no longer settling for “fast enough” — Web Performance Today

  14. Pingback: How to take your site’s performance SLA to the next level — Web Performance Today

  15. Pingback: 4 awesome slides showing how page speed correlates to business metrics at Walmart.com

  16. Pingback: Our need for web speed: It’s about neuroscience, not entitlement

  17. Pingback: The “performance poverty line”: What is it and why does it matter?

  18. Pingback: Summer reading
