A (somewhat) brief history of the web performance landscape

I’d like to enlist your help. As I’ve mentioned, last week I led a session on web performance automation for the members of the NY Web Performance Meetup group. For the session, I created a set of slides that outline my theory about how the front-end performance landscape has evolved over the past 15 years. Now I want to solicit your feedback and help me fill the gaps.

Evolution: From delivery to transformation

Most companies know that if site speed is an issue for them, the cause isn’t infrastructure, and throwing more bandwidth and servers at it isn’t the solution. As I understand the current solution landscape, the web performance problem can be approached in two ways:

1. Delivery
Delivery-based solutions are focused on getting the data from the server to the browser more quickly. This is a $4.2 billion/year market, encompassing CDNs, network devices/accelerators, and others:

  • CDNs
    Pros: Make sites faster by shortening each round trip; easy to deploy
    Cons: Expensive; don’t take advantage of browser-level acceleration opportunities such as reducing the number of round trips or optimizing pages for specific browsers
  • Network devices/accelerators (e.g. load balancers)
    Pros: Proven technology; easy to implement and deploy
    Cons: Don’t address performance problems that occur at the browser level; very hard to configure, which is why many sites that use them never enable even basic features such as compression and keep-alive
  • Other (TCP, DNS, etc.)
    Other delivery players exist, such as DNS and TCP optimization solutions, but they are at the fringes of this market, and when it comes to performance I consider them features rather than distinct market segments.
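
To underline how low-hanging those under-used basics are: on a typical Apache server, compression and keep-alive are a few lines of configuration. A minimal sketch (directive values are illustrative, and it assumes mod_deflate is compiled in):

```apache
# Keep-alive: reuse one TCP connection for multiple requests,
# avoiding a fresh connection setup per object
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5

# Compression: gzip text-based responses with mod_deflate
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
```

Images and other already-compressed formats are deliberately left out of the DEFLATE filter, since recompressing them wastes CPU for little gain.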

Here’s the diagram I’ve created to show the breakdown of delivery-based solutions and the major players in this space:

Includes companies like F5, Citrix, Akamai, Limelight, Cotendo and CDNetworks

2. Transformation
Transformation-based solutions focus on analyzing each page of a site from the browser’s perspective and optimizing each page so that it is delivered most efficiently to the browser. Thanks to teams at Yahoo and Google, there are emerging sets of best practices that serve as guidelines for this recoding.

Note that transformation is a complement to, not a replacement for, delivery-based solutions.
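
To make "transformation" concrete, here is a toy Python sketch of one such rewrite: collapsing a run of external script tags into a single request, per the "make fewer HTTP requests" best practice. The combined URL and the regex-based approach are purely illustrative; real transformation engines parse the page properly and apply many more rules (minification, image spriting, inlining, cache-friendly URL versioning).

```python
import re

def combine_scripts(html, combined_src="/assets/combined.js"):
    """Collapse a run of two or more external <script> tags into one.

    A toy version of the 'fewer HTTP requests' rule; the combined
    file itself would be produced by concatenating the originals.
    """
    pattern = re.compile(r'(?:<script src="[^"]+"></script>\s*){2,}')
    return pattern.sub('<script src="%s"></script>' % combined_src, html)

page = '<script src="/a.js"></script>\n<script src="/b.js"></script>'
print(combine_scripts(page))  # -> <script src="/assets/combined.js"></script>
```

A page with a lone script tag is left untouched, since there is no round trip to save by combining.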

It is difficult to segment this emerging market, as very few players are actively involved in it. I choose to segment it by how transformation is delivered (via server, network, or cloud), as this seems to be the clearest dividing line between the various players.

  • Server: In this category I put all of the tools that sit on the server within the datacenter: the pure-play server plug-ins as well as the virtual machines. I see a further distinction in this market between platform-specific products (i.e. those that work only on Apache or IIS) and solutions that work across all platforms.
  • Network: In this category I have placed all of the physical hardware devices that do transformation. You will see an eclectic mix of new and old, with 10+ year-old code bases from vendors like F5 and Cisco mixed in with modern transformation products.
  • Cloud: In this category I put all of the solutions you can subscribe to. This is a very small category. I really hesitated to include Akamai, as they do almost no transformation today, but they do parse HTML for the pre-fetching feature, which gets objects to the edge faster. (I also didn’t want to have a category of one.)

This is a first stab, and I’m not convinced I have it right; however, I am excited to put something down on virtual paper so that in three years I can look back, see how far our industry has evolved, and realize how naive I was.

Server-, network- and cloud-based solution providers, including Strangeloop, Aptimize, Acceloweb, and Webo

Web performance timeline: Any trends here?

After organizing the solution providers in both the delivery and transformation camps, I thought it would be interesting to put the key players in front-end performance on a timeline and see if any patterns emerged:

Includes Gomez, Akamai, Strangeloop, SPDY, and Velocity

As you can see, in addition to showing solution providers, this timeline also shows when new browsers appeared on the market, as well as the appearance of widely embraced performance tools and reference materials. This is a brain dump, but I tried to capture the key elements that I think of when it comes to front-end performance.

This historical bird’s eye view corroborates my delivery-to-transformation theory of performance evolution:

  • The early web was all about the basics: seeing content (i.e. browsers) and getting content to modems (gzip and other server-side tricks).
  • The exuberance of the late ’90s was made possible by huge investments in basic infrastructure and foundational datacenter technology. In our world, the key developments were the first load balancers (F5/NetScaler), the introduction of Akamai, and the development of measurement tools such as Gomez and Keynote, which set the standard for web performance measurement.
  • The late ’90s was a hotbed for innovation and produced the first interesting cloud play for dynamic content (Netli) and the first real transformation play (Pivia, which was subsequently bought by Swan Labs and then swallowed by F5; this 10-year-old technology is now branded as the F5 Web Accelerator).
  • 2000-2006 was a tough time for the front-end performance market. We did see some incredible innovation in related markets, such as branch office acceleration (i.e. technology that speeds up Outlook and Office between branch offices). The one key innovator in my eyes was FineGround, which blazed a trail in transformation but was sold to Cisco and subsequently killed.
  • With the recovery of the web economy came greater investment in new tools and research. In 2006, I co-founded Strangeloop and we filed our first patent on the technology that formed the basis for the set of solutions now known as Site Optimizer.
  • Shortly afterward, O’Reilly published Steve Souders’ book High Performance Web Sites. On its heels came a number of developer resources and diagnostic tools, such as WebPagetest and Browserscope, as well as the Velocity conference, which quickly became an unofficial hub of the performance community.
  • In more recent times, our industry has matured, with more entrants into the transformation space and legitimization of the core premise through seminal moments like the inclusion of page speed as a ranking factor in Google’s search algorithm.

Your thoughts?

This is just my wide-angle take on the front-end web performance landscape. I’m very interested to hear yours. Is my classification scheme accurate? Have I left out any major developments or solution providers? Are there any gaps that need to be filled? Trends I’ve missed?

And what about the future of solution delivery? Given the trajectory we’re on, where do you see our industry going in the next few years?


11 thoughts on “A (somewhat) brief history of the web performance landscape”

  1. Josh,

    Thanks for the great article.

    On the server side, I’d recommend that you add web framework specific optimizations, such as:

    * Jammit and css_sprite for Ruby on Rails
    * the built-in asset packager/minifier in Zend Framework
    * various packagers for Python/Django

  2. Hi Josh,

    What about the “browser performance wars”?

    Things like the launch of Chrome, JavaScript engine optimisation, pre-fetching of objects and DNS, the new Web Timing API, browser plug-ins like ad blockers and Flash blockers that can “speed up” the user experience perceptually, etc.

    Perhaps your model needs to be extended to cover optimisation “once the content has reached the browser”?


  3. A seminal moment (paradigm shift?) was the shift in focus to front-end performance, courtesy of people like Steve Souders and Yahoo. I would put a greater emphasis on that in the web performance timeline (e.g. the 14 rules, the book, the tools, the conference, etc.), all of which build on that previously underestimated, now-obvious idea.

  4. Thanks for the feedback, all.

    @Marshall: Obviously the frameworks have a huge impact on performance. I will try and find a way to incorporate them into the story more effectively.

    @Steve: That is a very good point. I tried to capture some of that by calling out the browsers on the timeline history. I will try and find a way to be more explicit about the big technology leaps that have occurred.

    @Tal: I intentionally left the load testers out, as it is an area I am not particularly familiar with. I will leave that part to others.

    @Lenny: You make an excellent point. I tried to highlight that landmark change with YSlow, Velocity and Steve’s book, but I also agree it needs more emphasis.

