Automated front-end performance: How do you calculate developer ROI?

Last week, we were preparing for a meeting with a customer and, during the process, our sales team asked me this question:

“Customer X just asked if we had any analysis of cost savings for the amount of manpower they would save with Site Optimizer. I haven’t seen anything like this, but I thought I’d ask. A blog post, case study, or anything along the lines of cost savings would be great.”

The developer-side ROI of automating front-end performance is something I haven’t written about. Part of the reason it’s taken so long to get to this topic is that I feel the answer is obvious, but at the same time I dread doing calculations that might cost someone their job. (More on that later.)

But when I reflect on it, the answer is really only obvious to those of us who care about performance and have tried optimizing by hand. If you’ve never done this, you’d never suspect the pain that lurks behind the performance doors.

Rather than simply taking it on faith that many performance best practices should be automated because industry gurus say so, let’s get specific about the cost savings.

Background: Who cares about developer ROI?

Understanding this problem starts with some basic assumptions about the type of client interested in developer ROI:

  1. Customers interested in ROI generally write code, they have coders on staff, they have code repositories, and they understand concepts like version control. If you rent/host a blog for $1.99 per month, you will not find an ROI on the developer side.
  2. Customers interested in ROI have some significant (at least for them) revenue that is either directly related to the site (think ecommerce) or indirectly related (think big brands like Johnson & Johnson). Alternatively, they might have productivity-related goals for internal applications. In all cases, the CEO knows the site(s) exist(s) and knows that his/her job could be on the line if it/they don’t work.
  3. Customers interested in ROI change their site(s) often. This is key because, as I’ll demonstrate in a minute, a significant amount of the cost on the dev side is due to website changes over time.
  4. Customers interested in ROI often have a very strong marketing/content organization that dictates functionality to build and functionality to incorporate (think third-party content). The costs would be way less if the developers were able to make the site look and act like Craigslist.

Quick case study: Sears.com

A good example of this type of customer is Sears.com. Their site matters, it’s dynamic, and they care about performance.

Trying to frame this problem is hard because you can look at it from many different perspectives. A good way to start is to look at a couple of key front-end performance rules, as described by Steve Souders, in the context of different perspectives. I’ll cherry-pick a few to provide a picture of the complexity of the problems and the ROI benefits.

Key rule #1: Make fewer HTTP requests

Perspective 1: One page, one browser

As I’ve already mentioned, the costs associated with performance tuning are significantly reduced if the site never changes… but good luck with employing that strategy in modern ecommerce warfare.

To observe how much sites change over time, I dug into the HTTP Archive and pulled tests over the last year from Sears.com.

Let’s take a look at this data from two perspectives:

Visual: Looking at the video below, you can see how much the site has changed from a visual perspective over a year.

Behind the scenes: The amount of change you see above is very common, and it’s equally dramatic when you peel off the good-looking exterior and look behind the scenes. This site is constantly changing. Every week — possibly every day — the content changes, the number of requests changes, and the number of domains changes.

[Tables: Sears.com home page changes over one year for images, JavaScript, CSS, and domains]

As you can see from these tables, the basic composition of this page changes dramatically over the year.

Why is this expensive to hand code?

One of the key ways we can make sites faster is to minimize server roundtrips. Many of the techniques to minimize roundtrips involve combining objects together into packages. If the content is always changing, then these packages also have to change. Keeping packages updated can be very time consuming and arduous. And as anyone who’s actually done this can tell you, this work is also fraught with the potential to make mistakes when done by hand.
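
To make the packaging burden concrete, here is a minimal sketch (Node/TypeScript, with hypothetical file names) of the kind of bundling step a team otherwise maintains by hand: it concatenates a list of scripts into one package and fingerprints the result, so any content change produces a new package name that every referencing page has to pick up.

```typescript
import { createHash } from "crypto";
import { readFileSync, writeFileSync } from "fs";

// Hypothetical package definition: which source files make up one bundle.
// Every time the content team adds or removes a script, this list changes too.
const packageFiles = ["carousel.js", "tracking.js", "nav.js"];

// Concatenate the sources into a single file to cut HTTP requests.
const combined = packageFiles
  .map((file) => readFileSync(file, "utf8"))
  .join("\n;\n"); // defensive separator between files

// Fingerprint the contents so the bundle name changes whenever the content does.
const hash = createHash("md5").update(combined).digest("hex").slice(0, 8);
const bundleName = `bundle.${hash}.js`;

writeFileSync(bundleName, combined);
console.log(`Wrote ${bundleName}; every page referencing the old bundle must be updated.`);
```

Even this toy version shows where the manual cost lives: the file list, the bundle names, and every page reference have to be kept in sync on each content change.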

The key ROI from automation for dynamic sites is the ability to offload all of the packaging time and effort onto a computer. Not only does this save a significant amount of time, but it also reduces error.

How to calculate cost savings

A few caveats on ROI calculation: Calculating cost savings is best done on a case-by-case basis. All of the numbers I cite in this post are from my experience working with customers that do this stuff by hand. Your mileage may vary. However, here are a few guidelines from my experience:

In one large organization that performs FEO by hand, I have seen three full-time developers on the resource management side of the house. These people deal not only with packaging but also with caching and versioning.

Formula: By implementing automated FEO for resource reduction in this context, I would calculate 5 person-days of dev/QA time saved for every major content change on the site.
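
To turn that formula into a dollar figure, plug in your own numbers. The sketch below is purely illustrative; the change frequency and day rate are assumptions, not measurements.

```typescript
// Illustrative ROI arithmetic; replace every input with your own numbers.
const personDaysSavedPerChange = 5;    // from the formula above
const majorContentChangesPerYear = 24; // assumption: roughly two per month
const loadedCostPerPersonDay = 600;    // assumption: fully loaded dev/QA day rate in dollars

const personDaysSaved = personDaysSavedPerChange * majorContentChangesPerYear;
const annualSavings = personDaysSaved * loadedCostPerPersonDay;

console.log(`${personDaysSaved} person-days saved, roughly $${annualSavings.toLocaleString()} per year`);
// => 120 person-days saved, roughly $72,000 per year
```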

Perspective 2: One page, many browsers

Changing packages as content changes is hard and costly to do by hand. Changing packages to take into consideration the performance nuances within each browser is a nightmare to do by hand.

Why is this expensive to hand code?

Browsers don’t all support the same standards. As we’ve observed in the past, browsers can also change at a moment’s notice, with disastrous effects if you are not careful. If you’re going to hand code by browser, not only does your matrix of supported scenarios grow exponentially, but your QA surface grows with it. You also need to stay on top of all new browser developments and patches.

For a modern website with a standard user profile, you would need to do separate packages for at least five browser groups.
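
As a rough illustration of why the work multiplies, here is a hedged sketch of per-browser package selection: a mapping from browser group to bundle, where every entry is another package to build, version, and QA on each release. The browser groups, bundle names, and user-agent checks are simplified assumptions.

```typescript
// Hypothetical mapping of browser groups to separately built packages.
const bundlesByBrowserGroup: Record<string, string> = {
  "modern-chrome":  "bundle.modern.a1b2c3.js",
  "modern-firefox": "bundle.modern.a1b2c3.js",
  "safari":         "bundle.safari.d4e5f6.js",
  "legacy-ie":      "bundle.legacy.0f9e8d.js",
  "mobile-webkit":  "bundle.mobile.7c6b5a.js",
};

// Crude user-agent classification; real detection logic is far messier,
// which is exactly why this is expensive to maintain by hand.
// Usage (server side): pickBundle(request.headers["user-agent"])
function pickBundle(userAgent: string): string {
  if (/MSIE|Trident/.test(userAgent)) return bundlesByBrowserGroup["legacy-ie"];
  if (/Mobile/.test(userAgent))       return bundlesByBrowserGroup["mobile-webkit"];
  if (/Firefox/.test(userAgent))      return bundlesByBrowserGroup["modern-firefox"];
  if (/Chrome/.test(userAgent))       return bundlesByBrowserGroup["modern-chrome"];
  return bundlesByBrowserGroup["safari"];
}
```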

How to calculate cost savings

Any large organization that performs FEO by hand (think Google, Facebook, etc.) has a team of browser experts.

Formula: Depending on the size of your organization, I would factor in at least one browser expert, a 1.5x increase in front-end development time, and a 2x increase in QA time if you’re going to undertake a per-browser optimization path.

Perspective 3: One page, many desktops + mobile

You probably already get the point, but it’s critical to break out mobile. Mobile makes front-end optimization far more difficult. Resource reduction is done very differently in a mobile context, so you need a different skill set to conduct mobile FEO.

Why is this expensive to hand code?

Standards are changing quickly. Browsers are changing quickly. Handsets have very different capabilities. Testing is a big pain in the butt.

How to calculate cost savings

Most of the organizations we know that perform front-end optimization by hand have a totally dedicated mobile team (on the dev and QA side).

Formula: I would calculate at least a 2:1 ratio of front-end desktop to mobile team members and a 2x increase in workload if they are going to perform these tasks by hand. You also need to factor in the cost of all the different devices and related data plans.

Perspective 4: Many pages, many browsers

As I have written about before, resource reduction is hard across pages and, if done improperly, it can actually slow you down. One of the key benefits of a good automated FEO tool is the ability to juggle all of these different packaging combinations and provide the optimal package.

Why is this expensive to hand code?

Given the amount of change seen in a modern site, I would suggest that doing resource reduction in any sophisticated way across pages is nearly impossible by hand. If you’re just going to put a few common files into a package and reference it everywhere, the problem is much closer to perspective 1, but that approach is not good enough to gain maximum performance.
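
To see why, consider even a toy model of the decision. The sketch below uses hypothetical page and resource data and simply counts the candidate groupings; a real optimizer also has to weigh cache reuse across navigation paths, which is what makes the hand-coded version intractable.

```typescript
// Hypothetical map of pages to the scripts they need.
const pageResources: Record<string, string[]> = {
  "/":         ["nav.js", "carousel.js", "tracking.js"],
  "/search":   ["nav.js", "search.js", "tracking.js"],
  "/product":  ["nav.js", "gallery.js", "reviews.js", "tracking.js"],
  "/checkout": ["nav.js", "checkout.js", "tracking.js"],
};

// Which grouping of files into bundles minimizes requests on each page without
// forcing pages to download code they never run? Even just enumerating the
// candidate bundles grows exponentially with the number of files.
const allFiles = new Set(Object.values(pageResources).flat());
const candidateBundles = 2 ** allFiles.size - 1; // every non-empty subset of files
console.log(`${allFiles.size} files -> ${candidateBundles} candidate bundles to choose among`);
// => 7 files -> 127 candidate bundles to choose among
```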

How to calculate cost savings

Given how hard this is to do, I would suggest the ROI here cannot really be measured in person-power reduction, but simply in the value of site speed. If you are crazy enough to try to do this by hand, be prepared for months of extra development and a whole lot of pain.

Key rule #2: Add an expires header

Adding an expires header is easy. All of the difficulty arises from having to add the right header to the right resource and then deal with version changes.

Imagine a simple example in which you add an expires header of 24 hours to a resource called logo.gif. Now imagine three hours later that the image changes but the name stays the same.

You now have a problem because everyone who has logo.gif cached will not request it again for 24 hours (or, to be exact, 24 hours minus the time elapsed since they first received it). This is a big problem because now you have people seeing stale content, and stale content is not acceptable.

To deal with this issue, organizations often have poor caching headers or else they embark on the long process of building a sub-system to version objects and control expires headers. They also need a vigilant operations staff to find stale content and they need to keep someone glued to the CDN purge tool. (Purging on a CDN is typically done manually or else you need to burn development hours integrating with their APIs. This becomes more complicated if multiple CDNs are in use, since no two are the same.)
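
The standard way such a sub-system sidesteps the stale logo.gif problem is to put a content fingerprint in the file name and then cache aggressively, since a changed file gets a new URL. Here is a minimal sketch of that idea; the file names and header values are illustrative.

```typescript
import { createHash } from "crypto";
import { readFileSync, copyFileSync } from "fs";

// Fingerprint the file contents so the URL changes whenever the content does.
function versionedName(path: string): string {
  const hash = createHash("md5").update(readFileSync(path)).digest("hex").slice(0, 8);
  return path.replace(/(\.\w+)$/, `.${hash}$1`); // logo.gif -> logo.a1b2c3d4.gif
}

const versioned = versionedName("logo.gif");
copyFileSync("logo.gif", versioned);

// Because a changed logo gets a new name, the versioned URL can safely carry a
// far-future cache lifetime; pages simply reference the new name instead of
// waiting for old caches to expire or purging the CDN by hand.
const headers = { "Cache-Control": "public, max-age=31536000, immutable" };
console.log(versioned, headers);
```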

Why is this expensive to hand code?

Proper object versioning and header management is time consuming and expensive to code. Without it, you risk stale content or more roundtrips because your expires headers are not optimal.

How to calculate cost savings

Building a proper version control and header management sub-system is expensive (think many person-months of effort). You also have to factor in the time you spend manually purging CDN caches, as well as the damage that stale content does to your brand.

Other key considerations

This post just scratches the surface in terms of ROI benefits when it comes to automating FEO — and we’ve only covered two rules out of fourteen! I could go on forever on this topic, but for now here are a few other considerations:

Third-party content

As we know, managing third-party content is costly. It also requires a great deal of vigilance to ensure that the third-party content you are forced to add to your site doesn’t introduce single points of failure (SPOFs) or poor performance. Automating performance offers significant dev ROI because good FEO tools will help manage your third-party tags and place them in the right order. Your FEO vendor should save you countless hours in the trial-and-error process of moving tags around.
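
One concrete example of what good tag management buys you: loading a third-party tag asynchronously so a slow or unreachable provider can’t block rendering and become a SPOF. A minimal browser-side sketch, with a placeholder URL:

```typescript
// Inject a third-party tag asynchronously so it cannot block HTML parsing or
// rendering if the provider is slow or down. The URL is a placeholder.
function loadThirdPartyTag(src: string): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  script.onerror = () => console.warn(`Third-party tag failed to load: ${src}`);
  document.head.appendChild(script);
}

loadThirdPartyTag("https://tags.example.com/analytics.js");
```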

Image sizes and resolution

Images are one of the biggest performance challenges and many organizations spend a great deal of time optimizing their images by hand. One of the big time savers from automated FEO is the ability to have all of this work happen automagically.
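
As a rough illustration of the chore being automated away, here is a hedged sketch using the open-source sharp library for Node; the file names, target width, and quality setting are assumptions.

```typescript
import sharp from "sharp";

// Resize and re-encode an image instead of shipping the original upload.
// An automated FEO pipeline does this for every image, on every change,
// without anyone opening an image editor.
async function optimizeImage(input: string, output: string): Promise<void> {
  await sharp(input)
    .resize({ width: 1200, withoutEnlargement: true }) // cap the display width
    .jpeg({ quality: 75 })                             // lossy but visually acceptable
    .toFile(output);
}

optimizeImage("hero-original.png", "hero-1200.jpg").catch(console.error);
```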

Using a CDN

Renaming files so they can be served from a CDN is time-consuming when done by hand. One of the benefits of automated FEO is the ability to quickly get onto and change CDNs, thus saving devs a lot of time.
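
For a sense of what renaming files for a CDN means in practice, here is a minimal sketch that rewrites local asset URLs in HTML to a CDN hostname. The hostname and regex are simplifying assumptions; a real rewriter also handles CSS, JavaScript, and plenty of edge cases.

```typescript
// Rewrite local static asset references to a CDN hostname. Switching CDNs then
// means changing one constant instead of hand-editing every template.
const CDN_HOST = "https://cdn.example.com"; // placeholder hostname

function rewriteAssetUrls(html: string): string {
  // Simplified: only rewrites src/href attributes that point at /static/.
  return html.replace(/(src|href)="(\/static\/[^"]+)"/g, `$1="${CDN_HOST}$2"`);
}

const page = '<img src="/static/img/logo.a1b2c3d4.gif">';
console.log(rewriteAssetUrls(page));
// => <img src="https://cdn.example.com/static/img/logo.a1b2c3d4.gif">
```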

Final thoughts: ROI is about more than just short-term savings

One of the reasons I’ve avoided writing a post on the person-savings from automated FEO is because I believe very strongly that automated FEO is a tool that is best used in conjunction with great developers – not as a replacement. I have seen benign ROI calculations like this one used by number crunchers to justify making damaging resource reductions. (Don’t be fooled, CFO guy. Your developers are a key asset.) If you run a successful modern ecommerce site, you need great developers working on hard problems. The purpose of automated FEO is to let them work on different problems — in some cases bigger problems.

Offloading these arduous tasks to automation frees your team to climb bigger mountains and continue to innovate.
