This past Wednesday, I had the great privilege of hanging out with the New York Web Performance Meetup crowd, where I led a session on web performance automation. If you’re interested, here’s the slideshow deck:
I wanted to identify which performance best practices are doable by hand versus which lend themselves better to automation. After going over performance-related terminology and concepts — such as waterfall charts, first vs repeat views, and concurrency — we jumped into a hands-on case study: optimizing the Velocity website. (Full disclosure: O’Reilly recently became a Strangeloop customer, though the implementation isn’t live yet.)
Delivery vs transformation: The history of performance automation
It’s important to note how performance optimization has changed over the years, from a delivery-focused approach to a transformation-based approach:
Delivery = “I will deliver what the server gives me as efficiently as possible to the browser.”
Transformation = “I will transform what the server gives me to optimize it for the user’s browser.”
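The distinction can be made concrete with a toy sketch. The snippet below is illustrative only — the `/combined.css` endpoint and the rewrite rule are hypothetical, not anything Strangeloop actually does — but it shows the shape of the two approaches: delivery passes the server's bytes through untouched, while transformation rewrites the page for the browser (here, collapsing several stylesheet links into one combined request).

```python
import re

def deliver(html: str) -> str:
    """Delivery: pass the server's output to the browser as-is
    (compression aside) -- bytes in, bytes out."""
    return html

def transform(html: str) -> str:
    """Transformation: rewrite the server's output for the browser.
    One illustrative rewrite: collapse a run of stylesheet links into
    a single request to a hypothetical /combined.css endpoint that
    would serve the concatenated files."""
    run_of_links = re.compile(
        r'(?:<link rel="stylesheet" href="[^"]+"\s*/?>\s*){2,}'
    )
    return run_of_links.sub(
        '<link rel="stylesheet" href="/combined.css">\n', html
    )

page = (
    '<link rel="stylesheet" href="/a.css">\n'
    '<link rel="stylesheet" href="/b.css">\n'
    '<link rel="stylesheet" href="/c.css">\n'
    '<p>content</p>'
)
print(deliver(page))    # unchanged: three CSS requests
print(transform(page))  # rewritten: one combined CSS request
```

The point of the sketch is that transformation requires understanding the content, which is exactly why it lends itself to automation: a machine can apply rewrites like this consistently across every page.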
This presentation is based on a similar session that I ran with Strangeloop’s VP Product, Hooman Beheshti, at the Velocity conference. Much like that session, I artificially “de-optimized” the Velocity site, then implemented a series of best practices, one by one, showing the speed improvements that accompanied each one.
We reviewed the major performance pains of the unoptimized site:
- Too many connections
- Too many bytes
- Bad caching for repeat views
- No CDN
- Too many roundtrips
- And some others
Then, one by one, we implemented best practices to address those pains:

- Client-side caching
- Use a CDN
- Reduce roundtrips
- Reduce payload further
- Increase concurrency
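To give a flavor of what automating even one item on this list looks like, here is a minimal sketch of the client-side caching practice: choosing `Cache-Control` headers per asset type so repeat views are served from the browser cache instead of the network. The lifetimes and the helper name are my own illustrative assumptions, not recommendations from the session.

```python
def cache_headers(path: str) -> dict:
    """Pick a Cache-Control header for a request path.

    Illustrative policy (assumed, not from the talk):
    - static assets get a one-year lifetime, on the assumption that
      filenames are versioned (e.g. app.3f9a1c.js) so a new release
      busts the cache naturally;
    - HTML revalidates on every request so users see fresh markup.
    """
    long_lived = ('.css', '.js', '.png', '.jpg', '.gif', '.woff2')
    if path.endswith(long_lived):
        return {"Cache-Control": "public, max-age=31536000, immutable"}
    return {"Cache-Control": "no-cache"}

print(cache_headers("/static/app.js"))
print(cache_headers("/index.html"))
```

Simple as it looks, this is the kind of policy that is tedious to maintain by hand across a large site and trivial for an automation layer to apply uniformly — which is the trade-off the rest of the session explored.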
Throughout this exercise, we talked about the pros and cons of each of these best practices, including the work involved in implementing and maintaining them.
My hope in doing this session was to demystify the automation process, and show how it can complement hand-crafted optimization, making it another useful tool in the developers’ arsenal.