The content delivery ecosystem is changing, and with the launch of our latest product — the Strangeloop Network Accelerator — we at Strangeloop are excited to be playing a part in that change.
In this post, I’m going to provide some historical background that takes us up to the present market readiness for end-to-end optimization (EEO). Then I’ll share a bit of how Network Accelerator works.
A brief history of time saving: From delivery to transformation to delivery PLUS transformation
Below is one of my favourite slides from presentations I’ve done at various events. This diagram lets you see at a glance the evolution in our industry:
As I’ve talked about many times, organizations that recognize the need to take their website’s performance to the next level need to change their basic assumption about acceleration. This change is not a 180-degree turn, however – it’s an evolutionary change. Delivery-based solutions, such as content delivery networks and dynamic site acceleration (DSA) tools, have formed a solid foundation. And in recent years, transformation-based solutions — specifically front-end optimization (FEO) products and services like the ones we develop at Strangeloop — have complemented this foundation.
If you’ve spent any time in this space, you know that these solutions, while complementary, have been available as more-or-less standalone options. But it was only a matter of time before someone found a way to consolidate DSA and FEO solutions, and then leverage CDN infrastructure to deliver these solutions closer to end users — all in one cost-effective package.
We’re happy that it was us.
What this means for telcos and cloud providers
Telcos and cloud providers are aggressively entering the content delivery space. Here’s the elevator-pitch recap of why:
Pages are getting bigger. Web traffic has been growing almost exponentially. More data traveled over the web in 2010 than in the entire preceding history of the internet. Telcos have been paying to provide the highways for all this traffic — highways that traditional CDNs are leveraging for their own purposes, at a considerable profit.
Telcos have a vested interest in entering this space. Traditional CDNs don’t necessarily create the most efficient pathways for content, and these inefficient pathways are expensive — specifically, they’re expensive for telcos. By moving content around more efficiently, and by caching content in their own PoPs close to end users, telcos stand to save a lot of money.
But saving money is just part of the equation. In recent years, providers have realized that they are in a strategic position to add value to customer content by offering:
- quality of service across the entire delivery ecosystem,
- economies of scale that confer cost advantages for content owners, and
- even faster content delivery than traditional CDNs, because they are closer to end users.
Ultimately, the big winners will be end users. Telcos and cloud providers are very aware of the growing end user demand for availability and quality of service — the user experience stats we cite all the time. They want to leverage how they manage their networks and their cloud so that they can offer their customers — and by extension all web users — better, faster, higher quality content.
They’re constantly building out their infrastructure, increasing their PoP density (aka “capillarity”, a term I love, picked up from Gonzalo de la Vega from Telefónica at the recent Content Delivery Summit), and shortening the journey from PoP to end user.
This is one part of the next-gen content delivery equation. Network acceleration solutions like ours are the next part.
So how does the Strangeloop Network Accelerator work?
We offer the first acceleration package that tackles the entire content delivery chain, piggybacked on network infrastructure. We call it end-to-end optimization (EEO), and it looks something like this:
1. It starts with Near-Client Edge Acceleration powered by our proprietary LightSpeed platform — acceleration nodes that sit at or near network PoPs.
At the edge, Network Accelerator receives connections and starts up browser rendering as fast as possible using the LightSpeed platform. The LightSpeed platform extends business logic to the edge, allowing adaptive acceleration decisions to be made in real time — without demanding greater infrastructure spend or complexity.
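To make “adaptive acceleration decisions in real time” a little more concrete, here is a toy sketch of what a per-request policy at an edge node might look like. This is purely illustrative — the function name, inputs, and thresholds are my own assumptions, not the LightSpeed platform’s actual logic:

```python
def choose_optimizations(user_agent: str, rtt_ms: float) -> list[str]:
    """Toy edge policy: pick transforms per request based on what the
    node can observe about the client and the connection."""
    plan = ["gzip"]                         # cheap, almost always worthwhile
    if "Mobile" in user_agent:
        plan.append("image-downscale")      # smaller screens need fewer pixels
    if rtt_ms > 100:
        plan.append("inline-critical-css")  # cut round trips on slow links
    return plan

# A mobile client on a high-latency link gets the full treatment:
print(choose_optimizations("Mozilla/5.0 (iPhone; Mobile)", rtt_ms=150))
```

The point of running this logic at the edge, rather than at the origin, is that the edge node actually sees the client connection, so it can make these calls per request without a trip back to the origin.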
2. Then comes Middle-Mile Acceleration powered by our innovative DSA technology.
Network Accelerator uses our own next-gen version of DSA to optimize middle-mile delivery. Our unique visibility into content patterns lets Network Accelerator perform content-level optimizations such as aggressive compression, HTML differencing, and symmetric acceleration, reducing traffic to its leanest and most efficient form and typically accelerating content delivery fivefold or more.
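To illustrate what HTML differencing means in practice: instead of resending a whole page across the middle mile, the sender transmits only a diff against a version the far end has already cached. A minimal sketch of the general technique using Python’s standard library — not Strangeloop’s actual wire format:

```python
import difflib

# Imagine both ends of the middle mile hold this cached baseline.
baseline = """<html>
<body>
<h1>Store</h1>
<p>Item A</p>
</body>
</html>"""

# The origin produces a fresh page in which only one line changed.
current = baseline.replace("Item A", "Item B")

# Only the delta needs to cross the wire; the far end patches its copy.
delta = "\n".join(difflib.unified_diff(
    baseline.splitlines(), current.splitlines(),
    fromfile="cached", tofile="fresh", lineterm=""))
print(delta)
```

For a toy page like this the diff is not much smaller than the page itself, but for a realistic page of hundreds of kilobytes where only a session token or a product listing changed, the delta is a tiny fraction of the full document.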
3. And finally Near-Server Edge Acceleration, powered by our award-winning FEO platform.
Network Accelerator takes our best-of-breed FEO engine and runs it on a highly scalable, redundant cloud platform. Performing advanced FEO at the PoP, rather than at the origin server, lets us do extremely cool things such as adaptive, real-time information sharing between nodes and on-demand rewriting that scales instantly with changes in load.
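FEO is an umbrella for many page transformations; one representative example is inlining small images as data: URIs so the browser skips a request round trip. The sketch below is a toy illustration of that one transform — my own code with a hypothetical asset lookup, not our FEO engine:

```python
import base64
import re

def inline_small_images(html: str, assets: dict[str, bytes],
                        max_bytes: int = 2048) -> str:
    """Rewrite <img src="..."> to a data: URI when the asset is small,
    trading a few bytes of HTML for one fewer HTTP request."""
    def replace(match: re.Match) -> str:
        src = match.group(1)
        data = assets.get(src)
        if data is None or len(data) > max_bytes:
            return match.group(0)  # leave large or unknown assets alone
        encoded = base64.b64encode(data).decode("ascii")
        return f'<img src="data:image/png;base64,{encoded}"'
    return re.sub(r'<img src="([^"]+)"', replace, html)

page = '<p>Cart</p><img src="/icons/cart.png" alt="cart">'
assets = {"/icons/cart.png": b"\x89PNG tiny placeholder"}
print(inline_small_images(page, assets))
```

The size threshold is the interesting design choice: inlining helps only while the encoded bytes cost less than the request they replace, which is exactly the kind of trade-off an FEO engine tunes per page and per client.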
Of course, it’s not enough to have infrastructure and a faster content delivery network.
It would be naive to think otherwise. Telcos and cloud providers need to evolve in other areas in order to successfully disrupt the CDN market:
- They need to learn to sell high-value products.
- They need to talk about more than price as a differentiator.
- They need to gain high-value technical expertise at layer 7.
- They need to provide responsive customer service.
- They need to accelerate processes both pre- and post-sales.
But for companies that have mastered these challenges, end-to-end optimization is a logical next step.
In my next Network Accelerator post, I’ll be taking a deeper dive and describing a few of our key features. In the meantime, if you have any questions, let me know in the comments.