We recently released our latest quarterly research into the performance and page composition of the top 500 online retailers. (The full report is available for download here.) Today, I thought it would be revealing to take a look at the ten fastest sites and the ten slowest sites and see what they have in common, where they differ, and what insights we can derive from this.
Every quarter, we use WebPagetest — an online tool supported by Google — to measure and analyze the performance and page composition of home pages for the top 500 ecommerce sites, as ranked by Alexa. WebPagetest is a synthetic tool that lets us see how pages perform across a variety of browsers and connection types. For the purposes of our study, we focused on page performance in Chrome over a DSL connection. This gives us a real-world look (as much as any synthetic tool can provide) at how pages perform for real users under realistic browsing conditions.
NOTE: When I talk about “fast” and “slow” in this post, I’m talking about a page’s Time to Interact (TTI) — the moment that the page’s featured content has rendered and become usable. If your focus is on measuring the user experience, TTI is the best metric we currently have. As the graph above demonstrates, TTI (represented by the green bar) can be significantly faster than load time (represented by the red bar).
1. Faster pages are smaller.
Among the ten fastest pages, the median page contained 50 resource requests and was 556 KB in size. Among the ten slowest pages, the median page contained 141 resource requests and was 3289 KB in size.
(Note that these numbers are for the page at the moment the onLoad event fires in the browser, AKA “document complete” time — the amount of time it takes for most resources to render in the browser. Doc complete time shouldn’t be confused with fully loaded time, AKA the amount of time it takes for every resource to render. More on this distinction later in this post.)
In other words, the median slow page was almost three times larger than the median fast page in terms of number of resources, and about six times larger in terms of size.
Looking at the range of page sizes offers a bit more perspective. For the ten fastest pages, the total number of resources lived within a pretty tight range: from 15 to 72 resources. The smallest page was just 251 KB, and the largest was 2003 KB. With the ten slowest pages, we saw a much wider range: from 89 to 373 resources. The smallest page was 2073 KB, and the largest was more than 10 MB.
If you’ve been reading this blog for a while, then the issue of page bloat and its impact on performance isn’t new to you. But it bears repeating, so I’m repeating it.
2. Faster pages have a faster Time to First Byte (TTFB).
Time to First Byte is the window of time between when the browser asks the server for content and when it starts to get the first bit back. The user’s internet connection is a factor here, but there are other factors that can slow down TTFB, such as the amount of time it takes your servers to think of what content to send, and the distance between your servers and the user. In other words, slow TTFB can be an indicator of a server problem or a CDN (or lack thereof) problem — or both.
Among the ten fastest sites, the median TTFB was 0.254 seconds, compared to a median of 0.344 seconds for the ten slowest sites. This difference — less than 100 milliseconds — might not sound like much to be concerned about, but bear in mind that TTFB isn’t a one-time metric. It affects every resource on the page, meaning its effects are cumulative.
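To see how that per-request gap compounds, here’s a back-of-the-envelope sketch. The six-connection figure is an assumption (a common browser default for parallel requests per host), not something measured in the study:

```javascript
// Back-of-the-envelope: how a ~90 ms per-request TTFB gap compounds.
// Assumes requests are spread evenly across 6 parallel connections
// (a typical browser default -- an assumption, not a measured value).
function cumulativeTtfbCost(requests, ttfbGapMs, parallelConnections) {
  // Each connection handles roughly requests / parallelConnections
  // requests in sequence, and each request pays the TTFB gap.
  var requestsPerConnection = Math.ceil(requests / parallelConnections);
  return requestsPerConnection * ttfbGapMs;
}

// Median slow page: 141 requests, a 90 ms slower TTFB, 6 connections.
console.log(cumulativeTtfbCost(141, 90, 6)); // → 2160 (ms of added wait, roughly)
```

Even this rough estimate shows a sub-100-millisecond difference snowballing into a couple of seconds across a resource-heavy page.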
3. Faster pages understand their critical rendering path and know what to defer.
Deferral is a fundamental performance technique. As its name suggests, deferral is the practice of deferring any page resources that are not part of a page’s critical rendering path, so that these non-essential resources load last. (The optimal critical rendering path has been excellently defined by Patrick Sexton as “a webpage that has only the absolutely necessary events occur to render the things required for just the initial view of that webpage”.)
Faster pages seem to have a better handle on deferral, which we can infer from looking at the difference between their page size metrics at doc complete versus fully loaded. As already mentioned, among the ten fastest pages, the median page contained 50 resources and was 556 KB in size at doc complete. But when fully loaded, the median page doubled in size to 1116 KB, and contained almost 50% more resources.
Compare this to the ten slowest pages. The median page grew by about 30%, from 3289 KB to 4156 KB, and from 141 resources to 186 resources. And in several cases, the doc complete and fully loaded metrics were either identical or only negligibly different, suggesting that these site owners have put little or no effort into optimizing the critical rendering path.
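As a simplified sketch of the deferral technique itself: non-critical resources can be queued up and injected only after the load event fires, so they never compete with the critical rendering path. The function and file names below are hypothetical, and the DOM calls are passed in as parameters so the core logic stands on its own:

```javascript
// Sketch of deferral: queue non-critical scripts and load them only
// after the page's critical content has rendered. File names are
// hypothetical; createElement/append are injected so the logic is
// testable outside a browser.
function buildDeferredLoader(urls, createElement, append) {
  return function loadDeferred() {
    return urls.map(function (src) {
      var s = createElement('script');
      s.src = src;
      s.async = true; // don't block parsing if this runs early
      append(s);
      return s;
    });
  };
}

// In a browser, you would wire the loader to the load event:
// window.addEventListener('load', buildDeferredLoader(
//   ['/js/analytics.js', '/js/social-widgets.js'],
//   function (tag) { return document.createElement(tag); },
//   function (el) { document.body.appendChild(el); }
// ));
```

Analytics beacons, social widgets, and below-the-fold images are the classic candidates for this treatment.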
4. CDN adoption is the same among the fastest and slowest sites.
Seven out of ten of the fastest pages used a CDN — as did seven out of ten of the slowest pages. This finding isn’t terribly surprising, as it goes hand in hand with the finding in our spring report that using a CDN doesn’t always correlate to faster pages.
This isn’t to say that site owners shouldn’t use a content delivery network. If you serve pages to a highly distributed user base, then a CDN should be part of your performance toolkit. (I encourage you to read this post for further discussion of this issue.) But this finding is a good reminder that a CDN isn’t a standalone solution.
5. Adoption of other performance best practices is consistent (in its inconsistency) among the fastest and slowest sites.
Looking at the ten fastest and ten slowest sites, we see that they all enable keep-alives, while none use progressive JPEGs. Image compression was hit-and-miss equally among both groups. None of this is terribly surprising. Keep-alives are pretty much a default best practice at this point, and since they can be controlled relatively easily just by configuring your server to enable them, there’s no excuse for not doing this. Using progressive JPEGs (as opposed to baseline images), on the other hand, is still an uphill battle, despite the fact that some studies have shown that they improve the user experience by up to 15%. It’s surprising, though, to still see so many sites not fully leveraging image compression, as this best practice has been around for years.
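For illustration, the two easiest wins here — keep-alives and compression — can be spot-checked from a page’s response headers. This is a minimal sketch (the helper name is mine, and header keys are lower-cased the way Node’s http module delivers them):

```javascript
// Sketch: given a response's headers (lower-cased keys, as Node's
// http module provides them), report whether two basic best
// practices are in effect. Note: HTTP/1.1 connections are persistent
// by default, so an absent Connection header doesn't prove
// keep-alives are off -- treat this as a quick smoke test only.
function auditHeaders(headers) {
  return {
    keepAlive: /keep-alive/i.test(headers['connection'] || ''),
    compressed: /\b(gzip|br|deflate)\b/i.test(headers['content-encoding'] || '')
  };
}

console.log(auditHeaders({
  'connection': 'keep-alive',
  'content-encoding': 'gzip'
})); // → { keepAlive: true, compressed: true }
```

Run against a real response, a pair of `false` values is a cue to check your server configuration before reaching for anything more exotic.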
If you care about delivering a faster user experience to your customers, then look to the fastest online retailers for insight. The highest-performing sites:
- contain smaller, leaner pages,
- understand the critical rendering path, and
- know what resources to defer.
The good news is that there are opportunities for every site — even the ones that are relatively fast already — to fine-tune performance by taking a more aggressive approach to front-end optimization. Why bother? Remember this stat:
As a former senior researcher, writer, and solution evangelist for Radware, Tammy Everts spent years researching the technical, business, and human factor sides of web/application performance. Before joining Radware, Tammy shared her research findings through countless blog posts, presentations, case studies, whitepapers, articles, reports, and infographics for Strangeloop Networks.