If you’ve ever been handed a pile of performance data and been stymied by the various measurement terms you encounter, you’re not alone. Even within our industry, standardizing our language is an ongoing challenge. In this post, we’ll walk through five of the most commonly used measurement terms, define them using language a normal person can understand, and talk about when you should care about each.
Time to first byte
What it means: Time to first byte is measured from the time a request is made to the host server to the time the first byte of the response is received by the browser.
Caveats: Time to first byte doesn’t mean anything when it comes to understanding the user experience, because the user still isn’t seeing anything in the browser.
When it’s relevant: For detecting back-end problems. If your website’s time to first byte is more than 100 milliseconds, it means you have back-end issues that need to be examined. (Web performance consultant Andrew King has written an excellent post about this, as has Google performance expert Patrick Meenan.)
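As a rough sketch, time to first byte can be computed from Navigation Timing-style marks. The field names follow the W3C Navigation Timing API; the sample values are invented for illustration:

```javascript
// Sketch: time to first byte from Navigation Timing-style marks.
// requestStart = when the request was sent; responseStart = when the
// first byte of the response arrived. (Some tools instead measure
// from the start of navigation, which yields a larger number.)
function timeToFirstByte(timing) {
  return timing.responseStart - timing.requestStart;
}

// In a browser you would pass window.performance.timing; these are
// hypothetical values in milliseconds since navigation start.
const sample = { requestStart: 120, responseStart: 310 };
console.log(timeToFirstByte(sample)); // 190
```

Comparing that number against the 100-millisecond guideline above gives you a quick back-end health check.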
Response time
What it means: Response time causes a lot of confusion. Depending on whom you ask, it can refer to any number of things: server-side response time, end-user response time, HTML response time, time to last byte with no bandwidth/latency, and on and on.
Caveats: If someone starts talking to you about response time, first ask them to clarify which type they’re referring to. Be wary of anyone who tries to sell you on the idea that there’s only one definition.
When it’s relevant: Different types of response time measurements tell you different things, from the health of your back end to the moment when content starts to populate the browser. You need to know what you’re measuring and why. For example, if user experience matters to you, ask how whatever type of response time you’re looking at relates to what the end user actually sees.
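To make the ambiguity concrete, here is a sketch that derives three of the many possible "response times" from the same Navigation Timing-style marks. The field names follow the W3C spec; the sample values and the three labels chosen are illustrative, not authoritative definitions:

```javascript
// Sketch: three different numbers that all get called "response time",
// computed from one set of Navigation Timing-style marks.
function responseTimes(t) {
  return {
    serverResponse: t.responseStart - t.requestStart,    // back-end only
    htmlResponse: t.responseEnd - t.requestStart,        // through last byte of HTML
    endUserLoad: t.loadEventStart - t.navigationStart,   // everything the user waits for
  };
}

// Hypothetical marks, in milliseconds since navigation start.
const t = { navigationStart: 0, requestStart: 100,
            responseStart: 250, responseEnd: 400, loadEventStart: 2800 };
console.log(responseTimes(t));
// { serverResponse: 150, htmlResponse: 300, endUserLoad: 2800 }
```

Same page load, three numbers an order of magnitude apart, which is exactly why you should always ask which definition is in play.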
Start render time
What it means: As its name suggests, “start render” indicates when content begins to display in the user’s browser. This term seems to have evolved as an alternative to “end-user response time”, but it’s not yet widely used outside performance circles.
Caveats: Doesn’t tell you if the first content to populate the browser is useful and important, or simply ads and widgets.
When it’s relevant: When measuring large batches of pages, or the performance of the same page over time, it’s good to keep an eye on this number. Ideally, visitors should start seeing usable content within 2 seconds. If your start render times are higher than this, you need to take a closer look.
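When working with large batches of pages, a simple budget check is enough to surface the pages that need a closer look. This sketch applies the 2-second guideline above; the data shape is hypothetical:

```javascript
// Sketch: flag pages in a batch whose start render time exceeds
// the 2-second guideline. Each page record is assumed to carry a
// startRender value in milliseconds.
const BUDGET_MS = 2000;

function slowStarters(pages) {
  return pages.filter(p => p.startRender > BUDGET_MS).map(p => p.url);
}

const batch = [
  { url: '/home', startRender: 1400 },
  { url: '/catalog', startRender: 2600 },
];
console.log(slowStarters(batch)); // [ '/catalog' ]
```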
Load time
What it means: This term is misused a lot — it frequently gets conflated with start render time. Properly defined, load time is the total amount of time it takes for all page resources to render in the browser — from those you can see, such as text and images, to those you can’t, such as third-party analytics scripts. (Geek version: “Load time” is also known as “document complete time” or “onLoad time”. It’s measured when the browser fires something called an “onLoad event” after all the page resources have fully loaded. No matter what you call it, it’s used as a primary measuring stick for site performance.)
Caveats: Needs to be taken with a grain of salt, because it isn’t an indicator of when a site begins to be interactive. A site with a load time of 10 seconds can be almost fully interactive in the first 5 seconds. That’s because load time can be inflated by third-party scripts, such as analytics, which users can’t even see.
When it’s relevant: Load time is handy when measuring and analyzing large batches of pages, because it can give you a sense of larger performance trends.
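A minimal sketch of computing load time from Navigation Timing-style marks. In a browser you would read window.performance.timing inside an onLoad handler; the sample values here are invented:

```javascript
// Sketch: "onLoad time" as the gap between the start of navigation
// and the firing of the load event.
function loadTime(t) {
  return t.loadEventStart - t.navigationStart;
}

// In a browser:
//   window.addEventListener('load', () =>
//     console.log(loadTime(performance.timing)));
console.log(loadTime({ navigationStart: 0, loadEventStart: 9800 })); // 9800
```

Note that the load event has to have fired before loadEventStart is populated, which is why real instrumentation reads it from inside (or after) the onLoad handler.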
Above-the-fold time
What it means: In the past two years, there has been a growing awareness that the four terms discussed above are not adequate for conveying the real-user experience. Two recent Google initiatives — Speed Index and Above-the-fold time (AFT) — have attempted to define a new user-oriented metric that better represents the time when a significant amount of usable content renders in the browser.
Caveats: Unfortunately, there are technical constraints to gathering AFT metrics in the real world. As Google performance expert Steve Souders writes of the initiatives described above:
“In other words, it’s not feasible to perform these rendering metrics on real user traffic in their current form. That’s important because, in addition to incorporating rendering, this new metric must maintain the attributes mentioned previously that make window.onload so appealing: standard across browsers, measurable by 3rd parties, and measurable for real users.”
When it’s relevant: Assuming that the technical hurdles will eventually be overcome, above-the-fold time could be the ideal metric for measuring when a page’s primary content has rendered in the browser — in other words, the optimal end-user experience.
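To give a feel for how a rendering-oriented metric works, here is a sketch of the idea behind Speed Index: integrate visual incompleteness over time, so a page that paints most of its content early scores better than one that paints everything at the last moment. The progress samples below are made up:

```javascript
// Sketch of the Speed Index idea: given (time, fraction visually
// complete) samples, sum (1 - completeness) over each interval.
// Lower is better; a page that renders early accumulates less
// "incomplete" area.
function speedIndex(samples) {
  let si = 0;
  for (let i = 1; i < samples.length; i++) {
    const dt = samples[i].t - samples[i - 1].t;
    si += dt * (1 - samples[i - 1].done);
  }
  return si;
}

// Hypothetical visual-progress timeline (t in ms, done in [0, 1]).
const progress = [
  { t: 0, done: 0 },
  { t: 1000, done: 0.8 },
  { t: 3000, done: 1 },
];
console.log(speedIndex(progress)); // 1000*1 + 2000*0.2 = 1400
```

The hard part, as the Souders quote above notes, is capturing those visual-progress samples for real users — synthetic tools like WebPagetest get them from video capture, which isn’t available in the field.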
Takeaways
1. There’s no single “right” way to measure performance. Each measurement tells you something meaningful about how your site performs.
2. You need to understand the different performance measurement terms so that you can interpret your own data. If you don’t, then, sad to say, some people will take advantage of your ignorance to mislead you for their own benefit. (For example, some performance vendors have convinced site owners to tie bonuses for key employees to backbone test results, which do not measure real-world performance.)
3. As a matter of course, you should always gather large batches of data about your site’s performance and rely on median numbers. But you also need to periodically get under the hood, using tools such as WebPagetest*, and take a real-world look at how your pages behave for real users.
4. Currently, there is no perfect metric for measuring the optimal real-world user experience. This is something the performance community is working to address.
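The advice in point 3 about relying on medians can be sketched in a few lines. A single slow outlier drags the mean up dramatically but barely moves the median, which is why medians are the safer summary for batches of load times. The sample numbers are invented:

```javascript
// Sketch: median vs. mean for a batch of load times (ms).
function median(values) {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

const loadTimes = [1800, 2100, 2300, 2400, 21000]; // one outlier
console.log(median(loadTimes)); // 2300
// The mean of the same batch is 5920 ms, dominated by the outlier.
```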
*WebPagetest is a third-party tool that simulates how fast a site loads for real-world users using a variety of browsers.
As a former senior researcher, writer, and solution evangelist for Radware, Tammy Everts spent years researching the technical, business, and human factor sides of web/application performance. Before joining Radware, Tammy shared her research findings through countless blog posts, presentations, case studies, whitepapers, articles, reports, and infographics for Strangeloop Networks.