OK, so maybe “secrets” is a bit of a superlative in this case; nevertheless, several of the topics touched on in this post remain a mystery to many clients and professionals alike, so we thought we should lay out some keys to well-performing web sites in more detail. Before we begin, let’s address the first likely question: who cares? Yes, it’s true that almost everyone has broadband and even the new iPhone will be pretty speedy on the web, but what will always be true is that users don’t like to wait. In fact, we can be sure that as devices become faster, a user’s patience will decrease dramatically. So to fight the attrition often caused by slow web site performance (users becoming so frustrated with a site’s speed that they never return), we must always keep web site optimization in the back of our minds. After all, nothing kills a killer app faster than slow performance.
There have always been great tools and resources that help web developers and others improve the user experience of their sites by following some best practices. However, what is often hard to come by is a set of specific techniques that not only satisfy the requirements of those best practices, but also address issues that are more circumstantial. In other words, we’re going to share some techniques that resolve nearly all of the most significant performance issues that web sites and web applications face.
Understand first how your page(s) load by using Firebug for Firefox 2+ or IEInspector for Internet Explorer 5+. For those interested in Safari, check out this post from the WebKit (Safari) team. It’s straightforward to find the area within either plug-in that lets you observe the HTTP transactions and understand the behavior of your page from a transactional standpoint. We recommend Firebug because it’s free; however, using IEInspector will let you see how page rendering behavior differs between IE and Firefox. Some relevant issues that impact performance, but that we’re not going to address in this post, are:
- Rendering performance — how your markup and style sheets actually behave as the browser renders them, and how that impacts the perceived speed of the page from a user’s perspective.
- Database latency or page parse time — dynamically generated pages, or assets called within a page (such as PHP server-side includes or a table built from database entries), play a role in the performance of a web site. We’ll set those issues aside for now and assume that you’ve already optimized these factors as far as you can using server-side script caching, database caching and so on.
- External objects — that is, objects that are not locally hosted on your domain, like Google Analytics for example. Fortunately, they do compress their JavaScript for us, no doubt using some of the techniques discussed later in this post.
As Aaron Hopkins said: “Try benchmarking common pages on your site from a local network with ab, which comes with the Apache web server. If your server is taking longer than 5 or 10 milliseconds to generate a page, you should make sure you have a good understanding of where it is spending its time.” (A minimal ab invocation is sketched after the list of questions below.) We’re also going to assume that you’ve moved beyond the use of inline JavaScript and CSS; there are countless references and ongoing debates out there on how to deal with functionality, semantics and presentation. Now there are some questions that need to be answered for you to proceed with effective use of the browser plug-ins we recommended:
- Who is your target audience and what are the limitations of their browsing environments?
- How much data would your server end up having to deliver if it were answering requests from thousands of concurrent users?
- Aside from the actual “horsepower” of your web server and the quality/limitations of your server’s bandwidth, what are the things that you can change about your site that will realize the biggest impact? In other words, let’s apply the 80/20 rule.
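Returning to Aaron Hopkins’ suggestion above, a minimal benchmark run might look like the sketch below. The URL and request counts are placeholders for your own pages; ab ships with the Apache HTTP Server.

```bash
# Send 100 requests, 10 at a time, to a page on the local network (placeholder URL).
ab -n 100 -c 10 http://localhost/index.php
```

The “Time per request” figure in ab’s output is the one to watch: if the server needs more than a handful of milliseconds to generate the page, investigate the back end before worrying about the front end.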
The following concepts satisfy nearly any conceivable answer to the questions above:
- Reduce file sizes of assets and reuse them as much as possible
Obviously this is the simplest of steps, and it includes optimizing the file sizes of images, JavaScript files, CSS files, the HTML itself and so on. We won’t get into the techniques for optimizing all of these because that’s a Pandora’s box, to be certain. Firebug’s “Net” tab will show you the weight (size) of all of the objects required to render the page you loaded; take steps to reduce these as much as you can. Some concepts, like using strict DTDs, removing comments from your code, white-space removal and the like, are nice ways to reduce file size, but as you will find out for yourself they are not pivotal to achieving the desired results. Again, remember the 80/20 rule: we want to improve the user experience without destroying our ability to maintain the site or to make it accessible to as many user agents as possible. So instead of modifying your development process, take advantage of sound techniques as they relate to your CSS and JavaScript coding. Organize (and configure) your content to be cached, which means avoiding query string variables whenever possible and avoiding dynamically generated assets (images, CSS, JavaScript, markup, etc.), unless you mean to send headers to the browser that force caching of your assets. Explicit caching headers are imperative if your site uses query string variables or has other obvious signals that tell the browser a document (page) should not be cached; we’ll leave that to another discussion, since there are numerous solutions. Firebug will allow you to review the headers associated with each downloaded object to make sure you’re getting the desired result (a command-line spot check is sketched below). I’d also encourage you to disable the browser cache (and any other non-essential plug-ins, for that matter) using the Web Developer toolbar throughout your testing.
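As a hypothetical command-line spot check (the URL is only a placeholder), you can pull just the response headers for an asset and look for cache-friendly directives:

```bash
# Fetch only the response headers for a static asset and look for caching directives (placeholder URL).
curl -sI http://www.example.com/css/site.css | grep -iE 'cache-control|expires|etag|last-modified'
```

If none of those headers come back, the browser has no instruction to reuse the file and may re-request it on every page view.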
- Optimize HTTP transactions
Now that file size is reduced and you’re confident that the assets you want cached are cached, endeavor to reduce the number of HTTP transactions. Again, go back to the “Net” tab in Firebug and pay particular attention to the number of transactions required to generate the entire page. From an image standpoint, intelligent use of the sprite technique lends itself to image reuse, caching and optimized HTTP transactions (a few larger files rather than many small ones). As far as CSS and JavaScript are concerned, concatenate these files to further reduce HTTP transactions (a build-time sketch follows).
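As a rough illustration of the concatenation step, a build-time command like the following works; the file names here are hypothetical, and order matters, since later styles and scripts may depend on earlier ones.

```bash
# Combine individual stylesheets into one file, in dependency order (hypothetical file names).
cat reset.css layout.css typography.css > combined.css

# Same idea for scripts: one HTTP request instead of three.
cat lib.js widgets.js page.js > combined.js
```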
- Further reduce the size of text-based assets
Let’s explore the benefits of HTTP compression. Many web hosts (at least more than in past years) support this “out-of-the-box” for the HTML MIME type. Server load aside, compressing .html alone is simply not enough for high-traffic sites that are not putting all of their CSS and JavaScript directly into their HTML documents (an optimization “technique” we don’t recommend, for countless reasons). The effect of applying HTTP compression to a site is night and day, but to get the full leverage of this approach it needs to be applied to all text-based objects/assets required to render a page. In fact, HTTP compression really makes your AJAX applications perform, and if you plan things out you should also be able to cache some of your AJAX responses. (A quick way to confirm compression is actually being served is sketched below.)
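One hypothetical way to confirm what the server is actually sending is to request an asset while advertising gzip support and inspect the Content-Encoding header; again, the URL is only a placeholder:

```bash
# Request the asset with gzip support advertised and dump the response headers (placeholder URL).
curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' http://www.example.com/js/combined.js \
  | grep -iE 'content-encoding|content-length'
```

If no Content-Encoding: gzip line comes back, that asset type is being sent uncompressed.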
- Reduce the number of files
We’ve learned how to compress our text-based assets to reduce their weight, and we’ve learned how combining related assets lets us continue to use CSS frameworks and/or compartmentalize our JavaScript so that our development style and preferences don’t impact the user experience. Now let’s finalize this process by pre-compressing our static content. There are easy-to-find scripts out there that will save you some time in achieving this result, but let’s be clear once again about what we’re up to in this step: having the server do the heavy lifting of compressing your assets on-the-fly is great, but it doesn’t really scale. By storing a compressed version of the concatenated CSS or JS file, you further optimize the performance of your web server, because what it is now able to do is send static content, the very thing that all web servers excel at (a minimal build step is sketched below). This tip is vital to reaching that happy place we promised when we said we would alleviate the most painful issues most sites face.
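The pre-compression itself can be a one-time build or deploy step like the sketch below (file names hypothetical). Note that your web server still has to be configured to serve the .gz file with the right Content-Encoding and Content-Type headers to clients that accept gzip; how you do that varies by server and is beyond this snippet.

```bash
# Compress the concatenated assets once, at deploy time, instead of
# having the web server gzip them on every request.
gzip -9 -c combined.css > combined.css.gz
gzip -9 -c combined.js  > combined.js.gz
```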
- Put everything in its place
Web development fundamentals teach you to compartmentalize your CSS and JavaScript for maintainability and caching benefits; however, where is the best location for these external objects in your document? Most would agree that CSS belongs in the <head> of the page, and they’d be correct. As for JavaScript, we encourage you to put only the code that’s required for the accessibility of your interface in the <head>; everything else can be placed just above </body> at the bottom of the document. This way, your presentation file is downloaded, cached and used to render the page, while users with fast connections can begin interacting with the page as the heaviest JS code is the last to load (and is then cached). Combined with the tip above, this approach allows you to deliver rich user experiences without sacrificing performance.
- Scale to fit
Revisiting the issue of scale, now from a different point of view: use of a Content Delivery Network (CDN) has become a much more accessible solution to this problem. Since the days when Akamai was seen as an innovator and the “only” answer to the insatiable demand for a site’s content (or a way to overcome poor development practices), the CDN has been instrumental in reducing the latency of delivering objects to users by providing multiple regional POPs for your assets. There are now a number of more affordable content delivery options; nothing against Akamai, but these alternatives put this powerful solution within the reach of more people. When your web applications simply are not performing as well as you would like during peak times of day, a CDN allows you to offload the busy work of delivering static assets and lets your web server focus on the thinking. Obviously point #4 should not be skipped when moving to this solution, as you’ll see more leverage than you can imagine when the two are combined, not to mention save a tremendous amount on bandwidth charges (usually around 60%). Meanwhile, users will feel like your site or application is faster, because most of the assets a given user downloads will come from the closest possible point on the web.
- Throw some horses at it
For more complicated situations, you can look at throwing more hardware at the problem once the previous items have all been addressed and implemented. Specifically, I’m referring to the Amazon Elastic Compute Cloud. This tip deals more with the web server component of solutions, so we’ll just consider it a bonus tip for those of you looking to build computationally intense applications. It is a phenomenal offering from Amazon (and there are others from them worth considering): the ability to instantly scale and access a tremendous amount of computing resources on-the-fly. Services like BrowserCam come to mind as candidates for a solution like this one.
So let’s see how techniques 1-5 combine to take shape:
[Screenshot: results after applying techniques 1-5]
It’s hard to argue with the results!
A bonus tip is to use YSlow to get even more from Firebug! We’ve achieved some great performance with our home page:
[Screenshot: our home page’s load performance]
But YSlow shows us where we can still improve:
[Screenshot: YSlow’s suggestions for our home page]
Unfortunately, YSlow doesn’t pick up on the pre-compressed content we send to users, so no doubt we’ll have to play with our headers some more to satisfy #3 and #4 at the same time. We will work on these things as we see the need; regardless, the techniques we discuss (points 1-5) are demonstrated in the results shown in these screen shots. Many of you may be familiar with classic tools like Andy King’s Web Page Analyzer, which remains a great starting point for identifying some troublesome areas of your page, but in recent years Yahoo!’s developer network has collected in a single place the findings that we’ve uncovered (“the hard way”) over the years. As with this post, you’ll still have to develop your own solutions; nonetheless, we’d recommend heading over to developer.yahoo.com, where they’ve done a great job documenting best practices for creating optimal user experiences, including:
- Reduce HTTP requests (as stated above)
- Reduce DNS lookups
- Avoid HTTP redirects
- Make your AJAX cacheable
- Post-load components
- Pre-load components
- Reduce the number of DOM elements
- Split components across domains
- Minimize the number of inline frames
- Eliminate 404s (file not found errors)
For many sites and in most situations only a few of the above are of concern, but those of you with older sites or applications may benefit from going through Yahoo!’s pages of content, server, cookie, CSS, JavaScript, mobile and image best practices. The only thing I should warn you about when delving into these best practices is that, as with anything, you can have too much of a good thing. So once again we suggest the 80/20 rule: do what’s required for the maximum gain. Nevertheless, Yahoo!’s developer network has grown into a great resource, to say the least.
So tell us what you think; if you’re interested, we can put together some examples for you and/or touch on server-related optimization techniques as well.