Web Performance Optimization for Web Developers

Improving your website’s performance can be as valuable as making it accessible and secure.

Web performance optimization is an intimidating topic. It involves a lot of different metrics, not to mention that understanding what exactly you need to do to improve the situation is tricky in itself! But it doesn’t have to be so difficult.

In this post, we’ll look at key performance parameters, briefly go over how browsers work, and find out which web performance tools we can use to improve your website’s load speed.

Quick rundown on how browsers work

Okay, first let’s look at how browsers actually work.

[Image: basic rendering engine flow]

Parsing HTML

The browser starts by downloading the initial HTML document from the web, parsing it, and prioritizing what to do with the different resources it finds. In Chrome today, there are a number of priority levels:

  • Highest. This priority level is generally assigned to the initial document, fonts, and CSS. All of that is render-blocking: if there’s some sort of network timeout and the browser can’t receive your CSS or the initial document, the page won’t render.
  • High. This one is for scripts in the head tag and for network requests made from script, such as Ajax/XHR/fetch() calls.
  • Medium. The medium level is for script tags placed down in the footer of the webpage.
  • Low. And finally, there are resources like images, which have low priority because they won’t block the layout, so the browser downloads them last.

To see how Chrome prioritizes requests, you can show the Priority column in the Network tab. It doesn’t appear by default, so you need to enable it by right-clicking the header row of the request table. Once you do that, you’ll have a much clearer picture of which content the browser prioritizes while it pulls resources from the server.

Parsing and execution of CSS & JS

Next, the browser executes all JavaScript in the head that doesn’t have the async or defer attribute. Those attributes tell the browser not to block parsing: async runs the script as soon as it has downloaded, while defer runs it after the document has been parsed.
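Here’s what those attributes look like in markup (the file names are just placeholders):

```html
<head>
  <!-- Blocks parsing: downloaded and executed immediately -->
  <script src="/js/blocking.js"></script>

  <!-- Downloaded in parallel, executed as soon as it arrives -->
  <script src="/js/analytics.js" async></script>

  <!-- Downloaded in parallel, executed after the document is parsed -->
  <script src="/js/app.js" defer></script>
</head>
```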

At this point, the browser constructs something called the CSSOM (CSS Object Model). It’s basically a “map” of the CSS styles found on the page: a huge object that shows how the styles cascade.

Almost at the same time, it creates the DOM (Document Object Model), which is basically a tree of nodes representing the parts of the document. Next, it combines the CSSOM and the DOM into something called the render tree.
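One detail worth knowing: only nodes that will actually be displayed end up in the render tree. A minimal example:

```html
<style>
  .hidden { display: none; } /* node stays in the DOM, but not in the render tree */
</style>
<p>This paragraph ends up in the render tree.</p>
<p class="hidden">This one lives in the DOM only and is never laid out or painted.</p>
```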

Layout

Once the render tree is built, the browser performs layout, which in Firefox is called reflow: it calculates how much space is needed for each element and where to put it on the screen.

Keep in mind that anything that triggers layout (height, width, left, right, line-height, font-size, etc.) is a really expensive operation, so you ideally want the browser to do it only once.

You should always look to avoid applying continuous changes, like animations, to these properties, because that can lead to dropped frames and a bad experience in general.
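The same rule applies to JavaScript that touches layout. Below is a sketch of the classic “layout thrashing” mistake and its fix; the .item and .container selectors are hypothetical:

```html
<script>
  const items = document.querySelectorAll('.item');

  // Bad: every iteration reads a layout property (offsetWidth) right after
  // the previous iteration wrote a style, forcing a fresh layout each time.
  items.forEach((item) => {
    item.style.width = item.parentNode.offsetWidth / 2 + 'px';
  });

  // Better: do the read once, then perform all the writes.
  const half = document.querySelector('.container').offsetWidth / 2;
  items.forEach((item) => {
    item.style.width = half + 'px';
  });
</script>
```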

Painting the render tree and compositing

After layout, the browser moves on to painting, which is basically the process of traversing the render tree and painting each node using the UI backend layer. This operation is pretty expensive too.

After that, it does compositing, which is a cheap operation: the browser moves the different layers around, puts them on top of each other, and writes the result to the screen where you can actually see it.

Key website performance metrics

Alright, now let’s look at what metrics we actually have for optimization.

  • Time to first byte. It’s the time from when you click the link until the first byte arrives at your computer. If you have delays here, you need to look at your backend. Try implementing an effective caching strategy and a CDN.
    Since most of our data travels through either copper wire or fiber-optic cable, there’s a hard limit: it can’t go faster than the speed of light. A CDN physically positions your cache closer to your users, which provides optimal performance.
  • Time to first meaningful paint. It’s basically the time when the most meaningful content you’re loading becomes visible on the page. Of course, that can mean different things depending on what you consider the most meaningful thing on the page, but in general it’s the moment when your fonts and the minimally necessary markup have loaded.
  • Time to interactive. This one shows how long it takes a page to become interactive. Have you ever opened a website and started hitting the menus, but nothing happened? What the hell? That’s because the website is still busy trying to figure stuff out on the main thread. Time to interactive is when you can finally start clicking around, navigating menus, and pressing buttons, and the website is consistently interactive.
  • Speed index. It’s a composite metric calculated from the other metrics; it represents the average time at which the visible parts of the page are displayed.
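You don’t need a special tool to read some of these numbers. Here’s a minimal sketch using the standard Performance APIs (paste it into the browser console or a script tag; the paint entries only appear after the page has rendered):

```html
<script>
  // Time to first byte: responseStart is measured from the start of navigation.
  const [nav] = performance.getEntriesByType('navigation');
  console.log('TTFB:', Math.round(nav.responseStart), 'ms');

  // Paint timings reported by the browser, e.g. first-contentful-paint.
  performance.getEntriesByType('paint').forEach((entry) => {
    console.log(entry.name + ':', Math.round(entry.startTime), 'ms');
  });
</script>
```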

Measuring performance

Well, now we’re ready to look at what performance testing tools we can use to test our websites.

Chrome DevTools

One of the best tools to investigate performance bottlenecks in websites is Chrome DevTools. It allows you to do a ton of different audits.

Quick tip: try using a browser without extensions installed when profiling. Extensions like ad blockers and password managers may affect your measurements.

So, launch Chrome Canary, open the target page, and go to the Performance tab. Once you’re there, click the reload button and it’ll record a performance profile with screenshots where you can see various performance metrics. You might want to zoom in a bit with Ctrl + to get a better view.

Here you can see how the image appears on the screen, how the fonts get loaded, and explore what’s happening in more detail.

Lighthouse

Another great tool is an open-source project called Lighthouse. It lives on the Audits tab and can perform various types of site audits: performance, accessibility, progressive web apps, and more. After an audit, you’ll be able to check out performance metrics such as the overall Speed Index, Time to Interactive, First Contentful Paint, and others.

What I really like about this tool is that after performing an audit, it gives you a very detailed list of suggestions that you can implement and then check whether they really benefit your website’s overall performance.

Performance optimization

As you can see, there are a lot of important metrics to optimize. Here are some common website optimization techniques that you can use to make your website faster.

Use less bandwidth

Generally, you want to limit the use of large images and prefer responsive images, which work well both on desktop and mobile. If you’re using WordPress, it automatically generates something called a srcset, which serves a differently sized image for each screen size. A hand-written version looks like the sketch below.
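A minimal sketch of a responsive image (the file names and widths are just placeholders):

```html
<img
  src="/images/hero-800.jpg"
  srcset="/images/hero-400.jpg 400w,
          /images/hero-800.jpg 800w,
          /images/hero-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 800px"
  alt="Hero image"
/>
```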

Limit network requests

Typically, the more requests you have, the slower your website loads. In HTTP/1.1, which most of us still use, the browser has to make a separate HTTP request for every single file, and most websites have a lot of files to download. On top of that, the browser can only download about six files per host at a time; once those finish, it can pull six more.

However, there’s a new revision of the HTTP protocol called HTTP/2. Currently, only about 30% of all websites use it.

With HTTP/2, the browser does something called multiplexing, which allows it to download multiple resources at once over a single connection. That can be a lot quicker, especially if you have a good internet connection. So, if you’re not on shared hosting and have the ability to use HTTP/2, it’s worth trying out.

Use PRPL pattern

Websites with a PRPL architecture work well on smartphones and on devices with crappy network connections.

PRPL stands for:

  • Push critical resources for the initial URL route using link rel=“preload” and HTTP/2
  • Render initial route
  • Pre-cache remaining routes
  • Lazy-load and create remaining routes on demand

Something that’s really cool about the PRPL pattern is that it pre-caches the remaining routes.

So, if you have a link on your page to some other page, the data for that page is pulled in advance, and when you click the link, it loads instantaneously.
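Here’s a rough sketch of that idea: prefetching a route when the user hovers over a link. The data-prefetch attribute is a hypothetical convention for this example; this is roughly what some frameworks do for you under the hood:

```html
<script>
  // When the pointer enters a marked link, hint the browser
  // to fetch that page in the background with low priority.
  document.querySelectorAll('a[data-prefetch]').forEach((link) => {
    link.addEventListener('mouseenter', () => {
      const hint = document.createElement('link');
      hint.rel = 'prefetch';
      hint.href = link.href;
      document.head.appendChild(hint);
    }, { once: true });
  });
</script>
```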

If you want to see how this works, check out GatsbyJS, which is a React-based static website generator. It’s a new paradigm, a really cool way to do stuff and it makes your websites feel instantly fast.

Resource hints

This is basically a set of directives that tell the browser how a particular resource needs to be delivered.

  • link rel=“preload”. You can optimize the delivery of a certain resource by preloading it.
    For example, if you’re using @font-face rules to load your web fonts, that can take a long time, because the operation involves a lot of steps: the browser has to download the CSS, parse it, discover the font, and only then download it.
    You may end up in a situation where your webpage shows up, but the text doesn’t.
    That can be fixed by preloading the fonts. You basically tell the browser to download them immediately, so that when it needs a font, it already has it and can render the text instantaneously without waiting for a download.
    Of course, this works not just for fonts, but also for images, styles, and other resources.
  • link rel=“dns-prefetch”. Another hint is called dns-prefetch; it reaches out to a server and performs the DNS lookup for it. This one is a little more widely supported than preload and preconnect. If you know that your website is going to connect to particular servers, it’s a good idea to pre-resolve DNS for them.
  • link rel=“preconnect”. Preconnect does the DNS prefetch, but it also connects to the server. If you’re using HTTPS, your browser has to open the connection and complete the TLS handshake, which takes a little bit of time. By preconnecting, you save that time on security negotiation. A combined example follows this list.
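Putting the three hints together in the head (the URLs and the font file are placeholders):

```html
<head>
  <!-- Download the font right away instead of waiting for CSS discovery -->
  <link rel="preload" href="/fonts/main.woff2" as="font" type="font/woff2" crossorigin>

  <!-- Resolve DNS early for a server we know we'll talk to -->
  <link rel="dns-prefetch" href="https://api.example.com">

  <!-- Resolve DNS *and* open the connection (TCP + TLS) ahead of time -->
  <link rel="preconnect" href="https://cdn.example.com">
</head>
```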

What’s interesting here is that you can use these techniques immediately. If a hint isn’t supported, it just doesn’t do anything; it’s not going to break your website or anything like that. Support is already pretty good, and other browsers will have it soon. These resources probably make up 60-70% of your website, so you can dramatically speed things up just by adding these tags.

Avoid chaining dependencies

Another thing you may want to fix is chained dependencies. Quite often you see a situation where one resource downloads another, which then downloads another one, pulling a whole network of requests behind it.

Sometimes this can really be avoided. If you have ads on your site, that’s probably the first place to look. Other than that, if you’re using ES6 imports, it’s worth looking at module bundlers like Webpack. A small example of an avoidable chain is shown below.
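CSS @import is a classic source of such chains: the browser can’t discover the imported file until it has downloaded and parsed the file that imports it. A small illustration with hypothetical file names:

```html
<!-- Chained: styles.css contains `@import url("theme.css");`.
     The browser must download and parse styles.css before it even
     learns that theme.css exists - a two-step chain. -->
<link rel="stylesheet" href="/css/styles.css">

<!-- Flat: reference both files directly so they download in parallel -->
<link rel="stylesheet" href="/css/styles.css">
<link rel="stylesheet" href="/css/theme.css">
```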

CSS optimizations

  • Inline CSS for the critical path. The critical path is everything it takes to render the initial view of the page. You can take the styles for the navigation and hero, embed them in a style tag in your head, and move the rest of the CSS out of the head so it doesn’t block rendering. That saves some network requests; then, as the browser parses the body, it pulls in the remaining CSS files as needed. This optimization can take a lot of work, but it can really be worth it.
  • Avoid inlining images. Not long ago it was common practice to inline images in CSS: take the image, base64-encode it, and embed it as a long string in the CSS file. The benefit is that the image is cached wherever your CSS is cached, so you have access to it everywhere. The downside is that CSS blocks your critical rendering path.
    You want your page to render really quickly at first, and base64 encoding eats up bandwidth and makes the file larger and more difficult to parse. So you really want to avoid large stylesheets when you can.
  • Don’t worry about selector performance. Today’s browsers are very efficient at matching selectors; most selection methods are now so fast that they’re not worth spending much time on. If you want to speed up your CSS, look at your selectors last. They’re not going to affect you in the real world.
  • Split big stylesheets. The general advice here is to chop up your big stylesheets and only load them on the pages where they’re needed.
  • Remove unused CSS. Starting from Chrome 69, DevTools has a Coverage tab that comes in handy when you need to find out which resources went unused. To speed up page load times, you can inline the used CSS into the page’s HTML and load the rest asynchronously.
  • CSS animation. If you have CSS animations on your page, you only want to do layout and painting once. Anything that triggers layout is pretty expensive for the browser. It’s better to animate transforms and opacity and to avoid animating things like height, width, left, right, or font-size. If you’re animating, make sure you’re animating composited layers, so they can be moved around without making the machine choke. See the sketch after this list.
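Here’s a minimal sketch of the compositor-friendly approach. The first set of keyframes only touches transform and opacity; the commented-out variant would trigger layout on every frame:

```html
<style>
  .slide-in {
    animation: slide-in 300ms ease-out;
  }

  /* Good: transform and opacity are handled by the compositor */
  @keyframes slide-in {
    from { transform: translateX(-100px); opacity: 0; }
    to   { transform: translateX(0);      opacity: 1; }
  }

  /* Bad: animating `left` forces layout on every frame
  @keyframes slide-in {
    from { left: -100px; }
    to   { left: 0; }
  }
  */
</style>
```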

JS optimizations

As a general rule of thumb here, just keep an eye on what you’re loading on your pages. Sometimes you’ll be surprised to find a big gallery script loading on all your pages when it’s only used on one. Some other things you can do here:

  • Identify shitty third-party scripts. This is probably the biggest thing that affects website performance. People spend so much time getting their page just right, and then they smear ad networks all over it. Third-party problems are hard to find, diagnose, and fix. You need to be extra careful with things like Google Tag Manager, because someone from your marketing department can put code you haven’t reviewed onto your site.
  • Code splitting. With tools like Webpack, you can split your JS code into bundles that can be loaded on demand or in parallel, which improves page load times. The great thing about it is that you can set it up to happen automatically. That’s really useful, because you’re only loading JavaScript where it needs to be. See the sketch after this list.
  • Profile. Since JavaScript is single-threaded, it’s important to be able to identify what’s running on the main thread. If your main thread is occupied for a long time, you need to learn how to profile it and identify the scripts that take a long time to parse and evaluate. There’s a ton of great material about this on the internet.
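Here’s a minimal sketch of on-demand loading with a dynamic import(). The module path, element id, and initGallery function are hypothetical; bundlers like Webpack automatically split such imports into separate chunks:

```html
<script type="module">
  // The gallery code is only fetched when the user actually asks for it.
  document.querySelector('#open-gallery').addEventListener('click', async () => {
    const { initGallery } = await import('/js/gallery.js');
    initGallery();
  });
</script>
```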

There are some best practices you need to follow to keep your website’s performance high. Keep in mind that what works now may not work tomorrow: things like HTTP/2 and fast-evolving browsers keep adjusting the best practices. But most of them stay the same.

Of course, there are a lot more things you can do, but implementing the above steps will unequivocally allow you to speed things up.

See you in the next post!

About The Author
Owlypixel

Written by Owlypixel, who likes to blend web development and design into creative art, with the goal of making it simple and accessible for others. You may follow him on Twitter.
