Performance Monitoring and Optimizing TTFB

Every website today needs to be optimized to a certain level to perform well, whether you measure it with PageSpeed Insights, GTmetrix, or any other performance tool. Google rates a website with a performance score above 90 as excellent; below that score, the site needs improvement. This is where the need to improve a website's performance arises. Dynamic websites with tons of data need to be optimized to perform well, and in this blog we look at how we optimized our TTFB, CLS, and other PageSpeed scores.

What is Performance Monitoring?

Performance monitoring is a set of processes that gives you information about how your website behaves in a performance measurement test. By analyzing performance data across multiple test runs, you can define a baseline: a range of measurements that represents acceptable performance under typical operating conditions. Anything below this baseline is a problem you must rectify to maintain a certain standard for the site. Regularly monitoring the performance score lets you troubleshoot issues faster, because performance monitoring tells you how your resources behave.

Performance monitoring is crucial to ensure the website works well in both a lab and a field environment, because this is where the main difference between CrUX and PageSpeed Insights arises: PageSpeed Insights measures lab data in a controlled environment, while CrUX (the Chrome User Experience Report) collects data from real browsers in the real world. Performance monitoring helps you understand every aspect in which the website may be lacking.


CSR vs SSR: What did we decide?

At an architectural level, one of the main decisions web developers face is where to put all the logic for the application. It is a difficult one, as there are many different ways to build a website, and understanding which will serve best is crucial.

Client-side Rendering

Client-side rendering means rendering pages directly in the browser with JavaScript. All logic, data fetching, templating, and routing are handled on the client rather than the server. The effective outcome is that more work is passed from the server to the user's device, and that comes with its own set of trade-offs.

The primary downside to client-side rendering is that the amount of JavaScript required tends to grow as an application grows, which can have negative effects on a page's INP. This becomes especially difficult with the addition of new JavaScript libraries, polyfills and third-party code, which compete for processing power and must often be processed before a page's content can be rendered.
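To make the idea concrete, here is a minimal client-side rendering sketch; the /api/products endpoint and the markup are hypothetical, for illustration only:

// The server sends an almost empty HTML shell with <div id="root">;
// data fetching and templating happen entirely on the user's device.
async function renderApp() {
  const root = document.getElementById('root');
  const response = await fetch('/api/products'); // extra round-trip after page load
  const products = await response.json();
  root.innerHTML = products
    .map((p) => `<article><h2>${p.name}</h2><p>${p.price}</p></article>`)
    .join('');
}

renderApp();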

Server-side Rendering

Server-side rendering generates the full HTML for a page on the server in response to navigation. This avoids additional round-trips for data fetching and templating on the client, since it's handled before the browser gets a response.

Server-side rendering generally produces a fast FCP. Running page logic and rendering on the server makes it possible to avoid sending lots of JavaScript to the client. This helps to reduce a page's TBT, which can also lead to a lower INP, as the main thread is not blocked as often during page load. With server-side rendering, users are less likely to be left waiting for CPU-bound JavaScript to run before they can use your site.
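For contrast, here is a minimal server-side rendering sketch, assuming an Express server and a hypothetical getProducts() data source; any server framework follows the same pattern:

const express = require('express');
const app = express();

// Stand-in for a real database or API call.
async function getProducts() {
  return [{ name: 'Widget', price: 10 }];
}

app.get('/', async (req, res) => {
  // Data fetching and templating happen on the server, so the browser
  // receives complete, indexable HTML in a single response.
  const products = await getProducts();
  const body = products
    .map((p) => `<article><h2>${p.name}</h2><p>${p.price}</p></article>`)
    .join('');
  res.send(`<!DOCTYPE html><html><body>${body}</body></html>`);
});

app.listen(3000);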

Our Thoughts

For product-based companies like ours, SEO is a significant factor. We want the Googlebot to index content on our site faster and the website to load more quickly for the user. Reducing the first load time of a site is one of the most important tasks for a developer. A CSR website can return its initial HTML shell quickly, but meaningful content only appears once the JavaScript has run, and search engines may have difficulty indexing content that is rendered client side.

Another potential disadvantage of client-side rendering is that it can be more resource-intensive for the client's web browser. Because the browser has to process and render the website's HTML, CSS, and JavaScript, it requires more processing power and memory, which can mean slower performance and longer load times for the user. In short, the site may take more time to become interactive.

Server-side rendering was the better option to eliminate these issues. With server-side rendering, the server executes all the code necessary to build the page rather than the user's device. For us, this resulted in faster loading times and better indexing for the website.


Page Speed and Why It Matters

Page speed is not a single metric; it is a set of different performance measures of a site. No matter what website you are currently developing, understanding the impact of page speed and implementing the necessary optimization techniques can be a game changer.

Why page speed matters:

  1. SEO - one of the crucial factors in SEO is how fast the website responds to requests. It has become one of the most important factors for better page ranking, as users tend to spend more time on faster, more responsive pages than on slower ones.
  2. User Experience - the user experience shapes how the website/brand is perceived by users and directly influences how they use the site. Page speed is one of the factors in user experience: it matters how fast the website opens and how smoothly tab animations and other animations play out. In March 2024, a new metric, INP, becomes a Core Web Vital and measures how responsively the page reacts to user input. Some of the ways in which page speed affects UX are:
     a. Better page speed can lead to a higher conversion rate, as the user has a faster, more seamless experience.
     b. A decrease in page speed can lead to a higher bounce rate, as users tend to jump to another website.
  3. Mobile Experience: search engines treat mobile browsing differently from its desktop counterpart and rank it separately. For mobile browsing, page speed is a critical aspect because mobile devices tend to have slower network connections than desktops, and rankings are affected accordingly.

How to measure page speed, and the metrics responsible

There are many different tools that measure website performance, like Google's own PageSpeed Insights, DebugBear, GTmetrix, and many more.

(Image: Core Web Vitals metrics)

These tools measure the various metrics behind page speed and give a defined score for the page's performance; the improvement recommendations and warnings that come with it make these tools a gift for developers. Chromium-based browsers also provide a built-in automated tool to measure page speed called Lighthouse, which works much like PageSpeed Insights but can also run against localhost.

Performance metrics:

There are six main factors that define the web performance of a page. They are:

  1. First Contentful Paint: It measures the time from when the page starts loading to when any part of the page's content is rendered on the screen. For this metric, "content" refers to text, images (including background images), <svg> elements, or non-white <canvas> elements.
  2. Largest Contentful Paint: reports the render time of the largest image or text block visible within the viewport, relative to when the page first started loading.
  3. Cumulative Layout Shift: CLS is a measure of the largest burst of layout shift scores for every unexpected layout shift that occurs during the entire lifespan of a page.
  4. Time to Interactive: The TTI metric measures the time from when the page starts loading to when its main sub-resources have loaded and it is capable of reliably responding to user input quickly.
  5. Total Blocking Time: It measures the total amount of time between First Contentful Paint (FCP) and Time to Interactive (TTI) where the main thread was blocked for long enough to prevent input responsiveness.
  6. First Input Delay: FID measures the time from when a user first interacts with a page (that is, when they click a link, tap on a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to begin processing event handlers in response to that interaction.

Other than these, in March 2024 one more metric comes into play:
Interaction to Next Paint: INP assesses a page's overall responsiveness to user interactions by observing the latency of all click, tap, and keyboard interactions that occur throughout the lifespan of a user's visit to a page. The final INP value is the longest interaction observed, ignoring outliers.
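Most of these metrics can be collected in the field with Google's open-source web-vitals library (npm install web-vitals); a minimal sketch, assuming a bundler is set up:

import { onFCP, onLCP, onCLS, onTTFB, onINP } from 'web-vitals';

// Each callback fires with the final value of its metric for this page view.
function report(metric) {
  // In production you would POST this to an analytics endpoint;
  // logging is enough for local debugging.
  console.log(metric.name, metric.value);
}

onFCP(report);
onLCP(report);
onCLS(report);
onTTFB(report);
onINP(report);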

Our Journey

Page speed, as we have learned through our journey, depends not only on how well optimized the code is but on many other factors as well. One big factor is the server used to host the site and how well it is configured. The load on the server also affects page speed scores. To optimize all of this, performance monitoring is very important for establishing the baseline at which the website is performing.

Sometimes the developers have done everything they can to improve the scores, but the scores still won't budge: problems can come from the server, load on the site, Google scripts, or many other things. Thinking of an out-of-the-box solution is what separates the good from the best. For example, we know that scripts from Google can affect page speed drastically, increasing the site's TBT, SI, LCP, FCP, and all the other factors, sometimes reducing the performance score by around 50 points. Could there be a way to exclude them?

If there was a way to do it (spoiler alert: there is), it would still only affect the page speed score, not the Core Web Vitals. Core Web Vitals collect data from real users, and for real users all these scripts must be loaded, so initial loading times, time to interactive, and other metrics would plummet. To prevent this we implemented a CDN: we serve the page to the user from cache, and in the case of any bot, the page is served from the server. This turned out to be a neat trick to serve pages to users faster, improving the Core Web Vitals and the user experience of the website. It also had a major effect on the TTFB score, as the data arrives faster thanks to caching.
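A rough sketch of the idea (not our exact production setup): a hypothetical Express middleware that lets known crawlers bypass the cache so they always get a freshly rendered page, while real users are served from the CDN's edge cache:

// Very simplified bot detection by user agent, for illustration only.
const BOT_PATTERN = /googlebot|bingbot|baiduspider|duckduckbot/i;

function cacheByAgent(req, res, next) {
  const userAgent = req.headers['user-agent'] || '';
  if (BOT_PATTERN.test(userAgent)) {
    // Crawlers skip the cache and are rendered fresh by the origin server.
    res.set('Cache-Control', 'no-store');
  } else {
    // Real users get the page from the CDN edge cache for five minutes.
    res.set('Cache-Control', 'public, s-maxage=300');
  }
  next();
}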

It is not only the developer's fault when page speed isn't what it needs to be, but the developer plays a huge role. Writing clean, understandable, optimized code is a must; no one likes to clean up someone else's mess. Don't just write anything to make it work for now. In my opinion, this is one of the biggest problems among developers: we think we will optimize the code later, and that's wrong. Think about what you have to code before coding, think about the different ways it could be written, and then figure out the best way to do it.


Cumulative Layout Shift (CLS)

This is one of the most important metrics in the page speed measurement guidelines, contributing 25% of the overall score.

(Image: CLS scores)

CLS depends almost entirely on the developer. A layout shift occurs any time a visible element changes its position from one rendered frame to the next, caused mainly by one of these issues:

  1. Data loading on the client side and populating the page after load.
  2. Images shifting in responsive designs.
  3. CSS lag on the page.
  4. Font swapping.

Skeleton loaders, alternative content, and fixed heights and widths are just a few of the factors that contribute majorly to fixing CLS. Let's understand with an example (see the sketch below). You call an API from the client and have to render a block with its data: a simple UI with some text, some card components, then some text again, where the number of cards rendered depends on the API response. The API takes some time to respond. Your page has already rendered some of the content, i.e. the text, and is waiting for this response on the client; as soon as the response is received, cards suddenly appear between those texts. This is a classic case where CLS occurs due to client-side rendering of a component, along with many other problems, but for now let's focus on CLS.
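One way to fix the example above, sketched with plain DOM code; the /api/cards endpoint and the class names are hypothetical:

const container = document.getElementById('cards');

// Reserve the block's space up front so the text around it never moves.
container.style.minHeight = '200px';

// Show fixed-height skeleton placeholders immediately.
container.innerHTML = '<div class="skeleton"></div>'.repeat(3);

fetch('/api/cards')
  .then((res) => res.json())
  .then((cards) => {
    // The cards replace the skeletons inside already-reserved space,
    // so no layout shift occurs when the response arrives.
    container.innerHTML = cards
      .map((c) => `<div class="card">${c.title}</div>`)
      .join('');
  });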

To avoid these problems later in production, some checks must be performed during development. Chrome Lighthouse is an automated tool integrated with the browser that helps improve performance. The developer should check CLS and the other metrics regularly while coding to catch such issues and work on improving the scores. Besides Lighthouse, a PerformanceObserver can be used to access the metric data and debug these measures from the console. For CLS, one such script is:

// Logs every layout shift and keeps a running CLS total in the console.
let clsTotal = 0;
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Shifts that happen right after user input do not count towards CLS.
    if (entry.hadRecentInput) continue;
    const shift = {
      value: entry.value,
      startTime: entry.startTime,
      sources: entry.sources // the elements that moved
    };
    clsTotal += shift.value;
    console.warn("CLS Caused", shift);
  }
  console.warn("CLS VALUE: ", clsTotal);
  if (clsTotal > 0.05) {
    console.warn("CLS VALUE TOO HIGH!!! PLEASE REDUCE CLS");
  } else {
    console.warn("GOOD CLS SCORE");
  }
}).observe({ type: 'layout-shift', buffered: true });

You can now see the components causing CLS in your console.


Wrapping Up

Page speed is in many ways like a game that every developer should play seriously, beast mode from the start, and every game has its set of rules that the player must follow in order to win. So here are some of the rules that must be followed to win this game of page speed:

Reduce media size and use the right formats

Please, please compress your images and videos before using them. No one likes to wait for images or videos to appear after all the other content has loaded; users simply ignore slowly loading images, which leaves a negative impression. Images should be compressed, and using next-gen image formats like WebP is a must. There are many tools available on the internet to convert images, so please use them.

(Image: Size comparisons for next-generation image formats)
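One way to batch the conversion is the sharp npm package (npm install sharp); a small sketch with illustrative file names:

const sharp = require('sharp');

// Convert a PNG to lossy WebP; quality 80 is usually visually
// indistinguishable while being far smaller than the original.
sharp('hero.png')
  .webp({ quality: 80 })
  .toFile('hero.webp')
  .then((info) => console.log('WebP written:', info.size, 'bytes'));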

Minify your Scripts and CSS

Minification is a process that removes all unnecessary characters from the source code of the site. Remember, this does not change the functionality of the code; it just removes characters like white space, line breaks, and comments. These characters help with the readability of the code but increase file sizes, which in turn take more time to download from the server. Minifying reduces file size, allowing more efficient transmission over the internet; with webpack 5, production builds already minify the code automatically. So, minifying helps reduce the size of the HTML, CSS, and JS files, leading to faster page loads and better data transmission to the user.
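A minimal webpack 5 sketch: production mode minifies JavaScript out of the box via Terser, while CSS needs an explicit minimizer such as css-minimizer-webpack-plugin (npm install css-minimizer-webpack-plugin):

const CssMinimizerPlugin = require('css-minimizer-webpack-plugin');

module.exports = {
  mode: 'production', // enables JS minification automatically
  optimization: {
    minimize: true,
    // '...' keeps webpack's default JS minimizer and adds the CSS one.
    minimizer: ['...', new CssMinimizerPlugin()],
  },
};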

Use a CDN Service

A content delivery network (CDN) is a geographically distributed network that delivers websites and other web content to the end user. Basically, it delivers a website's static content, like HTML, CSS, images, and JavaScript, through web servers that are closer to the user's physical location.

For example, if an origin server is located in the USA and a user opens the website from Singapore, the website will take longer to load because the information has to travel further. A CDN has many data centers in different regions throughout the world, allowing the user in Singapore to have the website load from the Singapore data center instead of the one in the USA.

Make your websites mobile responsive

Optimizing the website just for desktop is not enough anymore. Search engines index mobile differently from desktop, network speeds differ from desktop devices, and there are many more differences. It is now important to make sure the website is mobile responsive; one tool to check this is Google's mobile-friendly test.

Reduce the number of HTTP(s) server requests

HTTP(S) is the request/response method a web browser uses to fetch files from the web server. The more external resources your website references, the more HTTP(S) requests it makes, and little by little your site slows down. One idea to reduce the number of server requests is to bundle your JS and CSS into single files, so the server doesn't have to answer many separate requests to fetch them.
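A quick way to see how many requests a page actually makes is the standard Resource Timing API, run from the browser console:

const resources = performance.getEntriesByType('resource');
console.log(`This page made ${resources.length} requests`);

// Group the requests by initiator to spot what to bundle or remove.
const byType = {};
for (const r of resources) {
  byType[r.initiatorType] = (byType[r.initiatorType] || 0) + 1;
}
console.table(byType);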

Conclusion

These are just some of the rules you should remember; there are many others, like leveraging browser caching, code splitting and bundling, reducing server response time, and more. It is important to experiment with these and find the points where the website is lacking.
These steps should be thought about from the very beginning of the website's implementation, making sure to follow best practices and avoid the downsides that neglecting them can bring. And although necessary at the beginning, in a digital culture performance should be constantly monitored and improved, because it changes over time and can certainly impact your business.