"GeloCubed" Logo 2

JavaScript SEO in 2023: Things you need to know

To ensure search engine success, it's imperative you know how to identify any issues with your site, know whether your site is being rendered and indexed and, of course, how to make it SEO friendly!

In this blog, we’ll dive into JavaScript SEO, leaving no stone unturned. Here’s the menu for today, for your knowledge consumption:

  • What is JavaScript?
  • What is a JavaScript website?
  • How does the Google crawl bot work?
  • The different types of rendering
  • JavaScript SEO issues, and how to clear them

What is JavaScript?

JavaScript, more commonly referred to as JS, is a programming language used to build websites and apps.

JS sits on the same shelf as HTML and CSS, and enables websites to offer a greater level of interactivity through animated graphics, sliders, buttons, loaders, maps, and many other features.

Websites built on JavaScript are becoming ever more popular with developers: in 2019, 67.8% of developers were using JavaScript, with frameworks like Angular and React being used to create mobile and web apps, and single- and multi-page websites.

The creativity and interactivity that JavaScript enables doesn’t come without its drawbacks and difficulties, as we’ll see below.

What is a JavaScript site?

Traditionally, JavaScript was only used to add the icing on the cake and really bring a website to life on top of the primary content. Websites where JavaScript is used to modify or add critical content to a page are considered JavaScript sites.
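To make that concrete, here’s a minimal, hypothetical sketch (the element ID and API endpoint are made up for illustration): the page’s main content only exists once this script has run in the browser.

// main.js - hypothetical example of critical content being added by JavaScript
document.addEventListener('DOMContentLoaded', async () => {
  // Fetch the data that makes up the primary content of the page
  const response = await fetch('/api/products/42');
  const product = await response.json();

  // Inject it into an otherwise empty container in the HTML
  document.querySelector('#product').innerHTML =
    '<h1>' + product.name + '</h1><p>' + product.description + '</p>';
});

If the script never runs, the page has no product content at all, which is exactly why crawlers need to render pages like this.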

How does the Google crawl bot work?

To understand one of the issues with JavaScript sites, we need to look at how Google crawls and indexes websites. It starts by visiting and crawling a page, and passes the content it sees on to the indexing stage, where it’s parsed and stored in the index. When links are detected, they’re passed back to the crawler and the process continues.

But what happens when some of the content is generated by JavaScript?

An extra step is added, called the rendering stage. Google’s crawl bot executes JS when rendering the page, which is resource intensive. By separating indexing and rendering, Google can index non-JS content as fast as possible, to then come back to complete the process with JavaScript content later.

Here’s a diagram, courtesy of Google, to show the process.

With over 130 trillion web pages out there, delaying the execution of JavaScript lets Google review most content quickly whilst not overlooking JavaScript content. Having JavaScript websites rendered in a two-step process ultimately means indexing takes longer, and how much longer is a question that can’t be answered precisely.

Different types of rendering

If getting your content indexed fast is a priority, you’ll need to ensure your render type is suitable to your needs.

There are a few different types of rendering:

  • Server Side Rendering (SSR)
  • Client Side Rendering (CSR)
  • Dynamic Rendering

Server Side Rendering (SSR)

Server Side Rendering, which also goes by the names universal or isomorphic rendering, is a method that executes the JavaScript on the server and generates static HTML markup from it. The browser (and the crawler) receives a fully rendered HTML page, which can be easily crawled and indexed.

This type of rendering is preferred for SEO: the HTML markup takes little time to parse and doesn’t have to go through Google’s rendering stage, meaning content can be indexed faster.

However, SSR can be complicated and adds extra work for developers, but there are tools for the different JS frameworks available to help implement this rendering method, such as Next.js (React), Angular Universal (Angular) and Nuxt.js (Vue.js).
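As a minimal sketch of what SSR looks like in practice, here’s a hypothetical Next.js page (the data endpoint is made up): the data is fetched and the HTML is assembled on the server, so the browser and the crawler both receive the finished markup.

// pages/services.js - hypothetical Next.js page rendered on the server
export async function getServerSideProps() {
  // Runs on the server for every request
  const res = await fetch('https://example.com/api/services');
  const services = await res.json();
  return { props: { services } };
}

export default function Services({ services }) {
  // By the time this HTML reaches the client, the list is already populated
  return (
    <ul>
      {services.map((service) => (
        <li key={service.id}>{service.name}</li>
      ))}
    </ul>
  );
}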

Client Side Rendering (CSR)

Client Side Rendering sits on the opposite end of the spectrum. JavaScript is rendered on the client’s end, by their browser, using the DOM (Document Object Model). It is here that the challenges of crawling, rendering and indexing occur, as Google’s bots try to understand the content.

Essentially, the client receives the JavaScript file plus a bare-bones HTML document, and it all has to be rendered then and there.
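For illustration, this is roughly the kind of document a client-side rendered site delivers (a hypothetical example): an empty shell plus a script tag, with nothing meaningful in the HTML until the JavaScript executes.

<!-- index.html - the shell a CSR app ships; #root is empty until app.js runs -->
<!DOCTYPE html>
<html>
  <head>
    <title>Services</title>
  </head>
  <body>
    <div id="root"></div>
    <script src="/app.js"></script>
  </body>
</html>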

There are benefits to CSR. Although it generates HTML on demand, it won’t re-render or refresh the whole page, as otherwise happens with regular HTML pages, which saves a lot of computing power. Moreover, with Client Side Rendering, UI components can be reused across multiple pages without having to send requests each and every time, which improves on-page performance. Overall, this means CSR can perform better than SSR.

There are still issues with Client Side Rendering, namely the two-wave indexing described above, and caching issues: as the full HTML is not part of the initial response, browsers cannot cache the HTML structure of the page.

In a very simplified analogy, the difference between these two render types is similar to purchasing a new office chair: you can either buy one pre-assembled from the store, where the effort falls on the staff (Server Side Rendering), or you can buy an IKEA chair and assemble it yourself, which is more time consuming and resource intensive for you (Client Side Rendering).

Dynamic Rendering

Dynamic rendering is a hybrid technique where the rendering switches between Client Side and Server Side, depending on the user agent.

It’s extremely useful for resource-heavy websites that don’t want to compromise on performance. By delivering a proper client-side rendering experience to users, with all its benefits, and delivering static HTML content to crawlers so that as much content as possible gets across fast, you get the best of both worlds.

Here’s another diagram to portray what happens behind the scenes.

But is that not cloaking? Well, no: if the content served is essentially the same, it’s not cloaking. If the content served were vastly different, then it would be. The only difference here is the level of interactivity, as the crawler gets a static HTML document whereas the client’s browser gets the whole shebang.
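A rough sketch of how this can look on the server (the bot list and the prerender URL are hypothetical; in practice you’d typically lean on a prerendering service or a tool like Rendertron): the server checks the user agent and decides which version to serve.

// server.js - minimal dynamic rendering sketch with Express (Node 18+ for fetch)
const express = require('express');
const app = express();

const BOT_AGENTS = ['googlebot', 'bingbot', 'duckduckbot', 'baiduspider'];

app.use(async (req, res, next) => {
  const ua = (req.headers['user-agent'] || '').toLowerCase();
  const isBot = BOT_AGENTS.some((bot) => ua.includes(bot));

  if (isBot) {
    // Crawlers get static, pre-rendered HTML (hypothetical prerender service)
    const html = await fetch('https://prerender.example.com' + req.url).then((r) => r.text());
    return res.send(html);
  }

  next(); // regular users fall through to the normal client-side rendered app
});

app.use(express.static('dist')); // the CSR build
app.listen(3000);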

JavaScript SEO issues, and how to clear them

When facing a JS site as an SEO, it’s not unheard of to encounter issues. Here are a few that we’ve personally encountered in the past.

1. Infinite scroll issues

Infinite scroll not only looks cool but has some advantages over traditional pagination. The issue is that crawlers can’t access all of the content, as they can’t interact with pages and, well, scroll.

To resolve this, parallel pagination with static links needs to be implemented, to ensure all content can be indexed.
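One way to do this (a hypothetical sketch; the selectors and URL structure are assumptions) is progressive enhancement: plain, crawlable anchor links to paginated URLs sit in the HTML, and the infinite scroll is layered on top for users.

// listing.js - hypothetical sketch: infinite scroll on top of real paginated links
const nextLink = document.querySelector('a[rel="next"]'); // e.g. <a rel="next" href="/blog/page/2">

if (nextLink) {
  new IntersectionObserver(async (entries, observer) => {
    if (!entries[0].isIntersecting) return;
    observer.disconnect(); // this simple sketch only loads one extra page

    // Fetch the next paginated page and append its items to the current list
    const html = await fetch(nextLink.href).then((response) => response.text());
    const nextDoc = new DOMParser().parseFromString(html, 'text/html');
    document.querySelector('#items').append(...nextDoc.querySelectorAll('#items > *'));
  }).observe(nextLink);
}

Crawlers ignore the script and simply follow the /blog/page/2, /blog/page/3 links, so every item remains reachable and indexable.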

2. Tool crawling issues

Your tools may not be set up to crawl JS sites by default, so it’s important to configure them properly! Here we’ll show how to configure the Screaming Frog SEO Spider to crawl JavaScript sites.

Firstly, open up the Spider and hit the Configuration tab.

Screaming Frog

Then hit Spider, and switch to the Rendering tab.

Screaming Frog rendering settings

By default, the rendering will be set to Text Only, so make sure it’s changed to JavaScript. Now you will be able to crawl JS sites and get to work on your technical SEO!

Note: you can also bypass robots.txt files in the settings if you are still having trouble crawling the pages.

3. Robots.txt file blocking resources

Sometimes your robots.txt file can block the crawling of important resources, so we must ensure that it allows JS (and, by the same token, CSS) to be crawled. Here is a standard addition to the file to ensure these resources aren’t blocked:

User-Agent: Googlebot
Allow: .js
Allow: .css

4. Client Side Rendering issues

JS rendered on the client side is unable to return server errors (such as a 404 status code) in the way SSR content can. This is another reason Dynamic Rendering is favourable for JavaScript sites.
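To make the difference concrete, here’s a hypothetical Express sketch (routes and data are made up): with server-side involvement, a missing page can return a genuine 404 status, whereas a purely client-side app typically answers 200 and only shows a "not found" message after the JavaScript runs (a soft 404).

// status.js - hypothetical sketch of a server returning real status codes
const express = require('express');
const app = express();

const pages = { services: '<h1>Our services</h1>' }; // stand-in for real rendered pages

app.get('/:slug', (req, res) => {
  const html = pages[req.params.slug];
  if (!html) {
    // The crawler receives a real 404, not a 200 with an error message rendered by JS
    return res.status(404).send('<h1>Page not found</h1>');
  }
  res.send(html);
});

app.listen(3000);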

5. Pages not being indexed

As we have seen, JavaScript as a whole has issues with page indexing when running on CSR. Another way pages don’t get indexed is when URLs contain a #, as Google generally ignores everything after the fragment. It’s important to keep hashes out of your URLs!

www.gelocubed.com/#/services ❌

www.gelocubed.com/services ✅

Static URLs should be used, otherwise your pages may not be indexed.
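How you achieve this depends on your router, but the underlying mechanism is the History API. As a hypothetical sketch, navigating with pushState produces clean, indexable paths rather than fragment URLs:

// router.js - hypothetical SPA navigation using the History API instead of location.hash
function renderRoute(path) {
  // Stand-in for the real view logic: swap the main content based on the path
  document.querySelector('#app').textContent = 'Now showing ' + path;
}

document.addEventListener('click', (event) => {
  const link = event.target.closest('a[data-internal]');
  if (!link) return;

  event.preventDefault();
  history.pushState({}, '', link.getAttribute('href')); // e.g. /services, not /#/services
  renderRoute(window.location.pathname);
});

window.addEventListener('popstate', () => renderRoute(window.location.pathname));

Most SPA routers expose this as a simple mode switch, for example React Router’s BrowserRouter instead of HashRouter, or Vue Router’s createWebHistory instead of createWebHashHistory.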

As you can see, JavaScript, while elevating the web development game, can be a real pain for SEOs! Hopefully this has brought some insight into the world of JavaScript SEO, and how to deal with JavaScript sites.

Here are some more blogs on the topic, which will provide further insight into the matter:

Ahrefs JS SEO guide

SEMrush JS SEO guide

ContentKing JS SEO best practices

If you’re seeing low levels of search engine traffic and want to boost your rankings, get in touch with us on Twitter, LinkedIn, or through our site today!

