Can Googlebot Crawl JavaScript and Read the DOM?

Accounts of Google successfully crawling JavaScript go back to 2008, though at that time its crawling ability was largely restricted.

Fast-forward to 2014: Google has not only progressed in terms of the types of JavaScript it can process, but it has also made noteworthy strides in interpreting complete web pages.
In addition, the way Google has evolved in reading DOM elements deserves praise as well.


Here is how Google handles JavaScript and the DOM:

JavaScript Based Redirects

Google interprets JavaScript redirects much like 301 redirects. The indexing speed matches that of 301s, and the same holds from a ranking point of view.

As with 301s, the destination URL replaces the redirected URL in Google’s search index.

Although Google treats JavaScript redirects as legitimate, especially when you don’t have access to the web server, it still recommends using a 301 redirect whenever possible.
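As an illustration (the URL is hypothetical), a JavaScript redirect that Google treats like a 301 might look like this:

```html
<!-- Hypothetical example: Google processes this client-side redirect
     like a 301, and the destination replaces this URL in its index. -->
<script>
  // replace() swaps the current page for the new URL without
  // leaving the old page in the browser history.
  window.location.replace("https://www.example.com/new-page/");
</script>
```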

Links Inside JavaScript

Contrary to what many SEOs assumed, Googlebot can render and follow most links embedded inside JavaScript, including those placed outside the href attribute-value pair but inside an onclick attribute, or called from within the pair. The same goes for URLs triggered by an event handler.

Even links inserted as plain strings inside JavaScript can be crawled as links by Google.
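A few hypothetical sketches of the link patterns described above, all of which Googlebot has been shown to follow:

```html
<!-- A link triggered from an onclick attribute rather than an href -->
<a onclick="window.location.href='https://www.example.com/products/'">Products</a>

<!-- A URL navigated to by an event handler -->
<button id="about">About us</button>
<script>
  document.getElementById("about").addEventListener("click", function () {
    window.location.href = "https://www.example.com/about/";
  });

  // Even a full URL sitting in JavaScript as a plain string can be
  // discovered and crawled as a link.
  var contactUrl = "https://www.example.com/contact/";
</script>
```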

Dynamically Inserted Content

Google can crawl dynamically inserted content, such as text and images loaded into a DOM element. For compatibility, Google recommends using the AngularJS framework together with the HTML5 History API, and discourages relying on AJAX for SEO purposes.
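A minimal sketch (page content and URLs are invented) of text and an image inserted into a DOM element after load, which Google can still index because it renders the page:

```html
<!-- Hypothetical example: content that never appears in the raw HTML source -->
<div id="description"></div>
<script>
  // Text inserted into the DOM is indexable once Googlebot renders the page.
  var el = document.getElementById("description");
  el.textContent = "Hand-made leather wallets, shipped worldwide.";

  // The same applies to images added dynamically.
  var img = document.createElement("img");
  img.src = "/images/wallet.jpg";
  img.alt = "Brown leather wallet";
  el.appendChild(img);
</script>
```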

Dynamically Inserted Meta Data & Page Elements

Google can crawl all tags inside the DOM as if they were placed in the HTML source code. These include title elements, meta descriptions, meta robots, and canonical tags.
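For example (a hypothetical page), meta data written into the DOM via JavaScript is read as if it sat in the static head of the document:

```html
<!-- Hypothetical example: page elements injected with JavaScript -->
<script>
  // Title element set dynamically
  document.title = "Leather Wallets | Example Store";

  // Meta description added to the head
  var desc = document.createElement("meta");
  desc.name = "description";
  desc.content = "Shop hand-made leather wallets at Example Store.";
  document.head.appendChild(desc);

  // Canonical tag added to the head
  var canonical = document.createElement("link");
  canonical.rel = "canonical";
  canonical.href = "https://www.example.com/wallets/";
  document.head.appendChild(canonical);
</script>
```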

The Conflict: Source Code vs the DOM

Suppose a nofollow tag is placed both in the source code and in the DOM for a link. Which one will Google consider?

The nofollow in the source code works as expected. A nofollow applied to the URL inside the DOM, however, behaved unexpectedly: despite the nofollow tag, the link was processed as dofollow and indexed. This is not Google preferring particular elements. Googlebot can crawl regular HTML much faster than it can execute JavaScript, so the link was processed before the JavaScript function that adds the nofollow attribute had run.

When both the link and the tag are inserted inside a DOM element, the link is processed as expected.
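The two cases can be sketched as follows (hypothetical markup and URLs):

```html
<!-- Case 1: link in the source, nofollow added later by JavaScript.
     The plain HTML is crawled first, so the link may already be
     processed as dofollow before this script ever runs. -->
<a id="partner" href="https://www.example.com/partner/">Partner</a>
<script>
  document.getElementById("partner").rel = "nofollow";
</script>

<!-- Case 2: link and nofollow inserted together into the DOM.
     The link only ever exists with its nofollow attribute, so it
     is processed as expected. -->
<div id="links"></div>
<script>
  var a = document.createElement("a");
  a.href = "https://www.example.com/sponsor/";
  a.rel = "nofollow";
  a.textContent = "Sponsor";
  document.getElementById("links").appendChild(a);
</script>
```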

The Change in Scenario

The SEO industry is generally obsessed with plain text in the source code and wary of anything inside an element container. With this development, though, SEOs no longer have to worry about scattering plain text everywhere. They can place content in whichever element they prefer, whether a JavaScript file or a DOM element inside a webpage.

However, if you also target Bing and Yahoo, reconsider moving away from plain text, as both are still only capable of handling plain text.

Author Bio (Byline):

Jignesh Parmar is a digital marketing analyst and strategist at Intesols. For the last three years, he has been helping small businesses and start-ups find the right online marketing tactics. He is passionate about everything digital, and his life revolves around the online marketing world. You can follow him on Twitter or reach out to him via LinkedIn.

Guest Author

This article was written by a Guest Author. If you want to guest post on this blog, please go through the Write For Us page.