A few website owners have awoken to an email from Google this morning, alerting them to the fact that Googlebot cannot access CSS and JS files on their websites.

It is just like having a loved and eager dog pawing at the door so it can go outside to play.

These messages have been a long time coming.

They signal a move from Google HQ to enforce its ultimatum from last year: stop blocking the search engine from getting its paws all over our websites.

Here is what all the fuss is about.

The robots.txt file and an obedient Google

Since the emergence of web index services and search engines, webmasters have used a little file called robots.txt to instruct such indexing services to keep their little paws out of certain parts of a website.

For example, every website has a host of files it needs to function but which are meaningless to humans, such as style.css, which contains code telling browsers what colour to make text appear, how much space to leave between items on a page, and so on.

Common practice from the early days of web design has been to help search engine crawlers spend their time indexing the good stuff (like the article you’re reading now) and to keep them away from the code, which would be pointless in search results.
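To make that concrete, here is a sketch of the kind of robots.txt rules that old advice produced. The directory paths are illustrative assumptions, not taken from any particular site:

    # Keep all crawlers out of code and script directories
    User-agent: *
    Disallow: /wp-includes/
    Disallow: /css/
    Disallow: /js/

Rules like these are exactly what triggers Google’s warning email today, because they stop Googlebot from fetching the CSS and JS it needs to render a page the way a visitor sees it.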

But over the years, search engines have become smarter and today they know what to present as search results and what to ignore.

Fetch, Google. Good, Google

However, as the mobile world has descended upon us, Google has become anxious to read website code so it can judge how mobile-friendly a site is and decide whether to show it or hide it in search results on handheld devices.

And being the ever faithful, ever obedient search engine that it is, when Google finds a robots.txt file telling it to stay away from certain parts of a website, it stays away. Because it then cannot see how those pages really render, it loses its ability to fully trust the website.

The good news is that, just as a fully trained dog can be let off the leash and still behave itself, Google has convinced webmasters to trust it without a restrictive robots.txt file in place. Except for some very rare circumstances beyond the scope of this article, the majority of our client base can simply remove the file or replace it with a blank robots.txt.
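For reference, a minimal “allow everything” robots.txt, the standard form of the blank file we are describing, looks like this:

    # An empty Disallow means nothing is off limits
    User-agent: *
    Disallow:

With this in place (or no robots.txt file at all), Googlebot can fetch every CSS and JS file it needs.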

This can be done within the SEO plugin currently in use on many Baker Marketing websites, or directly within the file directory via an FTP program such as FileZilla.

If you’d like some help remedying the situation, just reach out to the Baker Marketing team and we can fix it quickly and affordably.
