How to Add a Custom robots.txt File to Blogger?
Welcome to the AsifKamboh.com blog. In this post, I'll show you how to add a custom robots.txt file to a Blogger (Blogspot) blog.

Blogger does not let you upload a robots.txt text file to your blog's root directory. However, because the robots.txt file plays an important role in a blog, Blogger provides an option to add the file's contents under the Crawlers and indexing section of the settings.

If you want to add SEO-friendly (Search Engine Optimization) robots.txt information to your blog, you can do so by following these steps.


What Is the robots.txt File?
The robots.txt file is a text file uploaded to the root directory of a website or blog that contains instructions for search engine crawlers.

A robots.txt file tells search engine crawlers which pages or files on your blog they may crawl and index in search results, and which pages they may not.
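
For illustration, a minimal robots.txt might look like the sketch below (the /private/ path is only a placeholder, not a real Blogger directory): it lets all crawlers visit everything except one folder.

User-agent: *
Allow: /
Disallow: /private/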

How to Add robots.txt File to Blogger?
Go to your Blogger dashboard and select the blog to which you want to add custom robots.txt information.

Then go to Settings > Crawlers and indexing, turn on the "Enable custom robots.txt" option, and click Custom robots.txt underneath it.

After that, a custom robots.txt window will pop up. Paste the following SEO-friendly robots.txt information into this window and click Save.

User-agent: *
Allow: /
Disallow: /search

Sitemap: https://example.blogspot.com/sitemap.xml

Before adding the above code to your blog, replace the "example.blogspot.com" URL with your own blog's URL address.

Once you add the above robots.txt information to your blog, your custom robots.txt file is effectively served from your blog's root directory. If you want to verify it, add "/robots.txt" to the end of your blog's URL address and visit that address in a browser.

For example: example.blogspot.com/robots.txt

When you open your blog's robots.txt URL and see your file's contents on that page, it means you have successfully created a custom robots.txt file for your blog.

Robots.txt File Additional Settings and Explanations.

User-agent: The name of the search engine robot (crawler) that the following rules apply to.
Allow: A directory, page, or file that robots are allowed to crawl and index.
Disallow: A directory, page, or file that robots are not allowed to crawl and index.
Sitemap: The location of the blog's sitemap, which tells the robot at crawl time where the sitemap can be found.

User-agent:
In the robots.txt information above, the "*" in User-agent: * means you have allowed all robot crawlers to crawl your blog. If you want a rule to apply to only one crawler, add its name in place of the "*", such as User-agent: Googlebot-Image.
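
For instance, here is a sketch that allows only Google's image crawler and blocks all other robots. Crawlers obey the most specific group that matches their name, so Googlebot-Image follows its own rules while every other robot falls through to the "*" group.

User-agent: Googlebot-Image
Allow: /

User-agent: *
Disallow: /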

If you want to allow more than one specific user agent to crawl, you can list them one by one. Below is an example that allows multiple robots to crawl the blog.

User-agent: Adsbot-Google
Allow: /

User-agent: Googlebot-Image
Allow: /

User-agent: Googlebot-Mobile
Allow: /

User-agent: Mediapartners-Google
Allow: /

Allow:
The Allow rule is used in the robots.txt file to let robots crawl a specific blog directory. You can use this rule to allow a robot to crawl a single post or page on your blog. For example, if I only wanted robots to crawl my blog's static pages, I would replace the Allow: / rule with Allow: /p in my blog's robots.txt file, since Blogger serves static pages under the /p/ directory.

For example, if you only want to allow a robot to crawl a specific page or post, the Allow rules would look something like this inside the robots.txt file.

Allow: /p/example-page.html
Allow: /2020/05/example-post.html
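
Allow is especially useful for making an exception to a broader Disallow rule. For example, this sketch keeps Blogger's /search pages blocked but still permits a single label page (the "SEO" label name is only a placeholder): the longest matching rule wins, so the label page stays crawlable.

User-agent: *
Disallow: /search
Allow: /search/label/SEO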

Disallow:
The Disallow rule is used to prevent robots from crawling a specific blog directory. By using this rule in the robots.txt file, you can block one or more of your blog's pages and posts from being indexed by search engines.

For example, if you want to prevent a robot from crawling a specific page or post, the Disallow rules would look something like this inside the robots.txt file.

Disallow: /p/example-page.html
Disallow: /2020/05/example-post.html
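
Disallow also accepts directory prefixes. For example, this sketch blocks every post published in May 2020 as well as all static pages (the date is a placeholder; Blogger serves posts under /year/month/ and static pages under /p/).

User-agent: *
Disallow: /2020/05/
Disallow: /p/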

Sitemap:
The Sitemap line is used in the robots.txt file so that whenever a robot crawls the blog, it is told in which directory the blog's sitemap is located.

If your blog has more than one sitemap, you can point the robot to all of them, as in the example below.

Sitemap: https://example.blogspot.com/sitemap.xml
Sitemap: https://example.blogspot.com/sitemap-pages.xml
Sitemap: https://example.blogspot.com/feeds/posts/default
Sitemap: https://example.blogspot.com/atom.xml?redirect=false&start-index=1&max-results=500

How to Configure Custom Robots Header Tag Settings?
A blog post can appear at more than one URL on the blog, such as in search results, tags, and labels. This creates several different URLs for the same post, but we only want the actual URL of the post published in search engines, which is why we use the custom robots header tags settings.

Go to your Blogger dashboard > Settings > Crawlers and indexing, and enable the "Enable custom robots header tags" option.

Once you have enabled this option, the Home page tags, Archive and search page tags, and Post and page tags options become available (they are hidden until then). Then enable and disable the settings as shown in the image below, and save.

(Image: Blogger custom robots header tags settings according to SEO.)

Once you have completed the above settings, your blog's homepage, posts, and pages will be indexed in search engines, while archive and search pages will not be indexed.
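
In other words, the settings amount to the following (a sketch inferred from the behavior described above; the exact labels in the Blogger interface may differ slightly):

Home page tags: all
Archive and search page tags: noindex
Post and page tags: all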

Well, in this post I have tried to cover all of Blogger's robots settings. I hope all this information is helpful to you.