Preventing Search Crawlers on a Page-Level Basis
KCS_Integration
September 10, 2020


Issue

By default, search engines index every page on your site, but you want to exclude certain pages from their indexes while leaving others indexable.

 

 


Solution

To prevent most search engine web crawlers from indexing a page on your site, place the following meta tag in the <head> section of the page:

<meta name="robots" content="noindex">
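For placement, here is a minimal sketch of a page with the tag in its <head>; the title and body text are placeholders for illustration, not from the original article:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Tells most search engine crawlers not to index this page -->
    <meta name="robots" content="noindex">
    <title>Internal Landing Page</title>
  </head>
  <body>
    <p>Page content that should stay out of search results.</p>
  </body>
</html>
```

The tag must appear in the <head>; crawlers do not honor it elsewhere in the document.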

 

To prevent only Google web crawlers from indexing a page:

<meta name="googlebot" content="noindex">

 

To also stop crawlers from following the links on a page, add nofollow after noindex inside the content attribute, separated by a comma:

content="noindex, nofollow"


Example use:

<meta name="robots" content="noindex, nofollow">


The "nofollow" directive tells the crawler not to follow the links on the page, while "noindex" only blocks the page itself from being indexed.

 

Within Marketo, you can add the tag to a template, and no page using that template will be indexed. You can also ask Support to configure the Images and Files directory in Design Studio so it is not indexed; documents such as whitepapers will then not appear in search engine results.

 

 

