SOLVED

Google indexing duplicate URLs, how to prevent this?

ManuMathew1994
Level 3

I have noticed multiple URLs like these in Google Search Console (which means Google is indexing them):

https://www.test.com/mypage.2.html

https://www.test.com/mypage.3.html

whereas only https://www.test.com/mypage.html should be indexed.

Because of this, Google is raising 'duplicate URL exists' and 'page indexed without content' issues, which has resulted in a significant drop in rankings.

 

Any suggestions?


7 Replies
Shashi_Mulugu
Community Advisor

@ManuMathew1994 Are you using selectors to access pages? If yes, does the content differ based on the selector passed? Does the page metadata differ as well? Based on your answers to these questions, we can guide you better.

ManuMathew1994
Level 3

Hi 

 

No, for this particular page we don't use any selectors, but the page seems to be accepting selectors. The content, however, is the same for all requests regardless of how many selectors are added.

bsloki
Correct answer by
Community Advisor

Hi @ManuMathew1994 

 

Do you see these pages with the selectors in your dispatcher cache? If not, one way to avoid this is to configure your dispatcher so that pages with selectors cannot be accessed at all.
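For illustration, a minimal sketch of what such a rule could look like in the /filter section of your dispatcher farm file; the rule number, the /content path, and denying every selector are assumptions, so adjust them to whatever selectors your site legitimately serves:

    /filter {
      # ... existing allow/deny rules ...
      # Deny requests to .html pages under /content that carry any selector,
      # e.g. /mypage.2.html or /mypage.3.html
      /0150 { /type "deny" /extension "html" /selectors '.+' /path "/content/*" }
    }

Keep in mind that a blanket deny like this will also block any selectors your components rely on, so test it on a lower environment first.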

 

Also, here is a tip to remove them from the Google index if needed: https://developers.google.com/search/docs/advanced/crawling/remove-information

 


VKumar2
Level 2

Hi @ManuMathew1994 - Assuming the pages are not supposed to be accessed using selectors, you could write a rewrite rule in the Apache rewrite module that redirects such requests to the original URL without the selector.
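As a rough sketch, such a rule in the virtual host (with mod_rewrite enabled) could look like the following; the pattern assumes purely numeric selectors like .2.html and is only illustrative:

    RewriteEngine On
    # Permanently redirect /mypage.<digits>.html to /mypage.html
    RewriteRule ^/(.+)\.\d+\.html$ /$1.html [R=301,L]

A 301 tells Google the selector URL is a permanent duplicate of the base page, which also helps consolidate ranking signals.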

A canonical URL is another option you can try: have each page emit its own base URL as the canonical, which forces the selector variants to point back to it. However, if the pages are already indexed on Google, you should also submit a re-crawl request after implementing the required changes.
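For reference, the self-referencing canonical tag rendered in the page head would look roughly like this; how you emit it (for example from your page component) depends on your implementation:

    <!-- Selector variants such as /mypage.2.html point back to the base page -->
    <link rel="canonical" href="https://www.test.com/mypage.html"/>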

Thank you.