Use Canonical or Robots.txt for Map View URL without Backlink Potential
-
I have a Page X with lots of unique content. This page has a "Map view" option, which displays some of the info from Page X, but a lot is omitted. Questions:
-
Should I add a canonical tag even though the Map View URL displays only some of the info from Page X? Or should I block it in robots.txt, or use noindex, follow? I don't see any backlinks pointing to the Map View URL.
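For reference, this is what the canonical option would look like: a link element in the head of the map-view page pointing back to the full page (URLs here are hypothetical placeholders):

```html
<!-- In the <head> of the map-view page, declaring Page X as canonical -->
<link rel="canonical" href="https://example.com/page-x/">
```

Note the canonical tag is a hint, not a directive, so Google can choose to ignore it if the two pages aren't near-duplicates.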
-
Should the Map View page have a unique H1, title tag, and meta description?
-
-
Thank you!
-
Sounds good! Glad to hear you got a solution sorted. Will be interested to hear how it goes.
-
Thanks for the feedback. I created a "/map/" folder in the URL structure and added it to robots.txt. Again, these pages are simply a "Map view" option for users, with little or no unique content, and there are no plans to change that since the main page has all the unique content and is indexed.
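For anyone reading along, assuming all the map-view pages live under that "/map/" folder, the robots.txt entry would look something like:

```
User-agent: *
Disallow: /map/
```

One caveat: robots.txt blocks crawling, not indexing. URLs that are already indexed, or that Google discovers through links, can still appear in search results (usually without a snippet) even when they're disallowed.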
-
Hi there,
Unless the pages contain a lot of crossover duplicate content, there's a good chance Google might ignore the canonical tag anyway:
"One test is to imagine you don’t understand the language of the content—if you placed the duplicate side-by-side with the canonical, does a very large percentage of the words of the duplicate page appear on the canonical page? If you need to speak the language to understand that the pages are similar; for example, if they’re only topically similar but not extremely close in exact words, the canonical designation might be disregarded by search engines."
However, I wouldn't be able to make a strong case for noindexing the pages, unless you're sure they're not adding any value to users. Are these pages discovered by users in organic search (a landing pages report can help you isolate this)? If so, what's the user experience looking like? If users aren't finding their way to this page organically from search or direct (indicating they've bookmarked it), then you potentially could make a case for noindexing them. If they are reaching them as a landing page, you might want to think twice about noindexing.
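If noindexing does end up making sense, the standard approach is a robots meta tag in the head of each map-view page. Note the page must remain crawlable (i.e. not blocked in robots.txt) for Google to see the tag:

```html
<!-- Keeps the page out of the index but lets crawlers follow its links -->
<meta name="robots" content="noindex, follow">
```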
An alternative would be to build out these pages more, so they stand alone as unique, good-quality content.