Broken sitemaps vs no sitemaps at all?
-
The site I am working on is enormous. We have 71 sitemap files, all linked to from a sitemap index file.
The sitemaps are not up to par with "best practices" yet, and realistically it may be another month or so until we get them cleaned up.
I'm wondering if, for the time being, we should just remove the sitemaps from Webmaster Tools altogether. They are currently "broken," and I know that sitemaps are not mandatory, so perhaps they're doing more harm than good at this point. According to Webmaster Tools, there are 8,398,082 "warnings" associated with the sitemap, many of which appear to be sitemap URLs that are blocked by robots.txt.
I was thinking that I could remove them and then keep a close eye on the crawl errors/index status to see if anything changes.
Is there any reason why I shouldn't remove these from Webmaster Tools until we get the sitemaps up to par with best practices?
-
I think you can remove the sitemap since it returns so many warnings.
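Before deleting them, it may be worth measuring how many of those warnings really are the robots.txt conflict. A minimal sketch using only Python's standard library (the sitemap and robots.txt URLs are placeholders; swap in your own):

```python
# Hypothetical sketch: cross-check one sitemap file's URLs against
# robots.txt. SITEMAP_URL and the robots.txt URL are placeholders.
import urllib.request
import urllib.robotparser
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap1.xml"
USER_AGENT = "Googlebot"

# Load and parse robots.txt.
rp = urllib.robotparser.RobotFileParser("https://www.example.com/robots.txt")
rp.read()

# Pull every <loc> out of the sitemap (sitemaps.org namespace).
with urllib.request.urlopen(SITEMAP_URL) as resp:
    tree = ET.parse(resp)
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
locs = [el.text.strip() for el in tree.findall(".//sm:loc", ns) if el.text]

blocked = [u for u in locs if not rp.can_fetch(USER_AGENT, u)]
print(f"{len(blocked)} of {len(locs)} sitemap URLs are blocked by robots.txt")
for url in blocked[:20]:  # print a sample
    print(url)
```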
I don't think sitemaps carry much direct SEO benefit; rather, they help Google find pages that are hard to discover on your site or not reachable through regular href links.
So make sure your site has a good structure and that every page can be found by browsing your site (clicking on links from page to page), and you will be fine with or without a sitemap.
Use a link crawler such as Xenu's Link Sleuth if you are not sure all pages are reachable, or script a quick check yourself, as sketched below.
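A minimal breadth-first crawl, for instance (standard library only; START_URL is a placeholder):

```python
# Hypothetical sketch: breadth-first crawl of internal links, to verify
# that every page can be reached by clicking from page to page.
# START_URL is a placeholder -- point it at your own homepage.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
import urllib.request

START_URL = "https://www.example.com/"

class LinkParser(HTMLParser):
    """Collect href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

host = urlparse(START_URL).netloc
seen = {START_URL}
queue = deque([START_URL])

while queue:
    url = queue.popleft()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
    except (OSError, ValueError):
        print("unreachable:", url)
        continue
    parser = LinkParser()
    parser.feed(html)
    for href in parser.links:
        absolute = urljoin(url, href).split("#")[0]
        # Stay on the same host; skip anything already visited.
        if urlparse(absolute).netloc == host and absolute not in seen:
            seen.add(absolute)
            queue.append(absolute)

print(f"reached {len(seen)} internal pages by following links")
```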
Related Questions
-
Spotify XML Sitemap
All, I'm working on an SEO work-up for a Spotify site. It looks like they are using a sitemap that links to additional pages, but none of those links are actually linked within the sitemap. This feels like a serious error. https://lubricitylabs.com/sitemap.xml Thoughts?
Intermediate & Advanced SEO | dmaher -
Best server-side XML sitemap generator?
I have tried xml-sitemaps, which tends to crash when spidering my site(s) and requires multiple manual resumes that aren't practical for our businesses. Please let me know if any other server-side generators exist that could be a good fit for multiple enterprise-sized websites. Image sitemaps would also be helpful. One with multiple starting URLs would help with spidering/indexing the most important sections of our sites. Also, has anyone heard of or used Dyno Mapper? It also looks like a good solution for us, but I was wondering if anyone has had any experience with the product.
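For what it's worth, the multiple-starting-URLs idea is simple to script. A rough sketch in Python, where crawl() is a made-up stand-in for whatever spider you already run and the seed URLs are placeholders:

```python
# Hypothetical sketch: build sitemap.xml from a crawl seeded with
# multiple starting URLs. crawl() is a made-up stand-in for a real
# spider; the seed URLs are placeholders.
import xml.etree.ElementTree as ET
from datetime import date

START_URLS = [
    "https://www.example.com/products/",
    "https://www.example.com/blog/",
]

def crawl(seed):
    """Placeholder: return the set of URLs reachable from `seed`."""
    return {seed}  # substitute a real spider here

urls = set()
for seed in START_URLS:
    urls |= crawl(seed)

urlset = ET.Element("urlset",
                    xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for url in sorted(urls):
    entry = ET.SubElement(urlset, "url")
    ET.SubElement(entry, "loc").text = url
    ET.SubElement(entry, "lastmod").text = date.today().isoformat()

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8",
                             xml_declaration=True)
```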
Intermediate & Advanced SEO | recbrands -
Mobile Googlebot vs Desktop Googlebot - GWT reports - Crawl errors
Hi everyone, I have a very specific SEO question. I am doing a site audit, and one of the crawl reports is showing tons of 404s for the "smartphone" bot, with very recent crawl dates. Our website is responsive and we do not have a separate mobile version of the site, so I do not understand why the smartphone report has tons of 404s that the desktop report does not. I think I am misunderstanding something conceptually. I think it has something to do with this little message in the mobile crawl report: "Errors that occurred only when your site was crawled by Googlebot (errors didn't appear for desktop)." If I understand correctly, the "smartphone" report will only show URLs that are not on the desktop report. Is this correct?
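One way to see what the smartphone crawler is actually hitting is to scan your server logs. A rough sketch, assuming a combined-format access log (the log path and regex are assumptions; Googlebot's smartphone user agent contains "Mobile" alongside "Googlebot"):

```python
# Hypothetical sketch: count 404s per Googlebot variant in a
# combined-format access log. The log path and regex are assumptions;
# adjust both for your server's actual log format.
import re
from collections import Counter

LOG_PATH = "access.log"  # placeholder

# ... "GET /path HTTP/1.1" 404 ... "user agent" at end of line
LINE_RE = re.compile(
    r'"[A-Z]+ (?P<path>\S+) [^"]*" (?P<status>\d{3}) .*"(?P<ua>[^"]*)"$'
)

counts = Counter()
with open(LOG_PATH) as log:
    for line in log:
        m = LINE_RE.search(line)
        if not m or m.group("status") != "404":
            continue
        ua = m.group("ua")
        if "Googlebot" not in ua:
            continue
        # The smartphone crawler's user agent includes "Mobile".
        variant = "smartphone" if "Mobile" in ua else "desktop"
        counts[(variant, m.group("path"))] += 1

for (variant, path), n in counts.most_common(20):
    print(f"{variant:10} {n:6} {path}")
```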
Intermediate & Advanced SEO | Carla_Dawson -
Nofollow vs. dofollow: the how-to
Hi guys, sorry if this is an amateur question; I just wanted to ask. I noticed a few people talking about nofollow and dofollow for backlinks. Is there supposed to be some way to set your website up as nofollow or dofollow for backlinks? I noticed a few people saying to make sure that some directories are nofollow. I would like to know if I can set this up for my own site, as I'm a bit conscious and paranoid about others who might backlink to my site with huge spam or negative SEO, etc. Any insight into this would be much appreciated. Thanks, all
Intermediate & Advanced SEO | edward-may -
H2 vs. H3 Tags for Category Navigation
Hey, all. I have a client that uses H2 tags in the navigation for its blog. For example, H2 tags might appear around "Library," "Recent Posts," etc. This is handled through their WordPress theme. This seems fairly standard, but I wonder whether those tags are semantically appropriate. Since each blog post is fairly lengthy (about 500-1,000 words) with multiple heading tags, would it be more appropriate to use H3 tags for this menu navigation? Are we cutting into the effectiveness of our H2 tags by using them for menu navigation? The navigation is certainly an important page element, and it structures content, so it seems that it should use some header tag. Anyway, your thoughts are greatly appreciated. I'm a content creator, not an SEO, so this is a bit out of my skill set.
Intermediate & Advanced SEO | Ask4443523 -
Extensions vs. Non-Extensions
Hello, I'm a big fan of clean URLs. However, I'm curious what you guys do to remove extensions in a friendly way that doesn't cause confusion.

Standard URLs:
http://www.example.com/example1.html
http://www.example.com/example2.html
http://www.example.com/example3.html
http://www.example.com/example4.php
http://www.example.com/example5.php

What looks better (in my eyes):
http://www.example.com/example1/
http://www.example.com/example2/
http://www.example.com/example3/
http://www.example.com/example4/
http://www.example.com/example5/

Do you:
1. Keep extensions throughout your website, avoiding any confusion and page duplication; OR
2. Put a canonical link on each page pointing to the extension-less version, anticipating that the extension-less version gets indexed by Google and other search engines; OR
3. 301 each page that has an extension to an extension-less version, and remove all linking to ".html" site-wide (which causes errors within software like Dreamweaver, but works properly); OR
4. Another way?

Apologies if this is a little vague; I appreciate any angles on this. I quite like clean URLs but am unsure of a hassle-free way to get them. Thanks for any advice in advance.
Intermediate & Advanced SEO | Whittie -
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi guys, we have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:

1. Vehicle Listings Pages: the page where the user applies various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: the page where the user actually views the details about a given vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo

The Vehicle Listings pages (#1) we do want indexed and ranking. These pages have additional content besides the vehicle listings themselves, the results are randomized or sliced/diced in different and unique ways, and they're updated twice per day.

We do not want #2, the Vehicle Details pages, indexed, as these pages appear and disappear all the time based on dealer inventory and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results (example Google query).

We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right.

Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.

Robots.txt advantages:
- Super easy to implement.
- Conserves crawl budget for large sites.
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and Vehicle Details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.

Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would put 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).

Noindex advantages:
- Does prevent Vehicle Details pages from being indexed.
- Allows ALL pages to be crawled (an advantage?).

Noindex disadvantages:
- Difficult to implement: the Vehicle Details pages are served via Ajax, so they have no <head> tag. The solution would have to involve an X-Robots-Tag HTTP header sent by Apache based on querystring variables, similar to this Stack Overflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "forces" because of the crawl budget required. The crawler could get stuck or lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex tag if it is blocked by robots.txt.

Hash (#) URL advantages:
- By using hash (#) URLs for the links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl those links. Best of both worlds: the crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links that were getting robots.txt-disallowed pages indexed are gone.
- Accomplishes the same thing as nofollowing these links, but without looking like PageRank sculpting (?).
- Does not require complex Apache configuration.

Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?

Initially, we implemented robots.txt, the "sledgehammer solution." We figured we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate Vehicle Details pages, and we wanted it to be as if these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.

If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow in order to let the crawler read the noindex header on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of Vehicle Details pages, all of which are noindexed, where it could easily get stuck or lost. It seems like a waste of resources, and in some shadowy way bad for SEO.

My developers are pushing for the third solution: the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and it conserves crawl budget while keeping Vehicle Details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured like this.

Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive -
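For what it's worth, the X-Robots-Tag option discussed in the question above doesn't have to live in Apache config. A minimal sketch of the same idea in Python/Flask, where the route and the "vehicle_id" querystring parameter are assumptions:

```python
# Hypothetical sketch of the X-Robots-Tag idea: send a noindex header
# on Ajax-served vehicle detail responses. Flask, the route, and the
# "vehicle_id" parameter are assumptions, not the plugin's real API.
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/listings")
def listings():
    response = make_response(render_listings())  # your existing output
    # Ajax fragments have no <head>, so signal noindex via an HTTP
    # header instead of a meta tag.
    if "vehicle_id" in request.args:
        response.headers["X-Robots-Tag"] = "noindex"
    return response

def render_listings():
    return "..."  # placeholder for the plugin's real markup
```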
When to Use Schema vs. Facebook Open Graph?
I have a client who for regulatory reasons cannot engage in any social media: no Twitter, Facebook, or Google+ accounts. No social sharing buttons allowed on the site. The industry is medical devices. We are in the process of redesigning their site, and would like to include structured markup wherever possible. For example, there are lots of schema types under MedicalEntity: http://schema.org/MedicalEntity Given their lack of social media (and no plans to ever use it), does it make sense to incorporate OG tags at all? Or should we stick exclusively to the schemas documented on schema.org?
Intermediate & Advanced SEO | Allie_Williams
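If it helps, schema.org markup needs no social plumbing at all. A minimal sketch that emits a JSON-LD block from Python (MedicalDevice is a real schema.org type under MedicalEntity; the example values are made up):

```python
# Hypothetical sketch: build a schema.org JSON-LD block in Python.
# MedicalDevice is a real schema.org type under MedicalEntity; the
# name, description, and URL below are made-up examples.
import json

markup = {
    "@context": "https://schema.org",
    "@type": "MedicalDevice",
    "name": "Example Infusion Pump",
    "description": "A programmable infusion pump.",
    "url": "https://www.example.com/devices/infusion-pump",
}

# Embed in the page head as a JSON-LD script block.
script_tag = (
    '<script type="application/ld+json">'
    + json.dumps(markup, indent=2)
    + "</script>"
)
print(script_tag)
```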