Should all pages on a site be included in either your sitemap or robots.txt?
-
I don't have a specific scenario here, just curious, as I fairly often come across sites that have, for example, 20,000 pages but only 1,000 in their sitemap. If they only want 1,000 of their URLs included in their sitemap and indexed, should the others be excluded using robots.txt or a page-level exclusion? Is there a point to having pages that are included in neither and leaving it up to Google to decide?
-
Thanks guys!
-
You bet - Cheers!
-
Clever PHD,
You are correct. I have found that these little housekeeping issues like eliminating duplicate content really do make a big difference.
Ron
-
I think Ron's point was that if you have a bunch of duplicates, the dups are not "real" pages, if you are only counting "real" pages. So if Google indexes both your "real" pages and the duplicate versions of them, you can end up with more pages indexed than you actually have. The issue then is that you have duplicate versions of the same page in Google's index, so which one will rank for a given key term? You could be competing against yourself. That is why it is so important to deal with crawl issues.
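For the duplicate-version problem, the usual fix is a canonical tag in the head of each duplicate pointing back to the original. A rough sketch (the URL is just a made-up example, adjust it for your own pages):

  <!-- placed on every duplicate/variant version of the page -->
  <link rel="canonical" href="https://www.example.com/widgets/blue-widget/" />

That tells Google which version should get the credit, so the variants stop competing with the original.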
-
Thank you. Just curious, how would the number of pages indexed be higher than the number of actual pages?
-
I think you are looking at the pages indexed, which is generally a higher number than the pages actually on your web site. There is a point to marking up any pages that you do not want indexed with a noindex directive, as well as properly marking up the web pages that you do specifically want indexed. It is really important that you eliminate duplicate pages. A common source of these duplicates is improper tags on the blog. Make sure that your tags are set up in a logical hierarchy, like your site map. This will assist the search engines when they re-index your pages.
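If it helps, the page-level version of that is just a robots meta tag in the head of any page you want kept out of the index, roughly like this (a sketch only, double-check it against your own templates):

  <!-- keep this page out of the index, but still let crawlers follow its links -->
  <meta name="robots" content="noindex, follow" />

The nice thing about noindex over a robots.txt block is that Google can still crawl the page and follow its links; it just won't show the page in results.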
Hope this helps,
Ron
-
You want to have as many pages in the index as possible, as long as they are high-quality pages with original content. If you publish quality original articles on a regular basis, you want to have all of those pages indexed. Yes, from a practical perspective you may only be able to focus on tweaking the SEO on a portion of them, but if you have good SEO processes in place as you produce those pages, they will rank long term for a broad range of terms and bring traffic.
If you have 20,000 pages because you have an online catalog with 345 different ways to sort the same set of results, or you have keyword-search URLs, printer-friendly versions, or shopping cart pages, you do not want those indexed. These pages are typically low-quality/thin-content pages and/or duplicates, and they do you no favors. You would want to use the noindex meta tag or a canonical where appropriate. The reality is that out of the 20,000 pages, there is probably only a subset that are the "originals," so you don't want to waste Google's time crawling the rest.
A good concept to look up here is Crawl Budget or Crawl Optimization:
http://searchengineland.com/how-i-think-crawl-budget-works-sort-of-59768
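As a rough illustration of the crawl-budget side of it, you could keep crawlers out of the obvious low-value URL patterns with a few robots.txt rules, something like this (the paths and parameter names are only examples, yours will differ, and remember robots.txt blocks crawling, not indexing, so pages that already have links pointing at them are usually better handled with noindex or a canonical):

  User-agent: *
  # keyword search result pages
  Disallow: /search
  # shopping cart and checkout
  Disallow: /cart
  # sort and printer-friendly parameter variations
  Disallow: /*?sort=
  Disallow: /*?print=

That keeps Googlebot spending its crawl time on the "original" pages instead of the 345 sorted variations.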
-
Related Questions
-
Sitemap Indexed Pages, Google Glitch or Problem With Site?
Hello, I have a quick question about our Sitemap Web Pages Indexed status in Google Search Console. Because of the drastic drop I can't tell if this is a glitch or a serious issue. When you look at the attached image you can see that Web Pages Indexed under Sitemaps dropped suddenly on 3/12/17 from 6,029 to 540. Our Index Status shows 7K+ indexed. Other than product updates/additions and homepage layout updates, there have been no significant changes to this website. If it helps, we are operating on the Volusion platform. Thanks for your help! -Ryan
Intermediate & Advanced SEO | | rrhansen0 -
Site Merge Strategy: Choosing Target Pages for 301 Redirects
I am going to be merging two sites. One is a niche site, and it is being merged with the main site. I am going to be doing 301 redirects to the main site. My question is, what is the best way of redirecting section/category pages in order to maximize SEO benefits? I will be redirecting product pages to product pages; this question only concerns sections/categories. Option 1: Direct each section/category to the most closely matched category on the main site. For example, vintage-t-shirts would go to vintage-t-shirt on the main site. Option 2: Point as many section/category pages as possible to a larger category on the main site with selected filters. We have filtered navigation on our site, so if you wanted to see vintage t-shirts, you could go to the vintage t-shirt category, OR you could go to t-shirts and select "vintage" under the style filter. In the example above, the vintage-t-shirt section from the niche site would point to the t-shirts page with the vintage filter selected (something like t-shirts/#/?_=1&filter.style=vintage). With option 2, I would be pointing more links to a main category page on the main site. I would likely have that page rank higher, because more links are pointing to it. I may also have a better overall user experience, because if the customer decides to browse another style of t-shirt, they can simply unselect the filter and make other selections. Questions: Which of these options is better as far as (1) SEO and (2) user experience? If I go with option 2, the drawback is that the page titles will all be the same (i.e. vintage-t-shirts pointing to the page with the filter selected would have "t-shirts" as the page title instead of a more targeted "vintage t-shirts"). I believe a workaround would be to pull filter values from the URL and append them to the page title, so that the page title for the URL t-shirts/#/?_=1&filter.style=vintage would be something like "vintage, t-shirts." Is this the appropriate way to deal with it? Any thoughts, suggestions, or shared experiences would be appreciated.
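For reference, option 1 boils down to simple one-to-one redirect rules, for example in an Apache .htaccess (this assumes an Apache host; the paths and domain are placeholders only):

  # niche-site category -> closest matching category on the main site
  Redirect 301 /vintage-t-shirts https://www.mainsite.com/vintage-t-shirt
  Redirect 301 /band-t-shirts https://www.mainsite.com/band-t-shirts

Option 2 would instead point several of those old categories at the same filtered URL on the main site.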
Intermediate & Advanced SEO | | inhouseseo0 -
Does Google still not index hashtag links? No chance to get a search result that leads directly to a section of a page, or to one of numerous hashtag "pages" in a single HTML page?
Does Google still not index hashtag links? Is there no chance to get a search result that leads directly to a section of a page, or to one of numerous hashtag "pages" within a single HTML page? If I have 4 or 5 different hashtag link sections consolidated into one HTML page, is there no chance to get one of those hashtag pages to appear as a search result? For example, if under one single-page travel guide I have two essential sections, #Attractions and #Visa, is there no way to direct search queries for visas directly to the #Visa section? Thanks for any help.
Intermediate & Advanced SEO | | Muhammad_Jabali0 -
Does Google only look at LSI per page or context of the Site?
From what I have read, I should optimise each page for a keyword/phrase; however, I read recently that Google may also look at the context of the site to see if there are other similar words. For example, I have different pages optimised for funeral planning, funeral plans, funeral plan costs, compare funeral plans, why buy a funeral plan, paying for a funeral, and prepaid funeral plans. Is this the best strategy when the words/phrases are so close, or should I go for longer pages with the variations on one page, or at least fewer pages? Thanks, Ash
Intermediate & Advanced SEO | | AshShep10 -
Site with fewer than 20 pages shows 1,400+ pages when crawled
Hello! I'm new to SEO, and have been soaking up as much as I can. I really love it, and feel like it could be a great fit for me – I love the challenge of figuring out the SEO puzzle, plus I have a copywriting/PR background, so I feel like that would be perfect for helping businesses get a great jump on their online competition. In fact, I was so excited about my newfound love of SEO that I offered to help a friend who owns a small business with his site. Once I started, though, I found myself hopelessly confused. The problem comes when I crawl the site. It was designed in WordPress, and is really not very big (part of my goal in working with him was to help him get some great content added!). Even though there are only 11 pages – and 6 posts – for the entire site, when I use Screaming Frog to crawl it, it sees HUNDREDS of pages. It stops at 500, because that is the limit for their free version. In the campaign I started here at SEOmoz, it says over 1,400 pages have been crawled, with something like 900 errors. Not good, right? So I've been trying to figure out the problem. When I look closer in Screaming Frog, I can see that some things are being repeated over and over. If I sort by title, the URLs look like they're stuck in a loop somehow: one line will have /blog/category/postname, the next line will have /blog/category/category/postname, the next line will have /blog/category/category/category/postname, and so on, with another /category/ added each time. So, with that, I have a few questions: Does anyone know what the problem is, and how to fix it? Do professional SEO people troubleshoot this kind of stuff all of the time? Is this the best place to get answers to questions like that, and if not, where is? Thanks so much in advance for your help! I've enjoyed reading all of the posts that are available here so far; it seems like a really excellent and helpful community. I'm looking forward to the day when I can actually answer the questions!! 🙂
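One common cause of that /category/category/category/ pattern (just a guess without seeing the site, so treat it as an assumption) is a theme template that links with relative URLs instead of root-relative ones, for example:

  <!-- relative link: from /blog/category/postname this resolves to /blog/category/category/other-post -->
  <a href="category/other-post">Other post</a>

  <!-- root-relative link: resolves the same way from every page -->
  <a href="/blog/category/other-post">Other post</a>

If the template uses the first form, each level of the loop creates a "new" URL for the crawler to follow, which would explain a tiny site showing 1,400+ crawled pages.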
Intermediate & Advanced SEO | | K.Walters0 -
Is there a way to redirect pages from an old site?
I have no access to a client's old WordPress site, but I have parked the domain on their new site, gone into Webmaster Central, and requested a change of address... and waited. The old domain still shows in the search listings in place of the new site's domain, and the log files show 404 errors from links to the old site which go nowhere. Can anyone suggest a way of managing this on the new site? Is there a workaround for what should have been done in the first place, namely 301 redirects on the old site before it was taken down? Many thanks.
Intermediate & Advanced SEO | | Highlandgael0 -
Blocking Pages Via Robots: Can Images On Those Pages Be Included In Image Search?
Hi! I have pages within my forum where visitors can upload photos. When they upload photos they provide a simple statement about the photo but no real information about the image, definitely not enough for the page to be deemed worthy of being indexed. The industry, however, is one that really leans on images, and having the images in Google Image Search is important to us. The URL structure is like this: domain.com/community/photos/~username~/picture111111.aspx. I wish to block the whole folder from Googlebot to prevent these low-quality pages from being added to Google's main SERP results, with something like: User-agent: googlebot Disallow: /community/photos/ Can I disallow Googlebot specifically, rather than using User-agent: *, which would then allow googlebot-image to pick up the photos? I plan on configuring a way to add meaningful alt attributes and image names to assist in visibility, but as for the actual act of blocking the pages while still getting the images picked up... is this possible? Thanks! Leona
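In principle that split works, because Google's crawlers obey the most specific matching user-agent group; a sketch of the idea (the paths are the ones from the question, and it is worth verifying in Search Console's robots.txt tester before relying on it):

  # keep the thin photo pages out of the regular web crawl
  User-agent: Googlebot
  Disallow: /community/photos/

  # explicitly let the image crawler into the same folder
  User-agent: Googlebot-Image
  Allow: /community/photos/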
Intermediate & Advanced SEO | | HD_Leona0 -
Block all but one URL in a directory using robots.txt?
Is it possible to block all but one URL with robots.txt? For example, take domain.com/subfolder/example.html: if we block the /subfolder/ directory, we want all URLs to be blocked except for the exact-match URL domain.com/subfolder.
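A sketch of one way this is commonly handled with Google's wildcard support (the paths are the placeholders from the question; worth verifying in the robots.txt tester, since Allow and the $ end-of-URL anchor are not part of the original robots.txt standard):

  User-agent: *
  # allow only the exact /subfolder URL (nothing after it)
  Allow: /subfolder$
  # block everything else under the directory
  Disallow: /subfolder/

With Google, the longer/more specific matching rule wins, so /subfolder itself stays crawlable while /subfolder/example.html and everything else underneath is blocked.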
Intermediate & Advanced SEO | | nicole.healthline0