Anyone managed to decrease the "not selected" graph in WMT?
-
Hi Mozzers.
I am working with a very large e-commerce site that has a big issue with duplicate or near-duplicate content. The site actually received a message in WMT listing pages that Google deemed it should not be crawling. Many of these were the usual pagination and category-sorting URL issues, etc.
We have since fixed the issue with a combination of site changes, robots.txt rules, parameter handling and URL removals; however, I was expecting the "not selected" graph in WMT to start dropping, and it hasn't.
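For context, the robots.txt side of the fix looked something along these lines; this is a simplified sketch, and the parameter names here are illustrative rather than our actual URLs:

```
User-agent: *
# Keep crawlers out of sorted/filtered duplicates of category pages
Disallow: /*?sort=
Disallow: /*&sort=
# Paginated category views
Disallow: /*?page=
```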
The number of robots.txt-blocked pages has increased by around 1 million (which was expected), and the indexed-page count has actually increased despite our removing hundreds of thousands of pages. I assume this is because blocking the junk freed up some crawl bandwidth for more important pages, like products.
I guess my question is two-fold:
1. Is the "not selected" graph cumulative? That would explain why it isn't dropping.
2. Has anyone managed to get this figure to drop significantly? Should I even care? I am relating this to Panda, by the way.
It's important to note that the changes were made around three weeks ago, and I am aware that not everything will have been re-crawled yet.
Thanks,
Chris -
Very interesting. I'm also convinced the "not selected" graph is a big clue towards a Panda penalty. I guess I will have to wait another couple of weeks to see whether our changes have affected the graph. Maybe this time lag is why it can take upwards of six months to recover from Panda!
-
Hi Chris
Here is some good information about the "Not Selected" data in WMT. I hope this post helps you understand the Not Selected graph better: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=2642366
-
The "Not Selected" isn't cumulative. The "Ever Crawled" is though.
I have a large WordPress content site. It was hit by Panda on the very same day that my "not selected" count multiplied by eight. I don't think that was a coincidence, and I hadn't made any large changes to the site beyond the regular addition of about 10 posts per week.
I've been able to push the "not selected" count downward by removing/redirecting things like "replytocom" variable URLs in the comments section, reworking the print and email versions of each article, etc. It's very slow going, though, reducing by an average of only 100 per week.
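For the "replytocom" URLs specifically, a 301 rule along these lines is one common approach; this is a minimal Apache .htaccess sketch, assuming mod_rewrite, and not necessarily exactly what I ran:

```apache
<IfModule mod_rewrite.c>
RewriteEngine On
# Match any request whose query string carries WordPress's
# replytocom parameter...
RewriteCond %{QUERY_STRING} (^|&)replytocom= [NC]
# ...and 301 it back to the clean post URL; the trailing "?"
# in the substitution strips the query string.
RewriteRule ^(.*)$ /$1? [R=301,L]
</IfModule>
```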
Needless to say, I think the "not selected" metric means quite a lot.
Related Questions
-
In Search Console, why is the XML sitemap "issue" count 5x higher than the URL submission count?
Google Search Console is telling us that there are 5,193 sitemap "issues": URLs that are present in the XML sitemap but blocked by robots.txt. However, there are only 1,222 total URLs submitted in the XML sitemap, and I only found 83 instances of URLs that fit their example description. Why is the number of "issues" so high? Does it compound over time as Google re-crawls the sitemap?
Intermediate & Advanced SEO | FPD_NYC -
Are SEO Changes Implemented Using Google Tag Manager No Longer Supported?
Hello all! In May I read the article https://moz.com/blog/seo-changes-using-google-tag-manager and implemented it in order to de-index some pages. I was really happy because it worked, but now the same problem has reappeared. Does anybody know if Google has stopped taking SEO changes made through Tag Manager into consideration?
Intermediate & Advanced SEO | GeorgeGia
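For reference, the approach this question describes amounts to injecting a robots noindex meta tag from a GTM Custom HTML tag, along the lines of this sketch; whether Google still honors JavaScript-injected directives like this is exactly what's being asked:

```html
<script>
  // GTM Custom HTML tag: inject <meta name="robots" content="noindex">
  // into <head>. This can only work if Google executes the page's
  // JavaScript when rendering it.
  var meta = document.createElement('meta');
  meta.setAttribute('name', 'robots');
  meta.setAttribute('content', 'noindex');
  document.head.appendChild(meta);
</script>
```
-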
"near me" campaign
I'm looking at running a campaign to get a site ranking for terms that include "near me", for instance "personal trainers near me" or "yoga lessons near me". I'm wondering if this should be a local campaign because of the "near me" in the term, with Google basing results on the searcher's IP address (if that's possible, instead of town names), or whether it will come down to the words on the page including "near me". Any help or examples would be hugely appreciated. Thanks, community!
Intermediate & Advanced SEO | Marketing_Today -
Dilemma about "images" folder in robots.txt
Hi, hope you're doing well. I am sure you guys are aware that Google has updated its webmaster technical guidelines, saying that site owners should allow access to their CSS and JavaScript files where possible. It used to be that Google rendered web pages text-only; now it claims it can read CSS and JavaScript, and by its own terms, blocking access to CSS files can result in sub-optimal rankings: "Disallowing crawling of Javascript or CSS files in your site’s robots.txt directly harms how well our algorithms render and index your content and can result in suboptimal rankings." http://googlewebmastercentral.blogspot.com/2014/10/updating-our-technical-webmaster.html We have allowed access to our CSS files, and Googlebot now sees our web pages more like a normal user would (tested in GWT).
Anyhow, here is my dilemma, and I am sure a lot of other users, e-commerce companies especially, face the same situation: we have a lot of images. Our CSS files used to sit inside our images folder, so I have allowed access to those. Here's the robots.txt: http://www.modbargains.com/robots.txt Right now we are blocking the images folder, as it is very large, very heavy, and some of the images are very high-res. The reason we block it is that we feel Googlebot might spend almost all of its time trying to crawl that "images" folder and not have enough time left to crawl other important pages, not to mention a very heavy load on Google's servers and ours. We do have good, high-quality, original pictures, and we feel we are losing potential rankings by blocking them.
I was thinking of allowing ONLY the Google image bot access to it, but I still fear Google would spend a lot of time there. I was wondering whether Google decides something like "spend 10 minutes on the image bot, 20 minutes on the mobile bot", or whether it has a separate crawl-time allocation for each of its bot types. I want to unblock the images folder, for now only for the Google image bot, but at the same time I fear it might drastically hamper indexing of our important pages because of the sheer volume of images. Any advice, recommendations, suggestions, or technical guidance? I'm pretty sure I answered my own question, but I need confirmation from an expert that I am right in saying: allow only the Google image bot access to my images folder. Sincerely, Shaleen Shah
Intermediate & Advanced SEO | Modbargains
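The "image bot only" idea being weighed above would look roughly like this in robots.txt (hypothetical paths; Google's crawlers obey the most specific user-agent group that matches them):

```
# General crawlers stay out of the heavy folder
User-agent: *
Disallow: /images/

# Googlebot-Image matches this more specific group instead,
# so it ignores the generic Disallow above
User-agent: Googlebot-Image
Allow: /images/
```
-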
Landing pages "dropping" and being replaced with homepage?
Hi Moz people, happy New Year to all! I have an interesting one here. I have recently been making some landing pages, and they have all pretty much hit page 1 for the search terms I've focused on (UK domain). Up until this morning, one landing page was the 8th organic result on the UK domain. However, I checked this morning and the landing page has dropped below the top 50; instead, our homepage is now showing as the last organic result on page 1. This is intriguing to me, as it has also happened to a couple of other landing pages I have made. Is this because the landing pages drive relevance higher, but overall the homepage is more important to Google? Do you think this might start happening to the other pages I have created? Any input would be appreciated! (I'll give you links and search terms if you want to take a look for yourselves, but I try to refrain from "self-advertising".) Happy Thursday, Mozzers! Jamie
Intermediate & Advanced SEO | SanjidaKazi -
How should I manage duplicate content caused by a guided navigation for my e-commerce site?
I am working with a company which uses Endeca to power the guided navigation for our e-commerce site. I am concerned that the duplicate content generated by having the same products served under numerous refinement levels is damaging the site's ability to rank well, and I was hoping the Moz community could help me understand how much of an impact this type of duplicate content could be having. I would also love to know if there are any best practices for managing this type of navigation. Should I nofollow all of the URLs which have more than one refinement applied to a category, or should I allow the search engines to go deeper than that to preserve the long tail? Any help would be appreciated. Thank you.
Intermediate & Advanced SEO | FireMountainGems
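One widely used pattern for faceted navigation like this, sketched with a hypothetical URL rather than Endeca-specific markup: have every refinement combination point rel="canonical" back at the base category, so the near-duplicates consolidate instead of competing.

```html
<!-- Served on /necklaces?material=gold&sort=price and every other
     refinement combination of the category: -->
<link rel="canonical" href="https://www.example.com/necklaces/" />
```
-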
Is it OK to use both a 301 redirect and rel="canonical" at the same time?
Hi everyone, I'm sorry if this has been asked before; I just wasn't able to find a response in previous questions. To fix our website's duplication problems, I have the option to set up 301s and, at the same time, modify our CMS so that it automatically sets a rel="canonical" tag on every page it generates. Would it be a problem to have both methods set up? Is it a problem to have a rel="canonical" tag on a page that is redirecting to another one? Is it advisable to have a rel="canonical" tag on every single page? Thanks for reading!
Intermediate & Advanced SEO | SDLOnlineChannel
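Worth noting for the question above: the two mechanisms live in different places, so they can coexist. A sketch with hypothetical URLs: the duplicate URL answers with a 301 at the server level and never serves any HTML at all, while the destination page carries a self-referencing canonical.

```html
<!-- On https://www.example.com/widgets/ (the 301 target). The old
     duplicate URL responds with a 301 and never renders any markup,
     so a canonical tag on it is moot. -->
<link rel="canonical" href="https://www.example.com/widgets/" />
```
-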
Is my "term & conditions"-"privacy policy" and "About Us" pages stealing link juice?
Should I make them nofollow? Or is this a bogus method?
Intermediate & Advanced SEO | SEObleu.com