Crawl Diagnostics reporting 20k+ duplicate content errors due to session IDs
-
Signed up for the trial version of SEOmoz today just to check it out, as I have decided I'm going to do my own SEO rather than outsource it (been let down a few times!). So far I like the look of things and have a feeling I am going to learn a lot and get results.
However, I have just stumbled on something. After SEOmoz does its crawl diagnostics run on the site (www.deviltronics.com) it is showing 20,000+ errors. From what I can see, almost 99% of these are being flagged as duplicate content errors due to session IDs, so I am not sure what to do!
I have done a "site:www.deviltronics.com" search on Google and this certainly doesn't pick up the session IDs/duplicate content. So could this just be an issue with the SEOmoz bot? If so, how can I get SEOmoz to ignore these on the crawl?
Can I get my developer to add some code somewhere?
Help will be much appreciated. Asif
-
Hello Tom and Asif,
First of all, Tom, thanks for the excellent blog post re Google Docs.
We are also using the Jshop platform for one of our sites, and I am not sure whether it is working correctly in terms of SEO. I just ran an SEOmoz crawl of the site and found that every single URL in the list has a rel=canonical on it, even the ones with session IDs.
Here is an example:
www.strictlybeautiful.com/section.php/184/1/davines_shampoo/d112a41df89190c3a211ec14fdd705e9
www.strictlybeautiful.com/section.php/184/1/davines_shampoo
As Asif has pointed out, the Jshop people say they have programmed it so that Google cannot pick up the session IDs - firstly, is that even possible? And if I assume that's not an issue, then what about the fact that every single page on the site has a rel=canonical link on it?
Any help would be much appreciated.
-
Asif, here's the page with the information on the SEOmoz bot.
-
Thanks for the reply, Tom. Spoke to our developer and he has told me that the website platform (Jshop) does not show session IDs to the search engines, so we are OK on that side. However, as it doesn't recognise the SEOmoz bot, it shows it the session IDs. Do you know where I can find info on the SEOmoz bot so we can see what it identifies itself as, so it can be added to the list of recognised spiders?
Thanks
-
Hi Asif!
Firstly - I'd suggest that as soon as possible you address the core problem - the use of session IDs in the URL. There are not many upsides to the approach and there are many downsides. That it doesn't show up with the site: command doesn't mean it isn't having a negative impact.
In the meantime, you should add a rel=canonical tag to all the offending pages, pointing to the URL without the session ID. Secondly, you could use robots.txt to block the SEOmoz bot from crawling pages with session IDs, but that may affect the bot's ability to crawl the site if all the links it is presented with contain session IDs - which takes us back around to fixing the core problem.
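To make both options concrete, here is a minimal sketch; the example URL, the page path and the Disallow pattern are assumptions, so adjust them to however Jshop actually appends the session ID (query string or trailing path segment). The canonical tag goes in the <head> of every session-ID variant and points at the clean URL, and the robots.txt rule targets Moz's crawler, which identifies itself as rogerbot:

<!-- on a hypothetical session-ID variant like http://www.deviltronics.com/some-page/abc123, point back to the clean URL -->
<link rel="canonical" href="http://www.deviltronics.com/some-page" />

# robots.txt - keep the SEOmoz crawler (rogerbot) out of session-ID URLs only
# the pattern below assumes the session ID is passed as a query parameter; change it to match your URLs
User-agent: rogerbot
Disallow: /*?sessionid=

Note that the robots.txt route only hides the duplicates from the crawler, while the rel=canonical route actually consolidates them, which is why the canonical fix is the better stop-gap while the underlying session-ID handling is being sorted out.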
Hope this helps a little!
Related Questions
-
How do I fix duplicate title issues?
I have a subdomain that isn't even on our own site, but it's resulting in a lot of errors in Moz for duplicate content, as shown here: http://cl.ly/1R081v0K0e2N. Would this affect our ranking, or are these simply errors within Moz? What measures could I take to make sure that Moz or Google doesn't associate our site with these errors? Would I have to noindex it in the .htaccess file for the subdomain?
Moz Pro | MMAffiliate
-
503 Error or 200 OK??
So, in a Moz crawl and a Screaming Frog crawl, I'm getting some 503 Service Unavailable responses on some pages. So I go to the pages in question, and the MozBar is showing a 200 OK. The SEOBook HTTP status checker (http://tools.seobook.com/server-header-checker/) also shows a 200 OK. What gives? The only reason I'm looking at this is because rankings plummeted a couple of weeks ago. Thanks!

UPDATE: I decided to use the MozBar to set the user agent to Googlebot, and when I tried to access the pages in question I received the message below. I don't think this is an issue... anyone else have much experience here?

Your access to this site has been limited. Your access to this service has been temporarily limited. Please try again in a few minutes. (HTTP response code 503) Reason: Fake Google crawler automatically blocked.

Important note for site admins: If you are the administrator of this website, note that your access has been limited because you broke one of the Wordfence firewall rules. The reason your access was limited is: "Fake Google crawler automatically blocked". If this is a false positive, meaning that your access to your own site has been limited incorrectly, then you will need to regain access to your site, go to the Wordfence "options" page, go to the section for Firewall Rules and disable the rule that caused you to be blocked. For example, if you were blocked because it was detected that you are a fake Google crawler, then disable the rule that blocks fake Google crawlers. Or if you were blocked because you were accessing your site too quickly, then increase the number of accesses allowed per minute. If you're still having trouble, then simply disable the Wordfence firewall and you will still benefit from the other security features that Wordfence provides. If you are a site administrator and have been accidentally locked out, please enter your email in the box below and click "Send". If the email address you enter belongs to a known site administrator or someone set to receive Wordfence alerts, we will send you an email to help you regain access. Please read our FAQ if this does not work.
Moz Pro | wiredseo
-
Has the Crawl Test gone?
Just checked the new Moz - am I right in thinking the super useful Crawl Test functionality has gone? I use it on existing sites to download all the title tags and meta descriptions. Is there more to come?
Moz Pro | Karen_Dauncey
-
The crawl report shows a lot of 404 errors
They are inactive products, and I can't find any active links to these product pages. How can I tell where the crawler found the links?
Moz Pro | shopwcs
-
How can I prevent errors of duplicate page content generated by my tags from my wordpress on-site blog platform?
When I add meta data and a canonical reference to my blog tags for my on-site blog, which runs on a WordPress.org template, Roger generates duplicate content errors. How can I avoid this problem? I want to use up to 5 tags per post with the same canonical reference, and each campaign scan generates errors/warnings for me!
Moz Pro | ZoeAlexander
-
20000 site errors and 10000 pages crawled.
I have recently built an e-commerce website for the company I work at. It's built on OpenCart. Say for example we have a chair for sale - the URL will be: www.domain.com/best-offers/cool-chair. That's fine; SEOmoz is crawling them all fine and reporting any errors under that URL, great.

On each product listing we have several options and zoom options (which allow the user to zoom in to the image to get a more detailed look). When a different zoom type is selected it adds on to the URL, so for example: www.domain.com/best-offers/cool-chair?zoom=1, and there are 3 different zoom types. So effectively it's taking four URLs as different when in fact they are all one URL, and SEOmoz has interpreted it this way, crawled 10,000 pages (which it thinks exist because of this) and thrown up 20,000 errors. Does anyone have any idea how to solve this?
Moz Pro | CompleteOffice
-
What causes Crawl Diagnostics Processing Errors in seomoz campaign?
I'm getting the following error when SEOmoz tries to spider my site:

First Crawl in Progress! Processing Issues for 671 pages. Started: Apr. 23rd, 2011

Here is the robots.txt data from the site:

Disallow ALL BOTS for image directories and JPEG files.
User-agent: *
Disallow: /stats/
Disallow: /images/
Disallow: /newspictures/
Disallow: /pdfs/
Disallow: /propbig/
Disallow: /propsmall/
Disallow: /*.jpg$

Any ideas on how to get around this would be appreciated 🙂
Moz Pro | cmaddison
-
Question about when new crawls start
Hi everyone, I'm currently using the trial of seomoz and I absolutely love what I'm seeing. However, I have 2 different websites (one has over 10,000 pages and one has about 40 pages). I've noticed that the smaller website is crawled every few days. However, the larger site hasn't been crawled in a few days. Although both campaigns state that the sites won't be crawled until next Monday, is there any way to get the crawl to start sooner on the large site? The reason that I've asked is that I've implemented some changes that will likely decrease the amount of pages that are crawled simply based upon the recommendations on this site. So, I'm excited to see the potential changes. Thanks, Brian
Moz Pro | beeneeb