Dynamic URL pages in Crawl Diagnostics
-
The crawl diagnostics have found errors for pages that do not exist within the site. These pages do not appear in the SERPs and seem to be dynamically generated URL pages.
Most of the URLs that appear are formatted as http://mysite.com/keyword,%20_keyword_,%20key_word_/ and look like dynamically generated URLs for potential search phrases within the site.
The other popular variety among these pages has a URL format of http://mysite.com/tag/keyword/filename.xml?sort=filter; these are only generated by a filter utility on the site.
These pages make up about 90% of the 401 errors, duplicate page content/title, overly-dynamic URL, missing meta description tag, and similar issues. Many of the same pages appear in multiple error/warning/notice categories.
So, why are these pages being pulled into the crawl test? And how do I stop it so I can get a better analysis of my site via SEOmoz?
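For reference, one common way to keep crawlers away from parameterized URLs like the ones described above is a robots.txt rule. This is only a sketch: it assumes no legitimate URL on the site contains ",%20" or a sort parameter, and it relies on wildcard matching, which Googlebot and most major crawlers (including Rogerbot) support. Note that disallowed pages also disappear from the Moz crawl entirely.

```
User-agent: *
# Block the comma-separated keyword URLs described above
Disallow: /*,%20
# Block anything generated by the filter utility's sort parameter
Disallow: /*?sort=
```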
-
I am having a similar issue. I am getting hit with 404 errors for pages that no longer exist or have been fixed. How do I get these to stop showing up?
-
I am having a similar issue. I am getting hit with 403 errors for pages that no longer exist or have been fixed. How do I get these to stop showing up?
-
Based on what has happened from time to time on our sites, my guess is that a widget or plug-in on your CMS is interacting with the bot in some way. You are likely being crawled on these URLs by Google as well (producing 404s), so it is probably not just Rogerbot picking them up. There is a lot on the GWMT forums regarding this, with a myriad of suggested fixes: mod_rewrite, HTTP 410 instead of 404, etc.
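As a sketch of the "HTTP 410 instead of 404" idea mentioned above (Apache .htaccess, assuming mod_rewrite is enabled and assuming no legitimate URL on the site contains a comma, as in the question's keyword URLs):

```
# .htaccess sketch: permanently retire the generated keyword URLs by
# answering 410 Gone instead of 404 Not Found.
<IfModule mod_rewrite.c>
  RewriteEngine On
  # Any requested path containing a comma is one of the generated
  # keyword URLs on this site; mark it Gone.
  RewriteCond %{REQUEST_URI} ,
  RewriteRule ^ - [G,L]
</IfModule>
```

A 410 tells crawlers the page is intentionally gone, so they tend to drop it from their queues faster than they would a 404.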
One fix used by many: if your site has relative links, you can switch to full absolute URLs. If you have a ton of pages this might be a bit of a pain. (Our clients typically have smaller sites, so it is not too much of a problem.)
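For example, the difference between the two link styles (hypothetical page path, not from the site above):

```
<!-- Relative link: the crawler resolves it against the current URL,
     which can multiply bad paths on a misbehaving CMS -->
<a href="widgets/blue.html">Blue widgets</a>

<!-- Absolute link: always resolves to the same page -->
<a href="http://mysite.com/widgets/blue.html">Blue widgets</a>
```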
If you are using WordPress (or another CMS that can use the Extra Options plug-in), it is stated in the forums that the 404s can be stopped as follows:
In the Extra Options plugin, I checked all of the options below; the last two do the job. Read about the noindex/nofollow "where appropriate" settings in that plugin; this could be the answer.
- Make meta descriptions from excerpts
- Make home meta description from tagline
- Add noindex where appropriate
- Add nofollow where appropriate
Another option is to ensure you have no
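For context, the last two options above make the plugin emit a robots meta tag in the page head. On a page flagged "where appropriate" it would look something like:

```
<!-- emitted in the <head> of pages that should stay out of the index -->
<meta name="robots" content="noindex,nofollow">
```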
There are plenty of bright coders on Moz who can pitch in here and be more eloquent.
Hope this helps.
Related Questions
-
Lots of 404s listed in Top Pages by Page Authority
Hi guys, I'm slowly getting to grips with all the aspects of using Moz. Looking at my link analysis, under the tab labelled Top Pages by Page Authority, there's an awful lot of 404 pages listed. Are these pages which are being linked to (either internally or externally), and should I put a 301 redirect on all the pages listed? I've attached a picture.
Moz Pro | giddygrafix
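If those 404 pages do turn out to have links pointing at them, a 301 redirect is the usual fix. A hedged Apache .htaccess sketch with hypothetical paths (the actual old/new URL pairs would come from the report):

```
# .htaccess sketch: 301-redirect retired URLs to their closest live page.
# Paths below are hypothetical examples, not from the report above.
Redirect 301 /old-page.html http://www.example.com/new-page.html
Redirect 301 /old-category/ http://www.example.com/new-category/
```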
-
Duplicate title on Crawl
Ok, this should hopefully be a simple one. Not sure if this is a Moz crawl issue or a redirect issue. Moz is reporting a duplicate title for www.site.co.uk, site.co.uk, and www.site.co.uk/home.aspx. Is this a canonical change or a Moz setting I need in order to get this number lower?
Moz Pro | smartcow
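Those three addresses are the classic www/non-www/home-page trio, and the usual fix is to 301 the two variants to one canonical URL. Since /home.aspx suggests an IIS site, a sketch for the IIS URL Rewrite module (web.config; the domain comes from the question, everything else is an assumption about the setup):

```
<!-- inside <system.webServer> in web.config; requires the URL Rewrite module -->
<rewrite>
  <rules>
    <!-- 301 the bare domain to the www host -->
    <rule name="Canonical host" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^site\.co\.uk$" />
      </conditions>
      <action type="Redirect" url="http://www.site.co.uk/{R:1}" redirectType="Permanent" />
    </rule>
    <!-- 301 the explicit home page to the root -->
    <rule name="Home page" stopProcessing="true">
      <match url="^home\.aspx$" />
      <action type="Redirect" url="http://www.site.co.uk/" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>
```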
-
SEOmoz crawler not crawling my site
We set up a new campaign in SEOmoz on Friday. It is my understanding that the preliminary crawl can cover up to 250 pages, and this has been our experience in the past. However, the preliminary crawl only went through 2 pages. This is a larger eCommerce site with many pages. Any ideas why more pages weren't crawled? We set up the campaign to track at the root domain level.
Moz Pro | IMM
-
SEOmoz has only crawled 2 pages of my site. I've been notified of a 403 error and need an answer as to why my pages are not being crawled
SEOmoz has only crawled 2 pages of my client's site. I have noticed a 403 error message; Screaming Frog also cannot crawl the site, but IIS can. Due to the lack of crawling ability, I'm getting no feedback on my on-page optimization rankings or crawl diagnostics summary, so my competitive analysis and optimization are suffering. Anybody have any idea what needs to be done to rectify this issue? Access to the code or CMS platform is out of my hands. Thank you.
Moz Pro | nitro-digital
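A 403 that hits Rogerbot and Screaming Frog but not IIS itself often means the server or a firewall is filtering by User-Agent. A small diagnostic sketch to compare status codes across agents (Python; the URL and agent strings are placeholder assumptions):

```python
import urllib.request
import urllib.error

def status_for(url, user_agent):
    """Fetch url with the given User-Agent and return the HTTP status code."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 403 when the agent is blocked

def looks_agent_blocked(statuses):
    """True if crawler agents are refused while a browser agent gets through."""
    return statuses.get("browser") == 200 and all(
        code == 403 for agent, code in statuses.items() if agent != "browser"
    )

# Example usage (requires network access; URL is a placeholder):
# statuses = {
#     "browser": status_for("http://example.com/", "Mozilla/5.0"),
#     "rogerbot": status_for("http://example.com/", "rogerbot"),
#     "screamingfrog": status_for("http://example.com/", "Screaming Frog SEO Spider"),
# }
# print(looks_agent_blocked(statuses))
```

If the crawler agents get 403 while the browser agent gets 200, the fix is on the server/firewall side (whitelisting the bots), not in the campaign settings.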
-
Moving from a dynamic site to nopCommerce
Hi, first question since becoming a member, so be gentle with me ;o) We are moving from a site using a dynamically generated ecommerce template to a nopCommerce site. I have two questions about this. 1. I know that 301 redirects are the best way to pass "link juice", but with the site being dynamically generated, a lot of these links will simply disappear when we move to nop, meaning they won't actually be there in order to add a 301. Any advice on this would be appreciated. 2. How can I get a list of the pages on my current site which have the best rank, in order to create the redirects? Unfortunately, due to some technical issues, we were unable to install Google Analytics, so we don't really have any analytics to give us extra info. I was hoping there would be somewhere within SEOmoz where I could be directed. Many thanks, Chris
Moz Pro | cjhamill
-
Only one page has been crawled
I have been running a campaign for three weeks now. The first two crawls were OK, but the last one shows only one page crawled. The subdomain I am tracking is www.cubaenmiami.com. I have everything correct in my site. Regards, Alex
Moz Pro | esencia
-
Only crawling one page
Hi there, a campaign was crawling fine, but on the last crawl, for some reason, SEOmoz could only crawl one page... any ideas? If I run a custom crawl I can still access all of the site's pages.
Moz Pro | harryholmes007
-
Handling long URLs and overly-dynamic URLs on eCommerce site
Hello Forum, I've been optimizing an eCommerce site and our SEOmoz crawls are favorable for the most part, except for long URLs and overly-dynamic URLs. These issues stem from two URL types: layered navigation (faceted search) and non-Google internal search results. I outline the issues for each below.
Layered navigation: We use an SEO-friendly URL structure for our product category pages, but once bots start "clicking" our layered navigation options, all the parameters are appended to our SEO-friendly URLs, causing the SEOmoz crawl warnings. SEO-friendly category page: oursite.com/shop/meditation-cushions.html. Effects of layered navigation: oursite.com/shop/meditation-cushions.html?bolster_material_quality=414&bolsters_appearance=206&color=12&dir=asc&height=291&order=name. As you can see, the parameters include product attributes and page sorts. I should note that all pages generated by these parameters use the canonical element to point back to the SEO-friendly URL. We have also set up Google's Webmaster Tools to handle these parameters.
Internal search function: Our URLs start off simple: oursite.com/catalogsearch/result/?q=brown. Then the bot clicks all the layered navigation options, yielding oursite.com/catalogsearch/result/index/?appearance=54&cat=67&clothing_material=83&color=12&product_color=559&q=brown. Also, all search results are set to noindex,follow.
My question is: should we worry about these overly-dynamic and long URL warnings? We have set up canonical elements, "noindex,follow" solutions, and configured Webmaster Tools to handle our parameters. If these are a concern, how would you resolve these issues?
Moz Pro | pano
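For reference, the canonical element described in the question, as each parameterized variant of the category page would carry it in its head (URL taken from the example above):

```
<!-- on every layered-navigation variant of the category page -->
<link rel="canonical" href="http://oursite.com/shop/meditation-cushions.html" />
```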