Crawl Tool Producing Random URLs
-
For some reason SEOmoz's crawl tool is returning duplicate-content URLs that don't exist on my website. It is returning pages like "mydomain.com/pages/pages/pages/pages/pages/pricing". Nothing like that exists as a URL on my website. Has anyone experienced something similar, know what's causing it, or know how I can fix it?
-
The same thing is happening for one of my campaigns, specifically with a 302 redirect to the homepage. My guess is that I need to update it to a 301, but I'm not 100% sure that would solve the issue.
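Before changing it, it may be worth confirming what the redirect actually returns; a rough sketch using Python's requests library (the URL below is a placeholder for the redirected page):

    import requests

    # Placeholder URL; substitute the page that redirects to your homepage.
    resp = requests.get("http://mydomain.com/redirected-page", allow_redirects=False)

    # 302 = temporary redirect (what gets flagged); 301 = permanent.
    print(resp.status_code)
    print(resp.headers.get("Location"))  # where the redirect points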
-
I had this weird issue too, but in our case it was down to how our developers created the website. Does "mydomain.com/pages/pages/pages/pages/pages/pricing" still take you to the correct page, or does it show you a 404 error?
-
Well, we have our website set up so that if you type something after mydomain.com/ that is not a valid URL, it takes you to the site map. For example, if I typed in mydomain.com/ljlksdfsdfkjsdlfjsflj, it would take me to the site map. The same holds true for mydomain.com/pages/pages/pages/pages/pages/pricing. So to answer your question: no, it does not take you to the correct page, but it doesn't give you a 404 error either. It takes you to the site map.
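That sitemap fallback is very likely what the crawl tool is tripping on: if a made-up path returns a 200 instead of a 404, every invented URL looks like a real page serving duplicate content. A quick sketch to confirm, assuming Python's requests library and substituting your real domain:

    import requests

    # Probe a nonsense path and the phantom URL the crawl tool reported.
    # A 200 response is a "soft 404": crawlers see a real page with
    # duplicate content instead of an error.
    for path in ("/ljlksdfsdfkjsdlfjsflj", "/pages/pages/pages/pages/pages/pricing"):
        resp = requests.get("http://mydomain.com" + path, allow_redirects=False)
        print(path, "->", resp.status_code)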
Related Questions
-
Is there an easy way to hide one of your URLs in Google search, rather than redirecting?
We don't want to redirect to a different page, as some people still use it; we just don't want it to appear in search.
Technical SEO | TheIDCo
-
Google Webmaster Tools is saying "Sitemap contains urls which are blocked by robots.txt" after HTTPS move...
Hi everyone, I really don't see anything wrong with our robots.txt file after our HTTPS move that just happened, but Google says all URLs are blocked. The only change I know we need to make is changing the sitemap URL to HTTPS. Anything you all see wrong with this robots.txt file?

robots.txt:

# This file is to prevent the crawling and indexing of certain parts of your
# site by web crawlers and spiders run by sites like Yahoo! and Google. By
# telling these "robots" where not to go on your site, you save bandwidth
# and server resources. This file will be ignored unless it is at the root
# of your host:
# Used:    http://example.com/robots.txt
# Ignored: http://example.com/site/robots.txt
#
# For more information about the robots.txt standard, see:
# http://www.robotstxt.org/wc/robots.html
# For syntax checking, see:
# http://www.sxw.org.uk/computing/robots/check.html

# Website Sitemap
Sitemap: http://www.bestpricenutrition.com/sitemap.xml

# Crawlers Setup
User-agent: *

# Allowable Index
Allow: /*?p=
Allow: /index.php/blog/
Allow: /catalog/seo_sitemap/category/

# Directories
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /includes/
Disallow: /lib/
Disallow: /magento/
Disallow: /pkginfo/
Disallow: /report/
Disallow: /stats/
Disallow: /var/

# Paths (clean URLs)
Disallow: /index.php/
Disallow: /catalog/product_compare/
Disallow: /catalog/category/view/
Disallow: /catalog/product/view/
Disallow: /catalogsearch/
Disallow: /checkout/
Disallow: /control/
Disallow: /contacts/
Disallow: /customer/
Disallow: /customize/
Disallow: /newsletter/
Disallow: /poll/
Disallow: /review/
Disallow: /sendfriend/
Disallow: /tag/
Disallow: /wishlist/
Disallow: /aitmanufacturers/index/view/
Disallow: /blog/tag/
Disallow: /advancedreviews/abuse/reportajax/
Disallow: /advancedreviews/ajaxproduct/
Disallow: /advancedreviews/proscons/checkbyproscons/
Disallow: /catalog/product/gallery/
Disallow: /productquestions/index/ajaxform/

# Files
Disallow: /cron.php
Disallow: /cron.sh
Disallow: /error_log
Disallow: /install.php
Disallow: /LICENSE.html
Disallow: /LICENSE.txt
Disallow: /LICENSE_AFL.txt
Disallow: /STATUS.txt

# Paths (no clean URLs)
Disallow: /*.php$
Disallow: /*?SID=
disallow: /*?cat=
disallow: /*?price=
disallow: /*?flavor=
disallow: /*?dir=
disallow: /*?mode=
disallow: /*?list=
disallow: /*?limit=5
disallow: /*?limit=10
disallow: /*?limit=15
disallow: /*?limit=20
disallow: /*?limit=25

Technical SEO | vetofunk
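If it helps, a rough way to test what this file actually blocks is Python's standard-library robots.txt parser. A sketch below; note the stdlib parser doesn't implement Google's wildcard matching, so treat it as a sanity check only, and the product URL is made up:

    from urllib import robotparser

    # Load the live robots.txt and ask whether a given URL is fetchable.
    rp = robotparser.RobotFileParser()
    rp.set_url("http://www.bestpricenutrition.com/robots.txt")
    rp.read()

    # Made-up URL for illustration; substitute real URLs from the sitemap.
    url = "http://www.bestpricenutrition.com/whey-protein.html"
    print(rp.can_fetch("Googlebot", url))  # False means "blocked"

-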
Best practices when merging 2 domains with different themes and CMSs?
I have a client with 2 sites: one for an external audience and one for their ~2,000-3,000 employees. The external site (call it acme.com), built on WP with a custom theme, is pretty small. The internal site (call it acmeinternal.com) has TONS of high-quality content with incredible engagement metrics, but it's built on a separate CMS with an entirely different custom theme.

The problem we're trying to solve now: can we bring the internal site over to the external domain (acme.com and acme.com/internal, for example) so that acme.com can benefit from the quantity and quality of content and behavioral metrics associated with the internal content?

The external and internal audiences, and the corresponding content for each, are entirely mutually exclusive. A potential client of theirs who would come to acme.com would have no reason to visit acme.com/internal (we'd actually prefer not to provide navigation to it for them), and the internal audience would treat acme.com/internal as their landing page, with all the posts then living at acme.com/internal/news/post-name.

I'm assuming there are reasons why we couldn't have half of the site on one template using one CMS, with certain SEO tags, certain HTML structure, etc., while the other half of the site uses a completely different template with a different CMS, different SEO tags, different URL structure, etc.? To reap the reward of the great content, would we have to essentially recreate the internal site's content on the external site's CMS and template? Is it even possible for the domain authority of acme.com to improve based on the engagement on acme.com/internal/xxxx if there's virtually zero linking back and forth between acme.com and /internal/? Any advice would be much appreciated!
Technical SEO | ThinkAOR
-
Webmaster Tools lists a large number (hundreds) of different domains linking to my website, but only a few are reported in SEOmoz. Please explain what's going on?
Google's Webmaster Tools lists hundreds of links to my site, but SEOmoz only reports a few of them. I don't understand why that would be. Can anybody explain it to me? Is there somewhere I can go to alert SEOmoz to this issue?
Technical SEO | dnfealkoff
-
Crawl Errors In Webmaster Tools
Hi guys, I've searched the web for an answer on the importance of crawl errors in Webmaster Tools but keep coming up with different answers. I have been working on a client's site for the last two months (and just completed one month of link building); however, it seems I have inherited issues I wasn't aware of from the previous guy that did the site. The site is currently at page 6 for the keyphrase 'boiler spares', with a keyword-rich domain and a good on-page plan. Over the last couple of weeks it has been as high as page 4, only to be pushed back to page 8, and it has now settled at page 6. The only issue I can seem to find with the site in Webmaster Tools is crawl errors. Here are the stats:

In sitemaps: 123
Not found: 2,079
Restricted by robots.txt: 1
Unreachable: 2

I have read that ecommerce sites can often give off false negatives in terms of crawl errors from Google; however, these not-found crawl errors are being linked from pages within the site. How have others solved the issue of crawl errors on ecommerce sites? Could this be the reason for the bouncing around in the rankings, or is it just a competitive niche and I need to be patient? Kind regards, Neil
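For what it's worth, one way to track down where those not-found errors are linked from is a small crawl script; a rough sketch in Python, assuming the requests and beautifulsoup4 libraries, with a placeholder start URL:

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    start_url = "http://www.example.com/"  # placeholder; use the client's site

    # Fetch one page, resolve its internal links, and report any that 404.
    html = requests.get(start_url).text
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        url = urljoin(start_url, a["href"])
        if url.startswith(start_url):  # internal links only
            status = requests.head(url, allow_redirects=True).status_code
            if status == 404:
                print("broken:", url, "linked from", start_url)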
Technical SEO | optimiz1
-
Switching Site to a Domain Name that's in Use
I'm comfortable with the steps of moving a site to a new domain name as recommended by Google. However, in this case, the domain name I'm asked to move to is not really "new", meaning it's currently hosting a website and has been for a long time. So my question is: do I do this in steps and take the old website down first, in order to "free up" the domain name in the eyes of search engines and avoid large numbers of 404s, and then (in step 2) switch to the "new" domain in a few months? Thanks.
Technical SEO | R2iSEO
-
It's impossible to keep track of rankings?
Hello, here's something interesting. I'm using Rank Tracker from SEOmoz and link-assistant's Rank Tracker as well. I need to track google.com and google.co.ve (Venezuela), so I did. I got my keyword, and here are my results:

1. Keyword A at google.com (United States):
Rank Tracker SEOMOZ = pos 6
Rank Tracker OTHER = pos 6
Manual query on google.com = pos 9 (I used the exact URL SEOmoz tells me it's using)

2. Keyword A at google.co.ve:
Rank Tracker SEOMOZ = pos 8
Rank Tracker OTHER = pos 7
Manual query on google.co.ve = pos 8

So... why is that? So far I think that google.com for me down here (it actually says "Español") is a different index, maybe for Latin America, or only Spanish pages? Or maybe it's because there's a couple of minutes between looking with one tool and the other... Any help would be great. Dan
Technical SEO | daniel.alvarez
-
URLs for news content
We have made modifications to the URL structure for a particular client who publishes news articles in various niche industries. In line with SEO best practice, we removed the article ID from the URL. An example is below:

http://www.website.com/news/123/news-article-title
http://www.website.com/news/read/news-article-title

Since this was done, we have noticed a decline in traffic volumes (we have not yet assessed the impact on the number of pages indexed). Google has suggested that we need to include unique numerical IDs in the URL somewhere to aid spidering. Firstly, is this the policy for news submissions? Secondly (if the previous answer is yes), is this to overcome the obvious issue with the velocity and trend-based nature of news submissions resulting in false duplicate URL/title tag violations? Thirdly, do you have any advice on the way to go? Thanks.

P.S. One final one (you can count this as two question credits if required): is it possible to check the volume of pages indexed at various points in the past? I.e., if you think that the number of pages being indexed may have declined, is there any way of confirming this after the event? Thanks again! Neil
Technical SEO | mccormackmorrison
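As a footnote to the numerical-ID question: my understanding is that Google News has historically wanted a unique number of at least three digits somewhere in article URLs, though treat that as an assumption to verify. A quick way to audit URLs against that rule, using the two example structures above:

    import re

    # Assumption: a news URL should contain a unique number of >= 3 digits.
    has_id = re.compile(r"\d{3,}")

    urls = [
        "http://www.website.com/news/123/news-article-title",   # old structure
        "http://www.website.com/news/read/news-article-title",  # new structure
    ]
    for url in urls:
        print(url, "->", "numeric ID found" if has_id.search(url) else "no numeric ID")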