Why are old URL formats still being crawled by Rogerbot?
-
Hi,
In the early days of my blog, I used permalinks with the following format:
http://www.mysitesamp.com/2009/02/04/heidi-cortez-photo-shoot/
I then used .htaccess to change it to this format:
http://www.mysitesamp.com//heidi-cortez-photo-shoot/
My question is: why does Rogerbot still crawl my old URL format when these URLs no longer exist on my website or blog?
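For context, a date-based-to-slug redirect like the one described above is usually handled with a mod_rewrite rule along these lines (a sketch only; it assumes mod_rewrite is enabled and that the old paths always begin with a year/month/day prefix):

```apache
# Redirect old date-based permalinks to the slug-only format, e.g.
# /2009/02/04/heidi-cortez-photo-shoot/ -> /heidi-cortez-photo-shoot/
RewriteEngine On
RewriteRule ^([0-9]{4})/([0-9]{2})/([0-9]{2})/(.+)$ /$4 [R=301,L]
```

A 301 (permanent) redirect is the usual choice here, since it tells crawlers the old URLs have moved for good rather than temporarily.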
-
Thanks Alan,
That solved my problem...
-
-
Hi Alan,
After disallowing the directory in robots.txt, Rogerbot still includes the nonexistent URLs. Here is a sample URL reported by Rogerbot:
www.lugaluda.com/2009/08/05/chase-online-banking-chase-checking-bonus/
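One way to sanity-check that the robots.txt rule actually covers the reported URL is Python's standard `urllib.robotparser`. The rule text below is an assumption based on the virtual 2009 directory mentioned in this thread, not the site's actual robots.txt:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rule for the virtual 2009 directory
# discussed in this thread (not the site's actual file).
rules = """User-agent: *
Disallow: /2009/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# The sample URL reported by Rogerbot; can_fetch() returns False
# when the rule blocks the path for that user agent.
url = "http://www.lugaluda.com/2009/08/05/chase-online-banking-chase-checking-bonus/"
blocked = not rp.can_fetch("rogerbot", url)
print(blocked)  # -> True
```

Keep in mind a crawler only sees a new rule the next time it fetches robots.txt, so URLs it already discovered may keep appearing in reports until the next full crawl.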
-
If you give me the URL, I can crawl it for you if you like.
-
Thanks Alan, I really appreciate your help. You gave me an idea: since all the old URLs come from a virtual 2009 directory, I tried adding a Disallow statement for that directory in robots.txt. Hopefully this will solve the problem.
I will let you know the results after Rogerbot finishes recrawling my site...
Thanks Dude....
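For reference, the Disallow statement described above would look something like this in robots.txt (a sketch; it assumes the date-based archive paths all start with /2009/ at the site root):

```
User-agent: *
Disallow: /2009/
```

Note that this only blocks crawling of matching paths; it does not redirect or remove the URLs themselves.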
-
You need to search your site. Bots start on a page and follow the links, so if the report lists those URLs, the crawler must have found them somewhere. Bots like Googlebot or Bingbot can also find them on other sites, but Rogerbot only crawls within your site.
-
How will I know if they still exist on my site? When I try to access the specific URLs, they are no longer active.
-
The old URL format must still exist somewhere on your site; bots follow links from your home page through your site.
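To search the site as suggested above, one option is to scan each page's HTML for anchors that still use the old date-based format. A minimal sketch with Python's standard library (the sample HTML here is made up for illustration; a real check would fetch and walk the live pages):

```python
import re
from html.parser import HTMLParser

# Matches date-based permalinks like /2009/02/04/post-slug/
DATE_PERMALINK = re.compile(r"/\d{4}/\d{2}/\d{2}/[^\"'\s]*")

class OldLinkFinder(HTMLParser):
    """Collects href values that still use the old date-based format."""
    def __init__(self):
        super().__init__()
        self.old_links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value and DATE_PERMALINK.search(value):
                self.old_links.append(value)

# Illustrative snippet: one old-format link, one new-format link.
html = (
    '<a href="/2009/08/05/chase-online-banking-chase-checking-bonus/">old</a>'
    '<a href="/heidi-cortez-photo-shoot/">new</a>'
)
finder = OldLinkFinder()
finder.feed(html)
print(finder.old_links)
# -> ['/2009/08/05/chase-online-banking-chase-checking-bonus/']
```

Running this over every page of the site would reveal which page still links to the old format and is feeding those URLs to the crawler.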