Why are old URL formats still being crawled by Rogerbot?
-
Hi,
In the early days of my blog, I used permalinks with the following format:
http://www.mysitesamp.com/2009/02/04/heidi-cortez-photo-shoot/
I then decided to change this format using .htaccess to this format:
http://www.mysitesamp.com/heidi-cortez-photo-shoot/
My question is: why does Rogerbot still crawl my old URL format, since these URLs no longer exist on my website or blog?
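For reference, a format change like the one described can be sketched in .htaccess (a hypothetical rule, assuming Apache's mod_rewrite and that every old permalink begins with a year/month/day path):

```apache
# Hypothetical sketch: 301-redirect date-based permalinks
# (/2009/02/04/post-slug/) to the slug-only format (/post-slug/).
RewriteEngine On
RewriteRule ^([0-9]{4})/([0-9]{2})/([0-9]{2})/(.+)$ /$4 [R=301,L]
```

With a 301 in place, crawlers that still discover the old URLs are at least sent to the live pages rather than hitting dead ends.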
-
Thanks Alan,
That solved my problem...
-
Hi Alan,
After disallowing the directory in robots.txt, Rogerbot still includes the non-existent URLs. Here is a sample URL being reported by Rogerbot:
www.lugaluda.com/2009/08/05/chase-online-banking-chase-checking-bonus/
-
If you give me the URL, I can crawl it for you if you like.
-
Thanks Alan, I really appreciate your help. You gave me an idea: since all the old URLs come from a virtual 2009 directory, I tried adding a disallow statement for that directory in robots.txt. Hopefully this will help solve the problem.
I will let you know the results after Rogerbot finishes recrawling my site...
Thanks Dude....
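For reference, a disallow of that kind can be sketched in robots.txt (a hypothetical fragment, assuming the old permalinks all sit under the virtual /2009/ directory):

```
User-agent: rogerbot
Disallow: /2009/
```

Note that a disallow only stops the bot from fetching those paths; it doesn't remove links to them from the pages where the crawler found them.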
-
You need to search your site. Bots start on a page and follow the links, so if the report shows them, then the bot must have found them. Bots like Googlebot or Bingbot can find them on other sites, but Rogerbot only crawls within your site.
-
How will I know if they still exist on my site? When I try to access the specific URLs, they are no longer active.
-
The old format must still exist somewhere on your site; bots follow links from your home page through your site.
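The crawl behaviour described above can be sketched as a minimal breadth-first walk. This is a hypothetical, self-contained example with an in-memory site; the URLs and link graph are assumptions for illustration. It shows how an old-format URL linked from a forgotten archive page is still discovered even though nothing links to it from the home page directly:

```python
from collections import deque

# Hypothetical in-memory site: each URL maps to the links found on that page.
# The old date-based URL is still linked from an archive page, so a crawler
# starting at the home page will still discover it.
SITE = {
    "/": ["/heidi-cortez-photo-shoot/", "/archive/"],
    "/heidi-cortez-photo-shoot/": [],
    "/archive/": ["/2009/02/04/heidi-cortez-photo-shoot/"],
    "/2009/02/04/heidi-cortez-photo-shoot/": [],
}

def crawl(site, start="/"):
    """Breadth-first crawl: start at the home page and follow every link."""
    seen = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for link in site.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

discovered = crawl(SITE)
```

Searching the site's templates, archives, and sitemaps for the old link format is the practical equivalent of this walk.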
Related Questions
-
Crawl Stats Decline After Site Launch (Pages Crawled Per Day, KB Downloaded Per Day)
Hi all, I have been looking into this for about a month and haven't been able to figure out what is going on. We recently did a website re-design and moved from a separate mobile site to responsive. After the launch, I immediately noticed a decline in pages crawled per day and KB downloaded per day in the crawl stats. I expected the opposite, as I figured Google would crawl more pages for a while to figure out the new site. There was also an increase in time spent downloading a page; that has since gone back down, but the pages crawled per day never recovered.

Some notes about the re-design:
- URLs did not change
- Mobile URLs were redirected
- Images were moved from a subdomain (images.sitename.com) to Amazon S3
- There was an immediate decline in both organic and paid traffic (roughly 20-30% for each channel)

I have not been able to find any glaring issues in Search Console: indexation looks good, and there is no spike in 404s or mobile usability issues. Just wondering if anyone has an idea or insight into what caused the drop in pages crawled? Here is the robots.txt, and I'm attaching a photo of the crawl stats.

```
User-agent: ShopWiki
Disallow: /

User-agent: deepcrawl
Disallow: /

User-agent: Speedy
Disallow: /

User-agent: SLI_Systems_Indexer
Disallow: /

User-agent: Yandex
Disallow: /

User-agent: MJ12bot
Disallow: /

User-agent: BrightEdge Crawler/1.0 (crawler@brightedge.com)
Disallow: /

User-agent: *
Crawl-delay: 5
Disallow: /cart/
Disallow: /compare/
```

[Crawl stats screenshot](https://ibb.co/fSAOL0)
Intermediate & Advanced SEO | BandG
-
International Country URL Structure
Hey Guys, We have a www.site.com (gTLD) site; the primary market is Australia. We want to expand to the US and UK. For the homepage, we are looking to create 3 new subfolders:
- site.com/au/
- site.com/uk/
- site.com/us/

Then if someone visits site.com, redirect based on their IP address to the correct location. We are also looking to set up hreflang tags between the 3 sub-folders and set geo-location targeting in Google Search Console at the sub-folder level. Just wondering if this setup sounds OK for international SEO? Cheers.
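A setup like that is usually annotated with hreflang link elements in each page's head. A hedged sketch, assuming English content in all three regions and using the hypothetical site.com domain from the question:

```html
<!-- Hypothetical hreflang annotations across the three subfolders.
     Each page should list all alternates, including itself. -->
<link rel="alternate" hreflang="en-au" href="https://site.com/au/" />
<link rel="alternate" hreflang="en-gb" href="https://site.com/uk/" />
<link rel="alternate" hreflang="en-us" href="https://site.com/us/" />
<link rel="alternate" hreflang="x-default" href="https://site.com/" />
```

The `x-default` entry marks the page served to visitors who don't match any listed locale.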
Intermediate & Advanced SEO | pladcarl9
-
410 or 301 after URL update?
Hi there, A site I'm working on at the moment has a thousand "not found" errors in Google Search Console (and of course, I'm sure there are thousands more it's not showing us!). A lot of them seem to come from a URL change. The damage has been done, the URLs have changed and I can't stop that... but as you can imagine, I'm keen to fix as many as humanly possible. I don't want to go mad with 301s, but for external links in, this seems like the best solution? On the other hand, Google is reading internal links that simply aren't there anymore. Is it better to hunt down the new page and 301 it anyway? Or should I 410 and grit my teeth while Google crawls and recrawls it, warning me that the page really doesn't exist? Essentially I'm asking: how many 301s are too many and will affect our DA? And what's the best solution for dealing with mass 404 errors, many of which aren't linked from any other pages anymore? Thanks for any insights 🙂
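For reference, both responses can be configured in .htaccess; a hedged sketch, assuming Apache and hypothetical paths:

```apache
# Hypothetical sketch: 301 an old URL that has a replacement,
# and return 410 Gone for one that has no replacement.
Redirect 301 /old-page/ /new-page/
Redirect gone /retired-page/
```

The 301 preserves inbound link value for URLs that moved; the 410 tells crawlers a page is intentionally gone, which generally gets it dropped faster than a plain 404.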
Intermediate & Advanced SEO | Fubra
-
How and When Should I use Canonical Url Tags?
Pretty new to the SEO universe, but I have not used any canonical tags, just because there is no definitive source explaining exactly when and why you should use them. Am I the only one who feels this way?
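For what it's worth, a canonical tag is just a link element in the page head that points at the preferred version of a page; it's used when the same content is reachable at more than one URL (tracking parameters, print versions, session IDs). A sketch with a hypothetical URL:

```html
<!-- On a duplicate or variant page, point search engines at the
     preferred URL. Hypothetical example. -->
<link rel="canonical" href="https://www.example.com/preferred-page/" />
```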
Intermediate & Advanced SEO | greenrushdaily
-
Attack of the dummy urls -- what to do?
It occurs to me that a malicious program could set up thousands of links to dummy pages on a website:
- www.mysite.com/dynamicpage/dummy123
- www.mysite.com/dynamicpage/dummy456
- etc.

How is this normally handled? Does a developer have to look at all the parameters to see if they are valid and, if not, automatically return a 301 redirect or a 404 Not Found? This requires a table lookup of acceptable URL parameters for all new visitors. I was thinking that bad URL names would be rare, so it would be OK to just stop the program with a message, until I realized someone could intentionally set up links to non-existent pages on a site.
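The table lookup described above can be sketched in a few lines. This is a hypothetical example; the slug names and the idea of keeping the valid set in memory are assumptions for illustration (a real site would typically query its database or routing table):

```python
# Hypothetical sketch of the lookup described above: validate a dynamic
# page slug against the set of known pages and choose a response status.
VALID_SLUGS = {"blue-widgets", "red-widgets"}  # assumed table of real pages

def status_for(slug: str) -> int:
    """Return 200 for a known page, 404 Not Found for a dummy/unknown slug."""
    return 200 if slug in VALID_SLUGS else 404
```

Returning a 404 (rather than halting with an error message) means dummy links cost a cheap set lookup and never get indexed.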
Intermediate & Advanced SEO | friendoffood
-
Which URL is better for SEO?
We have a URL structure question: Because we have websites in multiple countries and in multiple languages, we need to add additional elements to our URL structure. Of the two following options, what would be better for SEO?
- Option 1: www.abccompany.com/abc-ca-en/home.htm
- Option 2: www.abccompany.com/home.abc.ca.en.htm
Intermediate & Advanced SEO | northwoods-260342
-
URL Parameter & crawl stats
Hey Guys, I recently used the URL parameter tool in WBT to mark different URLs that offer the same content. I have the parameters "?source=site1", "?source=site2", etc. It looks like this: www.example.com/article/12?source=site1. The "source" parameters are feeds that we provide to partner sites, and this way we can track the referral site with our internal analytics platform. Although pages like www.example.com/article/12?source=site1 have a canonical to the original page www.example.com/article/12, Google indexed both URLs: www.example.com/article/12?source=site1 and www.example.com/article/12.

Last week I used the URL parameter tool to mark the "source" parameter as "No, this parameter doesn't affect page content (track usage)", and today I see a 40% decrease in my crawl stats. On one hand, it makes sense that Google is no longer crawling the repeated URLs with different sources; on the other hand, I thought that efficient crawlability would increase my crawl stats. In addition, Google is still indexing the same pages with different source parameters. I would like to know if someone has experienced something similar: by increasing crawl efficiency, should I expect my crawl stats to go up or down? I really appreciate all the help! Thanks!
Intermediate & Advanced SEO | Mr.bfz
-
Could this URL issue be affecting our rankings?
Hi everyone, I have been building links to a site for a while now and we're struggling to get page 1 results for their desired keywords. We're wondering if a web development / URL structure issue could be to blame for holding it back. The way the site's been built means that there's a 'false' 1st level in the URL structure. We're building deep links to the following page:

www.example.com/blue-widgets/blue-widget-overview

However, if you chop off the 2nd level, you're not given a category page; it's a 404:

www.example.com/blue-widgets/ - [brings up a 404]

I'm assuming the web developer built the site and URL structure this way just for the purposes of getting additional keywords in the URL. What's worse is that there is very little consistency across other products/services. Other pages/URLs include:

- www.example.com/green-widgets/widgets-in-green
- www.example.com/red-widgets/red-widget-intro-page
- www.example.com/yellow-widgets/yellow-widgets

I'm wondering if Google is aware of these 'false' pages and, if so, if we should advise the client to change the URLs and therefore the URL structure of the website. This is bearing in mind that these pages haven't been linked to (because they don't exist) and therefore aren't being indexed by Google. I'm just wondering if Google can determine good/bad URL etiquette based on other parts of the URL, i.e. the fact that the middle bit doesn't exist. As a matter of fact, my colleague Steve asked this question on a blog post that Dr. Pete had written. Here's a link to Steve's comment - there are 2 replies below, one of which argues that this has no implication whatsoever. However, 5 months on, it's still an issue for us, so it has me wondering... Many thanks!
Intermediate & Advanced SEO | Gmorgan