Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Google Indexing Request - Typical Time to Complete?
-
In Google Search Console, when you request the (re)indexing of a fetched page, what's the average amount of time it takes to re-index? Does it vary much from site to site, or are manual re-index requests put in a queue and served on a first-come, first-served basis regardless of site characteristics like domain/page authority?
-
I want to be clear that I'm not referring to a re-crawl, but a re-index. Now, I realize there are a gazillion ranking signals, and most of the stronger ones are probably not on-page signals (although page title, headers, and anchor text combined are probably a relatively strong signal), so in most situations on-page changes aren't likely to move you from the middle of page 2 into the top 3 (except for obscure, low-competition long-tail keywords, of course).
So is there a delay between re-crawl and re-rank (I'll use that term instead of re-index)? I also realize the rank can change based on changes on the other sites in the SERPs. I suppose the re-rank delay could be verified by taking a 'sacrificial' page, totally changing the title, headings, and other on-page items to a completely different keyword theme, and seeing how long it takes for the rank to go down for the previous keyword theme and up for the new one.
I would think Google would quite possibly add a delay, even a random delay length, to discourage people from constantly requesting re-indexing of a single page to see the rank change. Granted, the change, if any, would be small, since on-page signals, as I mentioned, are a sliver of the signal pie. So I would think most SEOs would be of the opinion that this trial-and-error is a waste of time?
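One way to pin down the crawl-to-index part of that lag, rather than inferring it from rank movements, is to poll the Search Console URL Inspection API and log when lastCrawlTime changes after hitting "Request Indexing". A minimal sketch, assuming an OAuth access token with the webmasters.readonly scope is already available and that SITE_URL matches the Search Console property exactly (token and error handling omitted):

```python
import time
import requests

ACCESS_TOKEN = "ya29.your-oauth-token"          # assumed: obtained elsewhere
SITE_URL = "sc-domain:example.com"              # assumed: a domain property
PAGE_URL = "https://www.example.com/some-page/" # the page you re-submitted

ENDPOINT = "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect"

def inspect(url):
    """Return the index status block Search Console reports for a URL."""
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"inspectionUrl": url, "siteUrl": SITE_URL},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["inspectionResult"]["indexStatusResult"]

# Poll hourly and note each time Google reports a fresh crawl of the page.
last_crawl_seen = None
while True:
    status = inspect(PAGE_URL)
    crawl_time = status.get("lastCrawlTime")
    if crawl_time != last_crawl_seen:
        print(time.strftime("%Y-%m-%d %H:%M"),
              "| coverage:", status.get("coverageState"),
              "| lastCrawlTime:", crawl_time)
        last_crawl_seen = crawl_time
    time.sleep(3600)
```

The gap between the new lastCrawlTime and the point where the refreshed title/snippet actually shows in the SERPs is roughly the re-crawl vs. re-rank delay being asked about here.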
-
Hi SEO1805
I agree with Casey. If you go into your Google Search Console account, do a Fetch and Render to check the new/revised page, and then request indexing, you will normally see the results updated within a few hours.
Do realize, though, that Google is of course not a single server, so one person may see the updates very soon while others may not see them right away, as the search index propagates across Google's servers.
Search Console will also tell you how many pages Google is crawling every day, so you have an idea of how often Google thinks you are updating your content.
Take care,
Herb
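If you want a picture of crawl frequency that doesn't depend on Search Console's report, counting Googlebot hits in your raw server logs works too. A rough sketch, assuming a combined-format access log at a path like /var/log/nginx/access.log (the path, and the simple user-agent check, are assumptions - a strict version would verify Googlebot via reverse DNS):

```python
import re
from collections import Counter
from datetime import datetime

LOG_PATH = "/var/log/nginx/access.log"  # assumed location/format; adjust for your server

# Pulls the "12/Dec/2024" portion out of the bracketed timestamp in each log line.
DATE_RE = re.compile(r"\[(\d{2}/\w{3}/\d{4}):")

hits_per_day = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        # Only count requests whose user-agent claims to be Googlebot.
        if "Googlebot" not in line:
            continue
        match = DATE_RE.search(line)
        if match:
            day = datetime.strptime(match.group(1), "%d/%b/%Y").date()
            hits_per_day[day] += 1

for day in sorted(hits_per_day):
    print(day, hits_per_day[day])
```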
-
Exactly what Logopedia y Más said!
I've just made some sitewide changes to a client site today (3-4 hours ago) and did a fetch straight after; some are already reflected in the SERPs while some aren't. So it really just depends. However, I find it typically doesn't take too long with the sites I've dealt with.
-
Hi Seo 1805.
Simply put, indexing and ranking higher in Google isn't an exact science. There's really no set timetable for how quickly your new page will be indexed by Google.
It isn't guaranteed that the URL will be crawled again, or that it will happen immediately; it usually takes several days for a request to be processed. Please also note that there's no guarantee Google will index all the changes made, since updating the indexed content depends on a complex algorithm.
Related Questions
-
Staging website got indexed by Google
Our staging website got indexed by Google, and now Moz is showing all inbound links from the staging site. How should I remove those links and make the staging site noindex? Note: we already added a meta NOINDEX tag in the head.
Intermediate & Advanced SEO | Asmi-Ta0
-
Google does not want to index my page
I have a site with hundreds of pages indexed on Google. But there is a page I put in the footer section that Google seems not to like, and it is not indexing that page. I've tried submitting it to the index through Google Webmaster Tools, and it will appear in Google's index, but then after a few days it's gone again. That page previously had a canonical meta tag pointing to another page, but it has been removed now.
Intermediate & Advanced SEO | odihost0
-
Will disallowing URLs in the robots.txt file stop those URLs being indexed by Google?
I found a lot of duplicate title tags showing in Google Webmaster Tools. When I visited the URLs that these duplicates belonged to, I found that they were just images from a gallery that we didn't particularly want Google to index; there is no benefit to the end user in these image pages being indexed in Google. Our developer has told us that these URLs are created by a module and are not "real" pages in the CMS. They would like to add the following to our robots.txt file: Disallow: /catalog/product/gallery/ QUESTION: If these pages are already indexed by Google, will this adjustment to the robots.txt file help to remove the pages from the index? We don't want these pages to be found.
Intermediate & Advanced SEO | andyheath0
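Worth noting, and easy to sanity-check locally: Disallow only blocks crawling, it does not remove URLs that are already in the index, and once a URL is disallowed Googlebot can no longer see a noindex tag on it. A quick sketch with Python's urllib.robotparser, using the rule from the question and a couple of made-up example URLs:

```python
from urllib.robotparser import RobotFileParser

# The rule proposed in the question; the test URLs below are made-up examples.
robots_txt = """\
User-agent: *
Disallow: /catalog/product/gallery/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

test_urls = [
    "https://www.example.com/catalog/product/gallery/image-123/",  # gallery page
    "https://www.example.com/catalog/product/some-product/",       # regular product page
]

for url in test_urls:
    crawlable = parser.can_fetch("Googlebot", url)
    print(url, "->", "crawlable" if crawlable else "blocked from crawling")

# Blocked-from-crawling is not the same as removed-from-the-index: an already
# indexed URL can remain in the index as a URL-only result, so a noindex that
# Google is still allowed to crawl, or the URL removal tool, is what gets it dropped.
```
-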
Mass Removal Request from Google Index
Hi, I am trying to cleanse a news website. When this website was first made, the people that set it up copied in all kinds of articles they had as a newspaper, including tests, internal communication, and drafts. This site has lots of junk, but that junk was all in the initial backup, i.e. before 1st June 2012, so by removing all mixed content prior to that date we can have pure articles starting 1st June 2012. Therefore, my dynamic sitemap now contains only articles with a release date between 1st June 2012 and now, and any article with a release date prior to 1st June 2012 returns a custom 404 page with a "noindex" meta tag instead of the actual content of the article.
The question is how I can remove from the Google index, as fast as possible, all this junk that is no longer on the site but still appears in Google results. I know that for individual URLs I can request removal here: https://www.google.com/webmasters/tools/removals The problem is doing this in bulk, as there are tens of thousands of URLs I want to remove. Should I put the articles back in the sitemap so the search engines crawl it and see all the 404s? I believe this is very wrong; as far as I know it will cause problems, because search engines will try to access non-existent content that the sitemap declares as existent and will report errors in Webmaster Tools. Should I submit a DELETED ITEMS sitemap using the <expires> tag? I think that is for custom search engines only, not the generic Google search engine: https://developers.google.com/custom-search/docs/indexing#on-demand-indexing
The site unfortunately doesn't use any kind of "folder" hierarchy in its URLs, just ugly GET params, and a folder-based pattern is impossible since all articles (removed junk and actual articles alike) are of the form http://www.example.com/docid=123456 So, how can I bulk remove all this junk from the Google index relatively fast?
Intermediate & Advanced SEO | ioannisa
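What actually gets junk like this dropped is Google recrawling each URL and seeing a 404/410 (or a crawlable noindex); the removals tool only hides URLs temporarily. Before waiting on that, it can help to confirm that every junk URL really does return the expected status. A rough sketch, assuming a plain text file of the junk URLs exported from the CMS (the file names are made up):

```python
import csv
import requests

URL_LIST = "junk_urls.txt"    # assumed: one junk URL per line
REPORT = "removal_check.csv"

session = requests.Session()
session.headers["User-Agent"] = "index-cleanup-check/0.1"

with open(URL_LIST, encoding="utf-8") as urls, open(REPORT, "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["url", "status_code", "x_robots_tag"])
    for line in urls:
        url = line.strip()
        if not url:
            continue
        # GET rather than HEAD, since some CMSs answer HEAD requests differently.
        resp = session.get(url, allow_redirects=False, timeout=15)
        writer.writerow([url, resp.status_code, resp.headers.get("X-Robots-Tag", "")])

# Anything in the report that isn't a 404/410 (or noindex) is a URL Google
# has no reason to drop, however often it recrawls it.
```
-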
Why are bit.ly links being indexed and ranked by Google?
I did a quick search for "site:bit.ly" and it returns more than 10 million results. Given that bit.ly links are 301 redirects, why are they being indexed in Google and ranked according to their destination? I'm working on a similar project to bit.ly and I want to make sure I don't run into the same problem.
Intermediate & Advanced SEO | JDatSB1
-
Google Indexing Feedburner Links???
I just noticed that for lots of the articles on my website, there are two results in Google's index. For instance: http://www.thewebhostinghero.com/articles/tools-for-creating-wordpress-plugins.html and http://www.thewebhostinghero.com/articles/tools-for-creating-wordpress-plugins.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+thewebhostinghero+(TheWebHostingHero.com) Now my Feedburner feed is set to "noindex" and it's always been that way. The canonical tag on the webpage is set to: <link rel='canonical' href='http://www.thewebhostinghero.com/articles/tools-for-creating-wordpress-plugins.html' /> The robots tag is set to: <meta name="robots" content="index,follow,noodp" /> I found out that there are scraper sites linking to my content using the Feedburner link. So should the robots tag be set to "noindex" when the requested URL is different from the canonical URL? If so, is there an easy way to do this in WordPress?
Intermediate & Advanced SEO | sbrault740
-
Should I use both Google and Bing's Webmaster Tools at the same time?
Hi all, up till now I've been registered only with Google WMT. Do you recommend using Bing's WMT at the same time? Thanks
Intermediate & Advanced SEO | BeytzNet0
-
What is the average response time for a reconsideration request?
I know that Google states 'several weeks', but I'm just wondering if anybody has any experience with a reconsideration request: did you get any type of reply, and what was your general experience? Thanks.
Intermediate & Advanced SEO | BelfastSEO0