Why is Google indexing 2,500 pages one day, only 150 the next, then 2,000 again, etc.?
-
This is about a big affiliate website belonging to a customer of ours, running on datafeeds...
Bad things about datafeeds:
- Duplicate content (product descriptions)
- A very large number of (thin) product pages (sometimes better to noindex, I know, but this customer doesn't want to do that)
-
Hi Dana,
Thanks for your detailed explanation, I appreciate it. Of course I understand that site speed is a factor for crawling (and ranking) and that the Google bots only want to spend a certain amount of time on a website. It's more that, when the servers perform almost equally every day and page loads are equal too, what else could it be?
I agree with your two points to consider, but I'm the type of guy who always wants to know why something is happening.
@Nakul: Thanks for your response!
The pages that are in and out of the index are mostly product pages, so the point about frequent updates could be part of it. The website is pretty young, so its authority is not yet built up as it should be for a big site. That can also be a factor, since the more authority a site has, the more time Google will spend indexing it, right? Anyway, many thanks for both of your answers!
Gr. Wesley
-
I agree with everything Nakul has said. Just to piggyback on that with additional information, try to think about it this way. Remember when someone gave you $1.00 when you were little and said "Don't spend it all in one place?" Well, someone at Google must have grown up with the same grandparents I did.
Okay, now, the analogy-free explanation:
Google has a "crawl budget" every day, and every day that budget is allocated across millions of different sites. Now, by "sites" I mean "pages." Some pages change really frequently (e.g. the Yahoo News homepage). Some pages hardly ever change (e.g. an archived blog post). Some pages have very high PR and others, not so much. And some pages load extremely fast, consuming less of Google's bandwidth when they are crawled, which leaves more of Google's resources available to crawl more pages. Google likes that, and so should we all, because people with fast sites make it possible for everyone to get crawled more often (in essence, making them very considerate, well-behaved members of the Internet community).
So, based on all of this, Google is going to apportion part of its crawl budget to your site on any given day. Some days it may have more room in its budget for you than others. Part of this might be affected by how fast pages load from your site on any given day. A ton of parameters can come into play here, including whether the pages crawled that day are heavier, or whether your servers are performing faster on one day than another.
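To make the apportionment idea concrete, here is a toy sketch (emphatically not Google's actual algorithm; every signal name and number is made up for illustration) of how a fixed daily budget could be split across pages weighted by PR, change frequency, and load time:

```python
# Toy illustration only: split a fixed daily crawl budget across pages,
# favoring high-PR, frequently changing, fast-loading pages.
# The weighting formula and all figures are invented for this example.

def apportion_budget(pages, daily_budget):
    """Split daily_budget crawl slots across pages, weighted by signals."""
    def weight(p):
        # Higher PR and change frequency raise the weight; slow loads lower it.
        return p["pagerank"] * p["change_freq"] / p["load_secs"]
    total = sum(weight(p) for p in pages)
    return {p["url"]: daily_budget * weight(p) / total for p in pages}

pages = [
    {"url": "/news", "pagerank": 8, "change_freq": 10, "load_secs": 0.5},
    {"url": "/archive-post", "pagerank": 3, "change_freq": 0.1, "load_secs": 2.0},
]
shares = apportion_budget(pages, daily_budget=100)
# The fast, fresh, high-PR page ends up with almost the whole budget.
```

Small day-to-day swings in load time shift these weights, which is one way "2,500 pages one day, 150 the next" can fall out of a budget like this.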
I'd say the two things to be really concerned with after considering all of these things are:
- Is Google indexing all of the pages you want indexed?
- Is Google's cache date for your important pages recent enough (e.g. 3 weeks or less)?
If the answer is "no" to either of those, then it's time to do some investigation to find out whether there are technical issues or penalties in place that are hurting Google's ability or desire (not the right word for a bot, but I'm using it anyway) to crawl your pages.
Does that help?
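One practical way to act on those two checks is to log the indexed-page count every day, so a swing like 2,500 → 150 → 2,000 shows up as a trend instead of an anecdote. A minimal sketch, assuming you pull the daily count yourself from Webmaster Tools (the dates and counts below are hard-coded sample data):

```python
# Sketch: keep a daily (date, indexed-page count) log in a CSV file.
# Where the count comes from (e.g. Webmaster Tools) is up to you;
# the values below are sample data, not real measurements.
import csv
import datetime
import tempfile

def log_indexed_count(path, count, day=None):
    """Append one (date, indexed-page count) row to the CSV log."""
    day = day or datetime.date.today().isoformat()
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([day, count])

def read_counts(path):
    """Read the log back as (date, count) tuples."""
    with open(path) as f:
        return [(d, int(c)) for d, c in csv.reader(f)]

log_path = tempfile.NamedTemporaryFile(suffix=".csv", delete=False).name
for day, count in [("2012-01-01", 2500), ("2012-01-02", 150), ("2012-01-03", 2000)]:
    log_indexed_count(log_path, count, day)
history = read_counts(log_path)
```

With a few weeks of rows you can see whether the fluctuation is noise around a stable level or a genuine downward trend worth investigating.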
-
Domain Authority / PageRank is what Google looks at to decide how deeply and how frequently it will crawl a particular website. It also typically looks at how frequently the content is being updated.
Think about it from Google's perspective: why should it index 2,500 pages of that website every day? What's changing? Does the site have enough domain authority to warrant that kind of indexing?
In my opinion, this is not a concern. Just submit XML sitemaps and see what percentage of your submitted pages gets indexed.
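The "percentage of submitted pages indexed" check starts with knowing how many URLs your sitemap actually submits. A rough sketch with the Python standard library, using an inline two-URL sitemap as example input (the `indexed` figure is a stand-in for the number you'd read out of Webmaster Tools):

```python
# Count the URLs submitted in an XML sitemap, then compute what share of
# them Google reports as indexed. The sitemap below is a tiny inline
# example; "indexed" is a placeholder for the Webmaster Tools figure.
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    """Return every <loc> URL listed in a sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.iter(SITEMAP_NS + "loc")]

sample = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://example.com/product-1</loc></url>
  <url><loc>http://example.com/product-2</loc></url>
</urlset>"""

submitted = sitemap_urls(sample)
indexed = 1  # stand-in for the indexed count reported by Google
pct = 100 * indexed / len(submitted)
```

If that percentage stays healthy even while the raw daily count bounces around, the fluctuation is probably nothing to worry about.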