Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not removing the content entirely (many posts will remain viewable), we have locked both new posts and new replies. More details here.
How to Diagnose "Crawled - Currently Not Indexed" in Google Search Console
-
The new Google Search Console gives a ton of information about which pages were excluded and why, but one status I'm struggling with is "Crawled - currently not indexed". Some of my clients have fallen into this pit, and I've identified one reason it's occurring for a few of them (they have multiple websites covering the same information, as local businesses often do), but for the others I'm completely flummoxed.
Does anyone have any experience figuring this one out?
-
@intellect did you find a solution to that?
-
-
@dalerio-consulting What can we do with the Excluded section, then? Say a page of my website is listed under a duplicate/canonical status in the Excluded section. Should I leave it alone if it's not serious, or should I request indexing? How serious are these excluded-page issues?
-
Hey Brett!
Basically, what we believe this status means is Google saying "I can crawl and access the URL, but I don't believe this page belongs in the index". The key is to figure out why Google might not believe the page should be considered for indexation. We analyzed a good number of Index Coverage reports across all of our different clients.
Here are the most common reasons URLs get reported as "Crawled - currently not indexed":
- False positives
- RSS Feed URLs
- Paginated URLs
- Expired products
- 301 redirects
- Thin content
- Duplicate content
- Private-facing content
You can find a breakdown of each reason on the post we wrote here: https://moz.com/blog/crawled-currently-not-indexed-coverage-status
However, there are likely many more reasons why Google doesn't think a page is eligible for indexation.
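Before digging into the subtler reasons, it can help to rule out the mechanical ones. The sketch below is a rough, stdlib-only Python helper (the `audit_page` function and the sample markup are hypothetical, not from the post) that checks a fetched page's response headers and HTML for a noindex directive and a canonical tag, two of the causes that show up in reports like this:

```python
import re

def audit_page(html: str, headers: dict) -> dict:
    """Flag mechanical reasons a crawled page may be skipped for indexing."""
    findings = {}
    # 1. A noindex directive sent in the X-Robots-Tag response header.
    findings["noindex_header"] = "noindex" in headers.get("X-Robots-Tag", "").lower()
    # 2. A noindex directive in a <meta name="robots"> tag
    #    (naive regex: assumes name= appears before content=).
    meta = re.search(r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
                    html, re.IGNORECASE)
    findings["noindex_meta"] = bool(meta and "noindex" in meta.group(1).lower())
    # 3. The canonical URL, if any, so pages canonicalised elsewhere stand out.
    canon = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']*)["\']',
                    html, re.IGNORECASE)
    findings["canonical"] = canon.group(1) if canon else None
    return findings

# Sample markup standing in for a fetched page.
page = ('<head><meta name="robots" content="noindex,follow">'
        '<link rel="canonical" href="https://example.com/a"></head>')
result = audit_page(page, {"X-Robots-Tag": "all"})
print(result)  # noindex_meta comes back True here, so this page cannot be indexed
```

This is only a first-pass filter; for anything it doesn't catch, the URL Inspection tool in Search Console gives Google's own view of the page.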
-
"Crawled - currently not indexed" is the most common reason pages or posts on your site end up unindexed. It is also the most difficult one to pinpoint because it happens for a multitude of reasons.
Crawling and analyzing each website costs Google computing power, so Google assigns a certain crawl budget to each site, and that crawl budget determines how many pages of your site will be indexed. Google will always index your top pages, so the excluded pages tend to be the ones it judges to be of lower quality.
Every website has pages that are not indexed, and the healthy ratio of non-indexed pages will depend on the niche of the website.
There are however 2 ways for you to get your pages out of the "Crawled - Currently not indexed" pit:
- Decrease the number of pages/posts. It's a matter of quality over quantity, so put more effort into internally linking every new post so that it gets indexed quickly. Don't forget to use robots.txt to block pages that aren't useful to the site, so that the crawl budget can be spent on the posts that matter.
- Increase the crawl budget. You can do that by raising the quality of the pages/posts. Build more internal links and external backlinks to your posts and homepage, make sure the articles are unique and keyword-optimized, and aim for each article to rank on the first page.
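For the robots.txt suggestion in the first point, a minimal file might look like this (the blocked paths are placeholders; note that robots.txt stops URLs from being crawled, which is what saves crawl budget, rather than removing them from the index directly):

```
User-agent: *
Disallow: /tag/
Disallow: /internal-search/

Sitemap: https://www.example.com/sitemap.xml
```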
SEO is a tough business, but if managed carefully, over time it will pay off.
Daniel Rika - Dalerio Consulting
https://dalerioconsulting.com
info@dalerioconsulting.com -
Crawled - currently not indexed list includes sitemap and robots.txt
We have searched and tried to understand this issue, but we haven't reached a conclusion.
If anyone has fixed this issue, please share your suggestions as soon as possible.
-
Hi There,
Google struggles to filter out spam pages and content and to order them structurally; this is an inherent problem, especially with badly structured e-commerce websites.
You might be aware that "Crawled - currently not indexed" means that your pages have been found by Google but are not currently indexed. This is not necessarily an error; your pages may simply be in a queue. That might be due to the following reasons:
- There are a lot of pages to index, so it's going to take Google some time to get through them and mark them as either indexed or not.
- There might be duplicate pages or canonical issues on the site. If Google is seeing a lot of duplicate pages without canonical tags, you can improve the number of pages indexed by either differentiating the pages so they are no longer duplicates, or by adding canonical tags to help Google attribute each one to the correct page.
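For the canonical-tag fix in the second point, the tag goes in the head of the duplicate or variant page and points at the preferred URL (example.com and the path are placeholders):

```html
<!-- On the duplicate/variant page, inside <head> -->
<link rel="canonical" href="https://www.example.com/preferred-page/" />
```

Google treats this as a hint rather than a directive, so it works best when the duplicate pages also link to the preferred URL internally.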
You need to justify each and every page on its merits, and then let Google decide whether it should be available in search, and against which keywords at what rank. To summarise, just help Google Search by structuring your data right, and it may reward you by ranking your pages in the right places for the right keywords.
Thanks and Regards,
Vijay
-
Search Console > Status > Index Coverage > Crawled - currently not indexed
Yes, I had the same issue last month; in my case it took the crawler six weeks to update the Index Coverage report. And apparently, there is not much you can do about it.
Regards