Help, a certain directory is not being indexed
-
Before I start, don't expect this to be too easy. This one really has me puzzled, and I'm surprised I'm still yet to find a solution for it. Get ready.
We have a WordPress website, launched over six months ago, and we have never had an issue getting content such as pages, posts, and categories indexed. However, somewhat recently (about two months ago) I installed a directory plugin (Business Directory Plugin) which lists businesses via unique URLs accessible from a subfolder. It's these business listings that I absolutely cannot get indexed.
The index page of the directory, which links to the business pages, is indexed, but for some reason Google is not indexing the listing pages that are linked from it. I don't think it's an issue of the content being uncrawlable: when I run crawlers such as XML sitemap generators against the site, they find all the pages, including the directory pages, so I'm confident it's not a case of the search engines being unable to find the content.
I have created XML sitemaps and uploaded them to Webmaster Tools. Tools recognises that there are many pages in the XML sitemap, but Google continues to index only a small percentage (everything but my business listings).
The directory has been live for about eight weeks now, so I know there is an issue; it should have been indexed by now.
See our main website at www.smashrepairbid.com.au and the business directory index page at www.smashrepairbid.com.au/our-shops/
To throw in a curve ball: while looking into this issue and setting up Webmaster Tools, we noticed a lot of 404 error pages (nearly 4,000). We were very confused about where these were coming from, as they were only being generated by search engines; humans could not reach the 404s, so we are guessing the crawlers were firing some JavaScript code, or something else weird. We could see the 404s in the logs, so we know they were legit, but again it seemed to be only search engines. This was validated when we added some rules to robots.txt and saw the errors in the logs stop. We put the rules in the robots.txt file to try to stop Google from crawling the 404 pages, as we could not find any way to fix the site/code (no idea what is causing them). If you do a site: search in Google you will see all the pages that are omitted from the results.
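For reference, the rules we added were along these lines (a sketch only; `/shops/` is the old directory that was generating the 404s, as mentioned further down the thread, and your paths may differ):

```
# robots.txt -- stop crawlers from requesting the old directory
# that was generating thousands of 404s
User-agent: *
Disallow: /shops/
```

Note that Disallow only stops compliant crawlers from requesting those URLs; it doesn't remove already-indexed pages from the index.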
Since adding the rules to robots.txt, our impressions shown through Webmaster Tools have jumped right up (increased five-fold), so we thought this was a good sign of improvement, but we're still not getting the results we want.
Does anyone have any clue what's going on, or why Google and the other search engines are not indexing this content? Any help would be greatly appreciated, and if you need any other information to assist, just ask.
Really appreciate anyone who can spare their time to help me; I sure do need it.
Thanks.
-
OK, issue resolved!
Lynn, thank you! It was the relative URL in the canonical tag that was playing havoc; changing it to absolute has the pages being indexed now.
Lesson learnt.
-
Hey Kane,
The /shops URL was an old URL that had a directory in it. We blocked it in robots.txt as it was generating tons of 404 errors. In Webmaster Tools we can see thousands of 404 errors within that directory, so we deleted it all and tried to block search engines from triggering the errors (as described in my initial post).
A number of those listings do have very little information, but there are a bunch that have great content, which is why I'm not sure that's the cause. I'll keep an eye on this though, and also check the logs and let you know what they say.
-
Thanks Lynn.
I have taken your recommendation on board and changed the canonical tag to be absolute. Thanks for your help; we'll see how it goes.
-
As Lynn said, relative canonical tags could absolutely cause issues. That said, I'm seeing absolute URLs in the canonical tag now, so you may have fixed that in the past few days.
Also, I do see the Our Shops index page when I search for site:smashrepairbid.com.au, but I don't see any other pages in the /our-shops/ directory aside from www.smashrepairbid.com.au/our-shops/?action=search.
Your robots.txt is currently blocking /shops/. I don't think that would cause an issue, but it would be nice to remove that rule if it's not needed.
There's almost zero content on the pages I glanced at, e.g. http://www.smashrepairbid.com.au/our-shops/1263/bakker-towing/ and http://www.smashrepairbid.com.au/our-shops/1616/coastal-towing-service/. From Google's perspective, there's very little value added by these pages: no unique photos, no phone number, no website, etc. There are a million local business scrapers with more content than this, so why should Google bother indexing these pages?
Try pulling up your logs and seeing if these URLs have been requested by Google's spiders. Here's a good guide from Ian Lurie on how to do that in Excel: http://www.portent.com/blog/analytics/how-to-read-a-web-site-log-file.htm
If the spiders are crawling those shop URLs but aren't indexing them, I think the first thing to do is add way more content to the pages.
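If Excel isn't handy, the same check can be done with a short script over the raw access log. This is a sketch that assumes the common Apache "combined" log format; the sample lines and the `/our-shops/` prefix are just illustrations, so adjust the patterns for your server:

```python
import re

# Matches the request path and the user-agent field in an
# Apache "combined" format log line.
LOG_RE = re.compile(
    r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[\d.]+" \d+ \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def googlebot_hits(log_lines, path_prefix="/our-shops/"):
    """Return the paths under path_prefix that Googlebot requested."""
    hits = []
    for line in log_lines:
        m = LOG_RE.search(line)
        if m and "Googlebot" in m.group("agent") and m.group("path").startswith(path_prefix):
            hits.append(m.group("path"))
    return hits

# Two made-up example lines: one Googlebot request, one human visitor.
sample = [
    '1.2.3.4 - - [10/May/2011:10:00:00 +1000] "GET /our-shops/1263/bakker-towing/ HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '5.6.7.8 - - [10/May/2011:10:00:05 +1000] "GET /our-shops/ HTTP/1.1" 200 8000 "-" "Mozilla/5.0 (Windows NT 6.1) Firefox/4.0"',
]
print(googlebot_hits(sample))  # only the Googlebot request is reported
```

If the shop URLs never show up in the output, Googlebot isn't even requesting them, which points to a crawling problem rather than an indexing one.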
-
Hi Trent,
Having a quick look, I saw that you have relative URLs in your canonical tags, and this could be problematic. I think it would be worth making those URLs absolute to avoid any confusion on Google's part in determining which page, or which version of a page, should be indexed.
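To illustrate the difference (using one of your listing URLs as an example):

```html
<!-- Relative canonical: resolved against the current page's base URL,
     which can point Google at the wrong page or protocol/host version -->
<link rel="canonical" href="/our-shops/1263/bakker-towing/" />

<!-- Absolute canonical: unambiguous -->
<link rel="canonical" href="http://www.smashrepairbid.com.au/our-shops/1263/bakker-towing/" />
```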
I can't say for sure whether this is the problem, but it's worth looking into.
Hope that helps!