Sudden Increase In Number of Pages Indexed By Google Webmaster When No New Pages Added
-
Greetings MOZ Community:
On June 14th Google Webmaster Tools indicated an increase in the number of indexed pages, going from 676 to 851. No new pages had been added to the domain in the previous month. The number of pages blocked by robots.txt increased in the same period from 332 (June 1st) to 551 (June 22nd), yet the number of indexed pages still increased to 851.
The following changes occurred between June 5th and June 15th:
-A redesigned version of the site was launched on June 4th, with some links to social media and the blog removed on some pages, but with no new URLs added. The platform was and is WordPress.
-Google Tag Manager (GTM) code was added to the site.
-An exception was made by our hosting company to ModSecurity on our server (for iframes) to allow GTM to function.
In the last ten days my web traffic has declined about 15%; more importantly, the quality of traffic has declined enormously and the number of new inquiries we get is off by around 65%. Pages per visit have declined from about 2.55 to about 2.
Obviously this is not a good situation.
My SEO provider, a reputable firm endorsed by MOZ, believes the extra 175 pages indexed by Google, pages that do not offer much content, may be causing the ranking decline.
My developer is examining the issue. They think there may be some tie-in with the installation of GTM. They are also noticing an additional issue: the site's Contact Us form will not work if the GTM script is enabled. They find it curious that both issues occurred around the same time.
Our domain is www.nyc-officespace-leader.com. Does anyone have any idea why these extra pages are appearing and how they can be removed? Does anyone have experience with GTM causing issues like this?
Thanks everyone!!!
Alan -
Yes, and I appreciate it!
Alan -
I did what I asked you to do - in my first post and repeated frequently.
-
Hi Egol:
How did you locate this duplicate or re-published content?
Obviously what you have pointed out is a major source of concern, so I ran a Copyscape search this afternoon for duplicate content and did not locate any of the URLs you mention in the "this" and "this" links above. It appears you entered the URL of the blog post in Google's search bar. Would that work? This method would be pretty slow going with 600 URLs.
Thanks,
Alan -
Those are the 448 URLs from your website that have been filtered.
You should find garbage in them like that shown below.
Have you done what I have suggested three times above? Do that if you want to identify the problem pages.
-
www.nyc-officespace-leader.com/wp-content/plugins/...
A description for this result is not available because of this site's robots.txt – learn more.
-
www.nyc-officespace-leader.com/wp-content/plugins/...
A description for this result is not available because of this site's robots.txt – learn more.
-
www.nyc-officespace-leader.com/wp-content/plugins/...
A description for this result is not available because of this site's robots.txt – learn more.
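Side note: robots.txt only blocks crawling, not indexing, which is why those plugin URLs can sit in the index with no description. A quick sanity check with Python's standard robots.txt parser (the Disallow rule here is an assumption about what the site's robots.txt actually contains):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rule; the site's real robots.txt may differ.
rules = [
    "User-agent: *",
    "Disallow: /wp-content/plugins/",
]

rp = RobotFileParser()
rp.parse(rules)

blocked = "http://www.nyc-officespace-leader.com/wp-content/plugins/some-plugin/page.php"
allowed = "http://www.nyc-officespace-leader.com/blog/"

# Disallow stops Googlebot from *crawling* these URLs, which is why the
# SERP shows "A description ... is not available because of this site's
# robots.txt" -- the URL itself can still be indexed via inbound links.
print(rp.can_fetch("*", blocked))
print(rp.can_fetch("*", allowed))
```

To actually drop such URLs from the index, they need to be crawlable and carry a noindex directive, not just a Disallow rule.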
-
-
Hi Egol:
Thanks for the suggestion.
When I click on "repeat the search with the omitted results included" I get 448 results, not the entire 859. Seems very strange. Some of these URLs have light content but I don't believe they are dups. I don't see any content outside our website when I click this.
Am I doing something wrong? I would think the total of 859 would appear, not 448 URLs.
Thanks!!
Alan -
I don't know. You should ask someone who knows a lot about canonicalization.
Did you drill down through all of those indexed pages to see if you can identify all of them?
I've suggested it twice.
-
Hi Egol:
In the context of launching an upgraded site, could the canonicalization have been implemented incorrectly? That could account for the sudden appearance of 175 new pages, as the thin content has been there for some time.
I am particularly suspicious of canonicalization because there was an issue involving multi-page URLs of property listings when the site was migrated from Drupal to WordPress last summer.
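One way to spot-check this is to pull a few of the suspect pages and look at what their canonical tags actually point to. A minimal sketch using only the standard library (the sample HTML and URL are made up for illustration):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects href values of <link rel="canonical"> tags."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonicals.append(a.get("href"))

# Hypothetical paginated listing page: page 2 of a listing should
# usually canonicalize sensibly (or be noindexed), never point at an
# unrelated URL.
sample = '''<html><head>
<link rel="canonical" href="http://www.nyc-officespace-leader.com/listings/page/2/">
</head><body>...</body></html>'''

finder = CanonicalFinder()
finder.feed(sample)
print(finder.canonicals)
```

Pages with a missing canonical, or one pointing at the wrong URL, would be the first candidates to explain extra indexed pages after a relaunch.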
Thoughts?
Thanks, Alan
-
Apparently infitter24.rssing.com/chan-13023009/all is poaching my content, taking my original content and adding it to their site. I am not quite sure what to do about that.
You can have an attorney demand that they stop, or you can file DMCA complaints. Be careful.
**However, it does not explain the sudden appearance of the 175 pages in Google's index.**
-
Do this query: site:www.nyc-officespace-leader.com
-
Start drilling down the SERPs. One page at a time. Look for content that you didn't make. Look for duplicates.
-
Get a spreadsheet that has all of your URLs. Drill down through the SERPs checking every one of them. Can you account for your pagination? You have a lot of it, and that type of page is usually rubbish in the index. Combine, canonicalize, or get rid of those pages.
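Once both lists are exported, the comparison itself is a simple set difference. A minimal sketch (the URLs below are placeholders, not the site's real lists):

```python
# URLs you expect to be indexed (e.g. exported from your sitemap) and
# URLs actually appearing in the site: query results -- both
# hypothetical here; swap in the real exports.
expected = {
    "http://www.nyc-officespace-leader.com/",
    "http://www.nyc-officespace-leader.com/listings/",
}
indexed = {
    "http://www.nyc-officespace-leader.com/",
    "http://www.nyc-officespace-leader.com/listings/",
    "http://www.nyc-officespace-leader.com/wp-content/plugins/x/readme.txt",
}

# Anything indexed but not expected is a candidate for noindex/removal.
unexpected = sorted(indexed - expected)
for url in unexpected:
    print(url)
```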
-
-
Hi Egol:
Thanks so much for taking the time for your thorough response!!
Apparently infitter24.rssing.com/chan-13023009/all is poaching my content, taking my original content and adding it to their site. I am not quite sure what to do about that.
You have pointed out something very useful; I appreciate it and will act upon it. However, it does not explain the sudden appearance of the 175 pages in Google's index that did not appear at the end of May and somehow coincided with the upload of the new version of our website in early June. Any ideas???
Thanks,
Alan -
-
Do this query: site:www.nyc-officespace-leader.com
-
Start drilling down the SERPs. One page at a time. Look for content that you didn't make. Look for duplicates.
-
When you drill down about 44 pages you will find this...
In order to show you the most relevant results, we have omitted some entries very similar to the 440 already displayed.
If you like, you can repeat the search with the omitted results included. The bad stuff is usually behind that link. Google doesn't want to show that stuff to people. It could be thin, it could be duplicate, it could be spammy; they just might not like it.
- Find out what is in there.
Possible problems that I see....
I see dupe content like this and this. Either your guys are grabbin' somebody else's content or they are grabbin' yours. It can get you in trouble with Panda. You need original and unique. Anything that is not original and unique should be deleted, noindexed, or rewritten.
A lot of these pages are really skimpy. Thin content can get you into trouble with Panda. Anything that is skimpy should be deleted, noindexed, or beefed up.
I see multiple links to tags on lots of these posts. That can cause duplicate content problems.
The tag pages are paginated with just a few pages on each. These can generate extra pages that are low value, suck up your linkjuice or compound duplicate content problems.
You have archive pages, and category pages and more pagination problems.
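To get a feel for how much of the index those tag, category, date-archive and paginated pages account for, the URL list can be bucketed by path pattern. A rough sketch, assuming default WordPress permalink patterns and made-up sample URLs:

```python
import re
from collections import Counter

# Hypothetical sample of indexed URLs; swap in the real export.
urls = [
    "http://www.nyc-officespace-leader.com/blog/some-post/",
    "http://www.nyc-officespace-leader.com/tag/midtown/",
    "http://www.nyc-officespace-leader.com/tag/midtown/page/2/",
    "http://www.nyc-officespace-leader.com/category/listings/",
    "http://www.nyc-officespace-leader.com/2014/06/",
]

# Default WordPress URL patterns for low-value archive page types.
patterns = {
    "tag": re.compile(r"/tag/"),
    "category": re.compile(r"/category/"),
    "paginated": re.compile(r"/page/\d+/"),
    "date-archive": re.compile(r"/\d{4}/\d{2}/"),
}

# A URL can land in more than one bucket (e.g. a paginated tag page).
counts = Counter()
for url in urls:
    for label, pat in patterns.items():
        if pat.search(url):
            counts[label] += 1

print(dict(counts))
```

If those buckets add up to anything near the 175 extra indexed pages, that points at the archive/pagination setup rather than GTM.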
-
Related Questions
-
How is Google Analytics defining page depth?
We run two websites, and as part of our KPIs we treat anyone who visits 3 or more pages of our website as a client served. As a digital team we are not convinced that this is the best metric to use, as the improvements we are making to the sites mean that people are able to find the information quicker. Additionally, other organisations including forums etc. link to us, so those users will get the info they need in one click. What I would like to know is how Google calculates page depth in GA. Are they treating the landing page as ground zero, so that when a user clicks a link they go one page deep? Or is the landing page page depth 1? Is page depth a measure of how many clicks a user needs to find their information?
Reporting & Analytics | | MATOnlineServices0 -
Why only a few pages of my website are being indexed by google
Our website www.navisyachts.com has in its sitemap over 3000 pages of information, and this is all unique content written by our team. Now Google Webmaster Central shows only 100 URLs indexed from 3500 submitted. Can you help me understand why, and how I can fix this issue? The website is 4 years old and is a Joomla 3.3 site, up to date. It has part of the content in the Joomla core content system and part in K2. Thank you. Pablo
Reporting & Analytics | | FWC_SEO0 -
Google Analytics is treating my blog like all the content is just on the home page.
Hello all, I installed Google Analytics on a main website and a blog (blog.travelexinsurance.com). It appears to be tracking correctly (when I test it in real time it shows that I'm visiting), but it is treating the entire blog as though it's one page. So I can't see data on blog post X. All I see is that X visitors came to my blog in aggregate. So I see blog.travelex.com has 999 visitors, but it doesn't show that /travel-luggage got 50 visits while /insurance-tips got 75, and so forth. I assume I screwed up the tracking somehow, but can't figure out where I went wrong. Tracking on the main domain works just fine. It's specific to the blog.
Reporting & Analytics | | Patrick_G0 -
Why is Google Analytics reporting 20% fewer goals than Unique pageviews of same thank you page?
This is really puzzling me and my research has not thrown out the answer. I have always understood URL goals to be unique pageviews of the thank you page you are tracking. UPVs and goals should both only be counted once per session... Has anyone else seen this issue? Goals were not set up historically so I wanted to use unique pageviews of the thank you page for year on year comparisons, but 20% is a big difference! Background There are multiple pages to track so goal is set up using Regex There is no mistake in the goal set up (honest!) The goal URLs all match the unique pageview URLs, there are no rogue URLs There has been no change to the site or the tracking set up Data is not being sampled It's a lead gen site in an area where multiple enquiries within one visit would be very unusual Thanks in advance!
Reporting & Analytics | | McCannSEO0 -
Goal Totals in Google Analytics
How do I get a cumulative tally of all the goals reached in google analytics? I've been using the percentages currently, but I was wondering if there was an easier way. Thanks
Reporting & Analytics | | OOMDODigital0 -
Google Analytics Site Search to new sub-domain
Hi Mozzers, I'm setting up Google's Site Search on a website. However this isn't for search terms; this will be for people filling in a form and using the POST action to land on a results page. This is similar to what is outlined at http://support.google.com/analytics/bin/answer.py?hl=en&answer=1012264 ('Setting Up Site Search for POST-Based Search Engines'). However my approach is different, as my results appear on a sub-domain of the top-level domain. E.g. the user is on www.domain.com/page.php, fills in the form, submits, and gets taken to results.domain.com/results.php. The issue is with the suggested code provided by Google. Firstly, I don't use query strings on my results page, so I would have to create an artificial page, which shouldn't be a problem. But what I don't know is how the tracking will work across a sub-domain without the _gaq.push(['_setDomainName', '.domain.com']); code. Can this be added in? Can I also add Custom Variables? Does anyone have experience of using Site Search across a sub-domain, perhaps to track quote form values? Many thanks!
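For what it's worth, the POST workaround in that Google article amounts to building a virtual page path that encodes the search as if it were a GET query, which is then recorded as a pageview so the Site Search report can read the query parameter. A sketch of just the path-building step (the form field names here are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical form fields posted to results.domain.com/results.php
form_fields = {"q": "office space", "location": "midtown"}

# Build the virtual path to record as a pageview; Site Search would
# then be configured to treat "q" as the query parameter.
virtual_path = "/results.php?" + urlencode(form_fields)
print(virtual_path)
```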
Reporting & Analytics | | panini0 -
Google anomaly
Hi, As per Google's Keyword Tool, the exact search volume for a particular keyword is 22000. As per Google's Webmaster Tool, my SERP position for the same keyword is 5.8, BUT I see only 500 impressions for this keyword in Webmaster! Can someone help decipher this behavior?
Reporting & Analytics | | iamnew0 -
If a page bounces in the woods, can Google Panda hear it?
I have read that after the Panda update a site's bounce rate is an important ranking metric. However, can anyone confirm whether all pages count equally? For instance, my home page gets 5000% more traffic than Deep Page X. If Deep Page X has a poor bounce rate, does it matter as much as if my Homepage has a bad bounce rate? I am guessing not, but wanted to open it up for discussion. If not, it has me wondering what to do for some of my database-driven content. I have some dynamically created pages that have higher bounce rates and minimal unique content. They aren't pure spam or junk, but are likely only about 1% unique from one another. Sounds like a no-brainer change post-Panda, right? Well, what if I was the only one targeting the keywords for these pages? The pages pull from info I stored on the U.S. government stimulus program (related to my industry). It then has just about every city, state and county combo in the country for my product. For instance, a page title might be "Flemington, NJ Widgets - Somerset County". Something that no one else is targeting and drives minimal traffic. Do I take this content down? I didn't have any effects, positive or negative, from Panda, so I am hesitant to take down thousands of Google-cached pages.
Reporting & Analytics | | TheDude0