I'm stumped!
-
I'm hoping to find a real expert to help out with this.
TL;DR Our visibility in search has started tanking and I cannot figure out why.
The whole story:
In fall of 2015 I started working with Convention Nation (www.conventionnation.com). The client is trying to build a resource for convention and tradeshow attendees that would help them identify the events that will help them meet their goals (learning, networking, sales, whatever). They had a content team overseas that spent their time copy/pasting event information into our database.
At the time, I identified several opportunities to improve SEO:
- Create and submit a sitemap
- Add meaningful metas
- Fix crawl errors
- On-page content uniqueification and optimization for most visible events (largest audience likely to search)
- Regular publishing and social media
Over nine months, we did these things and saw search visibility, average rank and CTR all double or better.
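As a point of reference for the first bullet, a sitemap is simple enough to generate with a short script. This is a minimal sketch using only the standard library; the event URLs are hypothetical, not Convention Nation's real paths:

```python
from xml.sax.saxutils import escape

def build_sitemap(urls):
    """Build a minimal XML sitemap (sitemaps.org protocol) from absolute URLs."""
    entries = "\n".join(
        f"  <url><loc>{escape(u)}</loc></url>" for u in urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

# Hypothetical event URLs for illustration:
xml = build_sitemap([
    "https://www.example.com/events/mozcon-2016",
    "https://www.example.com/events/mozcon-2015",
])
print(xml)
```

The resulting file is then submitted in Search Console (or referenced from robots.txt via a Sitemap: line).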
There was still one problem, created by our specific industry. I'll use a concrete example: MozCon. This event happens once a year, and enough is the same about it every year (namely, the generalized description of the event, attendees, and outcomes) that the 2015 page was getting flagged as a duplicate of the 2016 page.
The event content for most of our events was pretty thin anyway, and much of it was duplicated from other sources, so we implemented a feature that grouped recurring events. My thinking was that this would reduce the perception of duplicate or obsolete content and links and provide a nice backlink opportunity.
I expected a dip after we deployed this grouping feature; that has been consistent with other bulk content changes we've made to the site. But we are not recovering from the dip. In fact, our search visibility and traffic are dropping every week.
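One thing worth ruling out with this kind of grouping change is whether every retired event URL actually 301-redirects to its group page; pages that still return 200, or that redirect somewhere else, keep competing with the group page and leak link equity. A small audit sketch over already-fetched responses (a pure function; all URLs here are illustrative):

```python
def audit_consolidation(responses, group_url):
    """Flag old event URLs that were not cleanly consolidated.

    `responses` maps each old URL to a (status_code, location) pair,
    where `location` is the Location response header (or None).
    Returns the URLs that do NOT 301-redirect to the group page.
    """
    problems = []
    for url, (status, location) in responses.items():
        if status == 301 and location == group_url:
            continue  # correctly consolidated
        problems.append((url, status, location))
    return problems

# Hypothetical example: the 2015 page still returns 200 instead of redirecting.
responses = {
    "https://example.com/mozcon-2015": (200, None),
    "https://example.com/mozcon-2016": (301, "https://example.com/mozcon"),
}
print(audit_consolidation(responses, "https://example.com/mozcon"))
# [('https://example.com/mozcon-2015', 200, None)]
```

Feeding this from a crawl of the pre-grouping URL list would show quickly whether the dip lines up with orphaned or non-redirected event pages.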
So, the current state of things is this:
- Clean crawl reports: No errors reported by Moz or Google
- Moz domain authority: 20; Spam score 2/17
- We're a little thin on incoming links, but steady growth in both social media and backlinks
- Continuing to add thin/duplicate content for unique events at the rate of 200 pages/mo
- Adding solid, unique strategic content at the rate of 15 pages/mo
I just cannot figure out where we've gone astray. Is there anything other than the thin/copied content that could be causing this? It wasn't hurting us before we grouped the events...
What could possibly account for this trend?
Help me, Moz Community, you're my only hope!
Lindsay
-
Thanks, Bernadette.
The consolidation has been done, and surprisingly seems to correlate with our dropping visibility.
Do you have any experience with a "consolidation" that might shed some light?
-
Lindsay, there are so many unique issues that could be causing this that it's difficult to diagnose the problem without having a specific URL to look at. However, it sounds as if you've been looking at a lot of the on-site and on-page factors. Have you evaluated the links to your website to make sure you don't have low-quality links that need to be disavowed?
When you mention the thin or duplicate content issues, have you considered consolidating the content and removing the previous year's content (or moving and archiving it) so that only the current content exists? Besides archiving or removing it from the site, you may want to use the canonical tag. Another option is a robots.txt disallow rule for the older content paths, though keep in mind that robots.txt blocks crawling rather than indexing; a noindex meta robots tag is the more reliable way to get already-indexed pages dropped.
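To verify the canonical setup after a consolidation like this, the tags can be checked programmatically. A sketch using Python's standard-library HTML parser (the URL is illustrative):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect href values of <link rel="canonical"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonicals.append(a.get("href"))

def find_canonical(html):
    parser = CanonicalFinder()
    parser.feed(html)
    return parser.canonicals

page = '<html><head><link rel="canonical" href="https://example.com/mozcon"/></head></html>'
print(find_canonical(page))  # ['https://example.com/mozcon']
```

Running this over the grouped event pages (and their archived predecessors) confirms that each one points at exactly one canonical URL, and that no page carries two conflicting canonicals.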
Related Questions
-
Why isn't the canonical tag on my client's Magento site working?
The reason for this might be obvious to the right observer, but somehow I'm not able to spot it. The situation:
Intermediate & Advanced SEO | Inevo
I'm doing an SEO audit for a client. When I check whether the rel=canonical tag is in place correctly, it seems like it is: view-source:http://quickplay.no/fotball-mal.html?limit=15 (line 15). Anyone seeing something wrong with this canonical? When I perform a site:http://quickplay.no/ search, I find that there are many URLs indexed that ought to have been picked up by the canonical tag (see picture), this one for example: view-source:http://quickplay.no/fotball-mal.html?limit=15 I really can't see why this page is getting indexed when the canonical tag is in place. Anybody who can? Sincerely 🙂
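One common Magento pattern worth checking here: the parameterized page's canonical should point at the parameter-free version of the same path. A quick sketch of that comparison with the standard library (using the URL from the question):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_matches(request_url, canonical_href):
    """Check whether a parameterized URL's canonical points at the
    parameter-free version of itself."""
    parts = urlsplit(request_url)
    stripped = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
    return canonical_href == stripped

print(canonical_matches(
    "http://quickplay.no/fotball-mal.html?limit=15",
    "http://quickplay.no/fotball-mal.html",
))  # True
```

Note that even a correct canonical is a hint rather than a directive, so parameter URLs can stay in the index for a while after the tag is added; a mismatch found by a check like this, though, would explain the indexing outright.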
Establishing if links are 'nofollow'
Wonder if any of you guys can tell me if there is any other way to tell Google links are nofollow other than in the HTML (i.e. can you tell Google to nofollow every link in a subdomain or something). I'm trying to establish whether a couple of links on a very high-ranking site are passing me PageRank without asking them directly and looking silly! Within the source code for the page they are NOT tagged as nofollow at present. Hope that all makes sense 😉
Intermediate & Advanced SEO | mat2015
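There are indeed page-wide mechanisms besides per-link rel attributes: a <meta name="robots" content="nofollow"> tag (or an X-Robots-Tag HTTP header) nofollows every link on the page even when the anchors themselves carry no rel. A sketch that checks both signals in page source, using Python's standard-library parser:

```python
from html.parser import HTMLParser

class NofollowAudit(HTMLParser):
    """Detect per-link rel="nofollow" AND a page-wide
    <meta name="robots" content="...nofollow..."> directive."""
    def __init__(self):
        super().__init__()
        self.page_nofollow = False
        self.links = []  # (href, followed?) pairs

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            if "nofollow" in (a.get("content") or "").lower():
                self.page_nofollow = True
        elif tag == "a" and "href" in a:
            rel = (a.get("rel") or "").lower()
            self.links.append((a["href"], "nofollow" not in rel))

page = '<meta name="robots" content="noindex, nofollow"><a href="/x">x</a>'
audit = NofollowAudit()
audit.feed(page)
print(audit.page_nofollow, audit.links)  # True [('/x', True)]
```

In the printed example the individual link looks followed, but the page-level directive overrides it, which is exactly the case the question is asking about. The X-Robots-Tag header would have to be checked separately from the HTTP response, since it never appears in the HTML source.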
Big discrepancies between pages in Google's index and pages in sitemap
Hi, I'm noticing a huge difference between the number of pages in Google's index (using a 'site:' search) and the number of pages indexed by Google in Webmaster Tools (i.e. 20,600 in the 'site:' search vs 5,100 submitted via the dynamic sitemap). Anyone know possible causes for this and how I can fix it? It's an ecommerce site, but I can't see any issues with duplicate content: they employ a very good canonical tag strategy. Could it be that Google has decided to ignore the canonical tag? Any help appreciated, Karen
Intermediate & Advanced SEO | Digirank
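A practical first step with a gap like this is a plain set difference: which indexed URLs are missing from the sitemap? On an e-commerce site the excess is usually parameter and faceted-navigation variants. A sketch (the URLs are illustrative):

```python
def index_gap(indexed_urls, sitemap_urls):
    """Return URLs present in the index but absent from the sitemap,
    sorted for stable output. These are typically parameter/faceted
    variants that canonical tags have hinted at but not eliminated."""
    return sorted(set(indexed_urls) - set(sitemap_urls))

indexed = [
    "https://shop.example/p1",
    "https://shop.example/p1?sort=price",
]
sitemap = ["https://shop.example/p1"]
print(index_gap(indexed, sitemap))  # ['https://shop.example/p1?sort=price']
```

If the surplus is dominated by query-string variants, that supports the "Google is treating the canonical as a hint and indexing the variants anyway" theory rather than a sitemap problem.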
Incoming links which don't exist...
I believe our site is being penalized/held back in rankings, and I think this is why... We placed an advert on a website which they didn't make nofollow, so we had hundreds of site-wide links coming into our site. We asked them to remove the advert, which they did. That was 4 months ago, and the links are still showing in GWMT. We have looked into the pages which GWMT says still link to us, but a number of these pages aren't being indexed by Google, and others aren't being cached. Is it possible that because Google can't find these pages, it can't tell our link has been removed? And/or are we being penalized for this? Many thanks
Intermediate & Advanced SEO | jj3434
How does the crawl find duplicate pages that don't exist on the site?
It looks like I have a lot of duplicate pages which are essentially the same URL with some extra ? parameters added, e.g.: http://www.merlin.org.uk/10-facts-about-malnutrition http://www.merlin.org.uk/10-facts-about-malnutrition?page=1 http://www.merlin.org.uk/10-facts-about-malnutrition?page=2 These extra 2 pages (and there are loads of pages this happens to) are a mystery to me. Not sure why they exist, as there's only 1 page. Is this a massive issue? It's built on Drupal, so I wonder if it auto-generates these pages for some reason? Any help MUCH appreciated. Thanks
Intermediate & Advanced SEO | Deniz
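The mechanics here are straightforward to demonstrate: pagination parameters like ?page=N produce distinct URLs that serve the same content, and a crawler records each as a separate page. Collapsing them back to one canonical form is a one-liner with the standard library (the "junk" parameter list is an assumption, specific to this Drupal pager case):

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def strip_params(url, junk=("page",)):
    """Drop known-duplicating query parameters (e.g. ?page=N added by a
    Drupal pager) so variant URLs collapse to one canonical form."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in junk]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

print(strip_params("http://www.merlin.org.uk/10-facts-about-malnutrition?page=2"))
# http://www.merlin.org.uk/10-facts-about-malnutrition
```

The same normalization is effectively what a rel=canonical tag asks Google to do, which is the usual fix: canonical each ?page= variant to the parameter-free URL, or stop the pager from emitting links on single-page content.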
Googlebot Can't Access My Sites After I Repair My Robots File
Hello Mozzers, A colleague and I have been collectively managing about 12 brands for the past several months, and we have recently received a number of messages in the sites' Webmaster Tools instructing us that 'Googlebot was not able to access our site due to some errors with our robots.txt file'. My colleague and I, in turn, created new robots.txt files with the intention of preventing the spider from crawling our 'cgi-bin' directory, as follows:

User-agent: *
Disallow: /cgi-bin/

After creating the robots.txt and manually re-submitting it in Webmaster Tools (and receiving the green checkbox), I received the same message about Googlebot not being able to access the site, the only difference being that this time it was for a different site that I manage. I repeated the process and everything aesthetically looked correct; however, I continued receiving these messages for each of the other sites I manage on a daily basis for roughly a 10-day period. Do any of you know why I may be receiving this error? Is it not possible for me to block Googlebot from crawling the 'cgi-bin'? Any and all advice/insight is very much welcome, I hope I'm being descriptive enough!
Intermediate & Advanced SEO | NiallSmith
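The rules themselves can be sanity-checked locally before blaming them for the error: Python ships a robots.txt parser that evaluates exactly the directives quoted above. If this check passes, the file's syntax is fine, and the Webmaster Tools message more likely means Googlebot couldn't fetch robots.txt at all (e.g. a server error) rather than that the rules were wrong:

```python
from urllib.robotparser import RobotFileParser

# The exact rules from the question, one directive per line.
rules = [
    "User-agent: *",
    "Disallow: /cgi-bin/",
]

rp = RobotFileParser()
rp.parse(rules)

# cgi-bin is blocked for every user agent, everything else is allowed.
print(rp.can_fetch("Googlebot", "http://example.com/cgi-bin/script.pl"))  # False
print(rp.can_fetch("Googlebot", "http://example.com/index.html"))         # True
```

Blocking /cgi-bin/ this way is entirely legal, so the rules are not the problem; it is worth checking the server logs for 5xx responses on /robots.txt around the dates of the messages.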
Domain Age. What's a good age?
I have a new site that ranks very well and is rich with content. I know it could rank better, but since it's new I'm assuming it is being held back. My question is: how long does it take for a site to mature?
Intermediate & Advanced SEO | bronxpad
Reverse Proxies - Lost on Its Purpose to Help SEO
I read an article on SEOmoz about reverse proxies (link below). When should we use a reverse proxy, and is it really worth the trouble at all? Why create subdomains vs. subfolders when organizing different sections of the website? http://www.seomoz.org/blog/what-is-a-reverse-proxy-and-how-can-it-help-my-seo
Intermediate & Advanced SEO | helpwanted