Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Do omitted results shown by Google always mean that a website has duplicate content?
-
A page on my site was appearing in the top 10 Google results for a particular query, but now it only appears after clicking "show omitted results" in Google.
My website lists businesses in a particular locality, and sometimes the results for different localities are the same, because we show results from nearby areas when the locality a user searches for has fewer than 15 businesses.
Will this be considered "duplicate content"? If so, what steps can be taken to resolve the issue?
-
A page might go into the supplemental index when:
- Its content is not unique.
- It has little or no content.
- It is a utility page with little indexable content, such as a sitemap, contact page, or Terms and Conditions.
- It is missing a title/meta description, or shares one with another page.
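For thin or near-duplicate locality pages like the ones the OP describes, a common remedy (my illustration, not something suggested in this thread - the URL below is hypothetical) is to either noindex the thin page or point it at the fuller version with a canonical tag:

```
<!-- In the <head> of a locality page that merely repeats a nearby area's listings -->
<meta name="robots" content="noindex, follow">

<!-- Or, keep it indexable but declare the page it duplicates as canonical -->
<link rel="canonical" href="https://www.example.com/businesses/nearby-locality">
```

Use one approach or the other on a given page, not both.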
-
Hi Prashant,
Yes - any URLs that differ are different in Google's eyes, unless the modifier is a # symbol.
So if you have www.example.com/key#value12345 and www.example.com/key#valuexyzabc, Google sees these as the same URL, i.e. www.example.com/key. It ignores everything after the # character.
Any other change to the query string means the URL has changed, and if the pages at those URLs are the same, that's duplicate content.
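As a quick Python illustration of the fragment rule above (using the example URLs from this reply, nothing specific to the OP's site):

```python
from urllib.parse import urldefrag

# Everything after '#' is a client-side fragment, so these two URLs
# point at the same document once the fragment is stripped.
url_a = "http://www.example.com/key#value12345"
url_b = "http://www.example.com/key#valuexyzabc"

print(urldefrag(url_a).url)  # http://www.example.com/key
print(urldefrag(url_b).url)  # http://www.example.com/key
```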
I hope this helps.
Cheers,
Jane
-
Thanks Jane,
Will the following URLs be considered two different URLs?
1. www.example.com/?key=value1&key2=value2
2. www.example.com/?key2=value2&key=value1
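Whether Google collapses the same parameters in a different order isn't something I'd rely on; a safer bet is to always emit your URLs with parameters in one consistent order. A small Python sketch of that kind of normalization (my illustration, not an official Google behavior):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize(url: str) -> str:
    """Sort query parameters so reordered-but-identical URLs compare equal."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    canonical_query = urlencode(sorted(parse_qsl(query)))
    # Drop the fragment entirely, since search engines ignore it anyway.
    return urlunsplit((scheme, netloc, path, canonical_query, ""))

a = normalize("http://www.example.com/?key=value1&key2=value2")
b = normalize("http://www.example.com/?key2=value2&key=value1")
print(a == b)  # True: both normalize to the same URL
```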
-
Thanks David,
I found that a few of these URLs were not crawled by Googlebot for a month or so. When I checked the last-crawled date using the "cache:" operator, I found that these pages were only recently crawled again, which is probably why the page is back in the top 10 results (the main index).
I have one question: when does a URL go into the "supplemental index"?
-
Hi Prashant,
This sounds like removal due to duplication rather than DMCA - the omission is usually noted as being due to DMCA notices when they are the reason, e.g. http://img.labnol.org/images/2008/07/googlesearchdmcacomplaint.png
Google likely sees these pages as duplicates, or near-duplicates, as David has said.
-
Digital Millennium Copyright Act being used here? No.
OP, it does sound like you have duplicate content issues. See what you can do to make those omitted pages more unique.
-
It's most likely because someone filed a DMCA takedown against that Google search result. Jump into your Google Webmaster Tools account and you should see a notification from Google about it.
Related Questions
-
Does changing text content on a site affect SEO?
Hi, I have changed some H1s and H2s, changed and added paragraphs, fixed plagiarism and grammar, and added some pics with alt text. I have just done it today, and I am ranking on the second page. Question 1: is this going to affect my 2 months of SEO efforts? Question 2: do I have to submit my sitemap to Google again? Question 3: does changing content on the site frequently hurt SEO?
Algorithm Updates | Sam09schulz0 -
Site appearing and disappearing from google serps.
Hi, my website is normally on page 2-3 of Google consistently. Over the past month it has been appearing and then completely disappearing from the SERPs. One day it will be on page 2; the next day it is completely missing from the SERPs. When I check the index with site:mysite.com, the site seems to be indexed correctly. I don't understand why this keeps happening - any experience with this issue? It doesn't seem to be a Google dance as far as I can tell. When my other sites dance, they typically just go up or down a few ranks for a couple of weeks until they stabilize, not completely fall off the search engine.
Algorithm Updates | Chris_www0 -
Google & Tabbed Content
Hi, I wondered if anyone had a case study or more info on how Google treats content under tabs? We have an ecommerce site, and I know it is common to put product content under tabs, but will Google ignore this? Becky
Algorithm Updates | BeckyKey1 -
Can I use schema markup for my Trustpilot results?
Hi, we have excellent Trustpilot reviews and want to know if we can include them in schema markup so that the results show in SERPs. The Trustpilot results show in PPC but not in SERPs. A competitor appears to have no Trustpilot or other independent reviews but is showing 5 stars in SERPs; I also can't find any customer reviews on their site, so it looks like coding alone is driving the SERPs view. Their site is goldencharter.co.uk. Any thoughts much appreciated. Thanks, Ash
Algorithm Updates | AshShep11 -
Google is forcing a 301 by truncating our URLs
Just recently we noticed that google has indexed truncated urls for many of our pages that get 301'd to the correct page. For example, we have:
Algorithm Updates | mmac
http://www.eventective.com/USA/Massachusetts/Bedford/107/Doubletree-Hotel-Boston-Bedford-Glen.html as the URL linked everywhere, and that's the only version of that page that we use. Google somehow figured out that it would still go to the right place via 301 if they removed the HTML filename from the end, so they indexed just: http://www.eventective.com/USA/Massachusetts/Bedford/107/

The 301 is not new. It used to 404, but (probably 5 years ago) we saw a few links come in with the HTML filename missing on similar URLs, so we decided to 301 them instead, thinking it would be helpful. We've preferred the longer version because it has the name in it, and users who pay attention to the URL can feel more confident they are going to the right place.

We've always used the full (longer) URL, and Google used to index them all that way, but just recently we noticed about half of our URLs have been converted to the shorter version in the SERPs. These shortened URLs take the user to the right page via 301, so it isn't a case of the user landing in the wrong place, but over 100,000 301s may not be so good. You can look at site:www.eventective.com/usa/massachusetts/bedford/ and you'll notice all of the URLs to businesses at the top of the listings go to the truncated version, but toward the bottom they have the full URL.

Can you explain to me why Google would index a page that is 301'd to the right page, and has been for years? I have a lot of thoughts on why they would do this, and even more ideas on how we could build our URLs better, but I'd really like to hear from some people who aren't quite as close to it as I am.

One small detail that shouldn't affect this, but I'll mention it anyway: we have a mobile site with the same URL pattern, http://m.eventective.com/USA/Massachusetts/Bedford/107/Doubletree-Hotel-Boston-Bedford-Glen.html. We did not have the proper 301 in place on the m. site until the end of last week. I'm pretty sure it will be asked, so I'll also mention we have the rel=alternate/canonical set up between the www and m sites.

I'm also interested in any thoughts on how this may affect rankings, since we seem to have been hit by something toward the end of last week. Don't hesitate to mention anything else you see that may have triggered whatever may have hit us. Thank you,
Michael0 -
Stop google indexing CDN pages
Just when I thought I'd seen it all, Google hits me with another nasty surprise! I have a CDN to deliver images, JS, and CSS to visitors around the world. I have no links to static HTML pages on the site, as far as I can tell, but someone else may have - perhaps a scraper site? Google has decided the static pages it was able to access through the CDN have more value than my real pages, and it seems to be slowly replacing my pages in the index with the static pages.

Anyone got an idea on how to stop that? Obviously, I have no access to the static area, because it is in the CDN, so there is no way I know of to have a robots file there. It could be that I have to trash the CDN and change it to only allow the image directory, and maybe set up a separate CDN subdomain for content that only contains the JS and CSS? Have you seen this problem and beat it? (Of course the next thing is Roger might look at Google results and start crawling them too, LOL)

P.S. The reason I am not asking this question in the Google forums is that others have asked it many times over the past 5 months, and nobody at Google has bothered to answer; nobody who did try gave an answer that was remotely useful. So I'm not really hopeful of anyone here having a solution either, but I expect this is my best bet, because you guys are always willing to try.
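One way out, if the CDN hostname is one you control: serve a robots.txt at the root of the CDN subdomain. This is a sketch assuming a hypothetical cdn.example.com; the poster says they can't place files in the CDN's static area, in which case setting an X-Robots-Tag: noindex response header on the HTML at the CDN edge achieves a similar result.

```
# robots.txt served at http://cdn.example.com/robots.txt
# Block crawling of everything on the CDN except the image directory.
User-agent: *
Disallow: /
Allow: /images/
```

Note that robots.txt blocks crawling but does not remove already-indexed URLs; the X-Robots-Tag header (or removing the duplicate HTML from the CDN entirely) is what gets them out of the index.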
Algorithm Updates | | loopyal0 -
Rankings changing every couple of MINUTES in Google?
We've been experiencing some unusual behaviour in the Google.co.uk SERPs recently... Basically, the ranking of some of our websites for certain keywords appears to be changing by the minute. For example, doing a search for "our keyword" might show us at #20. Then a few minutes later, doing the same search shows us at #14, and then the same search a few minutes later shows us at #26, and then sometimes we're not ranked at all, etc etc. I know the algorithm changes a lot, but does it really change every couple of minutes? Has anyone else experienced this kind of behaviour in the SERPs? What could be causing it to happen?
Algorithm Updates | | d4online0 -
Could using the same content and just changing the keywords be seen as duplicate content?
I want to offer the same service or product in many different cities. Instead of creating new content for each city, I want to copy the content already created for the product or service in one city, change the name of the city, and create a new URL on my website for each city.

For example, let's say I sell handmade rings in the USA, but I want to target each principal city in the USA, so I want a unique URL for each city. For Miami I would have www.mydomain.com/handmade-rings-miami, and for LA the URL would be www.mydomain.com/handmade-rings-la.

Can I have the same content talking about the handmade rings and just change the keywords and key phrases? Or will this count as duplicate content?

Content:

TITLE: Miami Handmade Rings
URL: www.mydomain.com/handmade-rings-miami
Shop now for handmade rings in Miami in our online store and get a special discount on Miami purchases over $50, plus free shipping to a Miami local address... See what our Miami handmade rings clients say about our products....

TITLE: LA Handmade Rings
URL: www.mydomain.com/handmade-rings-la
Shop now for handmade rings in LA in our online store and get a special discount on LA purchases over $50, plus free shipping to an LA local address... See what our LA handmade rings clients say about our products....

There are more than 100 locations in the country where I want to do this, so that is why I want to copy, paste, and replace. Thanks in advance, David Orion
Algorithm Updates | sellonline1230