Web accessibility - High Contrast web pages, duplicate content and SEO
-
Hi all,
I'm working with a client who has various URL variations to display their content in High Contrast and Low Contrast. It feels like quite an old way of doing things.
The URLs look like this:
domain.com/bespoke-curtain-making/ - Default URL
domain.com/bespoke-curtain-making/?style=hc - High Contrast page
domain.com/bespoke-curtain-making/?style=lc - Low Contrast page
My questions are:
- Surely this content is duplicate content according to a search engine
- Should the different versions have a meta noindex directive in the header?
- Is there a better way of serving these pages?
Thanks.
-
Of course! Thank you. It's been a long week...
-
Hi,
In your case, the best option would be to add rel="canonical" to the High and Low Contrast URLs, pointing at the default URL, since I assume that is the version you want indexed.
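As a minimal sketch of that setup (using domain.com as a stand-in for the client's real domain), both contrast variants would carry the same canonical link in their head, pointing back to the default URL:

```html
<!-- Served on both ?style=hc and ?style=lc variants of the page -->
<head>
  <!-- Tells search engines the default URL is the one to index;
       query-string variants consolidate their signals to it -->
  <link rel="canonical" href="https://domain.com/bespoke-curtain-making/" />
</head>
```

With this in place you generally don't need noindex on the variants as well; search engines should fold the ?style= URLs into the canonical one over time.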
Hope this helps.
Thanks