Establishing our web pages as the original source
-
I'm currently working on a recruitment website. One of the things we do to drive traffic to the site is post our job listings out to a number of job boards, e.g. Indeed. These sites replicate our own job listings, which means that for every job there are at least 5-10 exact duplicates on the web. By nature the job boards have good domain authority, so they always rank above us, but I would still expect to see more in the way of long-tail traffic.
Do I need to claim our own job listings as the original source, and if so, how do I go about doing it?
Thanks
-
Hi,
Having a self-referencing canonical tag on your own pages is not a problem. The canonical tag needs to go in the head of the page, though (it is not valid in the body of the HTML), so just make sure that the 3rd party syndication service actually supports this - it might, but it might not. Even with the canonical in place, I would still include a clean text link back to the original page if possible, both as a second indication of origin and for the visits it might send.
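For illustration, here is a minimal sketch of what that looks like in practice (the URL below is made up for the example). The same tag, still pointing at your page, is what the job board copies would need to carry in their head, plus the clean text link in the body:

<!-- In the <head> of the job listing on your own site (self-referencing, which is fine) -->
<link rel="canonical" href="https://www.example-recruitment.co.uk/jobs/senior-developer-leeds-1234">

<!-- In the <head> of the syndicated copy on a job board, pointing back to your original -->
<link rel="canonical" href="https://www.example-recruitment.co.uk/jobs/senior-developer-leeds-1234">

<!-- And a clean, followed text link somewhere in the body of the syndicated listing -->
<a href="https://www.example-recruitment.co.uk/jobs/senior-developer-leeds-1234">View the original listing</a>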
-
Thanks Lynn, that's perfect!
Another question then - the job listings are syndicated out to the job boards automatically via a 3rd party (probably used by 75% of UK recruitment companies). If I were to put a rel=canonical tag on each job listing, it should be carried over to each of the job boards, which would get around the duplicate content problem. However, each job listing page on our site would then carry a rel=canonical tag essentially pointing back to itself. Would this cause any issues?
Thanks
-
Hi,
There was a recent Whiteboard Friday about syndicated content that runs down the various technical ways you can attribute your listings as the original source - check it out here. It would probably also help to make sure your listings are the first ones into the index, which you can do by internally linking to the new jobs (obviously), adding them to your sitemap quickly, and sharing them through social channels (especially Twitter) - all of which should help get your content indexed quickly, ideally before it is replicated on other sites.
If the other sites have stronger authority than yours, you would still really want to get one of the three options discussed in the video implemented. It sounds like you already have the link back to your site (option 3 in the video), so perhaps the link is not 'clean', i.e. a straight text link that leads to the exact job on your site and is not nofollowed?
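To make 'clean' concrete, here is a hedged example (URL and job title made up) of the kind of link that passes the signal, versus variants that would not:

<!-- A clean, followed text link straight to the exact job on the original site -->
<a href="https://www.example-recruitment.co.uk/jobs/senior-developer-leeds-1234">Senior Developer - Leeds</a>

<!-- Weaker variants: nofollowed, or pointing at the homepage rather than the exact job -->
<a href="https://www.example-recruitment.co.uk/jobs/senior-developer-leeds-1234" rel="nofollow">Senior Developer - Leeds</a>
<a href="https://www.example-recruitment.co.uk/">Find more jobs at Example Recruitment</a>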
It might also depend on what kind of long tail you are looking to rank for. Individual job ads might not pull a lot of organic traffic by themselves if they are not aggregated by type or location, for example - at which point the higher-authority domains are likely to show an advantage (just a thought).