Disallow: /jobs/? Is this stopping search engines from indexing job posts?
-
Hi,
I was wondering what this would be used for, as it's in the robots.txt of a recruitment agency website that posts jobs. Should it be removed?
Disallow: /jobs/?
Disallow: /jobs/page/*/
Thanks in advance.
James -
Hi James,
So far as I can see you have the following architecture:
- job posting: https://www.pkeducation.co.uk/job/post-name/
- jobs listing page: https://www.pkeducation.co.uk/jobs/
Since the robots.txt blocks the listing page's pagination, only the first 15 job postings are reachable via a normal crawl.
I would say you should remove the block from robots.txt and focus on implementing pagination correctly. Which method you choose is up to you, but allow the crawler to access all of your job posts. Check https://yoast.com/pagination-seo-best-practices/
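For reference, a sketch of what that change would look like (the two rules are copied from the robots.txt quoted above):

```txt
# Current rules: block the listing page's query URLs and its pagination
User-agent: *
Disallow: /jobs/?
Disallow: /jobs/page/*/

# Suggested: drop both Disallow lines, so crawlers can reach every
# paginated listing page and, through them, every job posting
User-agent: *
```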
Another thing I would change is to make the job post title the anchor text for the link to the posting (currently every single job is linked with "Find out more").
Also if possible, create a separate sitemap.xml for your job posts and submit it in Search Console, this way you can keep track of any anomaly with indexation.
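If it helps, here is a minimal sketch of generating such a sitemap. The job slugs below are hypothetical; only the /job/post-name/ URL pattern comes from the architecture described above:

```python
# Hypothetical job posting URLs following the /job/post-name/ pattern.
jobs = [
    "https://www.pkeducation.co.uk/job/maths-teacher-leeds/",
    "https://www.pkeducation.co.uk/job/science-teacher-york/",
]

def build_sitemap(urls):
    # Emit a minimal sitemap.xml per the sitemaps.org protocol:
    # one <url>/<loc> entry per job posting.
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url in urls:
        lines.append(f"  <url><loc>{url}</loc></url>")
    lines.append("</urlset>")
    return "\n".join(lines)

print(build_sitemap(jobs))
```

Save the output as something like jobs-sitemap.xml and submit it in Search Console alongside your main sitemap.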
Last, and not least, focus on the quality of your content (just as Matt proposed in the first answer).
Good luck!
-
Hi Istvan,
Sorry I've been away for a while. Thanks for all of your advice guys.
Here is the url if that helps?
https://www.pkeducation.co.uk/jobs/
Cheers,
James
-
The idea (which we both highlighted) is that blocking your listing page via robots.txt is wrong. For pagination you have several methods to choose from; which one you use really depends on the technical possibilities of the project.
Regarding James' original question, my feeling is that the posting pages themselves are somehow being blocked. Cutting off access to these pages makes it really hard for Google, or any other search engine, to index them. But without a URL in front of us, we cannot really answer his question; we can only form theories that he can test.
-
Ah yes, when it's pointed out like that, it is a conflicting signal, isn't it? Makes sense in theory, but if you're setting a page to noindex and then passing signals on via a canonical, it's probably not the best approach.
There was a link in that thread to a discussion of people who still do that with success, but after reading it I would just use noindex on its own, as you said. (I still prefer noindex over the robots.txt block, though.)
-
Sorry Richard, but using noindex together with a canonical link is not good practice.
It's an old entry, but still true: https://www.seroundtable.com/noindex-canonical-google-18274.html
-
I don't think it should be blocked by robots.txt at all. It's stopping Google from crawling the site fully, and they may even treat it negatively, as they've been really clamping down on blocking folders with robots.txt lately. I've seen sites with warnings in Search Console for: Disallow: /wp-admin
You may want to consider just using a noindex tag on those pages instead. And then also use a canonical tag that points back to the main job category page. That way Google can crawl the pages and perhaps pass all the juice back to the main job category page via the canonical. Then just make sure those junk job pages aren't in the sitemap either.
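As a rough sketch, the tags described above would sit in the head of each filtered or paginated job URL (the URLs here are hypothetical, and note that later replies in this thread question combining the two):

```html
<!-- In the <head> of a parameter URL such as /jobs/?sort-by=date -->
<meta name="robots" content="noindex">
<link rel="canonical" href="https://www.pkeducation.co.uk/jobs/">
```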
-
Hi James,
Regarding the robots.txt syntax:
Disallow: /jobs/? basically blocks every single URL whose path starts with /jobs/?
For example: domain.com/jobs/?sort-by=... will be blocked.
If you want to disallow query parameters anywhere under /jobs/, the correct implementation would be Disallow: /jobs/*?, or you can even specify which query parameter you want to block, for example Disallow: /jobs/*?page=
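To illustrate the difference between the two patterns, here is a tiny matcher implementing Google-style robots.txt semantics (prefix match, with * as a wildcard and $ as an end anchor). It's a sketch for demonstration, not a full robots.txt parser:

```python
import re

def rule_matches(rule, path):
    # Translate a robots.txt rule into a regex: '*' matches any run of
    # characters, a trailing '$' anchors the end, and otherwise the rule
    # is a prefix match on the URL path (Google-style semantics).
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.match(pattern, path) is not None

# Disallow: /jobs/? only matches paths that literally start with /jobs/?
assert rule_matches("/jobs/?", "/jobs/?sort-by=date")
assert not rule_matches("/jobs/?", "/jobs/page/2/?sort-by=date")

# Disallow: /jobs/*? matches any path under /jobs/ with a query string
assert rule_matches("/jobs/*?", "/jobs/?sort-by=date")
assert rule_matches("/jobs/*?", "/jobs/page/2/?sort-by=date")
```

So the wildcard version catches the paginated-and-sorted URLs as well, while /jobs/? only catches query strings directly on the listing root.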
My question to you: are these jobs linked from any other page and/or a sitemap? Or only from the listing page, whose pagination, sorting, etc. are blocked by robots.txt? If they are not linked, it could be a simple case of orphan pages, where the crawler cannot reach the job posting pages because there is no actual link to them. I know it is an old rule, but it is still true: Crawl > Index > Rank.
BTW, I don't know why you would block your pagination. There are better implementations.
And there is always the scenario already described by Matt, but I believe in that case you would have at least some of the pages indexed even if they are not going to rank well.
Also, make sure other technical implementations are not stopping your job posting pages from being indexed.
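For that last check, a quick script can flag the most common blockers. This is a rough sketch of my own (string matching only; a real audit should parse the HTML and fetch the live response headers):

```python
def find_index_blockers(html, headers):
    # Flag the two most common index blockers: a meta robots noindex
    # tag in the HTML, and a noindex X-Robots-Tag response header.
    blockers = []
    if '<meta name="robots"' in html and "noindex" in html:
        blockers.append("meta robots noindex")
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        blockers.append("X-Robots-Tag: noindex header")
    return blockers

page = '<head><meta name="robots" content="noindex"></head>'
print(find_index_blockers(page, {}))  # ['meta robots noindex']
```

Run it against a handful of job posting URLs; an empty list means at least these two signals are clear.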
-
I'd guess that the jobs get pulled from a job board. If that's the case, the content (job description, title, etc.) will just be a duplicate of content that can be found in many other locations. If a plugin is used, it sometimes automatically adds a disallow to the robots.txt file so as not to hurt the parent version of the job page by creating thousands of duplicate-content issues.
I'd recommend creating some really high-quality hub pages based on job type, or location and pulling the relevant jobs into that page, instead of trying to index and rank the actual job pages.