Use of Robots.txt file on a job site
-
We are performing SEO on a large niche job board. My question revolves around the idea of nofollowing all of the actual job postings from their clients, since the postings only last for 30 to 60 days. Does anybody have an idea of the best way to handle this?
-
Happy to help!
-
Thanks Jennifer! Great answer - I wasn't sure which strategy would be better. Your answer makes a lot of sense. Thanks for your input!
-
Hi Oliver!
Before coming to SEOmoz I used to work for OntargetJobs, a company that runs multiple niche job boards. Here's what I would recommend:
- Keep those pages followed because people will link to them and you want to preserve as much of the link equity as you possibly can. So how do you do that?
- Make sure that when a job expires (or gets removed, whatever) the page gets 301 redirected to the category page the job was posted under. Depending on the niche it may be locale based; in that case, redirect it to the location page instead. The idea here is to send the user to a helpful page for a good user experience and to conserve some link equity at the same time.
- On the page that gets redirected to, program it so that when a redirection happens it displays a message at the top of the page, something along the lines of: "Oops! The job you were looking for is no longer active. However, here are similar jobs in XYZ category."
Again, as I mentioned above, this is a good way to help user experience and keep some of that link equity from the inevitable links that job posting pages get.
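Here's a rough sketch of one way to wire that up, assuming a Flask-style app; the Job model, routes, and template names are placeholders for whatever your own stack uses:

```python
from dataclasses import dataclass
from datetime import date

from flask import Flask, redirect, render_template, request, url_for

app = Flask(__name__)

@dataclass
class Job:  # stand-in for your CMS's job record
    id: int
    category_slug: str
    expires_on: date

# Hypothetical in-memory store; swap in your own CMS/ORM lookup.
JOBS = {101: Job(101, "nursing", date(2011, 12, 31))}

@app.route("/jobs/<int:job_id>")
def job_detail(job_id):
    job = JOBS.get(job_id)
    if job is None or job.expires_on < date.today():
        # Expired or removed: 301 to the category page so the
        # inbound links this posting earned keep passing equity.
        slug = job.category_slug if job else "all"
        return redirect(url_for("category", slug=slug, expired=1), code=301)
    return render_template("job.html", job=job)

@app.route("/category/<slug>")
def category(slug):
    # ?expired=1 tells the template to show the "job no longer
    # active, here are similar jobs" notice at the top.
    show_notice = request.args.get("expired") == "1"
    return render_template("category.html", slug=slug, show_notice=show_notice)
```

Using a query parameter for the notice keeps the 301 target pointed at the normal category URL rather than a separate "job expired" page.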
I hope this helps!
Jen
-
I'm not sure I understand correctly: do you want to add nofollow to all the job postings that expire in 60 days?
If so, you can add a check in the CMS for the expiry date of the job posting. Then, if somebody clicks an expired offer from the SERPs, a little script can 301 redirect them to the category of jobs similar to the expired one.
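For example, something like this minimal sketch; the field names, the 60-day lifetime, and the URL scheme are all assumptions:

```python
from datetime import date, timedelta

# Assumes the CMS stores when a posting went live and that
# postings run for at most 60 days.
JOB_LIFETIME = timedelta(days=60)

def resolve_job_url(job: dict) -> str:
    """Return the URL to serve for a job record: the posting itself
    while it is live, or its category page (for a 301) once expired."""
    if date.today() > job["posted_on"] + JOB_LIFETIME:
        return f"/category/{job['category_slug']}?expired=1"
    return f"/jobs/{job['id']}"

# An expired posting resolves to its category page:
job = {"id": 42, "category_slug": "sales", "posted_on": date(2011, 1, 3)}
print(resolve_job_url(job))  # -> /category/sales?expired=1
```

Ciao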
Related Questions
-
How to use robots.txt to block areas on a page?
Hi, Across the category/product pages on our site there are archive/shipping info sections and the text is always the same. Would this be treated as duplicate content and be harmful for SEO? How can I alter robots.txt to tell Google not to crawl that particular text? Thanks for any advice!
Technical SEO | LauraHT
-
Windows Access used for e-commerce site - help needed
Hello everybody, I am working on an e-commerce website built on Windows Access and it's a nightmare to change the HTML content on it. Has anyone used it before? It doesn't allow me to change the content of the HTML tags even though it should, and I don't have a clue what to do. Thanks, Oscar
Technical SEO | PremioOscar
-
"Extremely high number of URLs" warning for robots.txt blocked pages
I have a section of my site that is exclusively for tracking redirects for paid ads. All URLs under this path do a 302 redirect through our ad tracking system: http://www.mysite.com/trackingredirect/blue-widgets?ad_id=1234567 --302--> http://www.mysite.com/blue-widgets. This path of the site is blocked by our robots.txt, and none of the pages show up for a site: search.
User-agent: *
Disallow: /trackingredirect
However, I keep receiving messages in Google Webmaster Tools about an "extremely high number of URLs", and the URLs listed are in my redirect directory, which is ostensibly not indexed. If not by robots.txt, how can I keep Googlebot from wasting crawl time on these millions of /trackingredirect/ links?
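For illustration, the kind of tracking redirect described here might look something like this hypothetical Flask sketch; the X-Robots-Tag header shown is one commonly discussed option, though it comes with the catch noted in the comments:

```python
from flask import Flask, redirect, request

app = Flask(__name__)

def log_click(ad_id):
    # Stand-in for the real ad-tracking call.
    print(f"ad click: {ad_id}")

@app.route("/trackingredirect/<slug>")
def tracking_redirect(slug):
    # Record the paid click, then 302 the visitor to the real page.
    log_click(request.args.get("ad_id"))
    response = redirect(f"/{slug}", code=302)
    # An X-Robots-Tag header is one way to mark such URLs noindex,
    # but crawlers only see it if robots.txt permits fetching the
    # URL - a Disallow rule hides the header along with the page.
    response.headers["X-Robots-Tag"] = "noindex, nofollow"
    return response
```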
Technical SEO | EhrenReilly
-
Question about duplicate images used within a single site
I understand that using duplicate images across many websites has become an increasingly important duplicate content issue to be aware of. We have a couple dozen geotargeted landing pages on our site that are designed to promote our services to residents of various locations in our area. We've created 400+ word pieces of fresh, original content for each page, some of which talks about the specific region in some detail. However, we have a powerful list of top reasons to choose us that we'd like to use on each page as is, without rewriting it for each page. We'd like to simply present this bulleted list as an image file on each page to get around any duplicate written copy concerns. This image would not appear on any other websites but would appear on about two dozen landing pages for a single site. Is there anything to worry about with this strategy, from a duplicate content or duplicate image perspective, in terms of SEO?
Technical SEO | LeeAbrahamson
-
Pros & Cons of deindexing a site prior to launch of a new site on the same domain.
If you were launching a new website to completely replace an older existing site on the same domain, would there be any value in temporarily deindexing the old site prior to launching the new one? Both have roughly 3000 pages; the new site will launch on the same domain but with a completely new URL structure that is much better optimized for the web. Many high-ranking pages will be 301 redirected to the corresponding new pages. I believe the hypothesis is that this would eliminate a mix of old and new pages sharing space in the SERPs, and that the crawlers would be more likely to index more of the new site initially. I don't believe this is a great strategy; on the other hand, I see some merit to the arguments for it.
Technical SEO | medtouch
-
Time on site
From what I understand, if you search for a keyword, say "blue widgets", click on a result, spend 10 seconds there, and then go back to Google and click on a different result, Google will track that first result as being not very relevant. What I don't understand is what happens when (and this happens all the time; I did it today) you click on a result, go to that page, find it (not?) relevant, and then get distracted - a phone call, or someone calls you into another room in the office. You end up accidentally leaving the tab open all day long and never go back to the Google search. So what is your time on site to Google? Infinity? There must be an upper cap here; at some point they must say, OK, the user is gone, time on site = our maximum = 5 minutes?!? Get me? Any insight?
Technical SEO | adriandg
-
Traffic has dropped from my site.
Hello, I never had amazing traffic, but during the last week my site seems to have almost dropped off the search engines. Nothing drastic that I can see has changed during this time that would have caused this. The site is http://www.comparebestodds.com. Does anyone have any ideas that could help? Thanks
Technical SEO | jwdesign
-
Will using https across our entire site hurt our external backlinks?
Our site is secured throughout, so it loads sitewide as https. It is canonicalized properly: any attempt to load an existing page as http will force a redirect to https. My concern is with backlinks. We've put a lot of effort into social media, so we're getting some nice blog linkage. The problem is that the links generally point to http rather than https (understandable, since that's the default for most web users). The site still loads with no problem, but my concern is that since a redirect doesn't transfer all the link juice across, we're leaking some perfectly good link credit. From the standpoint of backlinks, are we harming ourselves by making the whole site secure by default? The site presently isn't very big, but I'm looking at adding hundreds of new pages to it, so if we're going to make the change, now is the time to do so. Let me know what you think!
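For reference, the sitewide http-to-https canonicalization described above might look something like this minimal Flask sketch (an assumption about the stack; behind a proxy or load balancer you would check the X-Forwarded-Proto header instead of request.scheme):

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Canonicalize: permanently redirect any http request to the
    # https version of the same URL before any view handles it.
    if request.scheme == "http":
        return redirect(request.url.replace("http://", "https://", 1), code=301)
```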
Technical SEO | ufmedia