403 error for a member site
-
Perhaps a stupid question, but SEOmoz registers 403 errors for pages behind a member site (i.e., they are restricted on purpose).
Should I noindex these pages or just let SEOmoz register these "errors"?
-
Block them in robots.txt, there's no value in them being crawled.
-
Yeah, that sounds like the easiest/best option. Thank you for your answer, John.
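-
(For reference, a minimal sketch of what John's robots.txt suggestion might look like — assuming, hypothetically, that the member-only pages all sit under a /members/ directory:)

User-agent: *
# Hypothetical path — substitute the directory that actually holds the member-only pages
Disallow: /members/

Blocking the paths this way stops compliant crawlers, including Moz's, from requesting those URLs at all, so the 403s no longer show up in crawl reports.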
Related Questions
-
Htaccess and robots.txt and 902 error
Hi, this is my first question here and I truly hope someone will be able to help. It's quite a detailed problem and I'd love to be able to fix it through your kind help. It concerns htaccess files, robots.txt files, and a 902 error.

In October I created a WordPress website from what was previously a non-WordPress site, which was quite dated. I built the new site on a subdomain I created on the existing site, so that the live site could remain live while I worked on the subdomain. The site I built on the subdomain is now live, but I am concerned about the existence of the old htaccess and robots.txt files, and I wonder if I should just delete the old ones so that only the new files remain on the new site. I created new htaccess and robots.txt files on the new site and have left the old htaccess files there. All the old content files are still sitting on the server under a folder called 'old files', so I am assuming these aren't affecting matters. I access the htaccess and robots.txt files by clicking on 'public html' via FTP.

I did a Moz crawl and was astonished to see a 902 network error saying that it wasn't possible to crawl the site, but then Moz alerted me later to say that the report was ready. I see 641 crawl errors (449 medium priority, 192 high priority, zero low priority) — please see the attached image. Each of the errors seems to have status code 200, and this applies mainly to the images on each of the pages, e.g. domain.com/imagename.

The new website is built around the 907 Theme, which has page sections and parallax sections on the home page and throughout the site. To my knowledge the content and the images on the pages are not duplicated, because I have made each page as unique and original as possible. The report says 190 pages have been duplicated, so I have no clue how this can be or how to approach fixing it.

Since October, when the new site was launched, approximately 50% of incoming traffic has dropped off at the home page, and that is still the case, but the site continues to get new traffic according to Google Analytics. However, Bing, Yahoo, and Google show a low level of indexing and exposure, which may indicate that the search engines are having difficulty crawling the site. In Google Webmaster Tools, the crawl errors screen reports no errors.

W3TC is a WordPress caching plugin which I installed just a few days ago to speed up page loads, so I am not querying anything here about W3TC unless someone spots that it might be a problem; but as I said, there have been problems with traffic dropping off when visitors arrive on the home page. The Yoast SEO plugin is also being used.

I have included the contents of the htaccess and robots.txt files below. The pages on the subdomain point to the live domain, as was explained to me by the person who did the site migration. I'd like the site to be free from pages and files that shouldn't be there, and I feel that the site needs a clean-up, as well as knowing whether the robots.txt and htaccess files from the old site should actually be there or should be deleted. OK, here goes with the information in the files. Site 1) refers to the current website, site 2) refers to the subdomain, and site 3) refers to the folder that contains all the old files from the old non-WordPress file structure.
**************** 1) htaccess on the current site: *********************

# BEGIN W3TC Browser Cache
<IfModule mod_deflate.c>
<IfModule mod_headers.c>
Header append Vary User-Agent env=!dont-vary
</IfModule>
<IfModule mod_filter.c>
AddOutputFilterByType DEFLATE text/css text/x-component application/x-javascript application/javascript text/javascript text/x-js text/html text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon application/json
<IfModule mod_mime.c>
# DEFLATE by extension
AddOutputFilter DEFLATE js css htm html xml
</IfModule>
</IfModule>
</IfModule>
# END W3TC Browser Cache

# BEGIN W3TC CDN
<FilesMatch "\.(ttf|ttc|otf|eot|woff|font.css)$">
<IfModule mod_headers.c>
Header set Access-Control-Allow-Origin "*"
</IfModule>
</FilesMatch>
# END W3TC CDN

# BEGIN W3TC Page Cache core
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteRule .* - [E=W3TC_ENC:_gzip]
RewriteCond %{HTTP_COOKIE} w3tc_preview [NC]
RewriteRule .* - [E=W3TC_PREVIEW:_preview]
RewriteCond %{REQUEST_METHOD} !=POST
RewriteCond %{QUERY_STRING} =""
RewriteCond %{REQUEST_URI} /$
RewriteCond %{HTTP_COOKIE} !(comment_author|wp-postpass|w3tc_logged_out|wordpress_logged_in|wptouch_switch_toggle) [NC]
RewriteCond "%{DOCUMENT_ROOT}/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" -f
RewriteRule .* "/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" [L]
</IfModule>
# END W3TC Page Cache core

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

# ...(I have 7 301 redirects in place pointing old page URLs to new page URLs)...

# Force non-www:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www.domain.co.uk [NC]
RewriteRule ^(.*)$ http://domain.co.uk/$1 [L,R=301]

**************** 1) robots.txt on the current site: *********************

User-agent: *
Disallow:

Sitemap: http://domain.co.uk/sitemap_index.xml

**************** 2) htaccess in the subdomain folder: *********************

# Switch rewrite engine off in case this was installed under HostPay.
RewriteEngine Off
SetEnv DEFAULT_PHP_VERSION 53
DirectoryIndex index.cgi index.php

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /WPnewsiteDee/
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /subdomain/index.php [L]
</IfModule>
# END WordPress

**************** 2) robots.txt in the subdomain folder: *********************

(this robots.txt file is empty)

**************** 3) htaccess in the old site folder: *********************

Deny from all

**************** 3) robots.txt in the old site folder: *********************

User-agent: *
Disallow: /

I have tried to be thorough, so please excuse the length of my message. I really hope one of you great people in the Moz community can help me with a solution. I have SEO knowledge and I love SEO, but I have not come across this before and I really don't know where to start with this one. Best regards to you all, and thank you for reading this.

[Attached image: moz-site-crawl-report-image_zpsirfaelgm.jpg]
-
SEO and page redirects from a high ranking site quandary
We are launching a site on a new domain that is taking the place of a group (subset) of pages on an existing domain. But the pages on the existing domain have really good rankings in a very competitive category, and we want to leverage the traffic they get today in the best way. Which of the following would be best practice in this case (with regard to SEO)?

1. Modify the existing pages' content so that there are prominent calls to action that lead users to the new domain.
2. Create permanent (301) redirects from the existing pages to their counterparts on the new domain (see the sketch below). This is more direct for the user, but we don't know how it will affect the current rankings.
3. Something other than the above.

Many thanks for your help. Gary
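(For illustration, option 2 would typically be implemented with Apache mod_rewrite rules along these lines — a minimal sketch using hypothetical domain and path names, not the poster's actual setup:)

# In the .htaccess of the existing domain: 301-redirect the affected
# section page-for-page to its counterpart on the new domain
RewriteEngine On
RewriteRule ^old-section/(.*)$ https://newdomain.example/$1 [L,R=301]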
-
Site Explorer shows links as followable but they have nofollow tags
Hello, I am looking at Site Explorer and the sites linking to my site moneyfact.co.uk. I've got thousands of links showing as 'followable', but when I check them they have rel="nofollow" tags, e.g.: http://www.dianomioffers.co.uk/partner/moneyfacts.co.uk/brochures.epl?partner=93&partner_id=93&partner_variant_id=33. Why would they show as followable when the links are nofollowed? Thanks, Steve
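(For reference, a nofollowed link is simply an anchor tag carrying a rel="nofollow" attribute — a generic example with a placeholder URL:

<a href="http://www.example.com/some-page" rel="nofollow">anchor text</a>

Any link marked up this way should be reported as nofollowed, which is why a 'followable' flag in the index would be inconsistent with the markup.)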
-
SEO Moz analysing US site when I want it to analyse UK site
Hi there, I am trying to analyse the UK version of my website. However, as the website is set to redirect people to the most relevant site and SEOmoz is based in the US, it is redirecting all the pages you're crawling, and you are therefore analysing US pages. So when I try to look at the UK homepage, all the factors relate to the US site. Is there a way around this? Thanks
-
Can we add sites to the crawl queue for OSE?
Is it possible to request that Open Site Explorer crawl a new URL on its next run? This tool is the first place I go to when working on a new site, and when there is "No Data Available" this is a little frustrating. I fully appreciate that this lack of data is usually a signal that the website is either very new or of low quality; however, that is often the reason I am brought in, and I would very much like to benchmark and provide initial analysis using this tool. It would make sense for OSE to crawl the sites that Moz members are working on, wouldn't it? Scott.
-
SEOMoz Crawler and rel_canonical_tag Errors
This tag is showing up on category pages (that do not have a duplicate page on the site). In mid-November Google cut our traffic by 30%. Could this tag be confusing the spider? According to the Moz crawler, we seem to be dinged for this on 95% of our pages. Is this hurting us? It seems to point back to the same page. E.g., the FMI 3600 page (http://www.brick-anew.com/FMI-3600-Fireplace-Doors.html) carries a canonical tag referencing http://www.brick-anew.com/FMI-3600-Fireplace-Doors.html itself. There is only one page for the FMI 3600 fireplace door category; however, it does have the same products on it as other fireplace door category pages.
-
Fetch googlebot for sites you don't own?
I've used the "Fetch as Googlebot" tool in Google Webmaster Tools to submit links from my site, but I was wondering if there is any tool or submission process like this for submitting links from other sites that you do not own. The reason I ask is that I worked for several months to get a website to accept my link as part of their dealer locator tool. The link to my site was published a few months ago; however, I don't think Google has found it, and the reason could be that you have to type in your zip code to get the link to appear. This is the website I am referencing: http://www.ranchhand.com/dealers.php?zip=78070&radius=20 (my website is www.rangeroffroad.com). Is there any way to get Google to index the link? Any ideas?
-
SEOMoz site crawlers created an issue for our servers
I have set up a number of campaigns with your Pro tool. Unfortunately, we have 7 sites on our server, and our IT dept said we had an issue when your site crawlers visited several of the sites at the same time. Is there any way I can retain the campaigns but have the sites crawled on request rather than automatically?