First Crawl Report
-
Just joined SEOMoz today and am slightly overwhelmed, but excited about learning loads from it.
I've just received my Crawl Report and there is a
404 : UserPreemptionError:
http://www.iainmoran.com/comments/feed/
This is a WordPress site and I've no idea of the best course of action to take. I've done some searching on Google, and a couple of sites suggest removing that URL via the robots.txt file. I'm using the Yoast plugin, which apparently creates a robots.txt file, but I can't see any way to edit it. Is there another solution for resolving the 404 error?
Many thanks,
Iain.
-
John, thank you so much for your help with this. It's very much appreciated.
That does explain why I couldn't find the robots.txt file.
Is there another way to resolve the 404 error warning that SEOMoz is telling me about?
404 : UserPreemptionError:
http://www.iainmoran.com/comments/feed/
I understand that if I disable comments on my WP site, that would fix the issue, but I don't really want to do that if it can be avoided.
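I did wonder whether, assuming the site runs on Apache, a simple 301 redirect in .htaccess might clear the warning without touching comments, something like:

```apache
# Illustrative rule (assumes Apache with mod_alias enabled):
# permanently redirect the dead comments feed to the homepage
Redirect 301 /comments/feed/ /
```

But I'm not sure if that's the right approach.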
Thanks again,
Iain.
-
Actually, on further investigation, blocking the includes folder does not affect ranking, as the content is loaded server-side (PHP).
I don't imagine that the robots.txt file is your issue, as it's just stopping people accessing your includes and admin folders, which you want.
-
Iain
I don't use WP, but just Googled this:
"I believe WordPress itself creates the robots.txt file and it is a virtual file, meaning there won't be a hard copy on your server for you to edit. You could create one yourself and upload it to your server and I think search engines will use that one instead, or you can use a plugin like this one, that will let you edit your robots.txt file from the WordPress admin."
Explains why you can't find it!
Basically, that suggests creating one yourself and uploading it to the server.
Create a Notepad file, add your content, name the file robots.txt (the extension must be .txt), and that should work.
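As an illustration, a minimal WordPress-style robots.txt might look something like this (the paths here are just common examples, not necessarily what your site needs):

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
```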
-
Iain
Just to be sure, it's actually called robots.txt (not robots.text).
I just checked, and you do have one in your root directory:
http://www.iainmoran.com/robots.txt
Bizarrely, it's blocking Google from indexing your includes, which is a problem because most of your links will be in some sort of nav include, not to mention most of your website content.
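If you want to see how rules like that behave, Python's built-in robots.txt parser can test them offline; the rules below are a simplified stand-in for what's in your live file, not an exact copy:

```python
from urllib.robotparser import RobotFileParser

# Simplified stand-in for the live robots.txt rules described above
rules = """User-agent: *
Disallow: /includes/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A URL under the blocked includes folder is disallowed for Googlebot
print(rp.can_fetch("Googlebot", "http://www.iainmoran.com/includes/nav.php"))
# A normal page is still allowed
print(rp.can_fetch("Googlebot", "http://www.iainmoran.com/about/"))
```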
It's definitely there; maybe check your public folder and www folder.
Let me know how you get on
John
-
Thanks for your quick response, Johnny,
I've just looked at my root directory via both cPanel and FTP, but there is no robots.txt file there.
I don't understand why this is, as in the Yoast section within each of my pages there are some which I have set to "nofollow".
Also, within the Yoast CP there is a section titled "Robots.txt", which says the following under it:
"If you had a robots.txt file and it was editable, you could edit it from here."
Iain.
-
Iain
Your robots.txt file can be found within your root directory.
Wherever your website is hosted, log in there (if you have cPanel, which is more than likely) and navigate to your public folder. The robots.txt file should reside there; download it and edit it in almost any software, WordPad etc.
If you don't have cPanel, you can add your FTP credentials to an FTP client and navigate the same way.
When you've edited it, upload it back to your server.
Regards
J