Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies. More details here.
Website can't be crawled
-
Hi there,
One of our websites can't be crawled. We did get the error emails from you (Moz), but we can't find the solution. Can you please help me?
Thanks,
Tamara
-
@Yenlo I am facing the same issue with my website, https://jeem.pk/. After the Google core update, the website was totally deindexed from Google, and the spam score increased to 70.
-
Hello,
I am facing an issue with a client's website. After the last Google update, https://jeem.pk/ dropped in the rankings, and I can't pinpoint the problem. Most pages are now deindexed.
The spam score has increased to 70 since the Google update.
Please guide me; I have two questions.
-
Sometimes, when web designers first build a website, they put a noindex tag on every page. Obviously, this needs to be removed later if you want the website to be indexed.
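One way to check for this is to scan a page's HTML for a robots meta tag. As a rough illustration (a sketch using only the Python standard library; the sample page is invented), a script can flag pages whose `<meta name="robots">` contains a noindex directive:

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collects the content of every <meta name="robots"> tag on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", "").lower())

def has_noindex(html):
    """Return True if any robots meta tag on the page contains 'noindex'."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return any("noindex" in d for d in finder.directives)

# Invented sample page for illustration.
page = '<html><head><meta name="robots" content="noindex, nofollow"></head><body></body></html>'
print(has_noindex(page))  # True
```

Note that a noindex can also be delivered in an X-Robots-Tag HTTP response header, so checking the HTML alone isn't conclusive.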
-
We had this problem before with a website where the blog posts couldn't be crawled.
We would highly recommend speaking to an SEO agency for advice, as pages that can't be crawled and indexed will hurt your organic SEO.
For example, we had this problem with a company that sells summerhouses; their blog posts couldn't be crawled, and it negatively affected their organic SEO.
-
Hi Jo,
Thanks for checking! I will ask our website provider (HubSpot) if they block AWS or rogerbot.
Thanks,
Tamara
-
Hi there,
Jo here from the Moz Help Team.
The best way to investigate issues like this is to start with our troubleshooting guide here: https://moz.com/help/moz-pro/site-crawl/crawl-troubleshooting
I have tried this myself and got to step 3, where I can see an error when I try your site in the third-party status checker.
This is an indication that crawlers like ours, and tools like this one, are being blocked from accessing your site at the server level.
Please check with your website admin to make sure they are not blocking AWS or rogerbot. Once that's fixed, you can trigger a recrawl of your site if you have a Medium subscription or higher: https://moz.com/help/moz-pro/site-crawl/overview
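As a rough self-serve check (a sketch using only the Python standard library, not an official Moz tool; the crawler User-Agent string below is a placeholder, not necessarily the exact one rogerbot sends), you can compare the HTTP status your server returns for a browser-like User-Agent against a crawler-like one. A 200 for the browser but a 403 for the crawler suggests server-level blocking:

```python
import urllib.error
import urllib.request

# Placeholder User-Agent strings for illustration; the real rogerbot UA
# may differ, so check Moz's crawler documentation for the exact string.
USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "crawler": "rogerbot",
}

def status_for(url, user_agent, timeout=10):
    """Fetch url with the given User-Agent and return the HTTP status code,
    including error statuses such as 403."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

# Example (performs live requests; substitute your own site):
# for name, ua in USER_AGENTS.items():
#     print(name, status_for("https://example.com/", ua))
```

If the crawler User-Agent consistently gets 403 or times out while the browser one gets 200, ask your host or firewall provider to allow rogerbot and the AWS addresses it crawls from.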
If you're still stuck please reach out to help@moz.com.
Best
Jo
Related Questions
-
Unsolved Crawling only the Home of my website
Hello,
Product Support | | Azurius
I don't understand why Moz crawls only the homepage of our website, https://www.modelos-de-curriculum.com. We added the website correctly and asked for all pages to be crawled, but the tool finds only the homepage. Why? We are testing the tool before subscribing, and we need to be sure it works for our website. Please help if you can.
-
Unsolved 403 crawl error
Hi, Moz (and also Google Search Console) has reported a 403 crawl error on some of my pages. The pages actually work fine when loaded, with no visible issues at all. My web developer told me that errors are sometimes reported on working pages and there is nothing to worry about.
Product Support | | ghrisa65
My question is: will the 403 errors have negative consequences for my SEO, page rankings, etc.? These are some of the pages that have been reported with a 403 error but load fine: https://www.medistaff24.co.uk/hourly-home-care-in-evesham/ https://www.medistaff24.co.uk/contact-us/
-
Site Crawl Status code 430
Hello, In the site crawl report we have a few pages that are status 430 - but that's not a valid HTTP status code. What does this mean / refer to?
Product Support | | ianatkins
https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_errors If I visit the URL from the report, I get a 404 response code; is this a bug in the site crawl report? Thanks, Ian.
-
False 5xx Errors for ColdFusion website
For several years, month after month, the Moz crawl has reported 5xx errors on many pages. Almost every time, the pages work fine as far as I can see, and Google Search Console does not report any errors. Could anyone explain how to fix this situation? Should I get a refund from Moz?
Product Support | | Elchanan
-
I have a client who wants to have their own Moz login. I want to move them out on to a new plan from the Standard (5 campaigns) plan. Can I do this?
For years I've been managing campaigns in Moz through Moz Pro, which allows up to 5 campaigns. One of my clients wants access to their campaign and is happy to pay for a new Moz Standard account just for them. My question is: can this be done? They would like my company (or, more specifically, me) to still have admin access to the account. I'm confused as to whether multi-seat is the right way to go. I don't want my client to log in to my current Moz Pro account, as I have other clients' campaigns on it. Can anyone advise on the best approach?
Product Support | | T_Cooper
-
Crawling issue
Hello,
Product Support | | Benjamien
I have added the campaign IJsfabriek Strombeek (ijsfabriekstrombeek.be) to my account. After the website had been crawled, it showed only 2 crawled pages, but this site has over 500 pages. It is divided into four versions: Dutch, French, English and German. I thought that could be the issue, because I had only filled in the root domain ijsfabriekstrombeek.be, so I created another campaign with the name ijsfabriekstrombeek and the URL ijsfabriekstrombeek.be/nl. When Moz crawled this one, I got the following remark:
**Moz was unable to crawl your site on Feb 21, 2018.** Your page redirects or links to a page that is outside of the scope of your campaign settings. Your campaign is limited to pages with ijsfabriekstrombeek.be/nl in the URL path, which prevents us from crawling through the redirect or the links on your page. To enable a full crawl of your site, you may need to create a new campaign with a broader scope, adjust your redirects, or add links to other pages that include ijsfabriekstrombeek.be/nl. Typically errors like this should be investigated and fixed by the site webmaster. I have checked the robots.txt and that is fine. There are also no robots meta tags in the code, so what can be the problem? I really need to see an overview of all the pages on the website, so I can use Moz for the purpose I described: SEO improvement. Please get back to me soon. Is there a possibility that someone could sort out this issue through 'Join me'? Thanks
-
How to block Rogerbot From Crawling UTM URLs
I am trying to block rogerbot from crawling some UTM URLs we have created, but I'm having no luck. My robots.txt file looks like:
User-agent: rogerbot
Disallow: /?utm_source*
This does not seem to be working. Any ideas?
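One thing worth checking (a hedged suggestion, since wildcard support varies between crawlers): some robots.txt parsers do plain prefix matching and treat `*` literally, so `Disallow: /?utm_source=` (no trailing `*`) may behave more predictably. You can sanity-check prefix rules with Python's standard-library parser, which itself does only prefix matching (no wildcards); the rules and URLs below are invented for illustration:

```python
import urllib.robotparser

# Hypothetical rules for illustration: block rogerbot from UTM-tagged URLs
# on the homepage using a plain prefix match (no "*" wildcard).
rules = """\
User-agent: rogerbot
Disallow: /?utm_source=
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("rogerbot", "https://example.com/?utm_source=newsletter"))  # False
print(rp.can_fetch("rogerbot", "https://example.com/pricing"))                 # True
```

If rogerbot does honor `*` wildcards (check Moz's robots.txt documentation to confirm), a pattern like `Disallow: /*?utm_source=` would also cover UTM parameters on deeper paths such as /blog/post?utm_source=..., which the plain prefix rule above does not.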
Product Support | | Firestarter-SEO
-
I have removed a subdomain from my main domain. We have stopped the subdomain completely. However, the crawl still shows errors for that subdomain. How do I remove them from the crawl reports?
Earlier I had a forum as a subdomain, and it was mentioned on my main domain. However, I have now discontinued the forum and removed all links to and mentions of it from the main domain. But the crawler still shows errors for the subdomain. How can I clean up or delete these irrelevant crawl issues? I no longer have the forum, and there are no links to it on the main site, but crawl errors are still shown for a forum that doesn't exist.
Product Support | | potterharry