Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Why is my Crawl Report Showing Thousands of Pages that Do Not Exist?
-
Hi,
I just downloaded a Crawl Summary Report for a client's website. I am seeing THOUSANDS of duplicate page content errors. The overwhelming majority of them look something like this:
This page doesn't exist and results in a 404 page. Why are these pages showing up? How do I get rid of them? Are they endangering the health of my site as a whole?
Thank you,
Jenna
-
Hi Jenna,
It's not so much the 404 pages themselves that are the problem for SEO; the bigger issue is that your site is making it hard for search engines to crawl it correctly and efficiently, because their crawlers are getting caught in an endless loop. A crawler stuck in that loop may simply give up and leave, which means the search engines may never reach the rest of the pages on your site, and that can hurt your rankings as a whole. One of the most important parts of SEO is making your website as "friendly" to the search engines as possible, and endless loops are definitely not that. Hope that helps!
Patrick
-
Hi Streamline -
Thanks for your help thus far. Could you elaborate on some of the SEO challenges this presents? After a bit of research, I'm seeing people say that having hundreds or thousands of 404s is okay, as long as they are in fact non-existent pages. I'm not that well educated on this, so I'm just looking for a bit of clarification.
I will look into the relative URL issue. I just recently took over the work on this site, and I'm still digging into what the original web developer created.
Jenna
-
It looks like the crawler is being caught in an endless loop, most likely as a result of relative URLs being used somewhere on your site. Yes, this is a problem for the site as a whole, so I highly recommend implementing absolute URLs throughout the entire site.
Edit - I just looked at your site and this is exactly what it is. The links in your navigation are relative, such as href="../development/default.aspx", so just change them to absolute URLs such as http://www.yoursite.com/development/default.aspx and it should fix the problem.
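To illustrate the mechanics with a quick sketch (the domain and paths here are hypothetical, not the markup from the actual site): a crawler resolves a relative href against the URL of the page it is currently on, so if the server answers the resulting deeper paths with a page (for example an error template served as a soft 404) rather than refusing them, every resolved URL yields yet another "new" duplicate page. Python's standard library shows the effect:

```python
from urllib.parse import urljoin

# Hypothetical relative nav link (no leading slash) and its absolute fix.
relative_href = "development/default.aspx"
absolute_href = "http://www.example.com/development/default.aspx"

current = "http://www.example.com/development/default.aspx"
for _ in range(3):
    # The crawler resolves the relative link against the page it is on,
    # producing a deeper URL each time it follows the "same" link.
    current = urljoin(current, relative_href)
    print(current)
# http://www.example.com/development/development/default.aspx
# http://www.example.com/development/development/development/default.aspx
# http://www.example.com/development/development/development/development/default.aspx

# An absolute href resolves to the same URL no matter how deep the crawler
# has wandered, so the chain of duplicate URLs never starts.
print(urljoin(current, absolute_href))
# http://www.example.com/development/default.aspx
```

Whether the server also returns a proper 404 status for those deeper paths changes how they show up in a crawl report, but fixing the links is what stops new URLs from being generated in the first place.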
Related Questions
-
My last site crawl shows over 700 404 errors all with void(0 added to the ends of my posts/pages.
Hello, My last site crawl shows over 700 404 errors all with void(0 added to the ends of my posts/pages. I have contacted my theme company but I'm not sure what could have done this. Any ideas? The original posts/pages are still correct and working; it just looks like it created duplicates and added void(0 to the end of each post/page. Questions: There is no way to undo this, correct? Do I have to do a redirect on each of these? Will this hurt my rankings and domain authority? Any suggestions would be appreciated. Thanks, Wade
Intermediate & Advanced SEO | neverenoughmusic.com
-
Paginated Pages Which Shouldn't Exist..
Hi I have paginated pages on a crawl which shouldn't be paginated: https://www.key.co.uk/en/key/chairs My crawl shows:
https://www.key.co.uk/en/key/chairs?page=2
https://www.key.co.uk/en/key/chairs?page=3
https://www.key.co.uk/en/key/chairs?page=4
https://www.key.co.uk/en/key/chairs?page=5
https://www.key.co.uk/en/key/chairs?page=6
https://www.key.co.uk/en/key/chairs?page=7
https://www.key.co.uk/en/key/chairs?page=8
https://www.key.co.uk/en/key/chairs?page=9
https://www.key.co.uk/en/key/chairs?page=10
https://www.key.co.uk/en/key/chairs?page=11
https://www.key.co.uk/en/key/chairs?page=12
https://www.key.co.uk/en/key/chairs?page=13
https://www.key.co.uk/en/key/chairs?page=14
https://www.key.co.uk/en/key/chairs?page=15
https://www.key.co.uk/en/key/chairs?page=16
https://www.key.co.uk/en/key/chairs?page=17
Where is this coming from? Thank you
Intermediate & Advanced SEO | BeckyKey
-
Crawled page count in Search console
Hi Guys, I'm working on a project (premium-hookahs.nl) where I stumble upon a situation I can't address. Attached is a screenshot of the crawled pages in Search Console. History: Due to technical difficulties, this webshop didn't always noindex filter pages, resulting in thousands of duplicated pages. In reality this webshop has fewer than 1000 individual pages. At this point we took the following steps to resolve this: noindex the filter pages; exclude those filter pages in Search Console and robots.txt; canonical the filter pages to the relevant category pages. This however didn't result in Google crawling fewer pages. Although the implementation wasn't always sound (technical problems during updates), I'm sure this setup has been the same for the last two weeks. Personally I expected a drop in crawled pages, but they are still sky high. Can't imagine Google visits this site 40 times a day. To complicate the situation: we're running an experiment to gain positions on around 250 long-tail searches. A few filters will be indexed (size, color, number of hoses and flavors) and three of them can be combined. This results in around 250 extra pages. Meta titles, descriptions, H1s and texts are unique as well. Questions: Excluding in robots.txt should result in Google not crawling those pages, right? Is this number of crawled pages normal for a website with around 1000 unique pages? What am I missing?
Intermediate & Advanced SEO | Bob_van_Biezen
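One quick sanity check for a robots.txt setup like the one described above is Python's built-in urllib.robotparser, which reports whether a given user agent is allowed to fetch a given URL under your rules. This is a minimal sketch; the rules and URLs are hypothetical stand-ins, not the webshop's actual configuration:

```python
import urllib.robotparser

# Hypothetical robots.txt rules and URLs, for illustration only.
robots_txt = """\
User-agent: *
Disallow: /filter/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("Googlebot", "https://example.nl/filter/size-small"))  # False: blocked
print(rp.can_fetch("Googlebot", "https://example.nl/hookahs/"))           # True: crawlable

# Caveats: urllib.robotparser implements only the basic standard, so it will
# not evaluate Google's wildcard (*) or end-of-URL ($) patterns. Also note
# that a Disallow only stops compliant crawling; it does not drop URLs that
# are already indexed, and it prevents Google from ever seeing noindex or
# canonical tags on the blocked pages, since those pages are never fetched.
```
-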
Would you rate-control Googlebot? How much crawling is too much crawling?
One of our sites is very large - over 500M pages. Google has indexed 1/8th of the site - and they tend to crawl between 800k and 1M pages per day. A few times a year, Google will significantly increase their crawl rate - overnight hitting 2M pages per day or more. This creates big problems for us, because at 1M pages per day Google is consuming 70% of our API capacity, and the API overall is at 90% capacity. At 2M pages per day, 20% of our page requests are 500 errors. I've lobbied for an investment / overhaul of the API configuration to allow for more Google bandwidth without compromising user experience. My tech team counters that it's a wasted investment - as Google will crawl to our capacity whatever that capacity is. Questions to Enterprise SEOs:
* Is there any validity to the tech team's claim? I thought Google's crawl rate was based on a combination of PageRank and the frequency of page updates. This indicates there is some upper limit - which we perhaps haven't reached - but which would stabilize once reached.
* We've asked Google to rate-limit our crawl rate in the past. Is that harmful? I've always looked at a robust crawl rate as a good problem to have. Is 1.5M Googlebot API calls a day desirable, or something any reasonable Enterprise SEO would seek to throttle back?
* What about setting a longer refresh rate in the sitemaps? Would that reduce the daily crawl demand? We could increase it to a month, but at 500M pages Google could still have a ball at the 2M pages/day rate. Thanks
Intermediate & Advanced SEO | lzhao
-
Putting "noindex" on a page that's in an iframe... what will that mean for the parent page?
Say I've got a page that is being called in an iframe on my homepage, and I don't want that called page to be indexed, so I put a noindex tag on the called page (but not on the homepage). What might that mean for the homepage? Nothing? Will Google, Bing, Yahoo, or anyone else potentially see that as a noindex tag on my homepage?
Intermediate & Advanced SEO | Philip-DiPatrizio
-
Does an H1 have to be at the top of a page?
Because H1 "may" carry some weight with Google does it have to be placed at the top of the page? Can I place it towards the bottom of the page instead in normal body size? My goal is to keep the main keywords in the H1 but create a much friendlier title for the customer to read at the top of the page.
Intermediate & Advanced SEO | | PottyScotty0 -
Blocking Pages Via Robots, Can Images On Those Pages Be Included In Image Search
Hi! I have pages within my forum where visitors can upload photos. When they upload photos they provide a simple statement about the photo but no real information about the image, definitely not enough for the page to be deemed worthy of being indexed. The industry, however, is one that really leans on images, and having the images in Google Image search is important to us. The URL structure is like such: domain.com/community/photos/~username~/picture111111.aspx I wish to block the whole folder from Googlebot to prevent these low-quality pages from being added to Google's main SERP results. This would be something like this:
User-agent: googlebot
Disallow: /community/photos/
Can I disallow Googlebot specifically rather than just using User-agent: * which would then allow googlebot-image to pick up the photos? I plan on configuring a way to add meaningful alt attributes and image names to assist in visibility, but the actual act of blocking the pages and getting the images picked up... Is this possible? Thanks! Leona
Intermediate & Advanced SEO | HD_Leona
-
Disallowed Pages Still Showing Up in Google Index. What do we do?
We recently disallowed a wide variety of pages for www.udemy.com which we do not want Google indexing (e.g., /tags or /lectures). Basically we don't want to spread our link juice around to all these pages that are never going to rank. We want to keep it focused on our core pages, which are for our courses. We've added them as disallows in robots.txt, but after 2-3 weeks Google is still showing them in its index. When we look up "site:udemy.com", for example, Google currently shows ~650,000 pages indexed... when really it should only be showing ~5,000 pages indexed. As another example, if you search for "site:udemy.com/tag", Google shows 129,000 results. We've definitely added "/tag" into our robots.txt properly, so this should not be happening... Google should be showing 0 results. Any ideas re: how we get Google to pay attention and re-index our site properly?
Intermediate & Advanced SEO | udemy