Identify Page Not Found Visits
-
Hello everyone! I have always known just enough about Google Analytics and SEO to be dangerous, but neither has been a focus for me. I am now working on a project where I am digging into areas where my knowledge is limited.
The scenario is that the domain I am looking at serves a 404 error but keeps the requested URL in place, I assume for tracking purposes.
At the same time, there is a page "Page_Not_Found" that has elevated visits.
-
I am not sure how to tell where the visits to the Page_Not_Found page are coming from, since the Previous Page is mostly reported as "(entrance)".
-
Is the Page_Not_Found traffic a result of serving an error page without changing the URL?
Ideally, I am looking to identify where the 404 visits come from and reduce them. I hope that I have provided clear enough information. Happy to provide more as needed.
-
-
@cayk I am not sure how to do that, but I might be able to research and ask around. Thank you.
-
@hankhoffmeier In addition to the other replies here, if you can add an image that appears only on the 404 error page, you can get the information you're hunting for server-side by measuring hits to that image. Many server-side reporting tools will also let you break down image downloads by URL, which works around your core issue with the 404 URLs themselves.
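To illustrate the idea, here is a rough sketch of that server-side approach in Python, assuming a combined-format Apache/Nginx access log; the log path and the marker image name are hypothetical placeholders you would replace with your own:

```python
import re
from collections import Counter

# Rough sketch: count hits to an image that only appears on the 404 template,
# grouped by the Referer header. Because the image is requested *by* the 404
# page, the referrer is the broken URL the visitor actually landed on.
# The log path and marker image name below are hypothetical placeholders.
LOG_PATH = "/var/log/nginx/access.log"
MARKER = "/images/404-marker.png"

# Combined log format: ... "GET /path HTTP/1.1" status bytes "referer" "user-agent"
LINE_RE = re.compile(r'"(?:GET|HEAD) (?P<path>\S+)[^"]*" \d{3} \S+ "(?P<referer>[^"]*)"')

broken_urls = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        if match and match.group("path").startswith(MARKER):
            broken_urls[match.group("referer") or "(no referrer)"] += 1

# Most-hit broken URLs (i.e. the pages that requested the 404 marker image)
for url, hits in broken_urls.most_common(20):
    print(f"{hits:6d}  {url}")
```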
-
@lynnpatchett Thank you. I tried Screaming Frog. While it did help with some items, it did not solve the overall problem. I will keep researching.
-
@lynnpatchett said in Identify Page Not Found Visits:
If you think you have internal links to 404 pages, try https://www.screamingfrog.co.uk/ - it is a tool that crawls your site; you can filter for your 404s and then see which of your pages the links are on. It is a real time saver! The free version has some restrictions but should be enough to get you started.
Thank you for the link. That worked for me!
-
@lynnpatchett Thank you. I will take a look at both!
-
@hankhoffmeier Hi, a bit of a late reply, but try the following:
-
In Analytics you can go to the Behavior -> Site Content -> Landing Pages report and then filter for your page-not-found URL (or page title, if it is common across your 404s). This will give you the external sites that are linking to one of your 404 pages.
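If you would rather pull the same data programmatically, here is a minimal sketch against the Universal Analytics Reporting API v4 in Python. The service-account key file, view ID, and the "Page_Not_Found" filter value are assumptions; adjust them (or filter on ga:pageTitle instead) to match how your 404s actually appear in your view:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Minimal sketch: landing pages that match the 404 page, plus the full
# referrer that sent each session. Key file, view ID, and the filter value
# are hypothetical placeholders - replace them with your own.
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/analytics.readonly"],
)
analytics = build("analyticsreporting", "v4", credentials=credentials)

request_body = {
    "reportRequests": [{
        "viewId": "123456789",  # hypothetical view ID
        "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
        "metrics": [{"expression": "ga:sessions"}],
        "dimensions": [
            {"name": "ga:landingPagePath"},
            {"name": "ga:fullReferrer"},
        ],
        "dimensionFilterClauses": [{
            "filters": [{
                "dimensionName": "ga:landingPagePath",
                "operator": "PARTIAL",
                "expressions": ["Page_Not_Found"],
            }]
        }],
        "orderBys": [{"fieldName": "ga:sessions", "sortOrder": "DESCENDING"}],
    }]
}

response = analytics.reports().batchGet(body=request_body).execute()

# Each row: [landing page path, full referrer], [session count]
for row in response["reports"][0].get("data", {}).get("rows", []):
    landing_page, referrer = row["dimensions"]
    sessions = row["metrics"][0]["values"][0]
    print(f"{sessions:>6}  {referrer:<50}  {landing_page}")
```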
-
If you think you have internal links to 404 pages, try https://www.screamingfrog.co.uk/ - it is a tool that crawls your site; you can filter for your 404s and then see which of your pages the links are on. It is a real time saver! The free version has some restrictions but should be enough to get you started.
Hope it helps!
-