How to find orphan pages
-
Hi all,
I've been checking these forums for an answer on how to find orphaned pages on my site, and a lot of people say I should cross-check my XML sitemap against a Screaming Frog crawl of the site.
However, the sitemap is created using Screaming Frog in the first place... (I'm sure this is the case for a lot of people too).
Are there any other ways to get a full list of orphaned pages? I assume it would be a developer request but where can I ask them to look / extract?
Thanks!
-
Yes, as I mentioned, in my case I use Semrush, which has a dedicated report for that specific check. The easiest way to get your log files is to log into your cPanel and look for an option called Raw Access Logs. If you still can't find it, you may need to contact your hosting provider and ask them to provide the log files for your site.
Raw Access Logs allow you to see what the visits to your website were without displaying graphs, charts, or other graphics. You can use the Raw Access Logs menu to download a zipped version of the server’s access log for your site. This can be very useful when you want to quickly see who has visited your site.
Raw logs may only contain a few hours' worth of data because they are discarded after the system processes them. However, if archiving is enabled, the system archives the raw log data before discarding it. So go ahead and make sure that archiving is turned on!
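Once you have a raw access log downloaded, you can pull out just the URLs that were actually requested. A minimal Python sketch, assuming the common Apache/Nginx combined log format (field positions may differ on your server, and the sample lines below are made up for illustration):

```python
import re

# Matches the request line of a combined-format access log entry, e.g.
# 66.249.66.1 - - [10/Oct/2023:13:55:36 +0000] "GET /about HTTP/1.1" 200 ...
REQUEST_RE = re.compile(r'"(?:GET|POST|HEAD) (\S+) HTTP/[^"]*"')

def urls_from_log(lines):
    """Return the set of unique request paths found in raw access log lines."""
    paths = set()
    for line in lines:
        match = REQUEST_RE.search(line)
        if match:
            paths.add(match.group(1))
    return paths

# Hypothetical sample entries standing in for a real log file
sample = [
    '66.249.66.1 - - [10/Oct/2023:13:55:36 +0000] "GET /about HTTP/1.1" 200 512 "-" "Googlebot"',
    '66.249.66.1 - - [10/Oct/2023:13:56:01 +0000] "GET /old-page HTTP/1.1" 200 743 "-" "Googlebot"',
]
print(sorted(urls_from_log(sample)))  # ['/about', '/old-page']
```

In practice you would read the unzipped log with `open(...)` and pass the file object straight to `urls_from_log`, since iterating a file yields lines.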
Once you have your log file ready to go, you now need to gather the other data set of pages that can be crawled by Google, using Screaming Frog.
Crawl Your Pages with Screaming Frog SEO Spider
Using the Screaming Frog SEO Spider, you can crawl your website as Googlebot would, and export a list of all the URLs that were found.
Once you have Screaming Frog ready, first ensure that your crawl Mode is set to the default ‘Spider’.
Then make sure that under Configuration > Spider, ‘Check External Links’ is unchecked, to avoid unnecessary external site crawling.
Now you can type in your website URL, and click Start.
Once the crawl is complete, simply
a. Navigate to the Internal tab.
b. Filter by HTML.
c. Click Export.
d. Save in .csv format.
Now you should have two sets of URL data, both in .csv format.
All you need to do now is compare the URL data from the two .csv files and find the URLs that were not crawlable. If you decided to analyze a log file instead, you can use the Screaming Frog SEO Log File Analyser to uncover your orphan pages. (Keep in mind that the Log File Analyser is not the same tool as the SEO Spider.)
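The comparison itself is just a set difference: anything in the log (or sitemap) list that never appeared in the crawl export is an orphan candidate. A minimal sketch, using hypothetical in-memory lists standing in for the two .csv files (in practice you would load Screaming Frog's "Address" column and your log-derived list with `csv.DictReader` or similar):

```python
def find_orphans(crawled_urls, known_urls):
    """URLs the server knows about (logs/sitemap) that the crawler never reached."""
    return sorted(set(known_urls) - set(crawled_urls))

# Hypothetical stand-ins for the two exported .csv files
crawled = ["https://example.com/", "https://example.com/about"]
known = [
    "https://example.com/",
    "https://example.com/about",
    "https://example.com/orphan",
]
print(find_orphans(crawled, known))  # ['https://example.com/orphan']
```

Note that log URLs are often relative paths while crawl exports are absolute, so you may need to normalize one side (strip the domain, trailing slashes, and query strings) before comparing.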
The tool is very easy to use (you can download it from the Screaming Frog site); from the dashboard you can import the two data sets that you need to analyze.
If the answer was useful, don't forget to mark it as a good answer. Good luck!
-
Hi Roman,
Out of interest, is there an option to export an orphan page report like there is in Screaming Frog (Reports > Orphan Pages)?
I guess the most realistic option is to get the list from the dev team, since using the sitemap isn't workable here, and those pages should still get indexed. The new Google Search Console also lets you test individual pages, and as long as they're in the sitemap, they should (hopefully) be indexed.
Still, trying to get a list of ALL pages on a site, without dev support, seems to be the challenge I'm trying to solve.
-
Even Screaming Frog has problems finding all the orphan pages. I use Screaming Frog, Moz, Semrush, Ahrefs, and Raven Tools in my day-to-day work, and honestly, Semrush is the one that gives me the best results for that specific task. From experience: a few months ago I took on a website that was a complete disaster, with no sitemap, no canonical tags, no meta tags, and so on.
I ran Screaming Frog and it showed me just 200 pages, but I knew there were far more; in the end I found 5k pages with Semrush. Probably even the Screaming Frog crawler has problems with that website, so I mention it as a matter of experience.
Related Questions
-
Page ranked then disappeared
Recently a couple of pages from my website ranked well, in the top 5, for a couple of days, then suddenly disappeared; they are not seen in Google search results at all, no matter how narrowly I search for them. I checked my Search Console and there seem to be no issues with the page, but when I check Google Analytics I do not get any data from that page since the day it disappeared, and it does not even show up in the 'active pages' section no matter how long I keep the URL open on multiple computers.
Technical SEO | JoelssonMedia
Has anyone else faced this issue? Is there a solution to it?
-
Pages with Title Missing or Empty
When I crawl my site, Moz gave me 275 'Pages with Title Missing or Empty' within the High Priority Issues. So I went and reviewed it, and the pages are tags that I use in my blog posts. For example: https://www.cuencacigars.com/blog/tag/+cuenca+cigars https://www.cuencacigars.com/blog/tag/+florida https://www.cuencacigars.com/blog/tag/+Mobile How do I fix this? Is that a real error, or is Moz just being Mozzy? The pages have Authority 11; should I put some content on those pages since they had a decent rank? Maybe I can use this to my advantage? Or maybe I should choose my tags better? Is there anything to learn about tagging?
Technical SEO | mcuenca2410
-
Canonicalisation and Dynamic Pages
We have an e-commerce single-page app hosted at https://www.whichledlight.com, and part of this site is our search results page (http://www.whichledlight.com/t/gu10-led-bulbs?fitting_eq=GU10). To narrow down products on the results page we make heavy use of query parameters. From an SEO perspective, we are telling Googlebot not to index pages that include these query parameters, to prevent duplicate content issues, and not to index pages where the combination of query parameters returned no results. The only exception to this is the page parameter. We are posting here to check our homework, so to speak. Does the above sound sensible? Although we have told Googlebot not to index these pages, Moz will still crawl them (to the best of my knowledge), so we will continue to see crawl errors in our Moz reports where these issues don't in fact exist. Is this true? Is there any way to make Moz ignore pages with certain query parameters? Any other suggestions to improve the SEO of our results pages are most appreciated. Thanks
Technical SEO | TrueluxGroup
-
Can't find internal links on one of my pages.
When I run Open Site Explorer for www.kingremodeling.com/ss.php?pid=5 it says there are 40 links to it on my site. However, I cannot find these links on any of the pages that Open Site Explorer lists, such as my homepage www.KingRemodeling.com. Totally confused!
Technical SEO | allb83
-
Wordpress Page vs. Posts
My campaigns are telling me I have some duplicate content. I know the reason but am not sure how to correct it. Example site here: Bikers Blog is a 'static page' referencing each actual 'blog post' I write. This site is somewhat orphaned and about to be reconstituted. I have a number of other sites with a similar problem. I'm not sure how to structure the 'page' so it only shows a summary of the blog post, not the whole post. Permalinks are set to '/%postname%/'. I've posted on Wordpress.org with no answer. Since this is an SEO issue, I thought maybe someone with WP experience could chime in. Thanks, Don
Technical SEO | NicheGuy
-
Advice on Duplicate Page Content
We have many pages on our website and they all have the same template (we use a CMS); at the code level they are 90% the same. But the page content, title, meta description, and image used are different for all of them. For example:
http://www.jumpstart.com/common/find-easter-eggs
http://www.jumpstart.com/common/recognize-the-rs
We have many such pages. Does Google look at them all as duplicate page content? If yes, how do we deal with this?
Technical SEO | jsmoz
-
How to find all the links to my site
Hi, I have been trying to find all the links to my site, http://www.clairehegarty.co.uk, but I am not having any luck. I have used Open Site Explorer but it is not showing all the links, and when I go to my Google Webmaster page it shows me more pages than the SEOmoz tool does. Can anyone help me sort this out and find out exactly what links are going into my site? Many thanks
Technical SEO | ClaireH-184886
-
How do I eliminate duplicate page titles?
Almost... I repeat, almost all of my duplicate page titles show up as such because the page is being seen twice in the crawl. How do I prevent this?
| www.ensoplastics.com/ContactUs/ContactUs.html | Contact ENSO Plastics |
| ensoplastics.com/ContactUs/ContactUs.html | Contact ENSO Plastics |
This is what is in the CSV; there are many more just like this. How do I cut out all of these duplicate URLs?
Technical SEO | ENSO