SEOMoz only crawling 5 pages of my website
-
Hello,
I've added a new website to my SEOmoz campaign tool. It only crawls 5 pages of the site. I know the site has far more pages than this, and it also has a blog.
Google shows at least 1000 results indexed.
Am I doing something wrong? Could it be that the site is preventing a proper crawl?
Thanks
Bill
-
You should have set up a subdomain campaign (which is very likely what you have done anyway), but this linking issue is a real sticking point for you at the moment.
It's difficult to give you concrete advice without knowing your friend's business model, marketing strategy and content. However, let's just say for neatness he wants to keep his main squeeze page as it is, at www.kingofcopy.com. You could separate all of the squeeze pages from the 'subscribers' content by creating a sub-folder called 'members-area', for example - so www.kingofcopy.com keeps the squeeze page where it is now (and additional squeeze pages reside at www.kingofcopy.com/maxoutsideusa.html etc.),
and all of the opt-in content is moved to www.kingofcopy.com/members-area/, ensuring all of the good info that shouldn't be visible is noindexed accordingly.
Of course, this advice is based on the assumption that you only want to rank the squeeze pages.
If I were undertaking this project I would do things a little differently, as I believe that squeeze pages have now lost some of their kick - perhaps due to the huge number of them I have seen. So instead I would have a lot of teaser articles and videos containing plenty of good, keyword-targeted content, SEO'd to the max, making sure there are some real nuggets of info in them - so that the reader thinks: Wow! If the stuff he gives away for free is this good, I can't wait to find out how much better the paid-for stuff is!
In terms of on-page SEO and campaign management: separate the content you want highly visible from the members-only content, store the noindexed members-only pages within a sub-folder, and link to all of the content you want visible and indexed in some way.
-
Well, I am just doing this for a friend of mine. His site is not ranking as well as he would like it to. I know he has some issues, but first I wanted to see what major errors I could find and then fix them. And of course I am only getting 5 pages of content.
The rest of his site is indexed in Google - you can find lots of his pages. I was just trying to figure out why the tool is only crawling 5 pages.
I don't recall which campaign type I set it up with originally. Which one do you recommend?
-
Hey Bill,
Can you tell us what campaign type you initially set up: subdomain, root domain or sub-folder?
I believe you are going to struggle setting up your campaign to monitor all of these pages due to the current configuration - based on the link architecture/navigation.
Would it be fair to say that you are actually only concerned with monitoring the performance of the visible squeeze pages in the SERPs? If every other page should only be visible after opting in, it stands to reason that you would be better off hiding all of that content using noindex - to preserve the value of the content within those pages and give potential customers every reason to opt in.
If we had a better idea of what your end goal was it might help us better assist you.
-
I think what he has here is a squeeze page set as his home page. You cannot access the rest of the site unless you opt in. Of course, some of the other subpages are indexed in Google, so you can bypass the home page.
Because he is using a squeeze page with no navigation, is this why there are no links to the rest of the site's content?
Sorry - just trying to follow along.
-
http://www.kingofcopy.com/sitemap.xml references only 3 files (with the index and sitemap.xml link making it up to 5).
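For reference, a root sitemap that only advertises a handful of pages looks roughly like this (the URLs below are illustrative, not copied from your actual file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Crawlers starting here only discover the pages listed below -->
  <url><loc>http://www.kingofcopy.com/</loc></url>
  <url><loc>http://www.kingofcopy.com/main.html</loc></url>
  <url><loc>http://www.kingofcopy.com/maxoutsideusa.html</loc></url>
</urlset>
```

If nothing else on the site links to the remaining pages, a sitemap this thin is all a crawler has to go on - hence the 5-page crawl.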
However, the other sections of the site are installed in sub-folders or are disconnected from the content referenced from your root, www.kingofcopy.com.
Take a look at this sitemap further into the site, from one of your sub-folders - http://www.kingofcopy.com/products/sitemap.xml - and you will see what looks to be the 1000+ pages you refer to.
However, there is no connection between the root directory and these other pages and sub-folders.
It appears that your main page is http://www.kingofcopy.com/main.html
Ordinarily you would want to bring them into one common, connected framework, with all accessible pages linked to in a structured and logical way. If you have other exclusive squeeze/landing pages that you do not want to show up in search results - pages you direct users to via mail shots etc. - then you can prevent them from getting indexed. For example, you may want to prevent a pure squeeze page like http://www.kingofcopy.com/max/maxoutsideusa.html from appearing in the SERPs.
To prevent all robots from indexing a page on your site, place a robots meta tag into the head section of your page.
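The standard tag looks like this:

```html
<!-- Place inside the page's <head> to keep it out of search engine indexes -->
<meta name="robots" content="noindex">
```

Robots that honour the tag will still crawl the page (so any links on it are followed) but will drop it from their index.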
Personally, I would consider a restructure to bring this content into the root directory, noindexing the squeeze pages as required - but this would need to be carefully planned and well executed, with 301 redirects in place wherever content has moved from one directory to another.
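As a sketch - assuming the site runs on Apache (an assumption on my part), and using made-up paths purely for illustration - a moved page can be permanently redirected from .htaccess like so:

```apache
# .htaccess in the site root
# 301 (permanent) redirect for a page that moved during the restructure
Redirect 301 /products/old-sales-letter.html http://www.kingofcopy.com/new-sales-letter.html
```

One line per moved URL; the 301 status passes the old page's link value along to the new location.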
However, you could always shuffle around the first few pages - renaming main.html to index.html and putting the copy you currently have at www.kingofcopy.com in a lightbox/popup (or similar) over the top of the main page?
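Alternatively - again assuming an Apache server - you could leave main.html where it is and simply tell the server to serve it as the default document for the root URL:

```apache
# .htaccess: serve main.html when a visitor requests the bare directory URL
DirectoryIndex main.html index.html
```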
I think the problem of main.html not being found as your default root/home page, together with the lack of connections between certain pages, is the cause of your campaign crawling so few pages.
Incidentally, if you did restructure, consider using WordPress, as it would be a great fit with what you have produced already (and there are plenty of WordPress squeeze page/product promotion themes available).
-
I feel like I've checked just about everything. I do not have access to his GWT.
Ryan, thanks for helping me with this.
-
Can you share the URL?
There are several things to check, starting with the robots.txt file and your site's navigation.
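For example, a robots.txt like the following (purely illustrative - I haven't seen yours) would block crawlers from the entire site, which produces exactly this kind of tiny crawl count:

```txt
User-agent: *
Disallow: /
```

Check http://yoursite.com/robots.txt for any Disallow rules covering the sections that aren't being crawled.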