Potential site architecture problem.
-
We are experiencing a problem in which category pages do not appear to be passing Page Authority to subcategory pages. See the example below:
Home page has PA of 47
The next level category page down has a PA of 34
http://www.minespress.com/category/apparel
The next subcategory down has a PA of 1 and shows no links at all, in spite of the fact that we got to it from a category link:
http://www.minespress.com/category/golf-shirts
The page in question is crawlable and indexed, and has nothing that we can see that would prevent it from showing Page Authority and a link profile in your tools.
Do we have a site architecture problem?
-
Hi Steven, this is a difficult one to crack. Pinpointing the exact reason that category page isn't showing in the tools is not easy, but I do see some issues with your site that might be affecting the flow of authority:
Yes, architecture. You have:
http://www.minespress.com/ (Home)
http://www.minespress.com/category/printed-products (Category 1)
http://www.minespress.com/category/printed-envelopes (Category 2)
http://www.minespress.com/category/business-envelopes (Category 3)
http://www.minespress.com/products/10-business-envelopes (and the product)
I see what you are trying to do, but you make the user travel a long road to reach the product. This is what I would do:
http://www.minespress.com/printed-products/business-envelopes/10-business-envelopes/
http://www.minespress.com/promotional-products/key-tags/house-stress-ball-keychain/
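To make the suggestion concrete, here is a minimal sketch of how a flattened URL could be built from the existing category trail, dropping the redundant middle layer. The function name and the rule of keeping only the top category and the product's direct parent are my own assumptions for illustration, not anything Mines Press actually runs:

```python
# Hypothetical sketch: flatten a deep category chain into a single
# product path. The trail-trimming rule (keep first and last category)
# is an illustrative assumption, not a prescription.

def flatten_url(domain, categories, product_slug):
    """Build a flattened product URL from an ordered category trail."""
    # Keep only the top-level category and the product's direct parent,
    # dropping the intermediate layers.
    trail = [categories[0], categories[-1]] if len(categories) > 1 else categories
    return "http://{}/{}/{}/".format(domain, "/".join(trail), product_slug)

new_url = flatten_url(
    "www.minespress.com",
    ["printed-products", "printed-envelopes", "business-envelopes"],
    "10-business-envelopes",
)
print(new_url)
# -> http://www.minespress.com/printed-products/business-envelopes/10-business-envelopes/
```

A mapping like this would also give you the old-URL/new-URL pairs needed for 301 redirects if you restructured.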
I think that having a category called "printed envelopes" is just redundant (is there any other kind of envelope?).
Maybe consider linking to the most important categories from the main navigation.
Also on a side note: Please check this http://www.seomoz.org/beginners-guide-to-seo/basics-of-search-engine-friendly-design-and-development
The idea of controlling link juice by using no-follow is not useful anymore.
-
While I'm not the SEO guy here, I do know that we nofollowed the image links to control the flow of link juice. This deep link is well over 45 days old and should have been crawled by now. Most or all of our deep links (i.e., product pages) are suffering from this problem; we just chose this one as an example. There are hundreds if not thousands of product pages with the same symptom.
-
I'm guessing it's because this is a site with a lot of links. The more links you have on one page, the less weight each one passes. He's probably just trying to pass as much juice as possible by nofollowing the duplicate links in the images.
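The earlier point that nofollow sculpting "is not useful anymore" follows from the commonly cited post-2009 behavior: nofollowed links still count toward the denominator when equity is divided, but pass nothing. This is a deliberately simplified model, not Google's actual algorithm; the function and numbers are illustrative:

```python
# Simplified model of how link equity divides across a page's links.
# Assumption (widely reported post-2009 behavior, simplified here):
# nofollowed links still count in the denominator but pass nothing,
# so nofollowing some links just evaporates that share rather than
# redirecting it to the followed links.

def equity_per_followed_link(page_equity, followed, nofollowed):
    """Equity passed by each followed link under the simplified model."""
    total_links = followed + nofollowed
    if total_links == 0 or followed == 0:
        return 0.0
    return page_equity / total_links

# Nofollowing 50 of 100 links does not double what the rest pass:
print(equity_per_followed_link(10.0, 100, 0))  # 0.1
print(equity_per_followed_link(10.0, 50, 50))  # still 0.1 per followed link
```

Under this model, the only way to concentrate equity is to have fewer total links on the page, not to nofollow some of them.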
-
It looks OK to me - built pretty well from what I can tell. I pulled up an OSE report on your http://www.minespress.com/category/golf-shirts link and it shows no data. This is probably just an issue of OSE not having data for pages that deep in your site.
Here is what OSE will tell you if no data is available for a URL.
No Data Available for this URL
Although our index is large, there are a number of reasons why we may not have data for the page you've requested. These can include:
-
Recency of Page Creation:
Linkscape crawls the web constantly, but we update the index only once every 30-40 days. Thus, pages and links created since the last index update won't be available until we've seen them. A typical timeline for getting a page/site included in Linkscape is 45-60 days, sometimes less for very important or well-linked-to pages.
Deep Down in the Web:
Our crawl focuses on a breadth-first approach, and thus we nearly always have content from the homepage of websites, externally linked-to pages, and pages higher up in a site's information hierarchy. However, deep pages that are buried beneath many layers of navigation are sometimes missed, and it may be several index updates before we catch all of these.
Blocked Pages:
If our crawlers or data sources are blocked from reaching your URLs, they may not be included in our index (though links that point to those pages will still be available).
No Links:
The URLs seen by Linkscape must be linked-to by other documents on the web or our index will not include them.
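For the "Blocked Pages" case above, one quick self-check is to parse a robots.txt and ask whether a crawler is allowed to fetch the deep URL in question. The snippet below uses Python's standard `urllib.robotparser`; the robots.txt contents and the `rogerbot` user-agent string are made-up examples for illustration, not what minespress.com actually serves:

```python
# Sketch: verify a deep URL isn't blocked by robots.txt.
# The robots.txt body here is a hypothetical example.
import urllib.robotparser

robots_txt = """\
User-agent: *
Disallow: /cart/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# The category page is allowed; anything under /cart/ is not.
print(parser.can_fetch("rogerbot", "http://www.minespress.com/category/golf-shirts"))  # True
print(parser.can_fetch("rogerbot", "http://www.minespress.com/cart/checkout"))         # False
```

In practice you would point `RobotFileParser` at the live robots.txt via `set_url()` and `read()`, but parsing the file contents directly, as above, keeps the check reproducible.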
If anyone else catches anything that I'm missing please feel free to say so. I can't see any problems though.
-
I'm curious: why are your image links rel=nofollowed while the text links are not? I'm not saying this has anything to do with your issue, just curious.