How to outrank a directory listing with high DA but low PA?
-
My site is in 4th place, and three places above it is a Gumtree (similar to Yell or Yelp) listing. How can you figure out how difficult it would be to outrank those pages? I mean, obviously the pages have low PA and sit at the top based on the high DA of the site.
This also goes back to keyword research and difficulty. When I'm doing keyword research, I'll see a Wikipedia page in the top 5, or a yell.com listing, or perhaps a forbes.com article outranking my site. The problem seems to be Google giving these pages a lot of credit based on the high DA of the domain rather than the PA of the page. How would you gauge the difficulty of that keyword when the competition is pages with very high DA, which is impossible to compete with, but low PA?
Thanks
-
Most of my work is writing articles that take between three days and a week to author. I also have employees who assist with these articles by taking photos, making graphics, doing research, collecting data and posting them to websites. Some of these articles attack very difficult keywords.
After doing this for about 12 years on the same website, I still don't know how these articles are going to rank. A year or two after posting, some are on the first page of Google, defeating popular websites in ways that surprise me. Others perplex me because I am being beaten by pissants in SERPs that I would judge to be much easier. I suspect that semantics, keyword diversity, and titles that elicit clicks help the pissants beat me, but I don't know for sure.
I can't predict how my own rankings will turn out on a website that I know well, in an industry where I have worked for 40 years, against competitors who are often people I know by name or who are even my own customers. The SERPs can be very difficult to understand. One thing I will say with confidence is that DA and PA explain nothing and give zero guidance in winning a fight. They carry zero weight in my decisions. I can't even tell you those numbers for my own websites unless I go look. That's how little attention I give to them.
-
Sorry I couldn't help.
From above: "DA is a factor, and one that is very difficult for you to catch up to or surpass - BUT certainly not a massive % of the ranking algorithm."
-
Actually I don't, which is why I came here to ask the question. I think Egol answered it above: they are beatable, as small businesses beat them every day. That's what I needed to know - whether they are beatable and how easy or hard it is to beat them, as that is a deciding factor in whether to invest more time and money into SEO or not (the money could be better spent on ads, for example).
You gave me a philosophical answer that basically said, "you can't change what they do or have, so work on what you can change in yourself", which is all fine and dandy, but it's a loose, vague, cookie-cutter spiritual-science answer. I mean, could I theoretically outrank the "British Cancer Research" website for the keyword "cancer research"? Obviously the answer is yes: using your advice, I can just keep working on my cancer research site, throw a million pounds and a couple of years into its SEO, and I'll outrank them. We know that; everyone knows that with hard work and enough time, money and effort you can achieve anything. That is not the question. The question was "how hard/easy is it?", as that is obviously a big factor when deciding whether to continue with that strategy or not.
I mean no disrespect; I think you just misunderstood my question from the start as a "complaining type of question". Perhaps you interpreted it as me whinging about the high-DA competition, and you were trying to encourage me not to focus on that DA. I wasn't complaining; it was a straight-up question of how much that high DA is a factor in their site outranking mine, since I have built a lot of backlinks to my page, they have none to theirs, and they must therefore rely on their site's DA and traffic.
-
If you did not want constructive suggestions, why even ask? You obviously already knew the answer you wanted.
Best
-
You came here asking how to beat a directory and you got good answers and an action plan.
Unfortunately, you are looking at metrics that Google does not use, metrics that are based upon a domain, metrics that have nothing to do with the methods of winning a SERP. Don't allow rubbish metrics to frighten you away.
These are directory sites that you are trying to defeat. Directories.
They are not the Library of Congress or the Pope. Pages on these sites are defeated by small businesses every day. Pages on Amazon are defeated by small businesses every day. These small businesses didn't run because they faced competition. They got to work.
If you are willing to work hard, you should not fear competition, because where there is competition there is usually a lot of search volume across a lot of diverse keywords. And where there is competition, there is usually a lot of money changing hands. Attack there with long content, diverse keywords, and excellent quality. There is a good chance that you will earn traffic. Attack that keyword with multiple pages, each of excellent quality and targeting the long tail. One or more of those pages might eventually gain rankings for the short-tail keyword.
Maybe you will not win if you fight. But you will never win if you run.
-
PA is built with inbound links to that page. That page has 0 backlinks. If it has a PA of 29, which I had already checked, it is probably boosted by traffic and internal links to that page, all of which is a direct result of Gumtree having massive traffic and DA.
I am not taking it personally; I'm just looking for a reasonable answer as to how much DA weighs in as a factor in rankings. I have a pragmatic approach to things, so for example:
Site A has DA 30 / PA 29; my site has DA 25 / PA 28. This gives me a good idea of what I need to do: study site A's backlink profile and on-site SEO and try to raise my DA to match or beat theirs. It's a clear, pragmatic approach.
Now example B:
Site A has DA 90 and PA 29; my site has DA 30 and PA 45. This is much harder to approach pragmatically, mainly because we know we can never achieve a DA of 90, and my higher PA isn't overcoming that DA gap. So the question is: how much of a factor is that huge DA? This matters from a business perspective, because if six months of high-quality backlinks could overcome it, then maybe it's a go; but if I would need to reach DA 60 and PA 60 to outrank that site, then there would be no point. And yes, I know DA/PA doesn't equal ranking, but it's the closest measuring stick we have for how well a site is likely to rank.
And I don't mean to be rude, but the idea of just "overcoming" something by doing better in the things you can change is very vague. If I want to be the best boxer in the world and I'm 50 years old, you could follow some rah-rah advice and say that since you can't change your age you might as well just train and work on what you can change. But the fact of the matter is, it's nearly impossible to be the best boxer in the world at 50, so although there's a 0.01% chance it could be done, it's not a worthwhile investment. That is why I asked the simple question of how much of a factor that DA is: to figure out whether it's worth investing more into SEO.
And you're right that keyword difficulty doesn't change based on who's above you. However, if you are aiming for the no. 1 position and you are no. 2, but no. 1 is almost impossible to take over, then yes, that keyword is hard. When building niche sites, part of keyword research is researching your competition for that keyword before embarking on the project.
But thank you for taking the time to answer.
-
Magusara,
I think you are taking this a bit personally. Yes, the pages above you are ultimately owned by someone, even if that someone is a stockholder in the company that owns the page. The link you provided (https://www.gumtree.com/removal-services/bournemouth) has a Page Authority of 29.
When you complain about the authority reference with the Facebook example, you are missing the point. The issue is not 100% DA, and you are fixated on that. Sorry, but it is simply my opinion that you cannot fixate on the thing you say is insurmountable (raising your DA above theirs) and then say any other way to deal with the roadblock is somehow not relevant, or that whoever supplied the suggestion is just wrong.
You are wrong about keyword difficulty. You say: "And I disagree: who ranks above you is a reflection of keyword difficulty. Obviously, if you are trying to rank #1 for the keyword 'dog training' and you are currently 2nd, but the no. 1 spot is occupied because Facebook decided to have a specific dog training page targeting your area, it would be next to impossible to overtake them. Hypothetical situation, but you get what I mean."
Who ranks above you is NOT a reflection of keyword difficulty. If today I erect a page about purple dog jellybeans and add content to it weekly, and in three months you erect a page about purple dog jellybeans and index it, most likely it will initially rank below my page. That doesn't mean the term "purple dog jellybeans" is a competitive keyword.
Keyword difficulty is determined by the keyword within the context of all the competition in a given vertical for that term. It is not determined by the site above you having a lot of DA.
Everyone involved in SEO on these Q&A pages has faced the same hurdles you are experiencing, and all we can give you is our experience. Yes, DA is a factor, and one that is very difficult for you to catch up to or surpass - BUT certainly not a massive % of the ranking algorithm.
The points we were making were to suggest you quit looking at that one roadblock (DA) and go about developing everything else that your competitor cannot. You KNOW the area the directory participates in. Use your advantages against them, and quit being argumentative with those who only want to assist you. Focus is where we place our gaze and apply our energy. You are focusing on the roadblock of DA. We are suggesting you focus on everything else and make the roadblock cease to exist.
I will apply a golf analogy and then say goodbye:
If you are faced with water you must cross to reach the green with your shot, you can focus on the water or on the green. Ask any golfer where the ball goes if you are focused on (caught up with avoiding) the water. Just a fact of life for me.
We wish you the very best,
Robert
-
I don't hate directories or envy their DA. The pages above me are NOT owned by a person. They are directory listings.
I say "obviously they have a low PA" because these pages are not owned by anyone therefore they dont have any SEO or owner building traffic or backlinks to them. They are the result of the directory listing. Example: https://www.gumtree.com/removal-services/bournemouth
That URL has 0-1 backlinks according to Ahrefs and Moz's site explorer. The page ranks highly literally because the DA of the site is 70. As for researching what they are doing, well, that's kind of like saying research what Facebook or yell.com is doing to see if you can achieve the same DA as their site.
And I disagree: who ranks above you is a reflection of keyword difficulty. Obviously, if you are trying to rank #1 for the keyword "dog training" and you are currently 2nd, but the no. 1 spot is occupied because Facebook decided to have a specific dog training page targeting your area, it would be next to impossible to overtake them. Hypothetical situation, but you get what I mean.
The purpose of the question is not to hate on the sites above me; it's to estimate how difficult directory results are to outrank in search engines, because they can't be judged against the same ranking factors as personally owned sites given the huge DA they hold - a Forbes article, for example. You can build more links to your page than the Forbes article has, but the Forbes DA will also weigh in. I guess the question is how much of a factor the DA is over the PA.
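If you want to sanity-check those numbers programmatically rather than by eye, here is a minimal sketch that pulls DA, PA, and linking root domains for both URLs from the Moz Links API. The endpoint path, response field names, and credentials shown are assumptions based on Moz's documented v2 Links API, and the second target URL is hypothetical - verify everything against the current API docs before relying on it.

```python
# Minimal sketch: compare DA/PA and linking root domains for two URLs
# via the Moz Links API. Endpoint and field names are assumptions based
# on Moz's v2 "url_metrics" documentation - check the current docs.
import requests

ACCESS_ID = "your-moz-access-id"    # placeholder credentials
SECRET_KEY = "your-moz-secret-key"

ENDPOINT = "https://lsapi.seomoz.com/v2/url_metrics"  # assumed v2 endpoint

targets = [
    "https://www.gumtree.com/removal-services/bournemouth",  # the directory page
    "https://www.example.com/removals-bournemouth",          # hypothetical: your own page
]

resp = requests.post(
    ENDPOINT,
    json={"targets": targets},
    auth=(ACCESS_ID, SECRET_KEY),  # assumed: HTTP Basic auth with your API credentials
    timeout=30,
)
resp.raise_for_status()

for row in resp.json().get("results", []):
    print(
        row.get("page"),
        "DA:", row.get("domain_authority"),
        "PA:", row.get("page_authority"),
        "Linking root domains to page:", row.get("root_domains_to_page"),
    )
```

Run against both URLs, this gives you the same DA/PA comparison discussed above in one place, so you can track the gap over time instead of re-checking the tools by hand.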
-
I really enjoyed Robert's answer because we all see so many of these DA and PA envy questions.
To build on Robert's theme of "leverage your advantages" - these directories are usually built by people who know very little about your local area or industry. They are also generally cookie-cutter sites built by factory workers who live 1,000 miles away. For those reasons, it is often very easy for a local expert or an industry expert to build content that is vastly superior and to provide a landing-page experience that impresses visitors enough that they will share it with others. You probably also know the visitor better than the factories that build these sites.
Put some time into your content and presentation. Winning there can significantly improve the success of your page in the SERPs. It can also significantly improve your chances of attracting the searcher to your business instead of the business of your competitors. Show your expertise!
-
Magusara
In my opinion, if you wish to outrank anyone, you should first take a step back and not draw conclusions without researching. You state, "...obviously the pages have low PA and sit at the top based on the high DA of the site." If it is obvious, then why bother researching to see exactly what they are doing?
If you have spent any time on Moz's Q&A, you will have seen tons of "How is this page outranking me?" questions. The key to combating those pages is to learn more about them than they know about themselves. You cannot do that if you assume or believe anything to be obvious. Yes, I hate directories (when it serves me not to work with them) as much as any other SEO professional, but they do get many things correct. One thing you can do is grow your DA, and yes, it will take a bit of time. But look closer: is everything really as it appears?
My point is this: knowing they have an advantage is not going to assist you in combating them; knowing what advantages and disadvantages they have will help you fight them, if you choose to exploit them. So, is it worth the time to fight? If it is, then know your opponent and leverage your own advantages.
Hope that helps,
Robert
PS: Keyword difficulty does not change based on who is above you.