Why isn't my homepage #1 when searching for my brand name?
-
Hi! So we recently (a month ago) launched a new website. We have great content that updates every day, we're active on social platforms, and we've done everything we can, for the moment, in terms of on-site optimization (a web developer will join our team this month and help us fix the rest). When I search for our brand name, all our social profiles come up first, followed by a few inner pages from our different news sections, but our homepage is somewhere on the 2nd results page... What could be the reason for that? Is it just a matter of time, or is there a problem with our homepage that I'm unable to find?
Thanks!
-
You're the best
Thanks! -
100% agree. It's going to take some time. Right now Google doesn't really understand that term as a brand name and the search results are a bit wacky in my opinion. You shouldn't have a problem moving up.
The more backlinks you earn with 'primepair' as the anchor text, the better off you will be.
Good luck!
-
Thank you Lewis!
-
The site's pretty new, so it may take some time. According to Moz, you have no backlinks to your site and a DA/PA of 1. Just keep doing what you're doing, build some decent links, and you'll soon see the fruits of your labour.
Lewis
-
Hello! Thanks for the help! I wasn't sure if I could publish the URL here.

Our site URL is http://primepair.com and the brand name is "primepair". There's a wiki page and some .edu sites on the first page... I assume they'll be difficult competitors for the #1 spot...
We're on all the webmaster tools, excluding Yandex, which I'll connect now.
-
The three main reasons I've run into for sites not ranking for their own branded searches are:
-
The site has way too little content -- only a couple of pages, each with less than 200 characters.
-
The site is very new and isn't set up in any of the webmaster tools.
-
Or the brand is a generic word that is reasonably competitive.
As Sam mentioned above, with more details the community can give you a much more definitive and nuanced answer. Within your social profiles, make sure you're linking back to your site, and if your business has a physical location, make sure to optimize for local search as well.
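One concrete way to reinforce that: Organization markup on the homepage with sameAs links to the social profiles helps Google tie the brand name, the profiles and the site together. A minimal sketch -- the profile URLs below are placeholders, swap in the real accounts:

<!-- placeholder profile URLs; replace with the brand's real accounts -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "primepair",
  "url": "http://primepair.com/",
  "sameAs": [
    "https://www.facebook.com/primepair",
    "https://twitter.com/primepair"
  ]
}
</script>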
-
Thanks for the question! I'm sure the community would love to help, but the question is a little vague. If it's possible, could you tell us your brand name and website URL? There are numerous issues that could be involved, so it'd be hard to help unless we can get more details.
Related Questions
-
GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2
The whole website moved to the https://www. HTTP/2 version 3 years ago. When we review log files, it is clear that, for the home page, GoogleBot continues to access only via the HTTP/1.1 protocol.
- Robots file is correct (simply allowing all and referring to the https://www. sitemap)
- Sitemap is referencing https://www. pages, including the homepage
- Hosting provider has confirmed the server is correctly configured to support HTTP/2 and provided evidence of access via HTTP/2 working
- 301 redirects set up for non-secure and non-www versions of the website, all to the https://www. version
- Not using a CDN or proxy
- GSC reports the home page as correctly indexed (with the https://www. version canonicalised) but still has the non-secure version of the website as the referring page in the Discovery section; GSC also reports the homepage as being crawled every day or so
Totally understand it can take time to update the index, but we are at a complete loss to understand why GoogleBot continues to go only through HTTP/1.1, not HTTP/2. A possibly related issue - and of course what is causing concern - is that new pages of the site seem to index and perform well in the SERPs... except the home page. This never makes it to page 1 (other than for the brand name) despite rating multiples higher in terms of content, speed, etc. than other pages, which still get indexed in preference to the home page. Any thoughts, further tests, ideas, direction or anything will be much appreciated!
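One way to double-check the HTTP/2 side from outside, independent of the host's own evidence, is a quick curl request; a rough check, with the hostname as a placeholder:

# ask curl to negotiate HTTP/2; the first response line should read "HTTP/2 200" if it succeeded
curl -I --http2 https://www.example.com/

If that comes back as HTTP/2, the server side is fine: Google has said Googlebot only crawls over HTTP/2 when it decides there is a benefit for a given site, so an HTTP/1.1-only crawl pattern is not by itself a misconfiguration.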
Technical SEO | | AKCAC1 -
Company name ranking
Hi all, I hope somebody can share their thoughts on the below. A web designer launched my client's new website and I have been tasked with the SEO. I was approached with an immediate problem: www.clientswebsite.co.uk was ranking 9th for their company name after being indexed by Google. The search results above www.clientswebsite.co.uk were related to my client, but not all; for example, a direct competitor was also ranking. I have been working on the SEO for 2-3 weeks and had just managed to get to 3rd position for the company name, and then www.clientswebsite.co.uk disappeared from page 1! Now, instead, an irrelevant sub-page (a contact page) is ranking for the company name on page 2. I have checked and the home page is still indexed (did a site: check). The only problem software picks up is a redirect chain (http://homepage -> http://www.homepage -> https://homepage); the web developers said it wouldn't impact rankings (when I asked them to edit the .htaccess file to fix it). I've listed below the SEO tasks I completed whilst attempting to rank the company name:
- Set up analytics and webmaster tools, in which I set the preferred domain (www)
- Added a sitemap
- Edited meta data, making sure the company name was included
- Contacted the relevant websites ranking above www.clientswebsite.co.uk and asked them to place a link to the new website; I was successful with a couple of these
- Placed www.clientswebsite.co.uk on all of their social media profiles
- Reformatted headers on the home page, making sure the H1 included my client's company name
- Found 2 extra versions of my client's home page (not exact copies, but very similar content) that had been published, so I decided to 301 redirect these to the correct home page
- Activated SSL and forced HTTPS
I would really appreciate it if anyone could share their thoughts here, whether it be explanations or possible solutions. Adam
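For a chain like http -> http://www -> https, the hops can usually be collapsed into a single 301; a minimal .htaccess sketch, assuming Apache with mod_rewrite and a www + HTTPS canonical host (the hostname pattern is generic, not the client's actual domain):

# send any non-HTTPS or non-www request straight to https://www. in one hop
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteCond %{HTTP_HOST} ^(?:www\.)?(.+)$ [NC]
RewriteRule ^ https://www.%1%{REQUEST_URI} [R=301,L]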
Technical SEO | | SO_UK0 -
Can I use a 301 redirect to pass 'back link' juice to a different domain?
Hi, I have a backlink from a high DA/PA government website pointing to www.domainA.com, which I own and can set up 301 redirects on if necessary. However, www.domainA.com is not used and has no active website (but has hosting available which can 301 redirect). www.domainA.com is also contextually irrelevant to the backlink. I want the government website link to go to www.domainB.com, which is both the relevant site and the one that should be benefiting from the SEO juice from the backlink. So far I have had no luck getting the government website's administrators to change the URL on the link to point to www.domainB.com. Q1: If I use a 301 redirect on www.domainA.com to redirect to www.domainB.com, will most of the backlink's SEO juice still be passed on to www.domainB.com? Q2: If the answer to the above is yes, would there be a benefit to taking this a step further and redirecting www.domainA.com to a deeper directory on www.domainB.com which is even more relevant? i.e. redirect www.domainA.com to www.domainB.com/categoryB, passing the link juice deeper.
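A minimal sketch of the kind of rule that would do this in domainA's .htaccess, assuming Apache hosting with mod_rewrite (the domain names are the placeholders from the question):

RewriteEngine On
# 301 every URL on domainA (www or not) to the equivalent path on domainB
RewriteCond %{HTTP_HOST} ^(www\.)?domainA\.com$ [NC]
RewriteRule ^(.*)$ https://www.domainB.com/$1 [R=301,L]
# for Q2, point the target at the deeper directory instead:
# RewriteRule ^(.*)$ https://www.domainB.com/categoryB/$1 [R=301,L]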
Technical SEO | | DGAU0 -
If I'm using a compressed sitemap (sitemap.xml.gz) that's the URL that gets submitted to webmaster tools, correct?
I just want to verify that if a compressed sitemap file is being used, then the URL that gets submitted to Google, Bing, etc. and the URL that's used in robots.txt should indicate that it's a compressed file -- for example, "sitemap.xml.gz". Thanks!
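For reference, both the Search Console submission and the robots.txt directive just use the full URL of the compressed file; a minimal robots.txt example with a placeholder hostname:

# robots.txt -- crawlers fetch and decompress the .gz sitemap directly
Sitemap: https://www.example.com/sitemap.xml.gz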
Technical SEO | | jgresalfi0 -
Category URL Pagination where URLs don't change between pages
Hello, I am working on an e-commerce site where there are categories with multiple pages. In order to avoid pagination issues I was thinking of using rel=next and rel=prev and canonical tags. I noticed a site where the URL doesn't change between pages, so whether you're on page 1, 2, or 3 of the same category, the URL doesn't change. Would this be a cleaner way of dealing with pagination?
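For context, rel=prev/next hints go in the <head> of each paginated URL, so they only work when every page has its own address; a sketch for page 2 of a category, with placeholder URLs:

<!-- head of page 2; each page keeps a self-referencing canonical -->
<link rel="canonical" href="https://www.example.com/category/widgets?p=2">
<link rel="prev" href="https://www.example.com/category/widgets?p=1">
<link rel="next" href="https://www.example.com/category/widgets?p=3">

Worth noting that Google has since said it no longer uses rel=prev/next as an indexing signal, so distinct, crawlable page URLs with self-referencing canonicals do most of the work on their own.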
Technical SEO | | whiteonlySEO0 -
Why Can't Googlebot Fetch Its Own Map on Our Site?
I created a custom map using Google Maps creator and embedded it on our site. However, when I ran the fetch and render through Search Console, it said it was blocked by our robots.txt file. I read in the Search Console Help section that: "For resources blocked by robots.txt files that you don't own, reach out to the resource site owners and ask them to unblock those resources to Googlebot." I did not set up our robots.txt file. However, I can't imagine it would be set up to block Google from crawling a map. I will look into that, but before I go messing with it (since I'm not familiar with it), does Google automatically block their maps from their own Googlebot? Has anyone encountered this before? Here is what the robots.txt file says in Search Console:
User-agent: *
Allow: /maps/api/js?
Allow: /maps/api/js/DirectionsService.Route
Allow: /maps/api/js/DistanceMatrixService.GetDistanceMatrix
Allow: /maps/api/js/ElevationService.GetElevationForLine
Allow: /maps/api/js/GeocodeService.Search
Allow: /maps/api/js/KmlOverlayService.GetFeature
Allow: /maps/api/js/KmlOverlayService.GetOverlays
Allow: /maps/api/js/LayersService.GetFeature
Disallow: /
Any assistance would be greatly appreciated. Thanks, Ruben
Technical SEO | | KempRugeLawGroup1 -
404 error - but I can't find any broken links on the referrer pages
Hi, My crawl has diagnosed a client's site with eight 404 errors. In my CSV download of the crawl, I have checked the source code of the 'referrer' pages, but can't find where the link to the 404 error page is. Could there be another reason for getting 404 errors? Thanks for your help. Katharine.
Technical SEO | | PooleyK0 -
Adding 'NoIndex Meta' to Prestashop Module & Search pages.
Hi, looking for a fix for the PrestaShop platform -- looking for the definitive answer on how best to stop the indexing of PrestaShop modules such as "send to a friend" and "Best Sellers", and site search pages. We want to be able to add a meta noindex to pages ending in: /search?tag=ball&p=15 or /modules/sendtoafriend/sendtoafriend-form.php. We already have in the robots.txt:
Disallow: /search.php
Disallow: /modules/
(Google seems to ignore these.) But as a further tool we would like to include the noindex on all these pages too, to stop duplicated pages. I assume this needs to go in either the head.tpl or the .php file of each PrestaShop module? Or is there a general site-wide code fix to put in the metadata to apply 'noindex meta' to certain files? Current meta code here: Please reply with where to add code and what the code should be. Thanks in advance.
Technical SEO | | reallyitsme0
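A rough sketch of the kind of conditional that could go in the theme's header template (header.tpl in most PrestaShop 1.5/1.6 themes) -- the exact $page_name values are an assumption and worth checking against the install:

{* assumption: $page_name is 'search' on search results and 'module-sendtoafriend-sendtoafriend-form' on the module page *}
{if $page_name == 'search' || $page_name == 'module-sendtoafriend-sendtoafriend-form'}
  <meta name="robots" content="noindex, follow">
{/if}

One caveat: while /search.php and /modules/ are disallowed in robots.txt, Google can't crawl those URLs to see the noindex, so the Disallow lines may need to be removed for the meta tag to take effect.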