Local and Organic Listings
-
Hi,
My client has a number of stores across the UK, and ideally I would like them to appear in both the local and organic listings. At the moment I appear, more often than not, on page one for one or the other, but I have noticed that some pages appear in both.
I understand that Google will not place a listing for the same page in both local and organic, so I need to optimise a page on the site for organic and point my local listing to a different page (the home page?). In some results, though, my local result appears with the home page URL displayed, but the actual link points to the internal store page, which is the same page that is appearing in the organic listing (both on page one). Other local listings of mine appear with the store page URL showing in the result.
I haven't set anything up differently for these stores. Can anyone explain why this is happening?
Thanks,
Dan
-
You are very welcome, HippieChick. Glad this helped to clear up a big question at your office!
-
Thank you, Miriam! My boss has been so mad at me for losing our page 1 ranking (we have a Local ranking) and I couldn't explain what happened, even though I've worked so hard on the SEO of the site. (I'm no expert, but there's only a small budget at our company, and I'm the cheapest option.) I've explained this to him and finally everyone's happy!
-
Thanks for your help guys,
I'll do some testing and see if I get any positive results.
I'll let you know if anything works.
Cheers,
Dan
-
Hey Dan,
Sorry, got ya.
You know, I am not sure how you go about this with any kind of reliability. Like you say, it sometimes crops up that someone has both, but not very often in our experience. That said, the more obscure and thinly competed the query, the more wacky and repetitive the organic results seem to get of late.
You could try optimising some of your citations to get them onto page one, and possibly use content on other sites to crowd out the results with other pages that ultimately link to your client's site. We have had some good success with that for local clients.
Or, alternatively, as Miriam mentioned, try creating other authoritative, highly relevant pages on the site, but that may be a bit of a thankless task compared to the relative ease of standard local work.
Sounds like a greedy client wanting both.
Marcus
-
Obviously not a direct answer, but you could always look at getting some of your citations to rank within the results for your targeted search terms. Where possible, content on other sites can also be used to further crowd out those results a little more.
It's good when all rivers run to the same place!
-
Hi Dan,
In general, you are correct that Google doesn't commonly show a double organic/local ranking on the first page for most companies. There are two main exceptions to this:
-
If the query, the locale, or both have little competition, or Google lacks data about them.
-
The scenario in which a second page on the website is authoritative enough to gain an organic listing, independent of the page that is being linked to from the local result.
Around the Venice update in early 2012, double rankings became almost impossible to find. It appears to me that they have slowly become more common in recent times, typically in the above scenarios. There may be other exceptions as well, but I believe these are the most typical.
-
-
Hi,
I suggest the following for local SEO:
- Add the address in the footer with local business schema markup
- NAP - get your site's name, address, and phone number listed, consistently, in local directories, magazines, or blogs
- Add the site to Google's Map Maker
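The schema point above could be sketched as JSON-LD generated with a little Python. This is only an illustration: the business name, address, phone number, and URL are hypothetical placeholders, and you may want a more specific schema.org subtype than LocalBusiness if one fits your client.

```python
import json

# Hypothetical store details -- substitute the client's real NAP data.
store = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Self Storage - Barking",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Road",
        "addressLocality": "Barking",
        "addressRegion": "Greater London",
        "postalCode": "IG11 0XX",
        "addressCountry": "GB",
    },
    "telephone": "+44 20 7946 0000",
    "url": "https://www.example.co.uk/stores/barking",
}

# Emit the JSON-LD block to paste into the location page or footer.
jsonld = (
    '<script type="application/ld+json">\n'
    + json.dumps(store, indent=2)
    + "\n</script>"
)
print(jsonld)
```

Each location page would get its own block with that location's NAP, matching the details used in citations.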
Good luck
Carla
-
Hi Marcus,
Thanks for your reply.
I think I pretty much have all the bases you mention covered; I probably didn't explain myself very well. It's probably best I give an example:
A search for 'Self Storage Barking' obviously brings up both local and organic listings, but I don't understand how Big Yellow have a local listing and an organic listing pointing to the same page. A search for 'Self Storage Eastbourne' brings up similar results, this time for Safestore: both results point to the same page. But a search for 'Self Storage Bristol' shows the long store page URL in the local listing, which I guess prevents an organic listing too...
I don't understand why one local listing displays the home page URL and points to the store page, while another displays the store page URL and blocks the organic listing, when they are all set up the same.
Hope that explains better.
Dan
-
Hey Dan
That's kind of confusing to wrap my head around. Do you have an example you can post?
In essence, if you have a local business with multiple locations you are doing two categories of work here.
1. Organic SEO for the site as a whole
2. Local SEO for each location
Local SEO for each location requires a few things to work well in my experience.
2.1. A page on the site optimised for each distinct location (think address, schema markup, NAP, etc.)
2.2. An individual Google+ Local listing for each location linked back to the location page
Then the normal rules of local SEO apply, and you need citations, local links, reviews, etc. for each of your individual locations.
A consistent address is important even for businesses with a single location, but with multiple locations the scope for things to go wrong is greater, so you have to be fastidious about keeping everything consistent.
Audit existing citations, standardise everything, and try to get reviews for each location. If reviewers can mention the location and service, it seems to help more ("If you could mention the office location and service in your review, we would really appreciate it").
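The citation-audit step above can be partly automated. A minimal sketch, assuming you have scraped or exported NAP records from a few directories (the business names, addresses, and phone numbers here are hypothetical): normalise each record so cosmetic differences in casing, spacing, and phone formatting don't hide a genuinely consistent listing.

```python
import re


def normalise_nap(name, address, phone):
    """Normalise a Name/Address/Phone record so citations pulled
    from different directories can be compared for consistency."""
    def clean(s):
        # Collapse whitespace and lowercase for comparison.
        return re.sub(r"\s+", " ", s).strip().lower()

    # Keep only digits so "020 7946 0000" and "(020) 7946-0000"
    # compare equal.
    digits = re.sub(r"\D", "", phone)
    return (clean(name), clean(address), digits)


# Hypothetical citations pulled from two directories.
citation_a = normalise_nap("Big Store Ltd", "1 Example Rd,  Barking", "020 7946 0000")
citation_b = normalise_nap("BIG STORE LTD", "1 example rd, barking", "(020) 7946-0000")

print(citation_a == citation_b)  # True: the NAP data is consistent
```

Real audits also need to handle abbreviations ("Rd" vs "Road"), but even this rough pass flags the obvious mismatches across a list of locations.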
Your locally optimised page for each store should be able to pop up in both local and organic results, depending on the obvious factors (competition, etc.), and you are creating a solid landing page both for localised organic traffic and for pure local results (the 7-pack, etc.).
The one point here is that you will usually have lots of citations kicking about that you did not create, so be sure to audit the existing listings and treat each location as a separate entity. Competition may differ in Birmingham, Manchester, London, etc., so the approach may need to be tailored to the strength of the competition in each area.
Hope that helps!
Marcus