Weird situation with our local listing.
-
A couple of weeks ago I was having problems with my real estate local listing. I made some changes (like removing anything remotely like keyword stuffing and a few other things). Then, we re-emerged. But now, instead of having 4 citations we have 221. It looks like Google has merged our listing with all of the other agents in our office.
So, now if you type into Google: ABC Realty in OurCity, the very first listing is a 1-box that has our listing: Jane and John Doe, Sales Representatives, ABC Realty, and our phone number. We actually rank higher than the ABC Realty office's own web page. We are getting phone calls from people who think they are calling the main office but instead reach us. (This is not at all bad for business...but perhaps there is an ethical issue?)
My problem is that if you click on our Places listing, there is one photo of a realtor who is not us. Additionally, we lost the two reviews we had, but we now have one review for another realtor who is not us. The rest of the listing is totally ours - our photos, our description, our website, our phone number. If I go to edit the listing, the option to remove that photo is not there.
So, now we have a conundrum. On one hand, it's great to have this boost. We are appearing #1 for searches for our office and this brings us business. But I want to be ethical. Realtors can be nasty, and I don't want other realtors thinking that I have done bad, manipulative stuff to steal other people's business.
Can anything be done? What would you do?
-
lol, I expected to have to search harder than that, but it just goes to show how common a problem it must be (not that it's a problem for you, by the sounds of things lol).
-
That's happened loads - I've seen blogs and stuff about it all over the web. Can't remember if they found a solution or what it was, but I'll have a look.
-
Solution #2: They should hire you as their SEO.
-
This is exactly what is happening. Our office has about 100 agents. It doesn't have its own local listing. Google thinks that our listing is the office listing.
-
Since you're in real estate, I would sell your office to ABC Realty for a hefty fee, due to your office having prime Google SERP location.
Slightly more practical: if the corporate office submitted a bulk listing for its real estate agents and their own location, make sure this is set up properly (http://www.google.com/support/places/bin/answer.py?hl=en&answer=173669). Or they might not have done this at all, and Google is now treating you as the corporate office.
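For what it's worth, the point of a clean bulk feed is that the office and each individual agent get their own unambiguous row, which gives Google no reason to merge them. Below is a minimal sketch of generating such a feed in Python; the column names and values are hypothetical (Google's own bulk-upload template defines the real headers), it just illustrates the one-row-per-location idea:

```python
import csv

# Hypothetical rows - the real column headers come from Google's own
# bulk-upload template; these names are illustrative only.
listings = [
    # One row for the corporate office...
    {"store_code": "ABC-HQ", "name": "ABC Realty",
     "address": "123 Main St", "city": "OurCity", "phone": "555-0100"},
    # ...and one row per agent, each with its own store code and phone number.
    {"store_code": "ABC-001", "name": "Jane and John Doe - ABC Realty",
     "address": "123 Main St", "city": "OurCity", "phone": "555-0101"},
]

with open("bulk_listings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=listings[0].keys())
    writer.writeheader()
    writer.writerows(listings)
```

Distinct store codes and phone numbers per row are what give Google an unambiguous key for each listing.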
Related Questions
-
404 or 410 status code after deleting a real estate listing
Hi there, We manage a website which generates an overview and detail pages of listings for several real estate agents. When these listings have been sold, they are removed from the overview and their pages are taken down. These listings then appear as "not found" in the crawl error overview in Google Search Console. The pages currently return 404s; would changing this to 410s solve the problem? And if not, what fix could take care of it?
Intermediate & Advanced SEO | MartijntenCaat
-
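One way to act on the 410 idea in that question: have the detail-page route distinguish "never existed" (404) from "sold and deliberately removed" (410). A minimal sketch, assuming Flask and an invented data layout purely for illustration:

```python
from flask import Flask, abort

app = Flask(__name__)

# Hypothetical in-memory data; the real site would query its listings database.
LISTINGS = {
    "42": {"title": "Sunny 3-bed apartment", "sold": True},
    "43": {"title": "Canal-side studio", "sold": False},
}

@app.route("/listings/<listing_id>")
def listing_detail(listing_id):
    listing = LISTINGS.get(listing_id)
    if listing is None:
        abort(404)  # never existed: plain "not found"
    if listing["sold"]:
        abort(410)  # existed but was deliberately removed: "gone", permanently
    return listing["title"]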
Canonical tags for duplicate listings
Hi there, We are restructuring a website. The website originally lists jobs that will have duplicate content. We have tried to ask the client not to use duplicates, but apparently their industry is not something they can control. My recommendation was to have category pages (which will carry the ideal description for a group of jobs) and the job listing pages. The job listing pages will then have canonical tags pointing to the category page as the primary URL to be indexed. Another opinion came from a third party that this can be seen as if we are tricking Google and we would get penalised. **Is that even true?** Why would Google penalise this if it's their recommendation in the first place? This third party suggested using nofollow on the links to these listings, or even not indexing them altogether. What are your thoughts? Thanks Issa
Intermediate & Advanced SEO | iQi
-
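Mechanically, the canonical setup described in that question is just a link element in each listing page's head that points at its category page. A hedged sketch, with Flask and the URL scheme invented for illustration:

```python
from flask import Flask

app = Flask(__name__)

# Hypothetical mapping of job-listing slugs to their category page;
# in practice this would come from the site's database.
JOB_TO_CATEGORY = {"senior-welder-4821": "/jobs/welding"}

@app.route("/jobs/listing/<slug>")
def job_listing(slug):
    canonical = "https://example.com" + JOB_TO_CATEGORY.get(slug, "/jobs")
    # rel=canonical names the category page as the primary URL, consolidating
    # the near-duplicate listings onto it.
    return f"""<html><head>
      <link rel="canonical" href="{canonical}">
    </head><body>...job description...</body></html>"""
```

Worth remembering that rel=canonical is a hint, not a directive: if a listing page and its category page differ too much, Google may choose to ignore the tag.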
Is Building a Local Directory of Businesses on a Subdomain Good SEO?
Hello Fellow Moz'ers: I own a small digital shop in a major US city. We had a marketing idea whose soundness I'd like some input on. We are creating a professional services directory of "digital professional services providers" in our hometown. The directory's membership will only be open to firms located within our city limits, and the directory will be curated and maintained by us on an ongoing basis. Our motivation is 75% selfish and 25% benevolent. The idea is that, by building the directory on our subdomain, we will hopefully collect links, which ultimately will enhance search visibility. But I'm concerned about the devaluation directories have suffered in recent years, and I've even seen advice to the effect that listings in some directories might be harmful to a site's link profile. It is not our intention to harm those who might list in our directory. Any thoughts on this matter would be greatly appreciated!
Intermediate & Advanced SEO | Daaveey
-
Website not properly listed on google organic despite SEO efforts
Hello, I have worked thoroughly on my website tags, including HTML titles, URLs, H1 headers, and the text in each section. The problem is that despite this effort, my website does not seem to improve in terms of ranking (the on-site optimization was done 6 months ago already). We have a sitemap, we have done link building and everything, but still no tangible progress. The anomaly I am experiencing is the following: if I search on Google.com.lb for "apartment for sale in lebanon", I don't get the section of my website that is optimized for that particular query (the Buy section, located here: http://www.ramcolb.com/apartment-sale-beirut-lebanon). My site appears only on page 6, and it's the homepage that appears, which is very counterintuitive because the homepage is not optimized for the "apartment for sale in Lebanon" keyword. This anomaly is present on almost all sections and their relevant queries: the relevant section for a particular query never appears; it is always another, irrelevant section, and far down the listings (beyond page 6). It is as if Google hasn't properly indexed my website and is mixing up the sections... Has anyone experienced this type of problem? What can be done? Thanks in advance
Intermediate & Advanced SEO | ImadKaram
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: this is the page where the user applies various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and ranking. These pages have additional content besides the vehicle listings themselves, those results are randomized or sliced/diced in different and unique ways, and they're updated twice per day.
We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results (example Google query). We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right.
Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages:
- Super easy to implement
- Conserves crawl budget for large sites
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would mean 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?)
Noindex advantages:
- Does prevent Vehicle Details pages from being indexed
- Allows ALL pages to be crawled (advantage?)
Noindex disadvantages:
- Difficult to implement: Vehicle Details pages are served using Ajax, so they have no meta robots tag. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex directive based on querystring variables, similar to this stackoverflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "forces" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it's blocked by robots.txt.
Hash (#) URL advantages:
- By using hash (#) URLs for the links on Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with Javascript, the crawler won't be able to follow/crawl these links. Best of both worlds: the crawl budget isn't overtaxed by thousands of noindex pages, and the internal links that got the robots.txt-disallowed pages indexed are gone.
- Accomplishes the same thing as nofollowing these links, but without looking like PageRank sculpting (?)
- Does not require complex Apache stuff
Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt - the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate Vehicle Details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.
If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallowal in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of Vehicle Details pages, all of which are noindexed - it could easily get stuck/lost, it seems like a waste of resources, and in some shadowy way bad for SEO.
My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping Vehicle Details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these.
Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive
-
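On the noindex-via-header option above: the mechanics need not involve Apache rewrites specifically; any layer that builds the HTTP response can attach the directive. A minimal sketch, assuming Flask and an invented querystring flag purely for illustration:

```python
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/inventory")
def inventory():
    # Hypothetical flag: "view=vehicle-details" marks the Ajax-served details
    # fragment. The parameter name is invented, not taken from the plugin.
    is_details_fragment = request.args.get("view") == "vehicle-details"
    resp = make_response("<div>...listings or details markup...</div>")
    if is_details_fragment:
        # The X-Robots-Tag header travels with the HTTP response, so it works
        # for fragments that have no <head> to carry a meta robots tag.
        resp.headers["X-Robots-Tag"] = "noindex"
    return resp
```

The same header can be set from Apache or nginx configuration; the point is that X-Robots-Tag rides on the HTTP response itself, which is what makes it usable for Ajax fragments.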
Google Places: Multiple company listings. How to rank the HQ page over a branch location.
Hi Moz experts! I have a client with Google Places listings for multiple branch locations, and for some reason the fully SEO-optimized Head Office listing is being beaten by an unoptimized branch listing. The HQ listing gets a tonne of traffic, whereas the ranking, unoptimized branch location doesn't, yet the branch is the main listing when searching through Google. Any help would be greatly appreciated. Thanks
Intermediate & Advanced SEO | Jon_bangonline
-
Multiple Google+ Local (Google Place) under one email address
As an automotive dealership group, we have 15+ business listings set up under one Google+ Local account. Google+ Local (Google Places) offers the ability to upload a data file for 10+ listings, so we've kept all listings under one login for efficiency. Is there any specific local SEO benefit, or any general benefit at all, to having each business listing set up under its own separate email address?
Intermediate & Advanced SEO | autoczar
-
Hundreds of thousands of 404's on expired listings - issue.
Hey guys, We have a conundrum with a large e-commerce site we operate. Classified listings older than 45 days are throwing up 404s - hundreds of thousands, maybe millions (note that Webmaster Tools peaks at 100,000). Many of these listings receive links. Classified listings that are less than 45 days old show other possible products to buy, based on an algorithm. It is not possible for Google to crawl expired listing pages from within our site; they are indexed because they were crawled before they expired, which means that many of them still show in search results.
-> My thought at this stage, for usability reasons, is to replace the 404s with content - other product suggestions - and add a meta noindex, in order to help our crawl equity and get the pages we really want indexed prioritised.
-> Another consideration is to 301 from each expired listing to the category hierarchy, to pass possible link juice. But as many of these listings are findable in Google, we feel that is not a great user experience.
-> Or shall we just leave them as 404s? Google sort of says it's OK.
Very curious on your opinions, and how you would handle this. Cheers, Croozie. P.S. I have read other Q&As regarding this, but given our large volumes and situation, thought it was worth asking, as I'm not satisfied that the solutions offered would match our needs.
Intermediate & Advanced SEO | sichristie
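The first option in that question (replace the 404 with useful content plus a meta noindex) would look roughly like this; a sketch only, with Flask and the data layout invented for illustration:

```python
from flask import Flask

app = Flask(__name__)

# Hypothetical data; the real site would query its listings database.
LISTINGS = {"987654": {"title": "2009 Mazda 3", "expired": True}}

def suggestions_for(item):
    # Placeholder for the existing "other products to buy" algorithm.
    return "<li>...suggested listing...</li>"

@app.route("/listing/<listing_id>")
def listing_page(listing_id):
    item = LISTINGS.get(listing_id)
    if item is None:
        return "Not found", 404
    if item["expired"]:
        # 200 + meta noindex: the page stays useful to visitors arriving from
        # old SERPs and links, while asking Google to drop it from the index.
        return f"""<html><head><meta name="robots" content="noindex"></head>
        <body><h1>{item['title']} is no longer available</h1>
        <ul>{suggestions_for(item)}</ul></body></html>"""
    return item["title"]
```

The trade-off the poster identifies is real: a 200 + noindex page keeps visitors happy but still consumes crawl budget, whereas a plain 404 or 410 spends nothing on usability.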