Using a stop word when optimizing pages
-
I have a page (for a spa) I am trying to fully optimize and, using AdWords, have run every conceivable configuration (using Exact Match) to ascertain the optimal phrase to target. Unfortunately, the term which has come up as the 'best' phrase is "spas in XXX" [XXX represents a location].
When reviewing the data, phrases such as "spas XXX" or "spa XXX" don't show enough search volume to warrant optimizing for them.
So, with that said, do I optimize the page without the word "in", and 'hope' we get the search volume for searches using the word "in", or optimize using the stop word?
Any thoughts?
Thank you!
-
The assumption some seem to be making is that a search for "Spas in Key West" only returns pages with "Spas in Key West" as the optimized term, which is not true. If your traffic data shows that "Spas in [specific location]" is the higher-volume query, use "in". Typically, though, that is not the query: in the broader sense, searchers tend to omit that connector. But if your research shows more volume for the phrase with "in", I would use it.
Hope that clarifies.
Good luck.
-
My dilemma, however, is that using your example of 'Spas [location]' or '[location] Spas', the search volume figures I'm seeing are extremely low, while the figures for 'Spas in [location]' are significantly higher.
So, would the recommendation still be to use 'Spas [location]'? Not trying to sound stupid here, but I would really like to fully optimize the site to attract as much traffic as possible...
Thank you!
-
Thank you for your reply HeaHea! I appreciate it.
-
AA in Florida,
I would not use the connector "in" for optimization. The reason is that the query using "in" is rarely made. So, when optimizing for Spas Florida, you say Florida Spas (don't worry about someone searching for Spas Florida; they will still see you).
If the product is a spa and you are using a geo-location term with it, then no matter what the traffic is, you need to optimize for it: that is the business you or your client is in. Hope that helps,
Robert
-
I would optimize using the stop word, since it has the volume, plus the phrase "spas in $location" sounds more natural versus "keyword-optimization-ish". We actually use "$product in $city" in our title tags and rank really well for "$product $city" searches.
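As a rough illustration of that title-tag pattern (the function name and the brand suffix here are hypothetical, not from the thread), you could template it like this:

```python
def make_title(product: str, city: str, brand: str = "Example Spa Co") -> str:
    """Build a '$product in $city' style title tag.

    Keeping the stop word "in" reads naturally to users, and a page
    titled this way can still rank for the shorter "$product $city" query.
    """
    return f"{product.title()} in {city.title()} | {brand}"

print(make_title("spas", "key west"))
# → Spas in Key West | Example Spa Co
```

The stop word costs only a few characters of title length, so there is little downside to including it when the "in" phrasing is what your volume data supports.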