301 Redirects: Puzzling Question on Page Returned in Search Results
-
On our website, www.BusinessBroker.net, we have 3 different versions of essentially the same page for each of our State Business for Sale pages. Back in August, we ran a test and set up 301 redirects for 5 states. For a long while after the redirects, the pages fell out of Google search results - we used to get page 1 rankings. Just recently they started popping back up on page 1. However, I noticed that Google is not picking up the new page's metadata -- here is an example.
Keyword Searched for in Google -- "Maine Business for Sale"
Our listing shows up on Page 1 -- # 8 Result
URL returned is correct preferred version: - http://www.businessbroker.net/state/maine-Businesses_For_Sale.aspx
However, the Page Title on this returned page is still the OLD page title -
OLD TITLE -- maine Business for Sale Ads - maine Businesses for Sale & Business Brokers - Sell a Business on Business Broker
Not the title that is designated for this page -
New Title - Maine Businesses for Sale - Buy or Sell a Business in ME | BusinessBroker.net
Ditto for Meta Description.
Why is this happening?
We also have a problem with lowercase showing up in the URL rather than uppercase -- what's causing this?
http://www.businessbroker.net/state/maine-Businesses_For_Sale.aspx
versus -- http://www.businessbroker.net/State/Maine-Businesses_For_Sale.aspx
Any help would be appreciated.
Thanks, MM
-
thanks - we did some more research on our end and our developer found this --
The problem with the title, description and keywords is that we updated these for only Wyoming, West Virginia, Vermont, Maine and Florida. I made the mistake of assuming the URL would always have the state in the proper case, but as we have discovered, that was a bad assumption. The code was looking for those 5 states with the first letter capitalized; the link from Google was not capitalized, so it defaulted to the format used for the other states we haven't changed yet. I have fixed that code, so now those 5 states will display the correct title, description and keywords regardless of the case of the state in the URL. I will update the live site in the morning, so this issue will be taken care of. We will still need to discuss how best to handle the URLs that Google is crawling with the incorrect case.
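The bug the developer describes -- an exact-case string match that falls through to a default when the crawled URL uses different casing -- is a common one, and the fix is to normalize case before the lookup. A minimal sketch of that logic, in Python purely for illustration (the site itself is ASP.NET, and the metadata table and helper name here are hypothetical):

```python
# Per-state metadata for the updated states. Keys are stored in
# lowercase so lookups succeed regardless of the URL's casing.
STATE_META = {
    "maine": {
        "title": ("Maine Businesses for Sale - Buy or Sell a "
                  "Business in ME | BusinessBroker.net"),
    },
    # ... entries for Wyoming, West Virginia, Vermont, Florida ...
}

# Fallback format for states whose metadata hasn't been updated yet.
DEFAULT_TITLE = "{state} Business for Sale Ads - {state} Businesses for Sale"

def title_for(state_from_url: str) -> str:
    """Return the page title no matter what case Google crawled."""
    meta = STATE_META.get(state_from_url.lower())  # normalize first
    if meta:
        return meta["title"]
    return DEFAULT_TITLE.format(state=state_from_url)
```

With the lookup normalized, "Maine", "maine", and "MAINE" all resolve to the new title instead of falling back to the old format.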
-
I can't say for sure what happened last time since I am not exactly sure what you did. But as long as the 301 redirects are set up correctly and Google is not having any trouble accessing and crawling them, you shouldn't see any major negative effects over the long term.
Now that I've read your initial post again, I see that the Maine page is one of the States you tried to redirect as part of your test. However, as I posted above, the old page is not being 301 redirected to the new page, so Google may have dropped your site in the rankings since you essentially had two very similar pages competing against each other for the same terms.
-
The page that is ranking #8 in Google for me is http://www.businessbroker.net/state/maine-Businesses_For_Sale.aspx, and on that page, it has the old Title tag and it is not redirected to the version of the URL with the new Title tag.
When I visit http://www.businessbroker.net/State/Maine-Businesses_For_Sale.aspx, I am seeing the new Title tag.
Since these are two completely different pages you will need to 301 redirect the URL with the old Title tag to the new one. That should solve your problems.
-
follow up question regarding the upper and lower case question from our web developer ---
The question hasn't been how to do it. The question is what happens to all of the pages that Google has indexed with the improper case when we do this? Are we going to see the same thing as when we redirected the states -- a big drop for 6 months?
Keith
-
Thanks for the response, appreciate it. I'm pretty confident we redirected the non-preferred URLs to this preferred page --
http://www.businessbroker.net/State/Maine-Businesses_For_Sale.aspx
This page has the updated Title tag, Meta Description, etc.; however, it is not the one that shows up in the Google search result for "Maine Business for Sale".
-
I visited the page http://www.businessbroker.net/state/maine-Businesses_For_Sale.aspx and the Title tag in the HTML is "maine Business for Sale Ads - maine Businesses for Sale & Business Brokers - Sell a Business on Business Broker" so perhaps you did not publish the new versions of the Title tags?
As for your lowercase/uppercase issue, I went to both URLs and they both resolve to an active page. I would suggest making the URLs consistent to minimize the risk of duplicate content. First, set the preferred URL in a rel="canonical" tag on each page. Then, depending on your server, force a 301 redirect to a single version of each URL. Here is a good blog post on how to address this specific issue - http://www.seomoz.org/blog/common-technical-seo-problems-and-how-to-solve-them
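The redirect logic itself is simple: map each lowercase form of a path to its one preferred casing, and 301 any request that arrives in any other casing. Since the site runs ASP.NET, the real implementation would live in IIS URL Rewrite rules or application code, but a Python sketch shows the decision (the path table here is an assumption, with only the Maine entry filled in):

```python
# Lowercase path -> the single preferred casing we want indexed.
CANONICAL_PATHS = {
    "/state/maine-businesses_for_sale.aspx":
        "/State/Maine-Businesses_For_Sale.aspx",
    # ... one entry per state page ...
}

def canonicalize(path: str):
    """Return (needs_301, canonical_path) for an incoming request path."""
    preferred = CANONICAL_PATHS.get(path.lower())
    if preferred is None:
        return (False, path)   # not a page we normalize
    if path == preferred:
        return (False, path)   # already the preferred casing; serve it
    return (True, preferred)   # any other casing gets a 301
```

Because the old lowercase URLs are the ones Google has indexed, the 301 passes their accumulated signals to the preferred URL rather than leaving two casings competing.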