Grr . . . Just can't seem to get there
-
mrswitch.com.au is one site that we are consistently struggling with. It has a PageRank of 3, which beats most of the competitors, but when it comes to Google AU searches such as "Sydney Electrician" and "Electrician Sydney", we just can't seem to get there and the rankings keep dropping.
We build backlinks and update the pages on a regular basis.
Any ideas? Could it be the custom CMS?
-
Thanks for the help! - Perfect advice!
-
No probs Steve, glad I could help
-
Awesome, yeah that does help. Thanks
-
It's an HTML microformat.
You can use it to, for want of a better word, 'tag' your address. Basically, it is a way of telling bots that this string of characters that follows is an address. There is some conjecture as to its usefulness, but I believe it is best practice to use it, especially on local-search-focused projects.
More info here: http://microformats.org/wiki/hcard
And a nice tool here: http://microformats.org/code/hcard/creator
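To give you an idea of what comes out of that creator tool, a marked-up address looks something like this (the business details below are invented for illustration):

```html
<!-- hCard markup: the class names are the microformat's "tags" for bots -->
<div class="vcard">
  <span class="fn org">Example Electrical Services</span>
  <div class="adr">
    <span class="street-address">123 Example Street</span>,
    <span class="locality">Sydney</span>,
    <span class="region">NSW</span>
    <span class="postal-code">2000</span>
  </div>
  <span class="tel">02 9000 0000</span>
</div>
```

The page renders exactly as it would without the classes; the extra markup only matters to parsers that understand hCard.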
I hope that helps
-
What's the score with hcards? I don't know much about it.
-
Also, when you say you backlink regularly, from where? Try to get backlinks from local sites to local pages. For "Sydney Electrician", that means backlinks from local Sydney directories, blogs, and sites about Sydney, etc.
I haven't checked any of these for whether they're paid, nofollow, etc... but just as examples:
http://www.sydney-city-directory.com.au/
http://www.sydneycity.net/directory.htm
http://www.sydneybusinessdirectory.net/
http://www.expat-blog.com/en/directory/oceania/australia/sydney/
http://sydney-city.blogspot.com/
http://blogs.usyd.edu.au/sydneylife/
And obviously mix that with relevant links to do with electricians... preferably sites that are both electrician- and Sydney-based if possible. Easier said than done, I expect.
The same for other trades and other towns.
-
I agree fully with what Ryan suggested above, if local is your target. Also consider using the hcard microformat on your address, as that can't hurt either.
-
Are you trying to get in the local listings? If that's the case just get the address on the page and start submitting through Google Local. If it's a chain submit the bulk listings and/or service areas. Pages like this: http://www.mrswitch.com.au/location are going to hurt you as the engines will see it as a keyword stuffing attempt to manipulate results around any Sydney suburb + "Electrician". I'd recommend getting rid of the location page in its current format and replace it with a few of your actual locations. Use your keywords sparingly, and use the tools Google provides, especially mapping and reviews.
Related Questions
-
Hi! I first wrote an article on my Medium blog but am now launching my site. a) How can I get a canonical tag on Medium without importing, and b) any issue with claiming the blog post is original when the Medium version was posted first?
Hi! As above, I wrote this article on my Medium blog but am now launching my site, UnderstandingJiuJitsu.com. I have the post saved as a draft because I don't want to get pinged by Google. a) How can I get a canonical tag on Medium without importing, and b) any issue with claiming the UJJ.com post is original when the Medium version was posted first? Thanks and health, Elliott
Technical SEO | OpenMat
-
Crawl solutions for landing pages that don't contain a robots.txt file?
My site (www.nomader.com) is currently built on Instapage, which does not offer the ability to add a robots.txt file. I plan to migrate to a Shopify site in the coming months, but for now the Instapage site is my primary website. In the interim, would you suggest that I manually request a Google crawl through the search console tool? If so, how often? Any other suggestions for countering this Meta Noindex issue?
Technical SEO | Nomader
-
Why Doesn't All Structured Data Show in Google Webmaster?
We have more than 80k products, each of them with data-vocabulary.org markup, but only 17k are being reported as having the markup in Google Webmaster (GW). If I run a page that GW isn't showing as having structured data through the structured data testing tool (http://www.google.com/webmasters/tools/richsnippets), it passes. Any thoughts on why this would be happening? Is it because we should switch from data-vocabulary.org to schema.org? Example of a page where GW reports structured data: https://www.etundra.com/restaurant-equipment/refrigeration/display-cases/coutnertop/vollrath-40862-36-inch-cubed-glass-refrigerated-display-cabinet/ Example of a page that isn't showing in GW as having structured data: https://www.etundra.com/kitchen-supplies/cutlery/sandwich-spreaders/mundial-w5688-4-and-half-4-and-half-sandwich-spreader/
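For context on the schema.org switch the question raises, product markup in schema.org microdata looks along these lines; this is a minimal sketch with invented product details, not the site's actual markup:

```html
<!-- schema.org Product/Offer in microdata; all values here are made up -->
<div itemscope itemtype="https://schema.org/Product">
  <span itemprop="name">Example Refrigerated Display Cabinet</span>
  <div itemprop="offers" itemscope itemtype="https://schema.org/Offer">
    <span itemprop="price" content="1299.00">$1,299.00</span>
    <meta itemprop="priceCurrency" content="USD" />
    <link itemprop="availability" href="https://schema.org/InStock" />In stock
  </div>
</div>
```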
Technical SEO | eTundra
-
Can Anybody Understand This ?
Hey guyz,
These days I'm reading the paper from Sergey Brin and Larry Page, which is the first paper about Google, and I don't get the ranking part, which is: "Google maintains much more information about web documents than typical search engines. Every hitlist includes position, font, and capitalization information. Additionally, we factor in hits from anchor text and the PageRank of the document. Combining all of this information into a rank is difficult. We designed our ranking function so that no particular factor can have too much influence. First, consider the simplest case -- a single word query. In order to rank a document with a single word query, Google looks at that document's hit list for that word. Google considers each hit to be one of several different types (title, anchor, URL, plain text large font, plain text small font, ...), each of which has its own type-weight. The type-weights make up a vector indexed by type. Google counts the number of hits of each type in the hit list. Then every count is converted into a count-weight. Count-weights increase linearly with counts at first but quickly taper off so that more than a certain count will not help. We take the dot product of the vector of count-weights with the vector of type-weights to compute an IR score for the document. Finally, the IR score is combined with PageRank to give a final rank to the document. For a multi-word search, the situation is more complicated. Now multiple hit lists must be scanned through at once so that hits occurring close together in a document are weighted higher than hits occurring far apart. The hits from the multiple hit lists are matched up so that nearby hits are matched together. For every matched set of hits, a proximity is computed. The proximity is based on how far apart the hits are in the document (or anchor) but is classified into 10 different value "bins" ranging from a phrase match to "not even close". Counts are computed not only for every type of hit but for every type and proximity. Every type and proximity pair has a type-prox-weight. The counts are converted into count-weights and we take the dot product of the count-weights and the type-prox-weights to compute an IR score. All of these numbers and matrices can all be displayed with the search results using a special debug mode. These displays have been very helpful in developing the ranking system."
Technical SEO | atakala
-
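The single-word scheme quoted above can be sketched as a toy calculation. Note that the paper gives no actual numbers, so every weight and the combination formula below are invented purely to illustrate the mechanics:

```python
# Toy sketch of the single-word IR score described in the quoted passage.
# All weights and the alpha blend are invented; the paper specifies none of them.

TYPE_WEIGHTS = {"title": 10.0, "anchor": 8.0, "url": 6.0,
                "plain_large": 3.0, "plain_small": 1.0}

def count_weight(count, cap=8):
    # "Count-weights increase linearly with counts at first but quickly
    # taper off": here, counts above `cap` simply add nothing.
    return min(count, cap)

def ir_score(hit_counts):
    # Dot product of the count-weight vector with the type-weight vector.
    return sum(count_weight(c) * TYPE_WEIGHTS[t] for t, c in hit_counts.items())

def final_rank(hit_counts, pagerank, alpha=0.5):
    # The paper only says the IR score "is combined with PageRank";
    # a weighted sum is one plausible way to combine them.
    return alpha * ir_score(hit_counts) + (1 - alpha) * pagerank

# A title hit counts far more than many small-font body hits:
print(ir_score({"title": 1}))              # 10.0
print(ir_score({"plain_small": 20}))       # capped at 8 hits -> 8.0
```

The tapering is the key idea: repeating a word 100 times scores no better than repeating it `cap` times, which is exactly why keyword stuffing stopped working.
-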
Can't get Google to Index .pdf in wp-content folder
We created an in-depth case study/survey for a legal client and can't get Google to crawl the PDF, which is hosted on WordPress in the wp-content folder. It is linked to heavily from nearly all pages of the site by a global sidebar. Am I missing something obvious as to why Google won't crawl this PDF? We can't get much value from it unless it gets indexed. Any help is greatly appreciated. Thanks! Here is the PDF itself:
http://www.billbonebikelaw.com/wp-content/uploads/2013/11/Whitepaper-Drivers-vs-cyclists-Floridas-Struggle-to-share-the-road.pdf
Here is the page it is linked from:
http://www.billbonebikelaw.com/resources/drivers-vs-cyclists-study/
Technical SEO | inboundauthority
-
My blog page isn't ranking in Google
Hi, I noticed that my blog page isn't in Google when I search for its full URL, http://www.asggutter.com/blog/. Instead I see a page that isn't even working, asggutter.com/sitemap.xml. Screenshot: http://screencast.com/t/6OVFLwL8nTL. How can I fix that? Thanks
Technical SEO | tonyklu
-
What can i do to stop my site from dropping in the rankings
Hi, we were number one in Google for the keyword "lifestyle magazine", but now our magazine website www.in2town.co.uk is doing very badly in the rankings. One week ago we were around 8, then we dropped to 12, and now we are on the third page, and I am not sure what is happening. We wanted and needed our home page to rank for the keywords "lifestyle magazine" and "lifestyle news", but none of these keywords are doing well in Google. Can anyone please point me in the right direction so I can stop my site falling any further? I am not sure if the home page is properly optimized, but I have never had trouble with it before. Many thanks
Technical SEO | ClaireH-184886
-
Pictures 'being stolen'
Helping my wife with an ecommerce site selling clothes. Some photos are supplied by the producer, but at times they are not very good, so some sellers take their own photos, and I suspect people are copying those and using them on their own sites. Is there anything to do about this? Watermarking, of course, but can the images be 'marked' in any way that links back to your site?
Technical SEO | danlae