riplash
@riplash
Job Title: Digital Marketing
Company: Retailer
Website Description
British retailer of hot tubs, swim spas, saunas, garden buildings and other home leisure products.
Favorite Thing about SEO
It's never boring!
Latest posts made by riplash
-
RE: What is keyword rich anchor text?
Woops, didn't see Sean's reply when I posted mine!
-
RE: What is keyword rich anchor text?
Keyword rich anchor text refers to links containing your target keywords within the link text - as opposed to links which say "click here", "link", the domain name, the website name, and so on.
If, for example, you sold grey outdoor widgets, keyword rich anchor text would be phrases like "Buy grey outdoor widgets online", "this selection of grey outdoor widgets", etc.
Exact match anchor text - e.g. "grey outdoor widgets" - would be considered a type of keyword rich anchor text, but post-Penguin it's something to avoid overdoing. Instead, you should be trying to get a variety of links: some keyword rich, some with no keywords at all (like "click here", the URL, etc). A smattering of exact match anchor text is still useful, though.
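To make that concrete, in the HTML the difference is just the visible link text (example.com here is only a placeholder):
<a href="http://www.example.com/grey-outdoor-widgets/">Buy grey outdoor widgets online</a> <!-- keyword rich anchor text -->
<a href="http://www.example.com/grey-outdoor-widgets/">click here</a> <!-- generic anchor text, no keywords -->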
-
RE: Indexed Pages in Google, How do I find Out?
If you don't have access to Webmaster Tools, the most basic way to see which pages Google has indexed is to run a site: search on Google itself - e.g. "site:yourdomain.com" - which returns SERPs listing the pages from your site that Google has indexed.
Problem is, how do you get the data from those SERPs in a useful format to run through Screaming Frog or similar?
Enter Chris Le's Google Scraper for Google Docs
It will let you scrape the first 100 results, then offset your search by 100 and get the next 100, and so on. Slightly cumbersome, but it will achieve what you want to do.
Then you can crawl the URLs using Screaming Frog or another crawler.
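For what it's worth, if you'd rather page through by hand, Google's start parameter controls the result offset and num asks for up to 100 results per page, so the queries you'd be stepping through look something like this (yourdomain.com is just a placeholder):
http://www.google.com/search?q=site:yourdomain.com&num=100&start=0
http://www.google.com/search?q=site:yourdomain.com&num=100&start=100
http://www.google.com/search?q=site:yourdomain.com&num=100&start=200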
-
RE: How to detect a bad neighborhood links?
You might also want to check out these posts.
http://www.seomoz.org/blog/link-profile-tool-to-discover-linking-activity
http://seogadget.com/bad-backlink-checking/
SEOGadget also have a free tool you might find useful - http://tools.seogadget.co.uk/ - you can export your backlink data from OSE/GWT/Majestic/etc. and feed it into the tool 200 links at a time.
-
RE: Capital Letters in URLS?
Whilst it's not necessarily "bad" per se, the implications can be, so this kind of canonicalisation issue needs to be taken care of using URL rewrites/permanent 301 redirects.
Typically, on a Windows-based server (without any URL rewriting), a 200 (OK) status code will be returned for each version regardless of the combination of upper/lower-case letters used - giving search engines duplicate content to index, and others duplicate content to link to. This naturally dilutes rankings and link equity across the two (or more) identical pages.
There is an excellent section on solving canonicalisation issues on Windows IIS servers in this SEOmoz article by Dave Sottimano.
On a Linux server (without any URL rewriting) you will usually get a 200 for the lower-case version, and a 404 (Not Found) for versions with upper-case characters. Whilst search engines won't index the 404s, you are potentially wasting link equity passed to non-existent pages, and it can be really confusing for users, too.
There is a lot of info around the web about solving Linux canonicalisation issues (here is an article from YouMoz). If your site uses a CMS like Joomla or WordPress, most of these issues are solved by the default .htaccess file, and completely eliminated when you combine this with a well-chosen extension or two.
You can help the search engines figure out which version of a page you regard as the original by using the rel="canonical" link element in the HTML <head>. This consolidates link equity and rankings from the duplicate versions onto the main, canonical version.
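For reference, a common sketch for forcing lowercase URLs on Apache with mod_rewrite looks like this (a sketch rather than a drop-in fix - note the RewriteMap line has to live in the main server config or virtual host, not in .htaccess):
# In the main server config / virtual host (RewriteMap is not allowed in .htaccess):
RewriteMap lowercase int:tolower
# Then in the server config or .htaccess:
RewriteEngine On
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule (.*) ${lowercase:$1} [R=301,L]
And the canonical link element just goes in the <head> of each duplicate version, pointing at your preferred URL (example.com is a placeholder):
<link rel="canonical" href="http://www.example.com/widgets/" />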
-
RE: How does Google treat multiple backlinks on the same page?
The second link will count in the example you linked to, as it's to a different landing page, and in this case, with different anchor text, too. If, however, it linked to the homepage twice instead, even with different anchor text, only the first link/anchor text would count.
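To illustrate with a sketch (example.com is just a placeholder), given two links on the same page:
<!-- different landing pages: both links/anchors can count -->
<a href="http://www.example.com/">Example Widgets</a>
<a href="http://www.example.com/grey-widgets/">grey outdoor widgets</a>
<!-- same landing page twice: typically only the first anchor is counted -->
<a href="http://www.example.com/">Example Widgets</a>
<a href="http://www.example.com/">widget specialists</a>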
-
RE: Schema.org on Youtube iframe embed?
Whilst I don't have the technical answer you're looking for, Google has recently been showing video previews in some SERPs for our pages where we have simply used YouTube's iframe embed code, with no other effort to add schema.org or other rich snippets markup.
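For anyone wondering, that's just the standard embed snippet YouTube gives you under Share > Embed - something like this, where VIDEO_ID is a placeholder for the video's ID:
<iframe width="560" height="315" src="http://www.youtube.com/embed/VIDEO_ID" frameborder="0" allowfullscreen></iframe>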
-
RE: Tactics to Influence Keywords in Google's "Search Suggest" / Autocomplete in Instant?
I've noticed more and more companies doing this in TV advertising campaigns of late - instead of giving out a campaign-specific URL, as was the done thing maybe 2-5 years ago, companies are increasingly likely to give out a specific search term.
I wondered about the motives behind it, and recognised that this may be one of them.
Right now I do SEO, web administration and development for a UK retailer.
We still have unfinished business in the markets we are building authority in, so I hope to continue working here for a while.