Differences between Lynx Viewer, Fetch as Googlebot and SEOMoz Googlebot Rendering
-
Three tools to render a site as Googlebot would see it:
SEOmoz toolbar.
Lynx Viewer (http://www.yellowpipe.com/yis/tools/lynx/lynx_viewer.php)
Fetch as Googlebot.

I have a website where I can see the dropdown menus in a regular browser, in Lynx Viewer, and in Fetch as Googlebot. However, in the SEOmoz toolbar's 'render as googlebot' tool, I am unable to see these dropdown menus when JavaScript is disabled.
Does this matter? Which of these tools is the most accurate way to see how Googlebot views your site?
-
Each tool processes pages differently while attempting to emulate the actual Googlebot crawler. You may want to contact SEOmoz's Help Desk for specifics on the Moz version; however, the only way to be sure you will always see what Googlebot actually sees, even as Googlebot changes over time, is to use Google Webmaster Tools.
Sign in to GWT, click "Diagnostics" and then "Fetch as Googlebot", and enter a URL there. It may take a few minutes to get the results, but you'll see the page as Google fetched it.
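As a rough first approximation outside GWT, you can also request a page yourself while identifying as Googlebot, to see what your server sends to that user-agent. This is only a sketch using Python's standard library: it shows the raw HTML served to a Googlebot user-agent string, and does not replicate Google's actual rendering or JavaScript handling.

```python
import urllib.request

# Googlebot's documented user-agent string.
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def fetch_as_googlebot(url):
    """Fetch a URL while identifying as Googlebot, returning the raw HTML.

    Note: this only shows what your server sends to a Googlebot
    user-agent; it does not replicate Google's rendering pipeline.
    """
    req = urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Building the request alone is enough to confirm the header is set:
req = urllib.request.Request("http://www.example.com/",
                             headers={"User-Agent": GOOGLEBOT_UA})
print(req.get_header("User-agent"))
```

If the dropdown-menu markup is present in the HTML returned this way but missing from a tool's rendering, the difference is in how that tool processes the page, not in what your server serves.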
Related Questions
-
Google keeps marking different pages as duplicates
My website has many pages like this:

mywebsite/company1/valuation
mywebsite/company2/valuation
mywebsite/company3/valuation
mywebsite/company4/valuation
...

These pages describe the valuation of each company. They were never identical, but initially I included a few generic paragraphs (what is valuation, what is a valuation model, etc.) on all of them, so parts of their content were the same. Google marked many of these pages as duplicates (in Google Search Console), so I modified their content: I removed the generic paragraphs and added information that is unique to each company. As a result, these pages are now very different from each other and have little in common. Although it has been more than a month since I made the modification, Google still marks the majority of these pages as duplicates, even though it has already crawled the modified versions. Is there anything else I can do in this situation? Thanks
Technical SEO | TuanDo96270
Some URLs were not accessible to Googlebot due to an HTTP status error.
Hello, I'm an SEO newbie, and some help from the community here would be greatly appreciated. I have submitted my website's sitemap in Google Webmaster Tools and now I get this warning: "When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted." How do I fix this? What should I do? Many thanks in advance.
Technical SEO | GoldenRanking140
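A practical first step for a warning like the one above is to check each sitemap URL's HTTP status yourself. This sketch uses only Python's standard library: it extracts the loc entries from a standard sitemap.xml and shows how each URL's status code could be checked (the example sitemap content is hypothetical).

```python
import urllib.error
import urllib.request
import xml.etree.ElementTree as ET

# Standard sitemap XML namespace, per the sitemaps.org protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    """Extract the <loc> URLs from a sitemap.xml document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

def http_status(url):
    """Return the HTTP status code a crawler would get for this URL."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # e.g. 404, 500

sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/</loc></url>
  <url><loc>http://www.example.com/missing-page</loc></url>
</urlset>"""
print(sitemap_urls(sample))
```

Any URL returning something other than 200 (or redirecting through to a 200) is a candidate for the warning; fix or remove those entries from the sitemap and resubmit.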
GWT False Reporting or GoogleBot has weird crawling ability?
Hi, I hope someone can help me. I have launched a new website and am trying hard to make everything perfect. I have been using Google Webmaster Tools (GWT) to ensure everything is as it should be, but the crawl errors being reported do not match my site. I mark them as fixed and then check again the next day, and it reports the same or similar errors.

Example: http://www.mydomain.com/category/article/ (this would be a correct structure for the site). GWT reports: http://www.mydomain.com/category/article/category/article/ 404 (it does not exist, never has and never will). I have checked the pages listed as linking to this URL and they do not contain links in this form. I have checked the page source code, and all links from the given pages use the correct structure, so I cannot replicate this type of crawl. This happens across most of the site: I have a few hundred pages, all ending in a trailing slash, and most pages of the site are reported in this manner, making it look like I have close to 1,000 404 errors that I cannot replicate using many different methods.

The site uses an .htaccess file with redirects and a rewrite condition to force the trailing slash on folders:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !\.(html|shtml)$
RewriteCond %{REQUEST_URI} !(.*)/$
RewriteRule ^(.*)$ /$1/ [L,R=301]

Then we are using redirects in this manner:

Redirect 301 /article.html http://www.domain.com/article/

In addition, we had a development site at http://dev.slimandsave.co.uk while I was building the new site, and it was spidered without my knowledge until it was too late. So when I put the site live, I left the development domain in place (http://dev.domain.com) and redirected it like so:

<IfModule mod_rewrite.c>
RewriteEngine on
RewriteRule ^ - [E=protossl]
RewriteCond %{HTTPS} on
RewriteRule ^ - [E=protossl:s]
RewriteRule ^ http%{ENV:protossl}://www.domain.com%{REQUEST_URI} [L,R=301]
</IfModule>

Is there anything I have done that would cause this type of redirect 'loop'? Any help greatly appreciated.
Technical SEO | baldnut
Huge ranking difference between google and bing
I am trying to rank for the keyword "trash bags". I did a lot of on-page optimization and link building. We started ranking #2 on Bing and Yahoo, but Google seems to fluctuate stubbornly between as high as 20 and as low as 45, and it even dropped our rankings for a couple of weeks. Is there any need for concern if Google is acting so differently from Bing/Yahoo?
Technical SEO | EcomLkwd0
Internet Explorer and Chrome showing different SERPs
Well, the title says it all really. Same query, different browsers, same computer, and different search results. I thought at first it might differ because I was logged into my Google profile on Chrome, but I logged out, tested again, and still got different results. Is this normal?
Technical SEO | blinkybill0
Google Cache Version and Text Only Version are different
Across various websites, we have found that the Google cache version in the browser loads the full site and all content is visible. However, when we try to view the text-only version of the same page, we can't see any content. Example: we have a client with a JS scroller menu on the home page, where each scroller serves a separate content section on the same URL. When we copy and paste some of the page content into Google, we can see that copy indexed in Google search results as well as showing in the cache version. But as soon as we go to the text-only version, we can't see the same copy. We would like to know which version we should trust: the Google cache version or the text-only version.
Technical SEO | JamesDixon700
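One way to see why the two views can disagree: a text-only rendering works from the raw HTML and ignores script blocks, so copy that only exists because JavaScript writes it into the page never shows up there, even though a rendered snapshot (like the cache view) may display it. This is a rough stdlib-only sketch of that extraction, with a hypothetical HTML snippet; it is not how Google's renderer actually works internally.

```python
from html.parser import HTMLParser

class VisibleTextParser(HTMLParser):
    """Collect text a text-only renderer would show, skipping <script>/<style>."""
    def __init__(self):
        super().__init__()
        self.in_skip = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.in_skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.in_skip:
            self.in_skip -= 1

    def handle_data(self, data):
        # Text inside <script> is JavaScript source, not page copy.
        if not self.in_skip and data.strip():
            self.chunks.append(data.strip())

def visible_text(html):
    parser = VisibleTextParser()
    parser.feed(html)
    return " ".join(parser.chunks)

html = '<div>Static copy</div><script>document.write("Injected copy");</script>'
print(visible_text(html))  # Static copy
```

If your scroller copy is present in the page source as real markup, a text-only view should show it; if it only appears after scripts run, only rendered views will.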
How can I prevent sh404SEF Anti-flood control from blocking SEOMoz?
I'm using sh404SEF on my Joomla 1.5 website. Last week, I activated the tool's security functions, which include an anti-flood control feature. This morning, when I looked at my new crawl statistics in SEOmoz, I noticed a significant drop in the number of pages crawled, and I'm attributing that to the security configuration I made earlier in the week. I'm looking for a way to prevent this from happening so the next crawl is accurate. I was thinking of using sh404SEF's "UserAgent white list" feature. Does SEOmoz have a user-agent string that I could add to my white list? Is this what you recommend as a solution to this problem?
Technical SEO | JBradySD0
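SEOmoz's crawler identifies itself as "rogerbot" (confirm the exact user-agent string with SEOmoz support before whitelisting it). A user-agent whitelist of the kind sh404SEF offers typically amounts to a case-insensitive substring match against known crawler tokens; the sketch below illustrates that logic in Python under that assumption, not sh404SEF's actual implementation.

```python
# Hypothetical whitelist of crawler tokens; "rogerbot" is SEOmoz's crawler.
WHITELIST = ["rogerbot", "googlebot", "bingbot"]

def is_whitelisted(user_agent):
    """Return True if the user-agent contains any whitelisted crawler token."""
    ua = user_agent.lower()
    return any(token in ua for token in WHITELIST)

print(is_whitelisted("rogerbot/1.0 (+http://www.seomoz.org)"))  # True
print(is_whitelisted("BadBot/2.0"))                             # False
```

Matching on a substring rather than the full string is deliberate here, since crawler user-agents usually carry version numbers and URLs that change over time.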
Switching to another CMS with different permalink structure, how not to lose organic traffic and Google rankings? Urgent Please!
One of my customers is in the online jewellery sales business, and they are going to change their CMS; the problem is that there is no correlation between the URL permalink structure of the current CMS and the new one. What should we do so that we don't lose our current organic traffic (and Google rankings) coming from URLs indexed in Google, due to the permalink structure change? Typical example: Current URL: http://www.abc.com/pirlanta-nedir,AR-3.html The new URL is going to look like this: http://www.abc.com/tek-tas-pirlanta-ar-4.html
Technical SEO | merkal20050
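When old and new permalinks share no pattern, the usual approach is to build an explicit old-to-new URL table (exported from both CMSs, e.g. by matching products on SKU or slug) and emit a one-to-one 301 redirect for every indexed URL. This sketch generates Apache Redirect 301 directives from such a table, assuming the new site runs on an Apache host; the mapping entries here are taken from the example URLs above.

```python
# Old-URL -> new-URL table; in practice this would be exported from
# both CMSs rather than typed by hand.
url_map = {
    "/pirlanta-nedir,AR-3.html": "http://www.abc.com/tek-tas-pirlanta-ar-4.html",
}

def redirect_lines(mapping):
    """Emit one Apache 'Redirect 301' directive per old/new URL pair."""
    return ["Redirect 301 {} {}".format(old, new)
            for old, new in sorted(mapping.items())]

for line in redirect_lines(url_map):
    print(line)
# Redirect 301 /pirlanta-nedir,AR-3.html http://www.abc.com/tek-tas-pirlanta-ar-4.html
```

One-to-one 301s preserve most link equity and let Google transfer rankings to the new URLs; redirecting everything to the homepage instead is treated much like a mass of soft 404s.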