Does the SEOmoz bot not know where to look for AJAX site snapshots?
-
Snapshot URL: http://www.fubo.tv/?_escaped_fragment_=video/Nigeria_out_to_stop_Messi
-
Not being web developers on the help team, I'm hesitant to dive into the code and tell you what's wrong here. Running some quick Google searches, though, it does look like there may be something wrong with the site's setup. I'd really recommend reviewing the linked article. It's a little old, but the material is still relevant.
Even once everything is corrected, Roger still may not be able to read the content. The pushState workaround is a potential solution, not a guaranteed fix, since Roger has trouble with AJAX in general.
http://www.moz.com/blog/create-crawlable-link-friendly-ajax-websites-using-pushstate
I definitely understand how frustrating this can be and wish I could do more to help. Hopefully someone in the community will be able to dive in a bit more on the technical web dev side of things.
-
We are already using pushState on supported browsers via the AngularJS ui.router library, which is why you see clean URLs on our site without the #! notation...
-
Hi there,
This question is a bit intricate.
With AJAX content like this, Google's AJAX crawling specification (https://developers.google.com/webmasters/ajax-crawling/docs/specification) indicates that the #! and ?_escaped_fragment_= technique works for their crawlers. However, Roger is a bit pickier and isn't yet robust enough to rely on the sitemap alone in this case. Luckily, one of our wonderful users came up with a solution using the pushState() method.
Click here to find out how to create crawlable content using pushState. This should help our crawler read AJAX content. I hope this helps!
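For context, the pushState() approach gives each piece of AJAX content a real URL that any crawler can fetch, instead of a #! fragment. A minimal sketch of the idea (the function and route names are hypothetical, not from the linked post; the history object is injected so the logic can be shown outside a browser):

```javascript
// Minimal sketch of pushState-based navigation: instead of changing
// location.hash, push a clean URL onto the session history, then fetch
// and render the AJAX content for that path.
function navigateTo(historyApi, render, path) {
  // Record the clean URL (e.g. "/California/Map-of-Carmel/73").
  historyApi.pushState({ path: path }, "", path);
  // Fetch and render the AJAX content for that path.
  return render(path);
}
```

In a browser you would pass `window.history`; crucially, the server must also be able to answer a direct GET for every pushed URL, or deep links will 404.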
-
I can't answer the question exactly, but I did notice that if you go to the URL directly, the content displays properly but the server returns a 404 status. That may cause issues. It doesn't appear to be affecting Google's crawl, but I could see it tripping up Moz's crawler, especially because I'm fairly sure Bing doesn't handle AJAX properly.
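To illustrate the 404 issue: even when JavaScript renders something for any path, the server should send a 200 status only for paths it actually recognizes, and a recognized path must never be served with a 404. A toy routing check (the route list is made up for illustration):

```javascript
// Toy example: decide the HTTP status for a direct request to a deep link.
// A single-page app often renders *something* for any path, but crawlers
// that respect status codes will discard content served with a 404.
function statusFor(path, knownRoutes) {
  return knownRoutes.includes(path) ? 200 : 404;
}
```

In the case described above the opposite is happening: a path the app can render is being answered with a 404 status, so a crawler may throw the content away even though a human sees a working page.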
Related Questions
-
Moz bot has trouble crawling AngularJS: I believe it's seeing the SPA (single-page application) before Universal. Has anyone else had this issue, and what is the fix?
The Moz bot user-agent detection settings are able to read Universal, but the single-page application (SPA) version partially loads on the website before Universal does. Because of this, Moz (and possibly search engines) think we have massive duplicate-content issues. For example, the crawl report said a particular product page (which has about 1,000 words) has 33,000 words and duplicates content with over 300 other pages. This makes me believe it's only picking up the SPA version. Has anyone come across this, and what would be the fix?
Moz Bar | laurengdicenso1
What factors are considered in the new Domain Authority algorithm? What on-site factors can we use to compare with competitors? Is having an "n/a" spam score a bad thing?
Does anyone know the factors considered in the new Domain Authority algorithm, other than spam score and complex distributions of links based on quality and traffic? Does anyone know of on-site factors we can use to compare with competitors to try to improve DA? Is having an "n/a" spam score a bad thing?
Moz Bar | CQMarketing0
How do I go about fixing the high-priority issues SEOmoz says I have on a PHP site?
I have been trying to deal with this problem for some time now. I have talked to several IT people and to SEOmoz, but none seem to know how to fix these issues for the type of site our company has. Our biggest issue is duplicate page content; we also have some title issues. Our site is built with PHP code and variables, meaning it is not a typical static website. We have a handful of pages that are dynamic depending on what the user chooses to see and do. So my problem is that I can't just go to a specific page and add the canonical tag or the redirect. Our category pages, for example, aren't multiple pages; there is just one page that builds itself depending on the search. Please help!
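One common approach for a single dynamic page like this is to emit a `<link rel="canonical">` whose href is built from the request parameters, so every parameter combination declares its one preferred URL. A sketch of the URL-building step in JavaScript (the PHP version would read from `$_GET` the same way; the parameter names are hypothetical):

```javascript
// Build a canonical URL for a dynamic page from a whitelist of parameters.
// Unknown or tracking parameters are dropped, and the kept parameters are
// sorted so equivalent searches always produce the same canonical address.
function canonicalUrl(base, params, allowed) {
  const kept = Object.keys(params)
    .filter(k => allowed.includes(k))
    .sort()
    .map(k => encodeURIComponent(k) + "=" + encodeURIComponent(params[k]));
  return kept.length ? base + "?" + kept.join("&") : base;
}
```

The page template then prints the result into the head, so the hundreds of URL variations all point at one canonical version and the duplicate-content warnings should collapse.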
Moz Bar | JoshMaxAmps0
The Moz crawl test is not reporting on all the pages on my site.
I've run the crawl test on one of the sites I've taken over SEO for, but it isn't picking up all the pages. For instance, it indexes all the pages under xxxxx/us but none under xxxxx/au or xxxxx/uk. The pages are being indexed, as they're ranking in Google. Thanks.
Moz Bar | ahyde0
Ajax #! URL support?
Hi Moz, My site currently follows the convention outlined here: https://support.google.com/webmasters/answer/174992?hl=en Basically, since pages are generated via AJAX, we are set up to direct bots that replace the #! in a URL with ?_escaped_fragment_= to cached versions of the AJAX-generated content. For example, if the bot sees this URL: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 it will instead access the page: http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73 in which case my server serves the cached HTML instead of the live page. This is all per Google's direction and is indexing fine. However, the Moz bot does not do this. It seems like a fairly straightforward feature to support: rather than ignoring the hash, you look to see whether it is a #! and then spider the URL with the fragment replaced by ?_escaped_fragment_=. Our server does the rest. If this is something Moz plans on supporting in the future, I would love to know; if there is other information, that would be great too. Also, pushState is not practical for everyone due to limited browser support, etc. Thanks, Dustin
Update: I am editing my question because it won't let me respond to my own question; it says I need to sign up for Moz Analytics. I was signed up for Moz Analytics?! Now I am not? I responded to my invitation weeks ago. Anyway, you are misunderstanding how this process works. There is no sitemap involved. The bot reads this URL on the page: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 and when it is ready to spider the page for content, it spiders this URL instead: http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73 The server does the rest; it is simply a matter of telling Roger to recognize the #! format and replace it with ?_escaped_fragment_=. I obviously do not know how Roger is coded, but it is a simple string replacement. Thanks.
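The string replacement Dustin describes really is simple. A sketch of the crawler-side rewrite, following Google's AJAX crawling scheme (the spec asks the crawler to URL-encode the fragment value, which is why the slashes come out percent-escaped):

```javascript
// Rewrite a #! (hashbang) URL into its _escaped_fragment_ equivalent, as
// described in Google's AJAX crawling specification: the crawler requests
// the rewritten URL and the server answers with a cached HTML snapshot.
function toEscapedFragment(url) {
  const i = url.indexOf("#!");
  if (i === -1) return url; // no hashbang: nothing to rewrite
  const base = url.slice(0, i);
  const fragment = url.slice(i + 2);
  // Append to an existing query string if the base URL already has one.
  const sep = base.includes("?") ? "&" : "?";
  return base + sep + "_escaped_fragment_=" + encodeURIComponent(fragment);
}
```

A server set up per the scheme decodes `_escaped_fragment_` and serves the pre-rendered snapshot for that fragment, so a crawler only needs this one transformation.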
Moz Bar | oneactlife0
Given that I am currently using a bot sniffer, how can I identify the Moz bot in order to whitelist it?
Moz is currently blocked from crawling my sites because I use a bot sniffer. Does anyone know how I can properly identify the Moz bot in order to whitelist it? Moz uses Amazon Web Services and thus employs thousands of dynamic IPs to crawl.
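Because the IPs are dynamic, the practical handle is the User-Agent header: Moz's crawler identifies itself as rogerbot in its User-Agent string. A minimal check a sniffer could add (how it plugs into any particular bot-sniffing product is hypothetical):

```javascript
// Whitelist check: Moz's crawler includes "rogerbot" in its User-Agent
// string, so match on that token instead of trying to track the
// thousands of dynamic AWS IPs it crawls from.
function isRogerbot(userAgent) {
  return /rogerbot/i.test(userAgent || "");
}
```

Note that User-Agent strings can be spoofed, so this grants leniency rather than strong identity; if that matters, pair the check with rate limiting rather than a blanket pass.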
Moz Bar | Felix_LLC0