Does SEOMOZ bot not know where to look for AJAX site snapshots?
-
snapshot://www.fubo.tv/?escaped_fragment=video/Nigeria_out_to_stop_Messi
-
Not being web developers on the help team, I'm hesitant to dive into the code and tell you exactly what's wrong here. Running some quick searches on Google, though, it does look like there may be something wrong with the site setup. I'd really recommend reviewing the article linked below. It's a little old, but the material is still relevant.
Even once everything is corrected, Roger still may not be able to read the content. The pushState workaround is a potential solution, but it isn't a guaranteed fix, since Roger does have trouble understanding AJAX in general.
http://www.moz.com/blog/create-crawlable-link-friendly-ajax-websites-using-pushstate
I definitely understand how frustrating this can be and wish I could do more to help. Hopefully someone in the community will be able to dive in a bit more on the technical web dev side of things.
-
We are already using pushState on supported browsers via the AngularJS ui.router library, which is why you see clean URLs on our site without the #! notation...
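For readers following along, here is a framework-free sketch of what pushState-based routing does under the hood. The route path and fallback logic are illustrative only, not fubo.tv's actual code:

```javascript
// Sketch of pushState-based navigation, similar in spirit to what
// AngularJS ui.router does in HTML5 mode.

// Pick the URL form: a clean path when the History API is available,
// the legacy #! (hashbang) form otherwise.
function routeUrl(path, hasHistoryApi) {
  return hasHistoryApi ? path : "/#!" + path;
}

function navigate(path, render) {
  var hasHistoryApi =
    typeof history !== "undefined" && typeof history.pushState === "function";
  var url = routeUrl(path, hasHistoryApi);
  if (hasHistoryApi) {
    // Rewrite the address bar to the clean URL without a page reload.
    history.pushState({ path: path }, "", url);
  } else if (typeof window !== "undefined") {
    // Hashbang fallback for browsers without the History API.
    window.location.hash = "#!" + path;
  }
  render(path); // swap in the new view client-side
  return url;
}
```

In a browser with the History API, `navigate()` puts the clean URL in the address bar; older clients end up on the #! form, which is why a site can show clean URLs to users while crawlers still encounter hashbang-style addressing.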
-
Hi there,
This question is a bit intricate.
With AJAX content like this, Google's AJAX crawling specification (https://developers.google.com/webmasters/ajax-crawling/docs/specification) indicates that the #! and ?_escaped_fragment_= technique works for their crawlers. However, Roger is a bit picky and isn't yet robust enough to use only the sitemap as the reference in this case. Luckily, one of our wonderful users came up with a solution using the pushState() method.
Click here to find out how to create crawlable content using pushState. This should help our crawler read AJAX content. I hope this helps!
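For context on the scheme that specification describes: a crawler that supports it takes a #! URL and instead requests the equivalent ?_escaped_fragment_= URL, which the server is expected to answer with a prerendered HTML snapshot. A minimal sketch of that URL mapping (the example URL is illustrative):

```javascript
// Map a #! (hashbang) URL to the ?_escaped_fragment_= form that a
// crawler following Google's AJAX crawling scheme would request.
function toEscapedFragmentUrl(url) {
  var i = url.indexOf("#!");
  if (i === -1) return url; // nothing to escape
  var base = url.slice(0, i);
  var fragment = url.slice(i + 2);
  // Append as a query parameter, URL-encoding the fragment value.
  var sep = base.indexOf("?") === -1 ? "?" : "&";
  return base + sep + "_escaped_fragment_=" + encodeURIComponent(fragment);
}
```

For example, http://example.com/#!/video/abc maps to http://example.com/?_escaped_fragment_=%2Fvideo%2Fabc, and it's that second URL the server must answer with a snapshot.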
-
I can't answer the question exactly, but I did notice that if you go to the URL directly, the content displays properly but the server returns a 404 status code. That may cause issues. It doesn't appear to be affecting Google's crawl, but I could see it tripping up Moz's crawler, especially because I'm fairly positive that Bing doesn't handle AJAX properly.
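A page that displays fine but reports a 404 usually means the server returns the app shell for every path while keeping the error status. A framework-free sketch of a deep-link handler that keeps status and content in agreement (the route table is hypothetical):

```javascript
// Serve the app shell with a 200 for known deep links, and a real 404
// (status and body in agreement) for unknown paths.
var KNOWN_ROUTES = ["/", "/video"]; // hypothetical route prefixes

function handleDeepLink(path) {
  var known = KNOWN_ROUTES.some(function (prefix) {
    return path === prefix || path.indexOf(prefix + "/") === 0;
  });
  return known
    ? { status: 200, body: "<!-- app shell -->" }
    : { status: 404, body: "Not found" };
}
```

With something like this in front of the single-page app, a direct hit on a real video URL would return a 200 with the shell, and crawlers would no longer see valid content paired with an error status.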
Related Questions
-
I have too many title tag issues for my site in Moz Site Crawl errors
I have too many title tag issues in Site Crawl errors, but when I checked manually for the error, there is no title in the source code. Please help me understand.
Moz Bar | Nileshaggarwal0
-
How do I disallow crawl on a directory when it's a prefix to my site's URL?
I am trying to disallow our media repository (hosted elsewhere, but it appears as part of our site) from being crawled by robots, but it is not a subdirectory of the site; it's a prefix. So I need to disallow: mediabank.mywebsite.org Not: mysite.org/mediabank What would I need to put in my robots.txt and/or the other host's robots.txt to make this happen? Thanks!
Moz Bar | Simon-Plan0
-
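Worth noting for this one: robots.txt rules are scoped to the host that serves the file, so a rule in mysite.org/robots.txt can't cover mediabank.mywebsite.org. The media host needs its own robots.txt at its root, along these lines:

```text
# Served at http://mediabank.mywebsite.org/robots.txt
User-agent: *
Disallow: /
```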
Why is Moz Crawling More Pages Than My Site Actually Has?
Hi, I have a site that only has 5k pages, but Moz crawled 50k pages when I initiated the site crawl. I don't know exactly why Moz is reporting so many pages, and I was wondering whether anyone in the Moz community knows anything about this. Thanks
Moz Bar | drewstorys0
-
The Moz Spam Score tells me my site has too few backlinks for such a large site. How many links per page would I need to not trigger this filter and stop appearing spammy?
Hello! One of my sites is triggering the 'too few backlinks for large site' filter. I am wondering how many backlinks I need so as not to trigger this. Many thanks for your help. Toby
Moz Bar | T0BY0
-
I'm checking keyword difficulty for two different sites. Would love to view the results by site instead of just one large list. Is that possible? Or would it just be easier to keep the lists separate in Excel and just import when I want an updated report?
I have keyword lists for two sites. Is there a way to label them in the keyword difficulty tool (List A, List B) so I can just view results for a particular site? Or do I need to run the report with List A, export results, delete those keywords, then run the report for List B?
Moz Bar | JohnNovakLV0
-
Crawling password-protected sites, such as dev or staging areas, to review them before going live?
Hi, I've instructed clients to password-protect dev areas so they don't get crawled and indexed, but how do we set up the Moz crawl software to crawl these sites for a final check of any issues before going live? Is there an option I haven't seen to add logins/passwords for the crawl software to use? Cheers, Dan
Moz Bar | Dan-Lawrence0
-
"Sorry! We weren't able to find that page when we crawled your site." Please help!
Can someone please explain why I am getting this error for this link "http://lensoutloud.com/san-antonio-real-estate-photography/" when I attempt to perform an on-page SEO grading? The link is indexed and ranking very well, but for some reason Moz says it can't find the page when it crawled my site. This has also happened when I attempt to grade other pages on my site. Thanks in advance!
Moz Bar | AndreGant0
-
Given that I am currently using a bot sniffer, how can I identify the Moz bot in order to whitelist it?
Moz is currently blocked from crawling my sites because I use a bot sniffer. Does anyone know how I can properly identify the Moz bot in order to whitelist it? Moz uses Amazon Web Services and thus employs thousands of dynamic IPs to crawl.
Moz Bar | Felix_LLC0
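One common approach here, offered as a sketch rather than official guidance: because Moz's IPs change, whitelisting is usually done by user-agent. Moz's crawler identifies itself with the token rogerbot in its User-Agent header, so a filter can match that token instead of IP ranges:

```javascript
// Allow requests whose User-Agent contains the rogerbot token.
// Note: user-agent strings can be spoofed, so treat this as a
// convenience filter, not authentication.
function isRogerbot(userAgent) {
  return typeof userAgent === "string" &&
    userAgent.toLowerCase().indexOf("rogerbot") !== -1;
}
```

Plugging a check like this into the bot sniffer's allow list (however that tool is configured) would let rogerbot through regardless of which AWS IP the request comes from.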