Angular.js + Crawlers
-
I am working with a site that recently deployed Angular.js. From an SEO standpoint it's a little trickier than we thought. We have deployed a couple of updates to render pages for the bots, but we're not seeing changes in the weekly Moz reports.
When it comes to Angular.js, will the Moz bots read and access the site the same way as the other major engines? I'm trying to figure out whether our deployments are working or whether something is off in the Moz reports.
Thanks.
-
I am using Prerender to cache and serve static pages to crawl agents, but Moz is not able to crawl my website (http://www.exambazaar.com/). Hence it has a Domain Authority of 1/100. I have been in touch with Prerender support to find a fix, and have also added dotbot to the list of crawler agents, in addition to Prerender's default list, which already includes rogerbot. Do you have any suggestions to fix this?
List: https://github.com/prerender/prerender-node/commit/5e9044e3f5c7a3bad536d86d26666c0d868bdfff
Adding dotbot:
prerender.crawlerUserAgents.push('dotbot');
-
Within Prerender you can determine which user agents receive the HTML snapshot, and this is where you can add rogerbot. This allows Moz to crawl the site as if it were Google and receive the HTML snapshot version.
Additionally, you can always use the Fetch as Google feature in Webmaster Tools to see exactly what is being presented and indexed.
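To make the mechanics concrete, here is a minimal sketch (not prerender-node itself; the user-agent list and function name are illustrative) of how middleware like this decides whether to serve the HTML snapshot:

```javascript
// Illustrative subset of a crawler user-agent list; prerender-node's
// real default list is longer and already includes rogerbot.
const crawlerUserAgents = ['googlebot', 'bingbot', 'rogerbot'];

// Add Moz's dotbot, as in the push() call shown earlier in this thread.
crawlerUserAgents.push('dotbot');

// Hypothetical helper: case-insensitive substring match against the list,
// standing in for the middleware's user-agent check.
function shouldServeSnapshot(userAgent) {
  const ua = (userAgent || '').toLowerCase();
  return crawlerUserAgents.some(bot => ua.includes(bot));
}

console.log(shouldServeSnapshot('Mozilla/5.0 (compatible; DotBot/1.1)')); // true
console.log(shouldServeSnapshot('Mozilla/5.0 (Windows NT 10.0) Chrome/90')); // false
```

Requests matching the list get the pre-rendered static HTML; everything else gets the normal Angular app.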
-
With the current direction of web development, this is something that needs to be addressed. Google has already confirmed that it is in fact crawling JavaScript-based sites.
Reference:
http://ng-learn.org/2014/05/SEO-Google-crawl-JavaScript/
https://support.google.com/webmasters/answer/174992?hl=en
The solution in this case is an HTML snapshot. You could roll your own, but there are services like https://prerender.io/ that can do it for you.
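Under Google's AJAX crawling scheme (the second link above), a crawler rewrites hash-bang URLs into an `_escaped_fragment_` query parameter before requesting the page, and the server answers that request with the snapshot. A small sketch of the URL mapping, with a hypothetical helper name:

```javascript
// Map a #! (hash-bang) URL to its _escaped_fragment_ form, per the
// AJAX crawling scheme. Plain string handling; helper name is illustrative.
function toEscapedFragmentUrl(url) {
  const idx = url.indexOf('#!');
  if (idx === -1) return url; // not a hash-bang URL, leave it alone
  const base = url.slice(0, idx);
  const fragment = url.slice(idx + 2);
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

console.log(toEscapedFragmentUrl('http://example.com/#!/products/42'));
// http://example.com/?_escaped_fragment_=%2Fproducts%2F42
```

Your server (or a service like Prerender) then returns the rendered HTML for that `_escaped_fragment_` request.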
This doesn't quite help the case for the Moz bot; maybe the HTML snapshots work here too, but I haven't tested it yet. Either way, JavaScript is becoming more and more dominant as a language for building websites. I hope Moz recognizes this, because this toolset is awesome and I'd love to continue using it.
-
Is there still no update on this from Moz?
A number of sites I work on use AngularJS with pushState. Is there a way to point the Moz bot to the escaped-fragment static pages?
-
Static rendering is not cloaking; it's a very common practice that Google actually recommends. The issue with AngularJS is that everything is code-based: if you were to look at the raw source, all the pages would look the same. In fact, MozBot sees every page as duplicate content.
https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot
It would be nice to see the MozBot act more like Googlebot.
-
What do you mean by "We have deployed a couple updates to render pages for the bots"? That sounds like cloaking.
-
Hello, Josh
Currently our crawlers do not process any kind of JavaScript found on pages (including pages created with Angular.js). I don't know whether the major search engines have this restriction or not.
For Moz's crawlers, this means that links created through AJAX or other JavaScript will not be picked up. Links appearing in static content, including those within <noscript> tags, should be noticed and indexed. Be aware that even if you've already made changes exposing links in the page's static content, it can take up to a week for the campaign crawl to catch up.
Hopefully that answered your questions! Let us know if you have any more.
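A quick illustration of that distinction, assuming a simplified non-JS crawler that only reads raw HTML (the markup and the naive extraction below are made up for the example):

```javascript
// A static HTML shell for an Angular app: the ng-view container is empty
// until JavaScript runs, but the <noscript> links exist in the raw markup.
const staticHtml = `
  <div ng-view></div>
  <noscript>
    <a href="/about">About</a>
    <a href="/contact">Contact</a>
  </noscript>`;

// Naive href extraction, standing in for what a crawler that does not
// execute JavaScript can see. Links Angular injects at runtime never
// appear in this string, so they are invisible to such a crawler.
function extractStaticLinks(html) {
  const links = [];
  const re = /href="([^"]+)"/g;
  let m;
  while ((m = re.exec(html)) !== null) links.push(m[1]);
  return links;
}

console.log(extractStaticLinks(staticHtml)); // [ '/about', '/contact' ]
```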
Related Questions
-
Crawlers reporting upper-case URL versions although these have been 301'd to lower case!?
Hi, I have a client e-commerce site whose dev platform is on a Windows server. Their product pages have been auto-named after the product title, with the first letter of each word in upper case, which has translated into the URLs having upper-case instances too. I asked them to set up 301 redirects from all URLs with upper-case instances to the lower-case versions, which they say they have done. However, I'm still seeing URLs with upper-case instances showing up in Webmaster Tools and Moz crawl reports, even though when I copy and paste them into a browser they do redirect to, and resolve at, the lower-case version. It's also the upper-case versions that are reported in the Google cache! So why are Webmaster Tools and Moz reporting the upper-case versions? Surely if they're redirected it should be the lower-case versions. All best, Dan
Moz Pro | Dan-Lawrence
-
I have duplicate content in my Moz crawl, but Google hasn't indexed those pages: do I still need to get rid of the tags?
I received an urgent error from the Moz crawler saying that I have duplicate content on my site due to the tags I use. For example, http://www.1forjustice.com/graves-amendment/ duplicates the real article, found here: http://www.1forjustice.com/car-accident-rental-car/. I didn't think this was a big deal, because when I looked at my GWT these pages weren't indexed (picture attached). Question: should I bother fixing this from an SEO perspective? If Google isn't indexing the pages, am I losing link juice?
Moz Pro | Perenich
-
Why does the SEOmoz crawler not see my snapshot?
I have a web app that uses AngularJS and the content is all dynamic (SPA). I have generated snapshots for the pages and wrote a rule to redirect (301) to the snapshot whenever _escaped_fragment_ is found in the URL. E.g. for http://plure.com/#!/imoveis/venda/rj/rio-de-janeiro, the request http://plure.com/?_escaped_fragment_=/imoveis/venda/rj/rio-de-janeiro is redirected to http://plure.com/snapshots/imoveis/venda/rj/rio-de-janeiro/. The snapshot is a headless page generated by PhantomJS. Even following the guideline (https://developers.google.com/webmasters/ajax-crawling/docs/specification) I still can't see my pages crawled, and in SEOmoz I can only see the first page crawled, with no dynamic content on it. Am I doing something wrong? Is SEOmoz supposed to fetch the snapshot based on the same rules as Googlebot, or does SEOmoz not fetch snapshots?
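The redirect rule described above can be sketched as a plain function (server wiring omitted; the function name and path handling are illustrative, not the poster's actual config):

```javascript
// Map an incoming request URL to its PhantomJS snapshot path, if the
// request carries an _escaped_fragment_ parameter; otherwise return null
// to signal that the normal SPA shell should be served.
function snapshotLocation(requestUrl) {
  const url = new URL(requestUrl);
  const fragment = url.searchParams.get('_escaped_fragment_');
  if (fragment === null) return null;
  // e.g. /imoveis/venda/rj/rio-de-janeiro -> /snapshots/imoveis/venda/rj/rio-de-janeiro/
  const path = fragment.replace(/^\//, '').replace(/\/$/, '');
  return '/snapshots/' + path + '/';
}

console.log(snapshotLocation(
  'http://plure.com/?_escaped_fragment_=/imoveis/venda/rj/rio-de-janeiro'
));
// /snapshots/imoveis/venda/rj/rio-de-janeiro/
```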
Moz Pro | plure_seo
-
SEOmoz crawler not crawling my site
We set up a new campaign in SEOmoz on Friday. It is my understanding that the preliminary crawl can cover up to 250 pages, and this has been our experience in the past. However, this preliminary crawl only went through 2 pages. This is a larger e-commerce site with many pages. Any ideas why more pages weren't crawled? We set up the campaign to track at the root-domain level.
Moz Pro | IMM
-
Why do crawlers still track meta keywords if they are not needed on my site?
I have crawled three sites already and it returns more than 5,000 errors, most of which are missing meta keywords tags. The sites are on WordPress, and using my SEO plugin I can easily edit the meta keywords of each page, but I am having second thoughts. Should I?
Moz Pro | jernest002
-
SEOmoz crawler problems
I have had SEOmoz for about a month. It has crawled about 1,000 pages, but I have about 10,000 pages total on the site. Why aren't the others being crawled? I have contacted support, but the guy isn't any help; we have just been going back and forth for the last two weeks. Any suggestions?
Moz Pro | EcommerceSite
-
What does the SEOmoz crawler take into account?
I'm working on a page that has links from some decent pages pointing to it, but a lot of them are low-value blog comments, so I'm pretty sure its Page Authority is higher than it should be, compared to where it's ranking. Does SEOmoz take the type of link into account? I.e., does a footer link, blog comment, or forum-signature link carry less weight than a link in the content of the page itself, as it does with Google?
Moz Pro | seanmccauley
-
Why does SEOMoz crawler ignore robots.txt?
The SEOmoz crawler ignores robots.txt. It also "indexes" pages marked as noindex. That means it is filling up the reports with things that don't matter. Is there any way to stop it doing that?
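For reference, Moz's campaign crawler identifies itself as rogerbot, so a robots.txt rule aimed specifically at it would look like this (the disallowed path is illustrative, not from the original question):

```
User-agent: rogerbot
Disallow: /private/
```

Whether the crawler honors the rule is exactly what this question is asking about, but this is the standard way to target it.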
Moz Pro | loopyal