Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
What's the best way to test Angular JS heavy page for SEO?
-
Hi Moz community,
Our tech team has recently decided to try switching our product pages to be JavaScript dependent; this includes rendering links, product descriptions, and elements like breadcrumbs in JS. Given my concerns, they will create a proof of concept with a few product pages in a QA environment so I can test the SEO implications of these changes. They are planning to use Angular 5 client-side rendering without any prerendering. I suggested Angular Universal (server-side rendering), but they said the lift was too great, so we're testing to see if client-side rendering works.
I've read many of the articles in this guide to all things SEO and JS, and I'm fairly confident I understand how to tell when a site relies on JS and how to troubleshoot to make sure everything is getting crawled and indexed.
https://sitebulb.com/resources/guides/javascript-seo-resources/
However, I am not sure I'll be able to test the QA pages properly, since they aren't indexable and live behind a login. I will be able to crawl the pages using Screaming Frog, but its render is generally regarded as an indication of what a crawler should be able to crawl, not what Googlebot will actually be able to crawl and index.
Any thoughts on this, is this concern valid?
Thanks!
-
Hi Zack,
I think your concern here is valid (your render with Screaming Frog or any other client is unlikely to be precisely representative of what Googlebot will see/index). That said, I'm not sure there's much you can do to eliminate this knowledge gap for your QA process.
For instance, while we have seen Googlebot time out JS rendering at around the 5-second mark when using the "Fetch & Render as Googlebot" functionality in Search Console (see slide 25 of Max Prin's slide deck), there's no confirmation that this time limit represents Googlebot's behavior in the wild.
Additionally, we know that Googlebot crawls with limited JS support. For instance, when a script uses JS to generate a random number, my colleague Tom Anthony found that Googlebot's random() function is deterministic (it returns a predictable sequence), so it's clear they have modified the headless version of Chrome they use in order to conserve computing resources. We can only assume they've taken other cost-saving steps as well, and none of this behavior is baked into Screaming Frog or any other crawling tool.
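If you want to sanity-check that kind of behavior yourself, one rough approach is to write a few random values into the DOM so they show up in whatever rendered HTML a tool returns. Here's a minimal sketch - the element id and sample count are just placeholders, not anything Google documents - where identical values coming back on every fetch would suggest the renderer's random() is effectively deterministic:

```typescript
// Hypothetical diagnostic, not a Googlebot API: write a few Math.random()
// values into the DOM so they appear in the rendered HTML returned by
// "Fetch & Render" or a crawling tool. If identical values come back on
// every fetch, the renderer's random() is effectively deterministic;
// a normal browser produces different values each time.
function probeRandomDeterminism(sampleSize = 5): void {
  const samples = Array.from({ length: sampleSize }, () => Math.random().toFixed(8));
  const probe = document.createElement('div');
  probe.id = 'render-probe'; // assumed id, purely for locating the output
  probe.textContent = `random-samples: ${samples.join(', ')}`;
  document.body.appendChild(probe);
}

probeRandomDeterminism();
```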
We have seen that with a 5-second rendering timeout set in Screaming Frog, the rendered result is pretty close to what the "Fetch & Render as Googlebot" functionality shows. And given the ubiquity of JS-driven content on the web today, provided links and content are rendered into the DOM quickly (well ahead of that 5-second mark), we've seen Google render and index JS content fairly reliably.
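As a rough way to approximate that against your QA pages behind the login, you could render them in headless Chrome yourself with a roughly 5-second budget and diff the result against the raw HTML source. Here's a hedged sketch using Puppeteer - the URL and session cookie are placeholders for your environment, and Puppeteer's Chromium is of course not Googlebot's actual renderer:

```typescript
// Rough sketch, not Googlebot's renderer: load a QA page behind a login,
// allow ~5 seconds for client-side rendering, then dump the serialized DOM.
import puppeteer from 'puppeteer';

const QA_URL = 'https://qa.example.com/product/12345';   // placeholder URL
const SESSION_COOKIE = {
  name: 'qa_session',            // placeholder cookie for the QA login
  value: '<token>',
  domain: 'qa.example.com',
};

async function renderSnapshot(): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setCookie(SESSION_COOKIE);

  // Load the page, then give client-side rendering roughly 5 seconds,
  // mirroring the timeout behavior observed in "Fetch & Render".
  await page.goto(QA_URL, { waitUntil: 'domcontentloaded' });
  await new Promise((resolve) => setTimeout(resolve, 5000));

  // Serialized DOM after rendering; diff this against the raw HTML source
  // to see which links, descriptions, and breadcrumbs depend on JS.
  const renderedHtml = await page.content();
  console.log(renderedHtml);

  await browser.close();
}

renderSnapshot().catch(console.error);
```

Pages where important links or content only appear in that rendered output, and not in the raw source, are the ones carrying the most rendering risk.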
The ideal would be for your dev team to code these pages to degrade gracefully - so that even with JS support totally disabled, navigation and content elements are still rendered (they should be delivered in the page source, then enhanced with JS, if possible).
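To illustrate the "deliver in the source, enhance with JS" idea, here's a minimal sketch - the breadcrumb class name and router hook are assumptions, not your actual implementation. The links already exist in the server-delivered HTML, so they're crawlable with JS disabled, and the script only layers client-side navigation on top of working plain anchors:

```typescript
// Minimal progressive-enhancement sketch (assumed markup: breadcrumb links
// with class "breadcrumb-link" already present in the server-delivered HTML,
// so they remain crawlable with JS disabled).
declare function navigateClientSide(url: string): void; // hypothetical router hook

document.querySelectorAll<HTMLAnchorElement>('a.breadcrumb-link').forEach((link) => {
  link.addEventListener('click', (event) => {
    event.preventDefault();         // skip the full page load...
    navigateClientSide(link.href);  // ...and navigate via the client-side router
  });
});
```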
Failing that, the best you're likely to achieve here is reasonable confidence that Googlebot can crawl, render, and index these pages - there will still be some risk when you publish them to production.
Hope this helps somewhat - best of luck!
Thanks,
Mike