Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Site Audit Tools Not Picking Up Content Nor Does Google Cache
-
Hi Guys,
I've got a site I'm working with on the Wix platform. However, site audit tools such as Screaming Frog, Ryte and even Moz's on-page crawler show the pages as having no content, despite each having 200+ words. Fetching the site as Google clearly shows the rendered page with content; however, when I look at the Google cached pages, they show just blank pages.
I have had nofollow/noindex issues on here before, but the meta tags now show as correct - it's just the content that comes back as zero.
What would you look at to diagnose this? I'm guessing some rogue JS, but then why wasn't it picked up by "Fetch as Google"?
-
Hi Team,
I am facing a problem with one of my websites: Google is caching the page (checked using the cache: operator) but displaying a 404 message in the body of the cached version.
When I check the same page in the 'text-only version', however, the complete content and all elements are visible to Google. GSC also shows the page with no issues, and rendering is fine.
The canonicals and robots directives are properly set, with no issues.
I'm not able to figure out what the problem is. Expert advice would help!
-
Hey Neil
Wow, we are really chuffed here at Effect Digital! I guess we have a lot of combined experience - and we also try to give something back to the community (as well as making a profit, obviously).
We didn't actually know how many people used the Moz Q&A forum until recently. It seemed like a good hub to demonstrate that not all agency accounts have to exist to give shallow one-liner replies from a position of complete ignorance (usually just so they can link-spam the comments). Groups of people **can** be insightful and to the point.
Again, we're just really thrilled that you found our analysis useful. It also shows what goes into what we do. Most of the under-detailed responses on here have the potential to lead people down rabbit holes. Sometimes you just have to get into the thick of it, right?
I think our email address is publicly listed on our profile page. Feel free to hit us up.
-
My Friend,
That is some analysis you have done there!! I am eternally grateful. It's people like you, who are clearly so passionate about SEO, that make our industry amazing!!
I am going to private message you a longer reply later, but I just wanted to publicly say thank you!!
Regards
Neil
-
Ok let's have a look here.
So this is the URL of the page you want me to look at:
I can immediately tell you that, from my end it doesn't look like Google has even cached this page at all:
- http://webcache.googleusercontent.com/search?q=cache:https%3A%2F%2Fwww.nubalustrades.co.uk%2F (live)
- https://d.pr/i/DhmPEr.png (screenshot)
As you know, I can't fetch someone else's web page as Google, but I do know Screaming Frog pretty well, so let's give that a blast.
First, let's try a quick crawl with no client-side rendering enabled and see what that comes back with:
- https://d.pr/f/u3bifA.seospider (SF crawl file)
- https://d.pr/f/9TfNR5.xlsx (Excel spreadsheet output)
Seems as if, even without rendered crawling, the words are being picked up:
Only the rows highlighted in green (the 'core' site URLs) should have a word count anyway. The other URLs are fragments and resources - scripts, stylesheets, images etc. (none of which need copy).
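If you want to sanity-check this yourself without Screaming Frog: a non-rendered word count is conceptually just the whitespace-separated tokens left in the raw HTML once script and style contents are ignored. A minimal stdlib Python sketch (illustrative only - not SF's actual logic, and real crawlers handle far more edge cases):

```python
# Count "visible" words in raw (non-rendered) HTML: parse the source,
# skip anything inside <script>/<style>, and count tokens in what remains.
from html.parser import HTMLParser

class VisibleTextCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # >0 while inside <script> or <style>
        self.words = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.skip_depth == 0:
            self.words += len(data.split())

def word_count(html: str) -> int:
    parser = VisibleTextCounter()
    parser.feed(html)
    return parser.words

sample = ("<html><head><style>p{color:red}</style></head>"
          "<body><p>Balustrade systems built to order</p>"
          "<script>var x = 'not copy';</script></body></html>")
print(word_count(sample))  # -> 5
```

Running this against the 'base' source of a page gives you a rough equivalent of the non-rendered word counts in the crawl file above.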
Let's try a rendered crawl, see what we get:
- https://d.pr/f/ijprbx.seospider (SF crawl file)
- https://d.pr/f/c8ljoF.xlsx (Excel spreadsheet output)
Again, it seems the words are picked up, though oddly fewer come through with rendered crawling than with a simple source scrape:
That could easily be down to my time-out or render-wait settings, though (that said, I did allow a pretty generous 23 seconds, so...).
In any case, it seems to me that the content is search readable in either event.
Let's look at the homepage specifically, in more detail. Basically, if content appears in "Inspect Element" but not in "View Source", **that's** when you know you have a real problem.
- view-source:https://www.nubalustrades.co.uk/ - (you may need to paste this into the address bar, as most browsers won't open view-source: links directly)
As you can see, lots of the content does indeed appear in the 'base' source code:
That's a good thing.
That being said, each piece of content seems to be replicated twice in the source code, which is really weird and may create content-duplication issues if Google's simpler crawl bots aren't taking the time to parse the source code correctly.
Go back here:
- view-source:https://www.nubalustrades.co.uk/ - (again, paste into the address bar if it won't open directly)
Ctrl+F to find the string "issued by the British Standards Institution" and hit Enter a few times. You'll see the page jump about.
On the one hand you have this, further up the page which looks alright:
On the other hand, you have this further down, which looks like a complete mess, embedded within some kind of script:
Line 6,212 of the source code is some gigantic JavaScript blob that has been inlined (and don't get me started on how this site over-uses inline code in general - CSS, JS, everything). No idea what it's for or does; it might be deferred loading to boost page speed without breaking the visuals (there are many clever tricks like that, but they make the source code virtually unreadable for a human - let alone a programmed bot!).
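A quick way to confirm this kind of duplication programmatically: count how many times a distinctive on-page phrase appears in the raw source. One copy in the markup plus another inside an inlined script payload means the copy is being shipped twice. (The sample source below is illustrative - it is not the real Wix output.)

```python
# If a unique phrase appears more than once in the raw source, the content
# is duplicated somewhere - often once as markup and once inside a script.
phrase = "issued by the British Standards Institution"

sample_source = (
    "<p>All our systems are issued by the British Standards Institution.</p>"
    '<script>window.data = {"body": "... issued by the British Standards'
    ' Institution ..."};</script>'
)

print(sample_source.count(phrase))  # -> 2
```

Anything above 1 for a phrase that should only appear once is worth investigating: check whether the second hit sits inside a script block.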
What really concerns me is why such a simple page needs 6,250 lines of source code. That's mental!
What we all forget is that, whilst the crawl and fetch bots pull information quickly, Google's algorithms then have to run over the top of that source code and data (a much more complex affair).
Usually people think that normalising the code-to-text ratio is a pointless SEO manoeuvre, and in most cases, yes, the return is vastly outweighed by the time taken to do it. But in your case the imbalance is actually very extreme:
Put your URL into a code-to-text ratio checker and you'll get something like this:
I tried five or so different tools and this was the most favourable result :')
It is clear that, even if the page were successfully downloaded by Google, their algorithms may have trouble hunting out the nuggets of content within the vast, sprawling and unnecessary coding structure. My older colleagues had always warned me away from Wix... now I can see why, with my own two eyes.
Ok. So we know that Google isn't bothering to cache the page, and that, although your content can 'technically' be crawled, it may be a marathon to dig it out (especially for non-intelligent robots).
But is the content being indexed? Let's check:
- https://www.google.co.uk/search?q=site%3Anubalustrades.co.uk+%22issued+by+the+British+Standards+Institution%22
- https://www.google.co.uk/search?num=100&ei=q_MYXMj3EM_srgSNh6LYCQ&q=site%3Anubalustrades.co.uk+%22product+and+your+happy+with%22
- https://www.google.co.uk/search?num=100&ei=6vMYXPuLC4yYsAXAoKfAAg&q=site%3Anubalustrades.co.uk+%22Some+customers+like+to+have+more+than+one+balustrade%22
- https://www.google.co.uk/search?num=100&ei=CPQYXOmJFYu6tQXi8arwBA&q=site%3Anubalustrades.co.uk+%22installations+which+will+help+you+visualise+your+future+project%22
- https://www.google.co.uk/search?num=100&ei=KvQYXMyhC4LStAWopbqACg&q=site%3Anubalustrades.co.uk+%22Cleanly-designed%2C+high-quality+handrail+systems+combined+with+attention%22
Those are all special Google search queries, designed to search your website for specific strings of content taken from each of the primary content boxes.
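The query URLs follow a simple pattern: a `site:` restriction plus an exact quoted phrase, URL-encoded. A small sketch of how to generate them for any page (the domain and phrase here are just the ones from this thread):

```python
# Build a Google exact-phrase indexation check: site:domain "phrase".
from urllib.parse import quote_plus

def site_phrase_query(domain: str, phrase: str) -> str:
    query = f'site:{domain} "{phrase}"'
    return "https://www.google.co.uk/search?q=" + quote_plus(query)

print(site_phrase_query("nubalustrades.co.uk",
                        "issued by the British Standards Institution"))
# -> https://www.google.co.uk/search?q=site%3Anubalustrades.co.uk
#    +%22issued+by+the+British+Standards+Institution%22
```

If a query like this returns a result, that exact string made it into Google's index; if it returns nothing for copy that has been live a while, you have an indexation problem.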
Good news fella, it's all being found:
Let's make up an invalid text string and see what Google returns when text can't be found, to validate our findings thus far:
If nothing is found you get this:
So I guess Google can find your content and is indexing it.
Phew, crisis over! Onto the next one...
-
Hi There,
This is the URL:-
https://www.nubalustrades.co.uk/
Be great if you could give me your opinion. I am thinking that this content isn't being indexed.
Regards
Neil
-
If you can share a link to the site, I can probably diagnose it. It's probably that the content sits within the modified (client-side rendered) source code rather than the 'base' (non-modified) source code. Google fetches pages in multiple different ways, so using Fetch as Google artificially makes it seem as if they always use exactly the same crawling technology. They don't.
Google 'can' crawl modified content, but they don't always do it, and they don't do it for everyone. Rendered crawling takes something like 10x longer than basic source scraping, and their mission is to index the whole web!
The fetch tool shows you their best-case crawling methodology. Don't assume their indexation bots, which have a mountain to climb, will always be so favourable.
-
Just an update on this one
Looks like it may be a problem with Wix
https://moz.com/community/q/wix-problem-with-on-page-optimization-picking-up-seo
I have another client who also uses Wix, and their pages also show no content in Screaming Frog - but, worryingly, their pages do show in a cached version of the site. I know the cache isn't the best way to see what content is indexed, and Fetch as Google looks fine.
I just get the feeling something isn't right.