Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will remain viewable), we have locked both new posts and new replies.
Site Audit Tools Not Picking Up Content Nor Does Google Cache
-
Hi Guys,
Got a site I am working with on the Wix platform. However, site audit tools such as Screaming Frog, Ryte, and even Moz's on-page crawler show the pages as having no content, despite them having 200+ words. Fetching the site as Google clearly shows the rendered page with content; however, when I look at the Google cached pages, they also show just blank pages.
I have had issues with nofollow and noindex on here before, but the meta tags show up correctly; it's just that no content is detected.
What would you look at to diagnose this? I am guessing some rogue JS, but then why wasn't it picked up by "Fetch as Google"?
-
Hi Team,
I am facing a problem with one of my websites: Google is caching the page (when checked using the cache: operator) but displaying a 404 message in the body of the cached version.
But when I check the same page in the text-only version, the complete content and elements are visible to Google; GSC also shows the page with no issues, and rendering is fine.
The canonicals and robots directives are properly set, with no issues on them.
I am not able to figure out what the problem is. Expert advice would help! Regards,
Ryan -
Hey Neil
Wow, we are really chuffed here at Effect Digital! I guess... we have a lot of combined experience, and we also try to give something back to the community (as well as making a profit, obviously).
We didn't actually know how many people used the Moz Q&A forum until recently. It seemed like a good hub to demonstrate that not all agency accounts have to exist to give shallow one-liner replies from a position of complete ignorance (usually just so they can link-spam the comments). Groups of people **can** be insightful and to the point.
Again, we're just really thrilled that you found our analysis useful. It also shows what goes into what we do. Most of the under-detailed responses on here have the potential to lead people down rabbit holes. Sometimes you just have to get into the thick of it, right?
I think our email address is publicly listed on our profile page. Feel free to hit us up.
-
My Friend,
That is some analysis you have done there!! And I am eternally grateful. It's people like you, who are clearly so passionate about SEO, that make our industry amazing!!
I am going to private message you a longer reply later, but I just wanted to publicly say thank you!!
Regards
Neil
-
Ok let's have a look here.
So this is the URL of the page you want me to look at:
I can immediately tell you that, from my end, it doesn't look like Google has cached this page at all:
- http://webcache.googleusercontent.com/search?q=cache:https%3A%2F%2Fwww.nubalustrades.co.uk%2F (live)
- https://d.pr/i/DhmPEr.png (screenshot)
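If you want to repeat that cache check programmatically, here's a minimal sketch, assuming Python with the requests library. It just hits the same webcache URL pattern shown above; in my experience, Google's cache endpoint answers with a 404 when it holds no copy of the page:

```python
import requests
from urllib.parse import quote

URL = "https://www.nubalustrades.co.uk/"
cache_url = "http://webcache.googleusercontent.com/search?q=cache:" + quote(URL, safe="")

# A 404 here generally means Google holds no cached copy of the page
resp = requests.get(cache_url, timeout=30, headers={"User-Agent": "Mozilla/5.0"})
print(resp.status_code, "=> no cached copy" if resp.status_code == 404 else "=> cached copy found")
```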
As you know, I can't fetch someone else's web page as Google, but I do know Screaming Frog pretty well, so let's give that a blast.
First let's try a quick crawl with no client-side rendering enabled, see what that comes back with:
- https://d.pr/f/u3bifA.seospider (SF crawl file)
- https://d.pr/f/9TfNR5.xlsx (Excel spreadsheet output)
Seems as if, even without rendered crawling, the words are being picked up:
Only the rows highlighted in green (the 'core' site URLs) should have a word count anyway. The other URLs are fragments and resources. They're scripts, stylesheets, images etc (none of which need copy).
Let's try a rendered crawl, see what we get:
- https://d.pr/f/ijprbx.seospider (SF crawl file)
- https://d.pr/f/c8ljoF.xlsx (Excel spreadsheet output)
Again, it seems as if the words are picked up, though oddly fewer are picked up with rendered crawling than with a simple AJAX source scrape:
That could easily be something to do with my time-out or render-wait settings, though (that being said, I did give a pretty generous 23 seconds, so...).
In any case, it seems to me that the content is search readable in either event.
Let's look at the homepage specifically, in more detail. Basically, if content appears in "inspect element" but not in "view source", **that's** when you know you have a real problem.
- view-source:https://www.nubalustrades.co.uk/ (copy and paste this into your browser's address bar; Chrome supports view-source: URLs but won't open them from a click)
As you can see, lots of the content does indeed appear in the 'base' source code:
That's a good thing.
That being said, each piece of content seems to be replicated twice in the source code, which is really weird and may create content duplication issues if Google's simpler crawl bots aren't taking the time to analyse the source code correctly.
Go back here:
- view-source:https://www.nubalustrades.co.uk/ (again, paste this into the address bar)
Ctrl+F to find the string of text: "issued by the British Standards Institution". Hit enter a few times. You'll see the page jump about.
On the one hand you have this, further up the page which looks alright:
On the other hand you have this further down which looks like a complete mess, embedded within some kind of script or something?
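You can reproduce that Ctrl+F check programmatically too, and count the occurrences. A minimal sketch, assuming Python with the requests library (the phrase is the same one quoted above):

```python
import requests

URL = "https://www.nubalustrades.co.uk/"
NEEDLE = "issued by the British Standards Institution"

# Fetch the 'base' (non-rendered) HTML, exactly as a simple crawler receives it
html = requests.get(URL, timeout=30, headers={"User-Agent": "Mozilla/5.0"}).text

count = html.count(NEEDLE)
print(f"Phrase appears {count} time(s) in the base source")
# 0  => content only exists after client-side rendering (the genuinely bad case)
# 1  => content sits in the base HTML (good)
# 2+ => content is duplicated, e.g. once in the markup and once inside an inlined script
```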
Line 6,212 of the source code is some gigantic JavaScript thing which has been inlined (and don't get me started on how this site over-uses inline code in general, for CSS, JS, everything). No idea what it's for or does; it might be deferred stuff to boost page speed without breaking the visuals, or whatever (there are many clever tricks like that, but they make the source code a virtually unreadable mess for a human, let alone a programmed bot!)
What really concerns me is why such a simple page needs to have 6,250 lines of source code. That's mental!
What we all forget is that, whilst the crawl and fetch bots pull information quickly, Google's algorithms have to be run over the top of that source code and data (which is a much more complex affair).
Usually people think that normalising the code-to-text ratio is a pointless SEO manoeuvre, and in most cases, yes, the return is vastly outweighed by the time taken to do it. But in your case it's actually very extreme:
Put your URL in and you'll get this:
I tried like 5-8 different tools and this was the most favorable result :')
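If you'd rather not rely on a random online tool, the rough calculation is easy to sketch yourself. This assumes Python (requests plus the stdlib html.parser); note that different tools define the ratio slightly differently, so treat the number as a rough indicator only:

```python
import re
import requests
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1
    def handle_data(self, data):
        if not self.skip_depth:
            self.chunks.append(data)

html = requests.get("https://www.nubalustrades.co.uk/", timeout=30).text
parser = TextExtractor()
parser.feed(html)
text = re.sub(r"\s+", " ", " ".join(parser.chunks)).strip()

ratio = len(text) / len(html) * 100
print(f"{len(text):,} chars of visible text in {len(html):,} chars of source ({ratio:.1f}% text-to-code)")
```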
It's clear that, even if the page were successfully downloaded by Google, its algorithms might have trouble hunting out the nuggets of content within the vast, sprawling and unnecessary coding structure. My older colleagues had always warned me away from Wix... now I can see why, with my own two eyes.
Ok. So we know that Google isn't bothering to cache the page, and that, although your content can 'technically' be crawled, digging it out may be a marathon (especially for non-intelligent robots).
But is the content being indexed? Let's check:
- https://www.google.co.uk/search?q=site%3Anubalustrades.co.uk+%22issued+by+the+British+Standards+Institution%22
- https://www.google.co.uk/search?num=100&ei=q_MYXMj3EM_srgSNh6LYCQ&q=site%3Anubalustrades.co.uk+%22product+and+your+happy+with%22
- https://www.google.co.uk/search?num=100&ei=6vMYXPuLC4yYsAXAoKfAAg&q=site%3Anubalustrades.co.uk+%22Some+customers+like+to+have+more+than+one+balustrade%22
- https://www.google.co.uk/search?num=100&ei=CPQYXOmJFYu6tQXi8arwBA&q=site%3Anubalustrades.co.uk+%22installations+which+will+help+you+visualise+your+future+project%22
- https://www.google.co.uk/search?num=100&ei=KvQYXMyhC4LStAWopbqACg&q=site%3Anubalustrades.co.uk+%22Cleanly-designed%2C+high-quality+handrail+systems+combined+with+attention%22
Those are all special Google search queries, designed to search specifically for strings of content from all the different primary content boxes on your website.
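For reference, each of those is just a site: restriction combined with an exact-match quoted phrase. Here's a quick sketch of how you could generate them for any list of phrases, assuming Python (the phrases below are taken from the queries above):

```python
from urllib.parse import quote_plus

SITE = "nubalustrades.co.uk"
phrases = [
    "issued by the British Standards Institution",
    "Some customers like to have more than one balustrade",
    "installations which will help you visualise your future project",
]

# One exact-match indexation check per content block
for phrase in phrases:
    query = quote_plus(f'site:{SITE} "{phrase}"')
    print(f"https://www.google.co.uk/search?q={query}")
```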
Good news fella, it's all being found:
Let's make up an invalid text string and see what Google returns when text can't be found, to validate our findings thus far:
If nothing is found you get this:
So I guess Google can find your content and is indexing it.
Phew, crisis over! Onto the next one...
-
Hi There,
This is the URL:
https://www.nubalustrades.co.uk/
It would be great if you could give me your opinion. I am thinking that this content isn't being indexed.
Regards
Neil
-
If you can share a link to the site I can probably diagnose it. It's probably that the content is within the modified (client-side rendered) source code, rather than the 'base' (non-modified) source code. Google fetches pages in multiple different ways, so using fetch as Google artificially makes it seem as if they always use exactly the same crawling technology. They don't.
Google 'can' crawl modified content. But they don't always do it, and they don't do it for everyone. Rendered crawling takes something like 10x longer than basic source scraping, and their mission is to index the whole web!
The fetch tool shows you their best-case crawling methodology. Don't assume their indexation bots, which have a mountain to climb, will always be so favourable.
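If you do share the link, this is essentially the first check I'd run: compare the base source with the rendered DOM. A minimal sketch, assuming Python with requests and Selenium installed (the URL and phrase below are placeholders, not real values):

```python
# pip install requests selenium  (plus a ChromeDriver on your PATH)
import time
import requests
from selenium import webdriver

URL = "https://example-wix-site.com/"          # placeholder: the site to test
NEEDLE = "a phrase from the page's body copy"  # placeholder: text you expect to see

# 1. Base source: what a simple scraper (or a non-rendering crawler) receives
base_html = requests.get(URL, timeout=30).text

# 2. Rendered source: the DOM after JavaScript has run
driver = webdriver.Chrome()
driver.get(URL)
time.sleep(5)  # give client-side rendering a few seconds to settle
rendered_html = driver.page_source
driver.quit()

print("In base source:    ", NEEDLE in base_html)
print("In rendered source:", NEEDLE in rendered_html)
# Present only in the rendered source => the content depends on client-side rendering
```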
-
Just an update on this one
Looks like it may be a problem with Wix
https://moz.com/community/q/wix-problem-with-on-page-optimization-picking-up-seo
I have another client who also uses Wix, and they also show no content in Screaming Frog, but worryingly their pages do show in a cached version of the site. I know the cache isn't the best way to see what content is indexed, and Fetch as Google is fine.
I just get the feeling something isn't right.