Fetching & Rendering a non-ranking page in GWT to look for issues
-
Hi
I have a client's nicely optimised webpage that isn't ranking for its target keyword, so I just did a Fetch & Render in GWT to look for problems. I could only do a partial fetch, with the robots.txt-related messages below:
Googlebot couldn't get all resources for this page
Some boilerplate JS plugins were not found, and the comment-reply JS was blocked by robots.txt (file below):
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
As far as I understand it, the above is how it should be, but I'm posting here to ask if anyone can confirm whether this could be causing any problems, so I can rule it out or not.
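For anyone who wants to verify which resources those rules actually block, Python's standard-library robotparser can be pointed at the same rules (a minimal sketch; the example.com URLs are hypothetical stand-ins for the client's site):

```python
from urllib import robotparser

# Parse the same rules the site serves (inlined here for the sketch).
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-admin/",
    "Disallow: /wp-includes/",
])

# Hypothetical resource URLs; anything under a disallowed directory is blocked.
print(rp.can_fetch("Googlebot", "https://example.com/wp-includes/js/comment-reply.js"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/wp-content/themes/site/style.css"))  # True
```

With no Googlebot-specific group in the file, Googlebot falls back to the `User-agent: *` rules, so the comment-reply script under /wp-includes/ is reported as blocked while theme assets under /wp-content/ are not.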
Pages targeting other, more competitive keywords are ranking well and are almost identically optimised, so I can't think why this one is not ranking.
Does Fetch & Render get Google to re-crawl the page? So if I do this and then press Submit to Index, should I know within a few days whether there's still a problem or not?
All Best
Dan
-
ok thanks !
Nothing has changed; I just hoped it might do something.
-
If anything changed between the 15th and today, it'll help ensure it gets updated. But that's all.
-
Thanks Donna! Yes, it's all there and the cache date is 15 Jan, but I still thought it worthwhile fetching & rendering & submitting again. Or does that do nothing more if it's already indexed, apart from asking Google to take another look?
-
Can you see if it's cached? Try cutting and pasting the entire URL into the search window, minus the http://. If it's indexed, it should show up in search results. Not the address bar, the search window.
-
Thanks for commenting Donna!
And thanks for providing the link to the interesting Q&A, although it isn't the scenario I'm referring to in my original question.
The page isn't ranking at all, even though it's very well optimised (and not overly so) and the keyword isn't that competitive, so I would expect to be somewhere in the first 3-4 pages, but it's not even in the first 100.
Very similarly optimised pages (for other target keywords which are more competitive) are ranking well. Hence the Fetch & Render and Submit to Index I did, just to double-check Google's seeing the page.
Cheers
Dan
-
Hi Dan,
You might find this Q&A helpful. It offers suggestions for what to do when an unexpected page is ranking for your targeted keyword phrase. I think most, if not all, suggestions apply in your case as well. Good luck!
-
Marvellous !
Many Thanks Robert !
All Best
Dan
-
Yes, there is a lot of overlap when it comes to GWT. For the most part, if you make a submission request for crawling, the page is indexed at the same time. I believe the difference lies in the approaches that let you crawl as Google, as opposed to submitting for the official index.
In other words, what you have done is a definitive step in crawling and indexing, as opposed to seeing what Google would find if it were to crawl your site (as a test). "Submit to Index" is normally something I reserve for completed sites (as opposed to stub content), to avoid accidentally getting unfinished pages indexed.
In your circumstances, however, I don't think it will hurt you, and it may help you identify any outstanding issues. Just remember to avoid it if you don't want a site indexed before it is ready!
Hope this helps,
Rob
-
Hi Robert,
Thanks for your help again !
That's great, thanks, but what about 'Submit to Index', which I also did? Did I need to do that or not? (The sitemap section of GWT says all submitted pages are indexed, so I take it I didn't need to, but I did it anyway as a precaution.)
All Best
Dan
-
Hello again, Dan,
From what I can tell from your description, you have done what you can to make this work. We would expect JS to be blocked by that robots.txt file.
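If the blocked JS ever did need to be crawlable, one option (not part of the robots.txt quoted above, so treat this as a hedged sketch) is to add Allow rules for the script directory while leaving the rest of /wp-includes/ disallowed:

```
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Allow: /wp-includes/js/
```

Googlebot honours Allow and gives the more specific (longer) matching rule precedence, so /wp-includes/js/ would become fetchable while other /wp-includes/ paths stay blocked.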
To answer your questions:
Fetch & Render does get Google to re-crawl the page via GWT. A request of this nature typically takes 1 to 3 days to process, so you should know where you stand at that point.
Feel free to put an update here and if there is further information I will see what I can do to help out.
Cheers!
Rob