Fetching & Rendering a non-ranking page in GWT to look for issues
-
Hi
I have a client's nicely optimised webpage that isn't ranking for its target keyword, so I just did a Fetch & Render in GWT to look for problems. I could only do a partial fetch, with the robots.txt-related messages below:
Googlebot couldn't get all resources for this page
Some boilerplate JS plugins were not found, and some comment-reply JS was blocked by robots.txt (file below):
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
As far as I understand it, the above is how it should be, but I'm posting here to ask if anyone can confirm whether this could be causing any problems, so I can rule it in or out.
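For anyone wanting to double-check, here's a minimal sketch of what those rules block for Googlebot, using Python's standard `urllib.robotparser` (the jquery path and example.com domain are just hypothetical illustrations):

```python
from urllib import robotparser

# Parse the same rules as the robots.txt file quoted above.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-admin/",
    "Disallow: /wp-includes/",
])

# WordPress core scripts under /wp-includes/ are blocked for Googlebot...
print(rp.can_fetch("Googlebot", "https://example.com/wp-includes/js/jquery/jquery.js"))
# ...but ordinary pages remain crawlable.
print(rp.can_fetch("Googlebot", "https://example.com/some-page/"))
```

So the robots.txt itself wouldn't stop the page being crawled and indexed; it only stops Googlebot fetching those supporting scripts when rendering.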
Pages targeting other, more competitive keywords are ranking well and are almost identically optimised, so I can't think why this one isn't ranking.
Does Fetch & Render get Google to re-crawl the page? So if I do this and then press Submit to Index, should I know within a few days whether there's still a problem?
All Best
Dan
-
OK, thanks!
Nothing has changed; I just hoped it might do something.
-
If anything changed between the 15th and today, it'll help ensure it gets updated. But that's all.
-
Thanks, Donna! Yes, it's all there and the cache date is 15 Jan, but I still thought it worthwhile to fetch & render and submit again. Or does that do nothing more than ask Google to take another look if it's already indexed?
-
Can you see if it's cached? Try cutting and pasting the entire URL into the search window, minus the http://. If it's indexed, it should show up in search results. Not the address bar, the search window.
-
Thanks for commenting, Donna!
And for providing the link to the interesting Q&A, although it isn't the scenario I'm referring to in my original question.
The page isn't ranking at all, although it's very well optimised (and not overly so), and the keyword isn't that competitive, so I would expect it to be somewhere in the first 3-4 pages, but it's not even in the first 100.
Very similarly optimised pages (for other target keywords, which are more competitive) are ranking well. Hence the Fetch & Render and Submit to Index I did, just to double-check that Google's seeing the page.
Cheers
Dan
-
Hi Dan,
You might find this Q&A helpful. It offers suggestions for what to do when an unexpected page is ranking for your targeted keyword phrase. I think most, if not all, suggestions apply in your case as well. Good luck!
-
Marvellous!
Many thanks, Robert!
All Best
Dan
-
Yes, there are a lot of overlaps when it comes to GWT. For the most part, if you make a submission request for crawling, the page is indexed at the same time. I believe the difference lies in the approaches that let you crawl as Google, as opposed to submitting for the official index.
In other words, what you have done is a definitive step in crawling and indexing, as opposed to seeing what Google would find if it were to crawl your site (as a test). "Submit to Index" is normally something I reserve for completed sites (as opposed to stub content), to avoid accidentally getting unfinished pages indexed.
In your circumstances, however, I don't think it will hurt, and it may help you identify any outstanding issues. Just remember to avoid it if you don't want a site indexed before it is ready!
Hope this helps,
Rob
-
Hi Robert,
Thanks for your help again!
That's great, thanks, but what about 'Submit to Index', which I did as well? Did I need to do that or not? (The sitemap section of GWT says all submitted pages are indexed, so I take it I didn't need to, but I did it anyway as a precaution.)
All Best
Dan
-
Hello again, Dan,
From what I can tell from your description, you have done what you can to make this work. We would expect JS to be blocked by that robots.txt file.
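If it turns out those blocked scripts do matter for rendering, one common pattern (a sketch only, assuming the assets live under the default WordPress paths) is to give Googlebot an explicit allowance for JS and CSS while keeping the directories otherwise disallowed, since Google resolves conflicts in favour of the more specific matching rule:

```
User-agent: Googlebot
Allow: /wp-includes/*.js
Allow: /wp-includes/*.css
Disallow: /wp-admin/
Disallow: /wp-includes/
```

Whether this is worth doing depends on whether the blocked files actually change what renders on the page.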
To answer your questions:
Fetch & Render does allow Google to re-crawl the page using GWT. A request of this nature typically takes 1 to 3 days to process, so you should know where you stand at that point.
Feel free to put an update here and if there is further information I will see what I can do to help out.
Cheers!
Rob