How do you check the Google cache for hashbang pages?
-
So we use http://webcache.googleusercontent.com/search?q=cache:x.com/#!/hashbangpage to check what Googlebot has cached, but when we try this method for hashbang pages, we get x.com's cache... not x.com/#!/hashbangpage.
That actually makes sense, because the hashbang is part of the homepage in that case, so I get why the cache returns the homepage.
My question is: how can you actually look up the cache for a hashbang page?
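To see why this happens, here's a quick sketch using the standard URL API (x.com is just a stand-in domain): everything from the # onward is a client-side construct that never reaches the server, so the cache: lookup only ever sees the part before it.

```typescript
// The fragment (everything from "#" onward) lives only in the browser;
// it is never sent with the HTTP request, so a lookup for
// cache:x.com/#!/hashbangpage effectively collapses to cache:x.com.
const requested = new URL("http://x.com/#!/hashbangpage");

console.log(requested.hash);     // "#!/hashbangpage" - visible to client-side JS only
console.log(requested.pathname); // "/" - all the server (and the cache) sees
```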
-
I was actually trying to give you the tools to figure out what's cached and indexed. You can just run a site search for the content and look at the cache, though. For example, search for site:x.com along with a unique phrase from the hashbang page in quotes.
If nothing shows up, it's probably not indexed.
-
Thanks, Carson, but that wasn't the question.
The question was how to check the cache.
-
Generally I'd avoid hash fragments or hashbangs if you have large amounts of content you want indexed. Use pushState instead whenever it makes sense for the user to actually change the URL.
The general rule is that if you can see the content in your page source (the Ctrl+U view), it's probably being indexed. That means client-side AJAX content behind hashbangs is generally not indexed, whereas server-rendered content generally is.
If for some reason you must use hashbangs, AND you must use client-rendered content, create an HTML snapshot of your page for Google. Generally, though, that's more effort than changing one of the above.
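For illustration, here's a rough sketch of the pushState approach. The History API itself is standard; the route handling and renderView are hypothetical stand-ins for your own routing logic.

```typescript
// Hypothetical SPA navigation using the History API instead of hashbangs.
// Each view gets a real path, so crawlers see distinct, indexable URLs:
// /stages/my-stage instead of /#!/stages/my-stage.
function navigateTo(path: string): void {
  // Update the address bar without a full page reload.
  history.pushState({ path }, "", path);
  renderView(path);
}

// Keep the UI in sync with the URL when the user hits back/forward.
window.addEventListener("popstate", (event: PopStateEvent) => {
  const state = event.state as { path?: string } | null;
  renderView(state?.path ?? window.location.pathname);
});

// Placeholder renderer - swap in your framework's routing/render logic.
function renderView(path: string): void {
  console.log(`rendering view for ${path}`);
}
```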
-
I think Google has stopped responding to cache requests on hashbang pages altogether.
See here... I'm just playing with random URLs and don't see the Google cache 404'ing as it should: http://recordit.co/XBlo3U2A73
You can put anything there; it won't work.
-
Searching for indexed and duplicate content: I put a line or two in quotes and Googled it. I found most of the UTMs that way. Once you do that, it's a simple change to site:yoursite.com inurl:UTM
-
Thanks a lot, Matt.
I'm curious... how exactly did you find the version with the UTM codes that's being cached?
-
Strangely, browseo sees it correctly: http://www.browseo.net/?url=https%3A%2F%2Fplaceit.net%2F%3F_escaped_fragment_%3D%2Fstages%2Fsamsung-galaxy-note-friends-park
I'm not 100% sure why this is happening on your site specifically. Normally the #! isn't too big of an issue for the cache, but I've seen it have a few hiccups. These pages seem to be indexed fine, but they aren't generating a cache.
I did find a few working, but only those with UTM codes:
This doesn't look like it's working, but view the source code - the content is actually there. I found it by Googling the content in quotation marks.
-
What you're saying makes sense, and our URLs are set up like this, but we still don't see the page's cache - just the homepage comes up when looking up the Google cache with the escaped fragment version:
http://webcache.googleusercontent.com/search?q=cache:https://placeit.net/?_escaped_fragment_=/stages/samsung-galaxy-note-friends-park
https://placeit.net/?_escaped_fragment_=/stages/samsung-galaxy-note-friends-park
homepage - http://webcache.googleusercontent.com/search?q=cache:https://placeit.net/?_escaped_fragment_=
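If you want to script this check across a batch of URLs, here's a rough sketch. It assumes the cache endpoint returns 404 for uncached pages (as it should) and doesn't block scripted requests; neither is guaranteed.

```typescript
// Rough sketch: probe the Google cache for a list of URLs (Node 18+).
async function hasGoogleCache(url: string): Promise<boolean> {
  const cacheUrl =
    "http://webcache.googleusercontent.com/search?q=cache:" +
    encodeURIComponent(url);
  // ok = HTTP 200 (a cache entry exists); a miss should come back 404.
  const res = await fetch(cacheUrl, { redirect: "follow" });
  return res.ok;
}

// Example: compare the homepage against an escaped-fragment URL.
const candidates = [
  "https://placeit.net/",
  "https://placeit.net/?_escaped_fragment_=/stages/samsung-galaxy-note-friends-park",
];

for (const url of candidates) {
  hasGoogleCache(url).then((cached) =>
    console.log(`${cached ? "cached" : "no cache entry"}: ${url}`)
  );
}
```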
-
Let's use a Wix site (not a client, just a sample from their page) as my example. Say you wanted to check:
http://www.kingskolacheny.com/#!press/crr2
In the source code I see the escaped fragment URL. This is the one you can find a cache for:
http://www.kingskolacheny.com/?_escaped_fragment_=press/crr2
That leads me to: http://webcache.googleusercontent.com/search?q=cache:http://www.kingskolacheny.com/?_escaped_fragment_=press/crr2
If your #! URLs are not set up this way, you will struggle to see it. One-page websites are ... one page. But if you have escaped fragment URLs set up, you should be able to submit those and go from there.
The easiest way I know to find these is Screaming Frog: the Ajax tab, Ugly URL field - try that one.
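If it helps, here's a rough sketch of that #! to _escaped_fragment_ mapping (per Google's now-deprecated AJAX crawling scheme), plus the cache lookup URL it produces:

```typescript
// Sketch of the (deprecated) AJAX crawling scheme mapping: a #! URL like
//   http://www.kingskolacheny.com/#!press/crr2
// maps to the "ugly" URL Googlebot actually fetched and cached:
//   http://www.kingskolacheny.com/?_escaped_fragment_=press/crr2
function toEscapedFragmentUrl(hashbangUrl: string): string {
  const bangIndex = hashbangUrl.indexOf("#!");
  if (bangIndex === -1) return hashbangUrl; // no hashbang, nothing to map
  const base = hashbangUrl.slice(0, bangIndex);
  const fragment = hashbangUrl.slice(bangIndex + 2);
  // NB: the full spec also percent-encodes special characters in the
  // fragment; this sketch skips that for readability.
  const separator = base.includes("?") ? "&" : "?";
  return `${base}${separator}_escaped_fragment_=${fragment}`;
}

const ugly = toEscapedFragmentUrl("http://www.kingskolacheny.com/#!press/crr2");
console.log(ugly);
// -> http://www.kingskolacheny.com/?_escaped_fragment_=press/crr2
console.log(`http://webcache.googleusercontent.com/search?q=cache:${ugly}`);
```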