Google User Click Data and Metrics
-
Assuming that Google is using click data from users to calculate rankings (bounce rate, time on site, task completion, etc.) where does Google get the data, especially from browsers that aren't Chrome?
-
That was an example using Google Analytics. I believe they use dwell time and subsequent searches for this.
They can't control for things like shopping cart abandonment and similar issues, so they use benchmarks to compare sites against one another. If your metrics are above average for your industry, great; if they're weak, you're in trouble. You can see benchmarking in Google Analytics, so whatever you do, try to beat those numbers. For example, I just saw that some of my sites average 1.40 pages/session vs. 2.99 in the benchmark, and my session duration is 1:32 vs. 2:19 in the benchmark.
Similar metrics exist in PPC too: you need to be above average to get better positions, prices, and conversions.
I know this explanation may sound a little messy, but this is the question all SEO specialists are thinking about these days. If you knew the answers, you could become a millionaire and retire quickly.
-
But how does Google measure form completions or purchases for rankings?
Again, I'm not talking about Google analytics. We use it heavily for our ecommerce sites. I know how the UA tracking code works. Google claims that they don't use GA data for rankings, and I would tend to believe them.
-
That's tricky. There are a lot of theories about Analytics, Chrome, AI, RNNs, etc. Of course, there is a lot of speculation too!
BUT here Josh Bachynski explains that task completion is correlated with user metrics: time on session, bounce rate, and average pages per session. There are others too; please note the subsequent-search point in my previous answer. So in theory, sites with longer time on site and lower bounce rates are considered higher quality. You can also check Josh's other videos on YouTube, where he explains this many times.
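The bounce-back idea behind these metrics can be sketched as a toy heuristic. Everything here is illustrative: the real signals, thresholds, and scoring are Google's secret, and the class and function names are my own invention.

```python
from dataclasses import dataclass
from typing import Optional

# Toy model of "pogo-sticking": a SERP click followed by a quick return
# to the results page reads as a negative quality signal. The 10-second
# threshold is made up purely for illustration.

@dataclass
class SerpClick:
    url: str
    clicked_at: float              # seconds since epoch
    returned_at: Optional[float]   # None = user never came back to the SERP

def dwell_time(click: SerpClick) -> Optional[float]:
    """Seconds between clicking a result and returning to the SERP."""
    if click.returned_at is None:
        return None
    return click.returned_at - click.clicked_at

def looks_like_pogo_stick(click: SerpClick, threshold: float = 10.0) -> bool:
    """A very short dwell time suggests the result didn't satisfy the query."""
    dt = dwell_time(click)
    return dt is not None and dt < threshold

clicks = [
    SerpClick("https://moz.com/", 0.0, 4.0),   # bounced back after 4 seconds
    SerpClick("https://en.wikipedia.org/wiki/Moz_(marketing_software)", 60.0, None),
]
for c in clicks:
    print(c.url, looks_like_pogo_stick(c))
```

In this toy model, the first click counts against the result and the second one (no return to the SERP) does not.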
One of the easiest ways to track task completion is to add goals in Analytics and/or event tracking. Goals can be anything: contact form filled, lead form filled, software download, whitepaper request, signup form, video play, etc. Events can be: comments viewed, gallery viewed, video stopped, etc. Then you can see how many of your visitors complete tasks and trigger events. This gives you your own assurance that they're on the page and doing something there.
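As a rough sketch of what such event tracking actually sends, here is approximately what a Universal Analytics Measurement Protocol "event" hit looks like. The tracking ID and client ID are placeholders, and this only builds the payload rather than sending it:

```python
from urllib.parse import urlencode
from typing import Optional

# Sketch of a Universal Analytics Measurement Protocol "event" hit:
# the payload the tracking code would POST to
# https://www.google-analytics.com/collect. UA-XXXXX-Y and cid=555
# are placeholders, not real identifiers.
def build_event_hit(category: str, action: str,
                    label: str = "", value: Optional[int] = None) -> str:
    params = {
        "v": "1",             # protocol version
        "tid": "UA-XXXXX-Y",  # placeholder property ID
        "cid": "555",         # anonymous client ID
        "t": "event",         # hit type
        "ec": category,       # event category
        "ea": action,         # event action
    }
    if label:
        params["el"] = label  # event label
    if value is not None:
        params["ev"] = str(value)  # event value
    return urlencode(params)

payload = build_event_hit("lead", "form-submit", label="whitepaper-request")
print(payload)
```

Each goal or event you configure ends up as a hit like this, which is how Analytics can count form fills, downloads, and the rest.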
The trick is that Google will only use SERP visitors and their metrics. I can have a site with 20k daily visitors from Facebook/Twitter and only 200 from the Google SERP. I'm not saying those 20k visitors are bad, but they're almost useless for the click test. Things would be different with 20k daily visitors from the SERP and 200 from Facebook/Twitter.
So whatever you do, when you receive SERP traffic, keep it on the site. That's the higher priority for better rankings.
-
Thanks for the answer. Spot on.
There's been a lot of speculation on "task completion" and how it relates to ranking. If completing a task is a purchase on an ecommerce site, how is Google measuring it? Is it only through Chrome or by some other means?
How does Google measure when someone completes a form?
Is that possible, or is Google just checking to make sure that the cart and the form work correctly? Was that the point of the "Zombie" update?
-
If you remember, five years ago all URLs in the SERP were unencrypted, and a lot of tools used this to capture "keywords" and link them to pages. Google introduced encryption in 2010 and rolled it out over the following few years; today the only way to see keywords is in Search Console. Of course, the encryption is there "to improve your search quality and to provide better service." The original text can be seen here. Please note "provide better service" there. This is the tricky part!
So imagine you search for moz. Here is the actual URL I can see right now:
https://www.google.bg/search?q=moz&ie=utf-8&oe=utf-8&gws_rd=cr&ei=4wNVVpnZBYGoUZiXh4AG
You can clearly see the keyword there in ?q=moz. Now, the first result is Moz.com, and its URL is:
https://www.google.bg/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjenMnUq6rJAhXIRBQKHXuVCcUQFggfMAA&url=https%3A%2F%2Fmoz.com%2F&usg=AFQjCNHNW83KUfvLcZOMILlYW49NobxUig&sig2=nOVvQ05KIPrGB3XFAFmIGg
As you can clearly see, the keyword is gone; everything comes as encrypted data (ved, usg, sig2). This /url link is actually a redirector that counts your click on a specific result and position. Now, if I click the first result and land on Moz.com, I might scroll down, decide "this isn't the Moz I'm looking for," and within a few seconds return to the SERP. That is the actual "dwell time" and a bounce back to the SERP. It's a negative signal, because it shows Google, with human verification, that the result it returned in first place wasn't correct. Back on the same SERP, I can see Moz on Wikipedia:
https://www.google.bg/url?sa=t&rct=j&q=&esrc=s&source=web&cd=19&cad=rja&uact=8&ved=0ahUKEwjenMnUq6rJAhXIRBQKHXuVCcUQFghhMBI&url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMoz_(marketing_software)&usg=AFQjCNGCgqmsKNIdaZdGrbugf8bJk6NhTg&sig2=jS-vt68NFtD5YhgSV4lTGw
If I click this and never return to the SERP, that gives Google enough to calculate a bounce rate for the site (counting only returns to the SERP) and to credit Wikipedia with some "goal completion." The time until my next search can be used to calculate "time on site." And since all searches are encrypted, Google knows when a specific user searches for something and when they make a new search based on the results already returned. An example is "Napoleon." It can mean many things: the French emperor, a movie, a cake, a drink, and more. So now I can do a subsequent search for "Napoleon height." This shows how one search can give me enough information to do another, refined search. Another good example is "32nd US president," after which I might type "franklin d roosevelt height."
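For illustration, the destination hidden in that /url redirector can be pulled out with standard URL parsing. This is just a sketch based on the parameter names visible in the example URLs above:

```python
from urllib.parse import urlparse, parse_qs

# Extract the destination from a Google /url redirector link, as seen in
# the example SERP URLs above. The keyword (q=) is empty; only the
# percent-encoded destination (url=) and opaque tracking parameters
# (ved, usg, sig2) remain.
def redirect_destination(google_url: str) -> str:
    query = parse_qs(urlparse(google_url).query)
    return query["url"][0]  # parse_qs already percent-decodes the value

wiki = ("https://www.google.bg/url?sa=t&rct=j&q=&esrc=s&source=web&cd=19"
        "&url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMoz_(marketing_software)"
        "&usg=AFQjCNGCgqmsKNIdaZdGrbugf8bJk6NhTg")
print(redirect_destination(wiki))
# → https://en.wikipedia.org/wiki/Moz_(marketing_software)
```

The SEO tools mentioned earlier did essentially this with the q= parameter before encryption; now only the destination and the opaque click-tracking values are recoverable.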
This was explained much better in the closing MozCon 2015 presentation, "SEO in a Two-Algorithm World":
http://www.slideshare.net/randfish/onsite-seo-in-2015-an-elegant-weapon-for-a-more-civilized-marketer
and you should watch it. It also includes a few tests with terrific results.
-
I guess I should have phrased the question a little differently. This is not related to Google Analytics.
When I do a Google search, Google is able to track my actions, and is probably using the data as a ranking factor. Josh Bachynski did a Whiteboard Friday on it.
https://moz.com/blog/panda-41-google-leaked-dos-and-donts-whiteboard-friday
How is Google able to track user actions after they click on a SERP listing? Where are they getting their data?
-
Here is a good explanation.