Can Anybody Understand This?
-
Hey guys,
These days I'm reading the paper by Sergey Brin and Larry Page, "The Anatomy of a Large-Scale Hypertextual Web Search Engine", which is the original Google paper.
And I don't get the Ranking part, which is: "Google maintains much more information about web documents than typical search engines. Every hitlist includes position, font, and capitalization information. Additionally, we factor in hits from anchor text and the PageRank of the document. Combining all of this information into a rank is difficult. We designed our ranking function so that no particular factor can have too much influence. First, consider the simplest case -- a single word query. In order to rank a document with a single word query, Google looks at that document's hit list for that word. Google considers each hit to be one of several different types (title, anchor, URL, plain text large font, plain text small font, ...), each of which has its own type-weight. The type-weights make up a vector indexed by type. Google counts the number of hits of each type in the hit list. Then every count is converted into a count-weight. Count-weights increase linearly with counts at first but quickly taper off so that more than a certain count will not help. We take the dot product of the vector of count-weights with the vector of type-weights to compute an IR score for the document. Finally, the IR score is combined with PageRank to give a final rank to the document.
For a multi-word search, the situation is more complicated. Now multiple hit lists must be scanned through at once so that hits occurring close together in a document are weighted higher than hits occurring far apart. The hits from the multiple hit lists are matched up so that nearby hits are matched together. For every matched set of hits, a proximity is computed. The proximity is based on how far apart the hits are in the document (or anchor) but is classified into 10 different value "bins" ranging from a phrase match to "not even close". Counts are computed not only for every type of hit but for every type and proximity. Every type and proximity pair has a type-prox-weight. The counts are converted into count-weights and we take the dot product of the count-weights and the type-prox-weights to compute an IR score. All of these numbers and matrices can all be displayed with the search results using a special debug mode. These displays have been very helpful in developing the ranking system.
"
-
I can't say I have a complete understanding of what this is explaining, but here's a link to the original paper on Stanford's website if anyone else is interested. http://infolab.stanford.edu/~backrub/google.html
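For what it's worth, here's how I read that passage as code. This is a rough Python sketch only: the type-weight values, the tapering function, the bin boundaries, and the way PageRank is combined are all made up for illustration, since the paper never publishes any of the actual numbers or formulas.

```python
import math

# Hypothetical type-weights (the paper gives no real values): a hit in
# the title or in anchor text counts for more than one in small plain text.
TYPE_WEIGHTS = {
    "title": 10.0,
    "anchor": 8.0,
    "url": 6.0,
    "large_font": 3.0,
    "small_font": 1.0,
}

def count_weight(count, cap=8.0):
    """Grows roughly linearly for small counts, then tapers off so that
    piling on more hits of one type eventually stops helping."""
    return min(cap, 3.0 * math.log1p(count))

def ir_score(hit_types):
    """Single-word query: count the hits of each type in the word's hit
    list, convert each count to a count-weight, then take the dot product
    with the type-weight vector."""
    counts = {}
    for t in hit_types:
        counts[t] = counts.get(t, 0) + 1
    return sum(count_weight(c) * TYPE_WEIGHTS[t] for t, c in counts.items())

def proximity_bin(distance, n_bins=10):
    """Multi-word query: the gap between matched hits is classified into
    one of 10 bins, from 0 (phrase match, adjacent words) up to
    n_bins - 1 ('not even close')."""
    return min(n_bins - 1, max(0, distance - 1))

def final_rank(hit_types, pagerank, alpha=0.5):
    """The paper only says the IR score is 'combined' with PageRank; a
    weighted sum is one plausible way to do it."""
    return alpha * ir_score(hit_types) + (1 - alpha) * pagerank
```

With these toy numbers, a word appearing once in the title outscores the same word appearing once in small text, and after a handful of repetitions the count-weight cap kicks in. That cap seems to be the point of the "no particular factor can have too much influence" design goal: repeating a keyword many times quickly stops helping.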