Difference between Google's link: operator and GWT's "Links to Your Site"
-
I hadn't used Google's link: operator for a while, and I've noticed that there is a big disparity between the results it returns and GWT's "Links to Your Site" report.
I compared these results on a number of websites, my own and competitors', and the difference seems to be consistent across the board.
Has Google recently changed how it displays link results via the operator?
Could this be an indication that they are cleaning out backlinks?
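For anyone who wants to repeat the comparison, here's a minimal Python sketch, assuming a CSV export of GWT's "Links to Your Site" report and a hand-collected text file of domains from link: results (the file names and the CSV column name are hypothetical):

```python
import csv

def load_gwt_domains(path):
    """Read linking domains from a GWT "Links to Your Site" CSV export."""
    with open(path, newline="") as f:
        return {row["Domains"].strip().lower() for row in csv.DictReader(f)}

def load_operator_domains(path):
    """Read one linking domain per line, collected by hand from link: results."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

gwt = load_gwt_domains("gwt_links_export.csv")        # hypothetical file name
operator = load_operator_domains("link_results.txt")  # hypothetical file name

print(f"GWT reports {len(gwt)} linking domains; link: shows {len(operator)}")
print(f"In GWT but missing from link:  {len(gwt - operator)}")
print(f"In link: but missing from GWT: {len(operator - gwt)}")
```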
-
Thanks Jeepster, that video answered my question. It just seems like Google used to show a lot more links in link: searches than it currently does.
-
Hi bstone81.
Not sure I understand your question, but assuming you're asking why there's a disparity between the "link:" command and what you see in GWT, here's Mr Cutts himself, in a video from 2009 that I suspect is still relevant.
Related Questions
-
Google's Better Ads Chrome update: will it affect email pop-ups?
Chrome will be blocking ads on websites that are not compliant with the Better Ads standards as of Feb 15. I could not find any indication of whether a pop-up (not an ad) asking for your email falls under this. We currently have a pop-up that appears on exit intent only (which is not penalized). Any ideas?
-
Is it possible (or advisable) to try to rank for a keyword that is 'split' across subfolders in your URL?
For example, say your keyword was 'funny hats' - ideally you'd make your URL 'website.com/funny-hats/'. But what if 'hats' is already a larger category on your site that you want to rank for as its own keyword? Could you then try to rank for 'funny hats' using the URL 'website.com/hats/funny/'? Basically what I'm asking is: would it be harmful to your chances of ranking for your primary keyword if it's split across the URL like this, and not necessarily in the correct order?
-
How does Google's "Temporarily remove URLs" tool in Search Console work?
Hi, we have created a new subdomain with new content that we want to highlight for users, but our old content on a different subdomain still tops the Google results thanks to its established reputation. How can we highlight the new content and suppress the old subdomain in the results? Many of the pages have similar title tags and other near-duplicate information. We are planning to hide the old URLs via Google Search Console so that the new pages gradually pick up the traffic. How does this work?
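Worth noting: as the name suggests, the removal tool only hides URLs temporarily, so the lasting signals are noindex or canonical tags on the old pages. Here's a rough Python sketch for spot-checking those tags across the old subdomain, assuming the third-party requests and beautifulsoup4 packages (all URLs are placeholders):

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URLs standing in for the old subdomain's pages.
OLD_URLS = [
    "https://old.example.com/page-a",
    "https://old.example.com/page-b",
]
NEW_SUBDOMAIN = "new.example.com"  # hypothetical new subdomain

for url in OLD_URLS:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Does the page tell Google not to index it?
    robots = soup.find("meta", attrs={"name": "robots"})
    noindex = robots is not None and "noindex" in robots.get("content", "").lower()

    # Or does its canonical point at the new subdomain?
    canonical = soup.find("link", rel="canonical")
    points_to_new = canonical is not None and NEW_SUBDOMAIN in canonical.get("href", "")

    status = "ok" if (noindex or points_to_new) else "MISSING noindex/canonical"
    print(f"{url}: {status}")
```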
-
Google creating its own content
I am based in Australia, but a US-based search on 'sciatica' shows an awesome answer on the RHS of the SERP: https://www.google.com/search?q=sciatica&oq=sciatica&aqs=chrome.0.69i59.3631j0j7&sourceid=chrome&ie=UTF-8 The sciatica download is a PDF created by Google. Firstly, is this common in the US? Secondly, any input on where this is heading for rollout would be appreciated. Is Google now creating its own content to publish?
-
After Penguin 2.0, a 20-25% sitewide drop with no Google unnatural links message. What could be causing it?
Hi, since Penguin 2.0 we've taken a 20-25% knock but haven't received an unnatural links message from Google. After sending a bunch of removal requests, I decided to submit a disavow file anyway two weeks ago, and tried to make sure I rooted out some links that were built way back when our site started and link-building best practice was a bit shadier. Analysis of our backlink profile points to about 40-50% of links coming from general directories; I'm wondering if perhaps their weight has been adjusted and this is why the drop occurred? Having said that, we have some high-quality links from government sources and highly trusted sites, so the profile isn't too spammy. Can anyone shed some light or offer suggestions? Thanks
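For reference, the disavow file itself is just plain text: "#" lines are comments, "domain:" rules disavow whole domains, and bare URLs disavow single pages. A minimal Python sketch that builds one from a list of flagged directory domains (all names here are hypothetical):

```python
# Build a disavow file in Google's documented plain-text format.
# Every domain and URL below is a hypothetical stand-in.
bad_domains = [
    "general-directory-1.example",
    "general-directory-2.example",
]
bad_urls = [
    "https://spammy.example/old-link-page.html",
]

lines = ["# Disavow file generated after Penguin 2.0 link audit"]
lines += [f"domain:{d}" for d in sorted(bad_domains)]  # whole-domain rules
lines += sorted(bad_urls)                              # single-page rules

with open("disavow.txt", "w") as f:
    f.write("\n".join(lines) + "\n")

print(f"Wrote {len(lines) - 1} disavow rules to disavow.txt")
```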
-
Google Reconsideration - To do or not to do?
We haven't been manually penalized by Google yet, but we have had our fair share of things needing to be fixed: malware, bad links, a lack of (or no) content, lackluster UX, and issues with sitemaps and redirects. Should we still submit a reconsideration request even though we haven't had a direct penalty? Does it hurt us to send it?
-
Excessive internal links. Should I remove the footer links?
Hi guys, I have an ecommerce site selling eco-friendly items online. I ran some on-page optimisation reports from SEOMoz PRO and discovered that I have at least 120 internal links per page. 32 of these are in the footer, designed in part to aid user navigation but perhaps also to have a positive impact on SERPs and SEO in general for the ecommerce site. Will removing these links be beneficial to my search engine rankings, since I would then have fewer than 100 internal links per page? Or is it a major change which may be dangerous for my site's rankings? Please help, as I'm not sure about this! I've attached an image of the footer links below. I won't be removing the Facebook/Twitter links, just the 3 columns on the left. Thank you, Pravin
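One way to sanity-check the before/after numbers is to count the links directly. A minimal Python sketch that tallies internal links on a single page, assuming the third-party requests and beautifulsoup4 packages (the URL is a placeholder):

```python
from urllib.parse import urlparse
import requests
from bs4 import BeautifulSoup

PAGE = "https://www.example-store.com/some-category/"  # placeholder URL
site_host = urlparse(PAGE).netloc

html = requests.get(PAGE, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

internal = [
    a["href"]
    for a in soup.find_all("a", href=True)
    # relative links and links to the same host both count as internal
    if urlparse(a["href"]).netloc in ("", site_host)
]

print(f"{len(internal)} internal links on {PAGE}")
```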
-
Large site with faceted navigation using rel=canonical, but Google still has issues
First off, I just wanted to mention I did post this on one other forum, so I hope that is not completely against the rules here or anything. Just trying to get an idea from some of the pros at both sources. Hope this is received well. Now for the question: "Googlebot found an extremely high number of URLs on your site." Gotta love these messages in GWT. Anyway, I wanted to get some other opinions here, so if anyone has experienced something similar or has any recommendations I would love to hear them. The site is very large and utilizes faceted navigation to help visitors sift through results. I have implemented rel=canonical for many months now so that each page URL created by the faceted nav filters points back to the main category page. However, I still get these damn messages from Google every month or so saying that they found too many pages on the site. My main concern is wasting crawl time on all these pages when I am already doing what they ask and telling them to find the content on page X. So at this point I am thinking about using a robots.txt file to handle these, but wanted to see what others around here thought before I dive into this arduous task. Plus I am a little ticked off that Google is not following a standard they helped bring to the table. Thanks in advance to those who take the time to respond.
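If robots.txt ends up being the route, the proposed rules can be sanity-checked before deployment with Python's standard-library robotparser. Note it only does prefix matching (no * wildcards), so this sketch assumes the faceted URLs live under a hypothetical /filter/ path segment:

```python
from urllib.robotparser import RobotFileParser

# Proposed rules: block the faceted-filter URLs, leave categories crawlable.
proposed = """\
User-agent: *
Disallow: /widgets/filter/
"""

rp = RobotFileParser()
rp.parse(proposed.splitlines())

tests = [
    "https://www.example.com/widgets/",                   # category: should stay crawlable
    "https://www.example.com/widgets/filter/color-red/",  # faceted page: should be blocked
]
for url in tests:
    verdict = "allowed" if rp.can_fetch("Googlebot", url) else "blocked"
    print(f"{url} -> {verdict}")
```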