Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies. More details here.
Does Google pass link juice a page receives if the URL parameter specifies content and has the Crawl setting in Webmaster Tools set to NO?
-
The page in question receives a lot of quality traffic but is only relevant to a small percent of my users. I want to keep the link juice received from this page but I do not want it to appear in the SERPs.
-
Update - Google has crawled this correctly and is returning the correct, redirected page. Meaning, it seems to have understood that we don't want any of the parametered versions indexed ("return representative link") from our original page and all of its campaign-tracked brethren, and is then redirecting from the representative link correctly.
And finally there was peace in the universe...for now. ;> Tim
-
Agree...it feels like leaving a bit to chance, but I'll keep an eye on it over the next few weeks to see what comes of it. We seem to be re-indexed every couple of days, so maybe I can test it out Monday.
BTW, this issue really came up when we were creating a server side 301 redirect for the root URL, and then I got to wondering if we'd need to set up an irule for all parameters. Hopefully not...hopefully Google will figure it out for us.
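Rather than one rewrite rule per parameter, a single normalization step can handle every tracked variant at once. Below is a minimal stdlib sketch of that idea; the parameter names in `TRACKING_PARAMS` are hypothetical stand-ins (real SiteCatalyst campaign parameters will differ), and the function only computes the 301 target - wiring it into your server is left out.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical tracking parameters to strip; substitute your real
# campaign parameter names here.
TRACKING_PARAMS = {"v3", "cid", "s_kwcid"}

def redirect_target(url):
    """Return the clean URL to 301-redirect to, or None if no redirect is needed."""
    parts = urlsplit(url)
    query = parse_qsl(parts.query, keep_blank_values=True)
    kept = [(k, v) for k, v in query if k not in TRACKING_PARAMS]
    if kept == query:
        return None  # nothing to strip, serve the page as-is
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))
```

With one rule like this in front of the site, website.com/?v3 resolves to website.com while non-tracking parameters survive untouched.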
Thanks Peter. Tim
-
It's really tough to say, but moving away from "Let Google decide" to a more definitive choice seems like a good next step. You know which URL should be canonical, and it's not the parameterized version (if I'm understanding correctly).
If you say "Let Google decide", it seems a bit more like rel=prev/next. Google may allow any page in the set to rank, BUT they won't treat those pages as duplicates, etc. How does this actually impact the PR flow to any given page in that series? We have no idea. They're probably consolidating them on the fly, to some degree. They basically have to be, since the page they choose to rank from the set is query-dependent.
-
This question deals with dynamically created pages, it seems, and Google seems to recommend NOT choosing the "no" option in WMT - choose "yes" when you edit the parameter settings for this and you'll see an option for your case, I think, Christian (I know this is 3 years late, but still).
BUT I have a situation where we use SiteCatalyst to create numerous tracking codes as parameters to a URL. Since there is not a new page being created, we are following Google's advice to select "no" - Google apparently will:
"group the duplicate URLs into one cluster and select what we think is the "best" URL to represent the cluster in search results. We then consolidate properties of the URLs in the cluster, such as link popularity, to the representative URL."
What worries me is that a) the "root" URL will not be returned, somehow (perhaps due to freakish amount of inbound linking to one of our parametered URLs), and b) the root URL will not be getting the juice. The reason we got suspicious about this problem in the first place was that Google was returning one of our parametered URLs (PA=45) instead of the "root" URL (PA=58).
This may be an anomaly that will be sorted out now that we changed the parameter setting from "Let Google Decide" to "No, page does not change" i.e. return the "Representative" link, but would love your thoughts - esp on the juice passage.
Tim
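The clustering behavior Google describes in that quote - group the parametered variants, pick a representative, pool their link popularity - can be sketched as grouping URLs by their parameter-stripped form and summing a link score per group. This is purely an illustration of the quoted mechanism, not Google's actual algorithm; how they pick the representative is opaque.

```python
from collections import defaultdict
from urllib.parse import urlsplit, urlunsplit

def cluster_key(url):
    """Strip the query string so all parametered variants share one key."""
    p = urlsplit(url)
    return urlunsplit((p.scheme, p.netloc, p.path, "", ""))

def consolidate(pages):
    """pages: {url: link_score}. Group variants under their clean URL and
    pool their scores, the way Google's quote describes consolidating
    'properties of the URLs in the cluster, such as link popularity'."""
    clusters = defaultdict(int)
    for url, score in pages.items():
        clusters[cluster_key(url)] += score
    return dict(clusters)
```

Under this model, a parametered URL with PA 45 and its root with PA 58 would pool under the root - which is what the "representative link" setting is supposed to encourage.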
-
This sounds unusual enough that I'd almost have to see it in action. Is the JS-based URL even getting indexed? This might be a non-issue, honestly. I don't have solid evidence either way about GWT blocking passing link-juice, although I suspect it behaves like a canonical in most cases.
-
I agree. The URL parameter option seems to be the best solution since this is not a unique page. It is the main page with javascript that calls for additional content to be displayed in the form of a lightbox overlay if the condition is right. Since it is not an actual page, I cannot add the rel-canonical statement to the header. It is not clear however, whether the link juice will be passed with this parameter setting in Webmaster Tools.
-
If you're already using rel-canonical, then there's really no reason to also block the parameter. Rel-canonical will preserve any link-juice, and will also keep the page available to visitors (unlike a 301-redirect).
Are you seeing a lot of these pages indexed (i.e. is the canonical tag not working)? You could block the parameter in that case, but my gut reaction is that it's unnecessary and probably counter-productive. Google may just need time to de-index (it can be a slow process).
I suspect that Google passes some link-juice through blocked parameters and treats it more like a canonical, but it may be situational and I haven't seen good data on that. So many things in Google Webmaster Tools end up being a bit of a black box. Typically, I view it as a last resort.
-
I can just repeat myself: Set Crawl to yes and use rel canonical with website.com/?v3 pointing to website.com
-
My fault for not being clear.
I understand that rel=canonical cannot be added to the robots.txt file. We are already using the canonical statement.
I do not want to add the page with the URL parameter to the robots.txt file as that would prevent the link juice from being passed.
Perhaps this example will help clarify:
URL = website.com
URL parameter = website.com/?v3
website.com/?v3 has a lot of backlinks. How can I pass the link juice to website.com and not have website.com/?v3 appear in the SERPs?
-
I'm getting a bit lost with your explanation, maybe it would be easier if I saw the URLs, but here's a brief:
I would not use parameters at all. Clean URLs are best for SEO; remove everything not needed. You definitely don't need a URL parameter to indicate that content is unique for 25% of traffic. (I got a little bit lost here: how can content be unique for just part of your traffic? If it is found elsewhere on your page it is not unique; if it is not found elsewhere, it is unique.) So anyway, those URL parameters indicate nothing to Google, they just stuff your URL structure with useless info (for Google), so why use them?
I am already using a link rel=canonical statement. I don't want to add this to the robots.txt file as that would prevent the juice from being passed.
I totally don't get this one. You can't add canonical to robots.txt. This is not a robots.txt statement.
To sum up: If you do not want your parametered page to appear in the SERPs then, as I said: Set Crawl to yes! and use rel canonical. This way the page will no longer appear in the SERPs, but will be available for readers and will pass link juice.
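As a quick sanity check that the canonical tag is actually in place on the parametered page, something like this stdlib sketch can pull the rel=canonical href out of a page's HTML (the sample markup and URLs here are illustrative):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of a <link rel="canonical"> tag, if one exists."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

def find_canonical(html):
    parser = CanonicalFinder()
    parser.feed(html)
    return parser.canonical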
-
The parameter to this URL specifies unique content for 25% of my traffic to the home page. If I use a 301 redirect then those people will not see the unique content that is relevant to them. But since this parameter is only relevant to 25% of my traffic, I would like the main URL displayed in the SERPs rather than the unique one.
Google's Webmaster Tools lets you choose how you would like Google to handle URL parameters. When using this tool you must specify the parameter's effect on content. You can then specify what you would like Googlebot to crawl. If I say NO crawl, I understand that the page with this parameter will not be crawled, but will the link juice be passed to the page without the parameter?
I am already using a link rel=canonical statement. I don't want to add this url parameter to the robots.txt file either as that would prevent the juice from being passed.
What is the best way to keep this parameter and pass the juice to the main page but not have the URL parameter displayed in the SERPs?
-
What do you mean by "URL parameter specifies content"?
If a page is not crawled it definitely won't pass link juice. Set Crawl to yes and use rel canonical: http://www.youtube.com/watch?v=Cm9onOGTgeM