Does Google pass the link juice a page receives if the URL parameter specifies content and the Crawl setting in Webmaster Tools is set to "No"?
-
The page in question receives a lot of quality traffic but is only relevant to a small percentage of my users. I want to keep the link juice this page receives, but I do not want it to appear in the SERPs.
-
Update: Google has crawled this correctly and is returning the correct, redirected page. In other words, it seems to have understood that we don't want any of the parametered versions of our original page (or its campaign-tracked brethren) indexed ("return representative link"), and it is redirecting from the representative link correctly.
And finally there was peace in the universe...for now. ;> Tim
-
Agree...it feels like leaving a bit to chance, but I'll keep an eye on it over the next few weeks to see what comes of it. We seem to be re-indexed every couple of days, so maybe I can test it out Monday.
BTW, this issue really came up when we were creating a server-side 301 redirect for the root URL, and then I got to wondering if we'd need to set up an iRule for every parameter. Hopefully not... hopefully Google will figure it out for us.
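(For anyone following along, here's a minimal sketch of what I mean - assuming Apache mod_rewrite rather than an F5 iRule, and with hypothetical paths. mod_rewrite carries the query string over to the target by default, so a single rule covers every campaign-tracked variant and no per-parameter rules should be needed.)

```apache
# Hedged sketch (.htaccess), hypothetical paths: 301 the old URL to the new one.
# The query string (e.g. ?cid=spring-campaign) is passed through automatically,
# so one rule handles every tracking-parameter variant.
RewriteEngine On
RewriteRule ^old-home\.html$ /new-home.html [R=301,L]
```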
Thanks Peter. Tim
-
It's really tough to say, but moving away from "Let Google decide" to a more definitive choice seems like a good next step. You know which URL should be canonical, and it's not the parameterized version (if I'm understanding correctly).
If you say "Let Google decide", it seems a bit more like rel=prev/next. Google may allow any page in the set to rank, BUT they won't treat those pages as duplicates, etc. How does this actually impact the PR flow to any given page in that series? We have no idea. They're probably consolidating them on the fly, to some degree. They basically have to be, since the page they choose to rank form the set is query-dependent.
-
This question deals with dynamically created pages, it seems, and Google seems to recommend NOT choosing the "No" option in WMT. Choose "Yes" when you edit the parameter settings for this and you'll see an option for your case, I think, Christian (I know this is three years late, but still).
BUT I have a situation where we use SiteCatalyst to create numerous tracking codes as parameters on a URL. Since no new page is being created, we are following Google's advice to select "No" - in which case Google apparently will:
"group the duplicate URLs into one cluster and select what we think is the "best" URL to represent the cluster in search results. We then consolidate properties of the URLs in the cluster, such as link popularity, to the representative URL."
What worries me is that a) the "root" URL will somehow not be the one returned (perhaps due to the freakish amount of inbound linking to one of our parametered URLs), and b) the root URL will not be getting the juice. The reason we got suspicious about this problem in the first place was that Google was returning one of our parametered URLs (PA=45) instead of the "root" URL (PA=58).
This may be an anomaly that will be sorted out now that we've changed the parameter setting from "Let Google decide" to "No: page does not change" (i.e., return the "representative" link), but I would love your thoughts - especially on whether the juice passes.
Tim
-
This sounds unusual enough that I'd almost have to see it in action. Is the JS-based URL even getting indexed? This might be a non-issue, honestly. I don't have solid evidence either way on whether a parameter blocked in GWT passes link juice, although I suspect it behaves like a canonical in most cases.
-
I agree. The URL parameter option seems to be the best solution since this is not a unique page. It is the main page, with JavaScript that calls for additional content to be displayed in a lightbox overlay if the condition is right. Since it is not an actual page, I cannot add the rel=canonical statement to the header. It is not clear, however, whether the link juice will be passed with this parameter setting in Webmaster Tools.
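(To illustrate the setup, since "the parameter specifies content" caused some confusion further down: every URL variant serves the same page, and a bit of client-side JavaScript decides whether to show the overlay. A simplified sketch, with a hypothetical parameter value and element id:)

```javascript
// Simplified sketch (hypothetical parameter value and element id): the same
// document is served for website.com and website.com/?v3; this script just
// checks the query string and shows the lightbox overlay when appropriate.
if (window.location.search.indexOf('v3') !== -1) {
  document.getElementById('promo-lightbox').style.display = 'block';
}
```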
-
If you're already using rel=canonical, then there's really no reason to also block the parameter. Rel=canonical will preserve any link juice, and will also keep the page available to visitors (unlike a 301 redirect).
Are you seeing a lot of these pages indexed (i.e. is the canonical tag not working)? You could block the parameter in that case, but my gut reaction is that it's unnecessary and probably counter-productive. Google may just need time to de-index (it can be a slow process).
I suspect that Google passes some link-juice through blocked parameters and treats it more like a canonical, but it may be situational and I haven't seen good data on that. So many things in Google Webmaster Tools end up being a bit of a black box. Typically, I view it as a last resort.
-
I can only repeat myself: set Crawl to "Yes" and use rel=canonical, with website.com/?v3 pointing to website.com.
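As a rough sketch, that tag would sit in the head of the page - and because website.com/?v3 returns the same document as website.com, every parametered variant ends up carrying it too:

```html
<!-- Sketch of the canonical tag: points search engines (and, per the advice
     above, the accumulated link juice) at the clean URL. -->
<link rel="canonical" href="http://website.com/" />
```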
-
My fault for not being clear.
I understand that rel=canonical cannot be added to the robots.txt file. We are already using the canonical statement.
I do not want to add the page with the URL parameter to the robots.txt file, as that would prevent the link juice from being passed.
Perhaps this example will help clarify:
URL = website.com
URL parameter = website.com/?v3
website.com/?v3 has a lot of backlinks. How can I pass the link juice to website.com and not have website.com/?v3 appear in the SERPs?
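(For reference, the robots.txt rule I'm avoiding would look roughly like this - a hypothetical pattern. Blocking crawling this way means Googlebot never fetches the parametered URL, so it never sees the canonical and the backlink value wouldn't be consolidated.)

```
# Hypothetical robots.txt rule being avoided: it stops Googlebot from crawling
# the parametered URLs at all, which would also stop the link juice from being
# consolidated onto website.com.
User-agent: *
Disallow: /*?v3
```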
-
I'm getting a bit lost with your explanation - maybe it would be easier if I saw the URLs - but here's a brief reply:
I would not use parameters at all. Clean URLs are best for SEO, so remove everything that isn't needed. You definitely don't need a URL parameter to indicate that content is unique for 25% of your traffic. (I got a little lost here: how can content be unique for just part of your traffic? If it is found elsewhere on your page it is not unique; if it is not found elsewhere, it is unique.) So anyway, those URL parameters indicate nothing to Google and just stuff your URL structure with information that is useless (to Google) - so why use them?
"I am already using a link rel=canonical statement. I don't want to add this to the robots.txt file as that would prevent the juice from being passed."
I totally don't get this one. You can't add a canonical to robots.txt - it is not a robots.txt statement.
To sum up: if you do not want your parametered page to appear in the SERPs, then, as I said, set Crawl to "Yes" and use rel=canonical. That way the page will no longer appear in the SERPs, but it will still be available to readers and will pass link juice.
-
The parameter on this URL specifies unique content for 25% of my traffic to the home page. If I use a 301 redirect, then those people will not see the unique content that is relevant to them. But since this parameter is only relevant to 25% of my traffic, I would like the main URL displayed in the SERPs rather than the unique one.
Google's Webmaster Tools lets you choose how you would like Google to handle URL parameters. When using this tool you must specify the parameter's effect on content. You can then specify what you would like Googlebot to crawl. If I say "No" to crawling, I understand that the page with this parameter will not be crawled, but will the link juice be passed to the page without the parameter?
I am already using a link rel=canonical statement. I don't want to add this URL parameter to the robots.txt file either, as that would prevent the juice from being passed.
What is the best way to keep this parameter and pass the juice to the main page, but not have the parametered URL displayed in the SERPs?
-
What do you mean by "URL parameter specifies content"?
If a page is not crawled it definitely won't pass link juice. Set Crawl to "Yes" and use rel=canonical: http://www.youtube.com/watch?v=Cm9onOGTgeM