Can blocked URL parameters still be crawled and indexed by Google?
-
Hi guys,
I have two questions, and one might be a dumb question, but here it goes. I just want to be sure that I understand:
If I tell Webmaster Tools to ignore a URL parameter, will Google still index and rank my URL?
Is it OK if I don't append the brand filter to the URL structure? Will I still rank for that brand?
Thanks,
PS: OK, 3 questions :)...
-
If you want to permanently remove URLs from the index, this is the basic process:
Have your developer implement NoIndex, Follow on all pages that have the URL parameter you want removed. For example, if the URL contains categoryFilter= (like above), then add the NoIndex, Follow tag to the <head> of the page. Do this for all URL parameters you want removed from the index.
Make sure Google is allowed to crawl those pages. If they are blocked by robots.txt, or Google is told not to crawl them via Google Webmaster Tools, Google will not be able to see the newly implemented NoIndex, Follow tag.
Then, give it some time and wait. It may take Google a long time to crawl all of these parameterized URLs again, so falling out of the index might be slow.
Once the URLs are gone, consider blocking them from being crawled via robots.txt or GWT parameter handling.
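In concrete terms, the tag from the first step looks like this (a sketch; categoryFilter= is just the example parameter from this thread):

```html
<!-- Placed in the <head> of every parameterized page to be deindexed -->
<meta name="robots" content="noindex, follow">
```

And the final step, once the URLs have actually dropped out, could be a robots.txt rule such as the following (again assuming the categoryFilter parameter; adding this too early would hide the noindex tag from Googlebot):

```text
User-agent: *
Disallow: /*?*categoryFilter=
```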
-
Hi Anthony,
What if we are trying to permanently remove e-commerce website URLs that have multiple parameters from the Google index? How would we apply noindex to all of these parameterized URLs?
The aim is to recrawl and rebuild the index of the whole website using appropriate robots rules, canonicals and meta tags, rather than using GWT.
Many thanks
-
Parameter handling in Google Webmaster Tools won't get a URL out of the index if it is already indexed.
You need to use the NoIndex robots meta tag in the <head> of your page. Once you add this tag, be sure you are allowing Google to crawl the page: make sure it is not blocked via robots.txt or with parameter handling.
Once the pages have left the index, you can block them from being crawled.
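As a sketch of that decision logic, here is how a template might choose the robots meta value per URL (hypothetical parameter names; swap in the ones you actually want deindexed):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical parameter names -- substitute the ones you want deindexed.
NOINDEX_PARAMS = {"ref", "categoryFilter", "sort"}

def robots_meta(url: str) -> str:
    """Choose the robots meta content for a URL: parameterized
    variants get noindex (follow preserves link equity), while
    clean canonical URLs stay indexable."""
    params = parse_qs(urlparse(url).query)
    if NOINDEX_PARAMS & params.keys():
        return "noindex, follow"
    return "index, follow"

print(robots_meta("https://example.com/widgets?ref=sidebar"))  # noindex, follow
print(robots_meta("https://example.com/widgets"))              # index, follow
```

The same check can gate an X-Robots-Tag response header instead of a meta tag, which also works for non-HTML resources.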
-
If you want a page or URL not crawled, then you should use the robots.txt file and robots meta tags. Then, in WMT, make sure those same pages are actually not being crawled.
Hope that answers your question.
Related Questions
-
Google Webmaster is not crawling links and the site cache still shows an old date
Hi guys, I have been trying to get my page indexed in Google with a new title and description, but it is not getting indexed. I have checked in many tools but found nothing useful. Can you please tell me what could be the issue? Google Webmaster is also not crawling the links I have built so far; a few links are indexed, but the others are not. Why is this happening? My URL is: https://www.paydaysunny.com. Thanks
Technical SEO | | ksmith880 -
Removing Personal content from Google Index
Hi everyone, A user is complaining that her name appears in Google search through our job-ads site, so I removed those ads through Search Console. But the problem is no longer the ads; it's our internal search results. The ads are no longer live, but our search-result pages were indexed by Google back then. We have been manually removing over 500 pages that included the name, but more and more keep coming through pagination. We haven't found a pattern yet, so pretty much any search-result page might have contained the name. We might get some legal issues here; did you guys ever run into anything similar? We have just set some rules so that this doesn't happen again, but still can't find a way to deal with this one. Thanks in advance. PS: Not sure if this is the right category for it.
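One pattern that tends to fit this situation (a sketch only; it assumes the internal search results live under a common path such as /search, which is a guess, since the real URL structure isn't given): serve an X-Robots-Tag noindex header on every search-results page, and keep those pages crawlable until they have dropped out of the index:

```apache
# Hypothetical Apache 2.4 sketch (needs mod_headers); /search is an
# assumed path. Pages must stay crawlable (not blocked in robots.txt)
# until they are deindexed, or Googlebot never sees the header.
<If "%{REQUEST_URI} =~ m#^/search#">
    Header set X-Robots-Tag "noindex, follow"
</If>
```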
Technical SEO | | JoaoCJ0 -
How can I stop Google indexing an image?
I have put a map of Cornwall on my site on the Cornwall page, and for some reason Google.de has picked it up and shows it in the top 4 images for a search for "Cornwall". The result is that I am getting about 80% of my traffic from that search (I get about 50 unique visits per day, and over 40 a day land on the Cornwall page). Is this a problem for my normal SEO as a close-up magician? Will Google start to think my site is about Cornwall? Should I noindex the image (I say that like I know how! How do I noindex that image?) Or is any traffic to a site good traffic? I imagine they click the link, land on the page and then leave, which I suspect is not good for my Google reputation. Any thoughts anyone? Thanks Roger http://www.rogerlapin.co.uk Where they land: http://www.google.de/imgres?imgurl=http://www.rogerlapin.co.uk/wp-content/uploads/2013/09/map-of-cornwall.jpg&imgrefurl=http://www.rogerlapin.co.uk/magician-cornwall-magicians-hire-cornwall&h=904&w=1000&sz=167&tbnid=9GFlDv3BTz4ikM:&tbnh=99&tbnw=110&zoom=1&usg=__-b4bUYWREU_wAy2M04LrsrkzZpw=&docid=AUFmzso0arbGDM&sa=X&ei=HLZ2UpGYDMrY0QWXp4D4Dg&ved=0CEgQ9QEwAw&dur=2958
Technical SEO | | rnperki0 -
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat-map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.
This is the chain of events:
The site was migrated to the new platform following best practice, as far as I can attest to. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between relaunch on the 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
URL structure and URIs were maintained 100% (which may be a problem, now).
Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expanding the report out, the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I. Run, not walk, to Google and do some Fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI). Checked Bing, and it has indexed each root URL once, as it should.
Situation now:
The site no longer uses the ?ref= parameter, although of course there still exist some external backlinks that use it. This was intentional and happened when we migrated.
I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment).
I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page.
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and of course the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe-content penalty. Or maybe call us a spam farm. Who knows.
Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form 😄) include:
A) robots.txt-ing .?ref=. but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct.
B) Hand-removing the URLs from the index through a page removal request per indexed URL.
C) Applying a 301 to each indexed URL (hello Bing dirty-sitemap penalty).
D) Posting on SEOMoz, because I genuinely can't understand this.
Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why, and can't think of the best way to correct the situation. Do you? 🙂
Edited to add: As of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There's no message explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
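For what it's worth, the 301 option can be expressed in a few lines of Apache config. This is a hedged sketch only: it assumes an .htaccess context and that ?ref= is the only query parameter in play (reasonable here, since the migrated site no longer uses query strings for tracking), because it strips the whole query string:

```apache
# Sketch: 301 any URL carrying a ref= parameter to its clean
# equivalent. The trailing "?" on the target strips the query string.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)ref= [NC]
RewriteRule ^(.*)$ /$1? [R=301,L]
```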
Technical SEO | | Tinhat0 -
Which factors affect the Google index?
My website has 455 URLs submitted, but only 77 URLs are indexed. How can I get more URLs indexed?
Technical SEO | | magician0 -
Can Google Analytics Segment by Time of Day?
Greetings from latitude 53.92705600, longitude -1.38481600... Can Google Analytics answer this question: "Tell me how many visitors landed on my site on the 1st Sept between 1200 hrs and 1300 hrs"? Grazie tanto,
David
Technical SEO | | Nightwing0 -
Can anyone explain why and how these odd URLs could be working?
In our GWT and Google Analytics traffic reports, I often see some very oddly formed URLs. Here's an example: http://www.ccisolutions.com/storefront/www.ccisolutions.com and here's another: http://www.ccisolutions.com/StoreFront/category//www.ccisolutions.com/StoreFront/CEW.cat
What strikes me about this particular URL is two things: It renders this page http://www.ccisolutions.com/StoreFront/category/on-disc-printing, but not with that URL; the URL stays http://www.ccisolutions.com/StoreFront/category//www.ccisolutions.com/StoreFront/CEW.cat. When I break this URL into pieces, http://www.ccisolutions.com/StoreFront/category/CEW.cat and www.ccisolutions.com/StoreFront/CEW.cat both redirect to http://www.ccisolutions.com/StoreFront/category/on-disc-printing.
This makes me wonder: is there something (a rule?) in the backend (maybe the .htaccess file?) that was set up that says http://www.ccisolutions.com/StoreFront/category/CEW.cat = www.ccisolutions.com/StoreFront/CEW.cat (or maybe vice versa?), and as a result an odd URL for the page is being written automatically?
This scenario worked on every category page I checked; all had the same results. For example, I tried http://www.ccisolutions.com/StoreFront/category//www.ccisolutions.com/StoreFront/AAA.cat and it rendered the Live Sound category page, but without redirecting to the user-friendly URL; this URL stayed unchanged in the address bar. When I broke it into pieces, http://www.ccisolutions.com/StoreFront/category/AAA.cat and www.ccisolutions.com/StoreFront/AAA.cat both redirected to http://www.ccisolutions.com/StoreFront/category/sound-video-lighting-equipment-experts.
Have any of you ever encountered a problem like this? Any suggestions as to what might be causing it and how to remedy the problem? It is definitely causing us a duplicate content headache. Thanks!
Dana
Technical SEO | | danatanseo0 -
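A hedged guess at the kind of backend rule that could produce this behavior (purely hypothetical; the site's actual configuration is unknown, and category.php is an invented handler name): a rewrite that matches only the end of the path would serve the same category script no matter what junk precedes the .cat filename, which would explain why the doubled URLs render without redirecting:

```apache
# Hypothetical .htaccess sketch -- NOT the site's actual config.
# Because the pattern is anchored only at the end of the path, both
# /StoreFront/CEW.cat and /StoreFront/category//.../StoreFront/CEW.cat
# would be internally rewritten to the same category handler.
RewriteEngine On
RewriteRule ([A-Za-z]+)\.cat$ /StoreFront/category.php?cat=$1 [L]
```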
Google Local Results and URL Titles
After searching for (city name) (business type), a number of my competitors' sites come up with the title of their web page as the result (including geographic descriptors). However, my site is listed by name and does not reflect our URL title. How is this possible (did someone manually change the title of our listing?), and how can I change this back so that the title includes a geo descriptor? Do I simply edit the listing under Google Places, or will this have a negative effect on our rankings?
Technical SEO | | helliottlaw0