Partial Match or RegEx in Search Console's URL Parameters Tool?
-
So I currently have approximately 1000 of these URLs indexed, when I only want roughly 100 of them.
Let's say the URL is www.example.com/page.php?par1=ABC123=&par2=DEF456=&par3=GHI789=
All the indexed URLs follow that same kind of format, but I only want to index the URLs whose par1 starts with ABC (that could be ABC123 or ABC456 or whatever). Using the URL Parameters tool in Search Console, I can ask Googlebot to only crawl URLs with a specific value. But is there any way to get a partial match, using regex maybe?
Am I wasting my time with Search Console, and should I just disallow any page.php without par1=ABC in robots.txt?
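For illustration, the partial match being described is straightforward to express as a regular expression in ordinary code, even though robots.txt itself only understands the * and $ wildcards rather than full regex. A quick Python sketch, using made-up URLs in the format from the question:

```python
import re

# Match page.php URLs whose par1 value starts with "ABC" (ABC123, ABC456, ...).
# Note the escaped "." and "?" - both are regex metacharacters.
pattern = re.compile(r"/page\.php\?par1=ABC[^&]*")

urls = [
    "www.example.com/page.php?par1=ABC123=&par2=DEF456=",
    "www.example.com/page.php?par1=XYZ789=&par2=DEF456=",
]
keep = [u for u in urls if pattern.search(u)]
print(keep)  # only the par1=ABC... URL survives
```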
-
No problem
Hope you get it sorted!
-Andy
-
Thank you!
-
Haha, I think the train passed the station on that one. I would have realised eventually... XD
Thanks for your help!
-
Don't forget that . and ? have a specific meaning within regex; if you want to match them literally you will have to escape them. Also be aware that not all bots are capable of interpreting these patterns in robots.txt, so you might want to be more explicit on the user agent and only use them for Googlebot:
User-agent: Googlebot
# disallowing page.php and any parameters after it
Disallow: /page.php
# but leaving anything that starts with par1=ABC
Allow: /page.php?par1=ABC
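A rough way to sanity-check a Disallow/Allow pair like this: Google documents that the most specific (longest) matching rule wins. A simplified Python sketch of that precedence, ignoring * and $ handling and tie-breaking for brevity, with the rules and paths taken from the example above:

```python
# Simplified model of Googlebot's robots.txt precedence:
# among all rules whose path prefix matches, the longest rule wins.
RULES = [
    ("disallow", "/page.php"),
    ("allow", "/page.php?par1=ABC"),
]

def is_allowed(path: str) -> bool:
    verdict, best_len = True, -1  # crawling is allowed by default
    for kind, rule in RULES:
        if path.startswith(rule) and len(rule) > best_len:
            verdict, best_len = (kind == "allow"), len(rule)
    return verdict

print(is_allowed("/page.php?par1=ABC123=&par2=DEF456="))  # True
print(is_allowed("/page.php?par1=XYZ789="))               # False
```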
Dirk
-
Ah sorry I missed that bit!
-Andy
-
Disallowing them would be my first priority really, before removing from index.
The trouble with this is that if you disallow first, Google won't be able to crawl the page to act on the noindex. If you add a noindex flag, Google won't index them the next time it comes a-crawling, and then you will be good to disallow.
I'm not actually sure of the best way for you to get the noindex in to the page header of those pages though.
-Andy
-
Yep, have done. (Briefly mentioned in my previous response.) Doesn't pass.
-
I thought so too, but according to Google the trailing wildcard is completely unnecessary, and only needs to be used mid-URL.
-
Hi Andy,
Disallowing them would be my first priority really, before removing from index. Didn't want to remove them before I've blocked Google from crawling them in case they get added back again next time Google comes a-crawling, as has happened before when I've simply removed a URL here and there. Does that make sense or am I getting myself mixed up here?
My other hack of a solution would be to check the URL in page.php, and if the URL doesn't include par1=ABC then insert a noindex meta tag. (Not sure if that would work well or not...)
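That check is simple enough in principle. Here is the logic as a sketch, in Python rather than the PHP that page.php would actually use, and assuming the "keep indexed" test is simply an ABC prefix on par1:

```python
from urllib.parse import urlparse, parse_qs

def needs_noindex(url: str) -> bool:
    """True when the page should emit <meta name="robots" content="noindex">,
    i.e. when par1 is missing or doesn't start with the allowed ABC prefix."""
    par1 = parse_qs(urlparse(url).query).get("par1", [""])[0]
    return not par1.startswith("ABC")

print(needs_noindex("/page.php?par1=ABC123=&par2=DEF456="))  # False: keep indexed
print(needs_noindex("/page.php?par1=XYZ789="))               # True: add noindex
```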
-
My guess would be that this line needs an * at the end.
Allow: /page.php?par1=ABC*
-
Sorry Martijn, just to jump in here for a second - Ria, you can test this via the robots.txt testing tool in Search Console before going live, to make sure it works.
-Andy
-
Hi Martijn, thanks for your response!
I'm currently looking at something like this...
user-agent: *
# disallowing page.php and any parameters after it
disallow: /page.php
# but leaving anything that starts with par1=ABC
allow: /page.php?par1=ABC
I would have thought that you could disallow things broadly like that and give an exception, as you can with files in disallowed folders. But it's not passing Google's robots.txt Tester.
One thing that's probably worth mentioning is that there are only two values of the par1 parameter that I want to allow. For example's sake, ABC123 and ABC456. So it would need to be either a partial match or a "this or that" kind of deal, disallowing everything else.
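If it really is just those two exact values, one possible shape for this, worth running through the robots.txt Tester before going live, is an explicit Allow line per value (scoped to Googlebot, since other bots may not honour Allow precedence the same way):

```
User-agent: Googlebot
Disallow: /page.php
Allow: /page.php?par1=ABC123
Allow: /page.php?par1=ABC456
```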
-
Hi Ria,
I have never tried regular expressions in this way, so I can't tell you if this would work or not.
However, if all 1000 of these URLs are already indexed, just disallowing access won't then remove them from Google. You would ideally place a noindex tag on those pages and let Google act on it; then you will be good to disallow. I am pretty sure there is no option to noindex under the URL Parameters tool.
I hope that makes sense?
-Andy
-
Hi Ria,
What you could do, though it also depends on the rest of your structure, is disallow these URLs based on their parameters. In a worst-case scenario you could disallow all of the URLs and then add an exception via an Allow rule, to make sure the right URLs can still be indexed.
Martijn.