
The Moz Q&A Forum


Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

Partial Match or RegEx in Search Console's URL Parameters Tool?

Intermediate & Advanced SEO
  • Ria_
    Ria_ last edited by Oct 9, 2015, 5:55 AM

    So I currently have approximately 1000 of these URLs indexed, when I only want roughly 100 of them.

    Let's say the URL is www.example.com/page.php?par1=ABC123=&par2=DEF456=&par3=GHI789=

    All the indexed URLs follow that same kinda format, but I only want to index the URLs that have a par1 of ABC (but that could be ABC123 or ABC456 or whatever). Using the URL Parameters tool in Search Console, I can ask Googlebot to only crawl URLs with a specific value. But is there any way to get a partial match, using regex maybe?

    Am I wasting my time with Search Console, and should I just disallow any page.php without par1=ABC in robots.txt?
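
    For reference on the regex part of the question: Google's robots.txt matching isn't full regex. Every rule is a plain prefix match, and Googlebot additionally understands * as a wildcard and $ as an end-of-URL anchor, which is usually enough for this kind of partial match. A rough, untested sketch based on the example URL above:

    User-agent: Googlebot
    # rules are prefix matches, so this one already covers par1=ABC123, par1=ABC456, etc.
    Allow: /page.php?par1=ABC
    # block every other page.php URL; Google lets the longer, more specific Allow rule win
    Disallow: /page.php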

    • Andy.Drinkwater
      Andy.Drinkwater @Ria_ last edited by Oct 9, 2015, 7:54 AM

      No problem 🙂

      Hope you get it sorted!

      -Andy

      • Ria_
        Ria_ @DirkC last edited by Oct 9, 2015, 7:53 AM

        Thank you! 😄

        • Ria_
          Ria_ @Andy.Drinkwater last edited by Oct 9, 2015, 7:51 AM

          Haha, I think the train passed the station on that one. I would have realised eventually... XD

          Thanks for your help!

          • DirkC
            DirkC last edited by Oct 9, 2015, 7:52 AM (posted 7:47 AM)

            Don't forget that . and ? have a specific meaning within regex; if you want to use them for pattern matching you will have to escape them. Also be aware that not all bots are capable of interpreting regex in robots.txt, so you might want to be more explicit on the user agent and only use the regex-style rules for Googlebot.

            User-agent: Googlebot
            # disallowing page.php and any parameters after it
            Disallow: /page.php
            # but leaving anything that starts with par1=ABC
            Allow: /page.php?par1=ABC

            Dirk

            • Andy.Drinkwater
              Andy.Drinkwater @Ria_ last edited by Oct 9, 2015, 7:35 AM

              Ah sorry I missed that bit!

              -Andy

              • Andy.Drinkwater
                Andy.Drinkwater @Ria_ last edited by Oct 9, 2015, 7:34 AM (posted 7:33 AM)

                "Disallowing them would be my first priority really, before removing from index."

                The trouble with this is that if you disallow first, Google won't be able to crawl the page to act on the noindex. If you add a noindex flag, Google won't index them the next time it comes-a-crawling and then you will be good to disallow 🙂

                I'm not actually sure of the best way for you to get the noindex in to the page header of those pages though.

                -Andy

                • Ria_
                  Ria_ @Andy.Drinkwater last edited by Oct 9, 2015, 7:26 AM

                  Yep, have done. (Briefly mentioned in my previous response.) Doesn't pass 😞

                  • Ria_
                    Ria_ @Martijn_Scheijbeler last edited by Oct 9, 2015, 7:24 AM

                    I thought so too, but according to Google the trailing wildcard is completely unnecessary, and only needs to be used mid-URL.

                    • Ria_
                      Ria_ @Andy.Drinkwater last edited by Oct 9, 2015, 7:23 AM

                      Hi Andy,

                      Disallowing them would be my first priority really, before removing from index. Didn't want to remove them before I've blocked Google from crawling them in case they get added back again next time Google comes a-crawling, as has happened before when I've simply removed a URL here and there. Does that make sense or am I getting myself mixed up here?

                      My other hack of a solution would be to check the URL in page.php, and if the URL doesn't include par1=ABC then insert a noindex meta tag. (Not sure if that would work well or not...)
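
                      As a rough, untested sketch of that hack in page.php (keeping the par1=ABC pages indexable and adding noindex to everything else; the parameter name comes from the example URL above):

                      <?php
                      // hypothetical check near the top of page.php:
                      // keep URLs whose par1 value starts with "ABC" indexable, noindex the rest
                      $par1 = isset($_GET['par1']) ? $_GET['par1'] : '';
                      $noindex = (strpos($par1, 'ABC') !== 0);
                      ?>
                      <head>
                        <?php if ($noindex): ?>
                          <meta name="robots" content="noindex, follow">
                        <?php endif; ?>
                        <!-- rest of the head as usual -->
                      </head>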

                      • Martijn_Scheijbeler
                        Martijn_Scheijbeler @Ria_ last edited by Oct 9, 2015, 7:22 AM

                        My guess would be that this line needs an * at the end.
                        Allow: /page.php?par1=ABC*

                        • Andy.Drinkwater
                          Andy.Drinkwater @Ria_ last edited by Oct 9, 2015, 7:18 AM

                          Sorry Martijn, just to jump in here for a second: Ria, you can test this via the robots.txt Tester in Search Console before going live, to make sure it works.

                          -Andy

                          • Ria_
                            Ria_ @Martijn_Scheijbeler last edited by Oct 9, 2015, 7:14 AM

                            Hi Martijn, thanks for your response!

                            I'm currently looking at something like this...

                             User-agent: *
                             # disallowing page.php and any parameters after it
                             Disallow: /page.php
                             # but leaving anything that starts with par1=ABC
                             Allow: /page.php?par1=ABC

                            I would have thought that you could disallow things broadly like that and give an exception, as you can with files in disallowed folders. But it's not passing Google's robots.txt Tester.

                             One thing that's probably worth mentioning is that there are only two values of the par1 parameter that I want to allow. For example's sake, ABC123 and ABC456. So it would need to be either a partial match or a "this or that" kind of deal, disallowing everything else.
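
                             Something like this, perhaps (untested sketch; ABC123 and ABC456 being the example values above), with one Allow line per value:

                             User-agent: *
                             # block every page.php URL...
                             Disallow: /page.php
                             # ...except the two par1 values that should stay crawlable
                             Allow: /page.php?par1=ABC123
                             Allow: /page.php?par1=ABC456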

                            • Andy.Drinkwater
                              Andy.Drinkwater last edited by Oct 9, 2015, 7:11 AM

                              Hi Ria,

                              I have never tried regular expressions in this way, so I can't tell you if this would work or not.

                              However, if all 1000 of these URLs are already indexed, just disallowing access won't then remove them from Google. You would ideally place a noindex tag on those pages and let Google act on them; then you will be good to disallow. I am pretty sure there is no option to noindex under the URL Parameters tool.

                              I hope that makes sense?

                              -Andy

                              • Martijn_Scheijbeler
                                Martijn_Scheijbeler last edited by Oct 9, 2015, 7:01 AM

                                Hi Ria,

                                What you could do, though it also depends on the rest of your structure, is disallow these URLs based on the parameters. In a worst-case scenario you could disallow all of the URLs and then add an Allow exception as well, to make sure you still have the right URLs being indexed.

                                Martijn.


