
The Moz Q&A Forum


Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

PDF for link building - avoiding duplicate content

Intermediate & Advanced SEO
• BobGW, Feb 12, 2013, 5:21 PM

    Hello,

    We've got an article that we're turning into a PDF. Both the article and the PDF will be on our site. This PDF is a good, thorough piece of content on how to choose a product.

    We're going to strip out all of the links to our in the article and create this PDF so that it will be good for people to reference and even print. Then we're going to do link building through outreach since people will find the article and PDF useful.

    My question is, how do I use rel="canonical" to make sure that the article and PDF aren't duplicate content?

    Thanks.

• Marcus_Miller @BobGW, Feb 14, 2013, 8:21 PM

      Hey Bob

I think you should forget about any kind of perceived conventions and do whatever works best for your users and goals.

Again, look at Unbounce: that is a custom landing page with a homepage link (to share the love) but not the general site navigation.

They also use the footer to do a bit more link love, but really, do what works for you.

      Forget conventions - do what works!

      Hope that helps
      Marcus

• BobGW, Feb 14, 2013, 4:12 PM

I see, thanks! I think it's important not to have the ecommerce navigation on the page promoting the PDF. What would you say is ideal for the page that hosts the PDF: what kind of navigation and graphical header should it have?

• Marcus_Miller, Feb 14, 2013, 12:56 PM

Yep, check the HTTP headers with WebBug, or use one of the many browser plugins that will let you see the headers for the document.
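
If you have shell access, curl is another quick way to confirm the header is being sent (a minimal sketch using the example URLs from later in this thread; swap in your real paths):

    curl -sI http://www.domain.com/pdfs/white-papers.pdf

If the .htaccess rule is working, the response headers should include a line like:

    Link: <http://www.domain.com/articles/article.html>; rel="canonical"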

That said, I would push to drive the links to the page rather than to the document itself: just create a nice page that houses the document and make that the link target.

You could even make the PDF available only by email once they have signed up, or some such, since canonical is only a directive and you would still be better off getting those links flowing into a real page on the site.

You could even offer up some ready-made HTML that links to your main page, to make it easier for folks to link. If you take a look at any savvy infographic campaigns, folks try to draw links into a page rather than into the image itself, for the very same reasons.

If you look at something like the Noob Guide to Online Marketing from Unbounce, then you will see something like this as the suggested linking code:

    <a href="http://unbounce.com/noob-guide-to-online-marketing-infographic/">
      <img src="..." alt="The Noob Guide to Online Marketing - Infographic" />
    </a>
    <a href="http://unbounce.com/">Unbounce – The DIY Landing Page Platform</a>

          So, the image is there but the link they are pimping is a standard page:

          http://unbounce.com/noob-guide-to-online-marketing-infographic/

They also cheekily add an extra homepage link as well, with some keywords and the brand, so if folks don't remove it they still get that benefit.

          Ultimately, it means that when links flood into the site they benefit the whole site rather than just promote one PDF.

          Just my tuppence! 
          Marcus

• BobGW @Marcus_Miller, Feb 14, 2013, 12:43 PM

            Thanks for the code Marcus.

Actually, the PDF is what people will be linking to. It's a guide for websites. I think the PDF will be much easier to promote than the article. I assume so, anyway.

            Is there a way to make sure my canonical code in htaccess is working after I insert the code?

            Thanks again,

            Bob

• Marcus_Miller, Feb 14, 2013, 8:55 AM (edited Feb 16, 2013, 9:02 PM)

              Hey Bob

There is a much easier way to do this: simply keep the PDFs that you don't want indexed in a folder that you block in robots.txt. This way you can just drop PDFs into articles and link to them, knowing full well these files will not be indexed.

Assuming you had a PDF called article.pdf in a folder called pdfs/, the following would prevent indexation:

User-agent: *
Disallow: /pdfs/

              Or to just block the file itself:

User-agent: *
Disallow: /pdfs/yourfile.pdf

Additionally, there is no reason not to add the canonical link as well; if you find people are linking directly to the PDF, having it would ensure that the equity associated with those links is correctly attributed to the parent page (always a good thing).

Header add Link '<http://www.url.co.uk/pdfs/article.html>; rel="canonical"'
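
Note that a bare Header directive in .htaccess applies to every response from that directory. To target just the one PDF, you would typically wrap it in a Files block (a sketch assuming Apache's mod_headers is enabled, using the same placeholder URL):

    <Files "article.pdf">
      Header add Link '<http://www.url.co.uk/pdfs/article.html>; rel="canonical"'
    </Files>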

Generally, there are better ways to block indexation than robots.txt, but in the case of PDFs we really don't want these files indexed: they make for such poor landing pages (no navigation), and we certainly want to remove any competition or duplication between the page and the PDF. So in this case it makes for a quick, painless and suitable solution.

              Hope that helps!
              Marcus

• BobGW, Feb 13, 2013, 4:15 AM

                Thanks ThompsonPaul,

                Say the pdf is located at

                domain.com/pdfs/white-papers.pdf

                and the article that I want to rank is at

                domain.com/articles/article.html

                do I simply add this to my htaccess file?:

Header add Link '<http://www.domain.com/articles/article.html>; rel="canonical"'

• ThompsonPaul @BobGW, Feb 13, 2013, 3:20 AM

                  You can insert the canonical header link using your site's .htaccess file, Bob. I'm sure Hostgator provides access to the htaccess file through ftp (sometimes you have to turn on "show hidden files") or through the file manager built into your cPanel.

                  Check tip #2 in this recent SEOMoz blog article for specifics:
                  seomoz.org/blog/htaccess-file-snippets-for-seos

                  Just remember too - you will want to do the same kind of on-page optimization for the PDF as you do for regular pages.

                  • Give it a good, descriptive, keyword-appropriate, dash-separated file name. (essential for usability as well, since it will become the title of the icon when saved to someone's desktop)
                  • Fill out the metadata for the PDF, especially the Title and Description. In Acrobat it's under File -> Properties -> Description tab (to get the meta-description itself, you'll need to click on the Additional Metadata button)
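
If you'd rather script this than click through Acrobat, exiftool can write the same fields (a sketch; the title, description, and file name here are made-up examples):

    exiftool -Title="How to Choose a Product" \
             -Description="A thorough guide to choosing the right product." \
             white-papers.pdf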

I'd be tempted to build the links to the HTML page as much as possible, as those will directly help ranking, unlike the PDF's inbound links, which will have to pass their link juice through the canonical (assuming you're using it). Plus, the visitor will get a preview of the PDF's content and context from the rest of your site, which may increase trust and engender further engagement.

Your comment about links in the PDF got kind of muddled, but you'll definitely want to make certain there are good links and calls to action back to your website within the PDF, preferably on each page. Otherwise there's no clear "next step" back to a purchase on your site for users reading the PDF. Make sure to put Analytics tracking tags on these links so you can assess the value of traffic generated back from the PDF; otherwise that traffic will just appear as Direct in your Analytics.
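
Standard Google Analytics UTM parameters on each link in the PDF will do the trick (a sketch with made-up values; adjust the URL and campaign names to suit):

    https://www.domain.com/products/?utm_source=white-paper&utm_medium=pdf&utm_campaign=product-guide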

Hope that all helps!

                  Paul

• BobGW, Feb 13, 2013, 2:48 AM (edited Feb 13, 2013, 3:59 AM)

                    Can I just use htaccess?

                    See here: http://www.seomoz.org/blog/how-to-advanced-relcanonical-http-headers

                    We only have one pdf like this right now and we plan to have no more than five.

                    Say the pdf is located at

                    domain.com/pdfs/white-papers.pdf

                    and the article that I want to rank is at

                    domain.com/articles/article.pdf

                    do I simply add this to my htaccess file?:

Header add Link '<http://www.domain.com/articles/article.pdf>; rel="canonical"'

• BobGW, Feb 12, 2013, 6:22 PM

How do I know if I can set an HTTP header like that? I'm using shared hosting through HostGator.

• DoRM @BobGW, Feb 12, 2013, 6:11 PM

PDFs tend not to rank as well as normal webpages. They still rank, don't get me wrong; we have over 100 PDF pages that get traffic for us. The main version is really up to you: what do you want to show in the search results? I think it would be easier to rank a normal webpage, though. If you use rel="canonical", it will pass most of the link juice; not all, but most.

• BobGW @DoRM, Feb 12, 2013, 5:59 PM

                            Thank you DoRM,

I assume that the PDF is what I want as the main version, since that is what I'll be marketing, but I could be wrong. What if I get backlinks to both pages; will both sets of backlinks count?

• DoRM, Feb 12, 2013, 5:38 PM (edited Feb 16, 2013, 9:02 PM)

Indicate the canonical version of a URL by responding with the Link rel="canonical" HTTP header. Adding rel="canonical" to the head section of a page is useful for HTML content, but it can't be used for PDFs and other file types indexed by Google Web Search. In these cases you can indicate a canonical URL by responding with the Link rel="canonical" HTTP header, like this (note that to use this option, you'll need to be able to configure your server):

Link: <http://www.example.com/downloads/white-paper.pdf>; rel="canonical"

                              Google currently supports these link header elements for Web Search only.

You can read more here: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139394
