Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Duplicate title while setting canonical tag.
-
Hi Moz fans,
My website, https://finance.rabbit.co.th/, offers financial services, so our main keywords are around "insurance" in Thai. Today, though, I ran into an issue with the canonical tag.
We have URLs like https://finance.rabbit.co.th/car-insurance?showForm=1&brand_id=9&model_id=18&car_submodel_id=30&ci_source_id=rabbit.co.th&car_year=2014, and all of them - around 5,000 items - have their canonical set to this URL: https://finance.rabbit.co.th/car-insurance. But our site audit tool is now warning about Duplicate Page Title (Canonical), so is it possible this will hurt our ranking?
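To make that concrete, every one of those parameterized URLs carries the same canonical link element in its <head>, along these lines:
<link rel="canonical" href="https://finance.rabbit.co.th/car-insurance" />
so roughly 5,000 different query-string variations all point back to the one clean URL.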
What should we do: set noindex, nofollow on every URL that begins with "?", or leave them as they are?
-
Using the disallow directive in the robots.txt file is probably your best bet for making sure our tools don't crawl those pages and report duplicate page titles.
That said, I'm not an SEO expert, so it might be worth checking in with a web developer to see if they have different suggestions.

-
Thanks, guys, and sorry for the late reply.
@tawnycase, I need to set up robots.txt to ignore those links, right? So in this case I should disallow the ?parameter URLs, because I don't want to set noindex on the main folders.
-
Hi there! Tawny from the Help Team here.
Even with a noindex, nofollow tag on those pages, our tools will still crawl them and report on everything up to that tag. The best way to prevent our crawler from accessing these dynamically tagged pages is to block it using the disallow directive in your robots.txt file. It would look something like this:
User-agent: Rogerbot
Disallow: /*?showForm=
and so on, until you have blocked all of the parameters or tags that may be causing these errors. You can also use the wildcard user-agent * to block all crawlers from those pages, if you prefer.
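Putting that together for the example URL in this thread, a fuller sketch might look something like this (the exact rules are illustrative and assume the crawler honors standard prefix and wildcard matching; adjust them to whatever parameters your site actually uses):
User-agent: Rogerbot
# Block only the parameterized car-insurance URLs; the clean URL stays crawlable
Disallow: /car-insurance?
# Individual parameters can also be targeted with wildcards, for example:
# Disallow: /*?showForm=
# Disallow: /*?*brand_id=
# Or, most broadly, block every URL that carries a query string:
# Disallow: /*?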
Here is a great resource about the robots.txt file that might be helpful: https://moz.com/learn/seo/robotstxt
I hope this helps!
-
You'll definitely want to keep that canonical tag in place. Some tools don't recognize canonicals, so I wouldn't worry too much about duplicate notifications caused by parameters like that. If you noindex that page, it will apply to the root of that URL, not strictly the parameterized version.
Related Questions
-
Canonical or hreflang?
I have four English sites for four different countries: the UK, Ireland, Australia and New Zealand, and I want to share some content between the sites. On the pages that share the content, which is essentially exactly the same on all 4 sites, do I use hreflang tags (see the sketch below), or do I add a canonical tag to the other three pointing to the "origin", which would be the UK site? I believe it is best practice to use one or the other, but I'm not sure which makes sense in this situation.
Technical SEO | | andrew-mso0 -
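For a four-country setup like the one described above, the hreflang route would mean annotations along these lines in the <head> of every page that shares the content (the domains here are placeholders, not the actual sites):
<link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/shared-page/" />
<link rel="alternate" hreflang="en-ie" href="https://www.example.ie/shared-page/" />
<link rel="alternate" hreflang="en-au" href="https://www.example.com.au/shared-page/" />
<link rel="alternate" hreflang="en-nz" href="https://www.example.co.nz/shared-page/" />
with the same four-line set repeated on each of the four versions. The canonical route would instead mean the Irish, Australian and New Zealand copies each carry a canonical tag pointing at the UK URL.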
Duplicate titles from hreflang variations
Hi, I am working on a large global site which has around 9 different language variations. We have set up the hreflang tags and referenced the corresponding content for each one. (We have not implemented an x-default reference, as we felt it was not necessary.) Using DeepCrawl and Search Console, we can see that these language variations are causing duplicate title issues - many of them. My assumption was that hreflang would have alleviated this issue and informed Google what is going on, but I wanted to see if anyone has experience with this kind of thing. It would be good to understand the best-practice approach to dealing with the problem. Is it even an issue at all, or are the tools just being over-sensitive? Thank you in advance.
Technical SEO | | NickG-1230 -
Duplicated titles and meta descriptions
Hi, Dealing with both my duplicated titles and duplicated meta descriptions, I'm wondering if there's a "quick" win I could potentially implement ASAP. A bit of background:
Say I have 4 pages structured this way: domain.com/us/productA.html for the US, domain.com/gb/productA.html for the UK, domain.com/fr/productA.html for France, and domain.com/de/productA.html for Germany. At the moment, both my page titles and meta descriptions are duplicated all over the place for product A.
The title reads "Product A - company name".
The MD is a bit better, being translated into all 3 languages (EN, FR, DE), and is therefore the same for the US and the UK. Ideally, I would have unique page titles and MDs everywhere. However, due to time and resource constraints, I can't make that happen overnight. So my questions are pretty simple:
1. Can I create a rule for page titles to be "Product A - country - company name" or similar? Would that be enough to make the page titles unique? Is there any value in doing so?
2. Can I "localize" duplicate MDs by simply naming the country? I assume that is not enough in this case, as all the rest would be copy/pasted. Ideally, both my page titles and MDs would be completely unique, but I can't afford to do that in the short term. Thanks!
Technical SEO | | GhillC0 -
Quick Fix to "Duplicate page without canonical tag"?
When we pull up Google Search Console, in the Index Coverage section, under the Excluded category, there is a sub-category called 'Duplicate page without canonical tag'. The majority of the 665 pages in that section are from a test environment. If we were to include a wildcard rule in the robots.txt file covering every URL that starts with that root ("www.domain.com/host/"), along the lines of the sketch below, could we eliminate the majority of these errors? That solution is not one of the 5 or 6 recommended solutions that the Google Search Console Help section suggests. It seems like a simple, effective solution. Are we missing something?
Technical SEO | | CREW-MARKETING1 -
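For the question above, the kind of wildcard rule being described might look something like this (a sketch, assuming the test environment lives entirely under the /host/ path):
User-agent: *
# Keep all crawlers out of the test environment
Disallow: /host/
Bear in mind that robots.txt blocks crawling rather than indexing, so pages that are already indexed may take a while to drop out of reports.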
How long does it take for canonical tags to work
How long, on average, does it take for a canonical tag to work? I understand that canonicals are just a suggestion, but after adding a canonical tag and submitting the page via Google fetch, assuming Google follows the canonical, would you expect it to take effect after a day or two, or does it take longer? We added canonicals to old PPC landing pages that are ranking organically, though our new landing pages (which we want to rank organically) are not identical and have a bit more content and features. They are similar, though. Canonicals were added to the old pages (pointing to the new pages) and indexing was requested via Search Console. The old pages are still ranking and the new pages not so much. FYI, we are unable to 301 the old PPC pages for other, non-negotiable reasons, unfortunately. Thanks.
Technical SEO | | SoulSurfer80 -
Canonical for duplicate pages in ecommerce site and the product out of stock
I'm the SEO for an ecommerce site that sells shoes. I have duplicate pages for different colors of the same product (a unique URL for each color). Conventionally, I have added canonical tags to each page, which point to a specific product URL. My question is: what happens when the product that Googlebot is directed to is out of stock but is still listed in the canonical tag?
Technical SEO | | shoesonline0 -
Two different canonical tags on one page
Due to an error, some of my pages now have two canonical tags on them. One is correct and the other goes to a nonsense URL (404 page). I know I should ideally remove the incorrect ones, but it's a big manual job. Are they doing any harm? Can I just leave them there and let Google figure it out? The correct ones are higher up in the code. Will this make a difference? Any help appreciated.
Technical SEO | | ShearingsGroup0 -
Does Google pass link juice a page receives if the URL parameter specifies content and has the Crawl setting in Webmaster Tools set to NO?
The page in question receives a lot of quality traffic but is only relevant to a small percent of my users. I want to keep the link juice received from this page but I do not want it to appear in the SERPs.
Technical SEO | | surveygizmo0