Do long UTM codes hurt SEO?
-
Since most UTM codes/URLs are longer than 70ish characters, is this hurting my SEO? If it is, how can I solve the problem while still using a UTM code? Thanks!
-
The correct way to use UTM tracking without hurting your SEO efforts is to make sure you’ve implemented your canonical tags correctly. You should add self-referring canonical tags, which will prevent multiple versions of the same page from being indexed.
For example, a page reached through a UTM-tagged URL should have a canonical tag that points to the clean version of the URL, without the tracking parameters.
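As a minimal illustrative sketch (the URLs and parameter values below are placeholders, not the ones from the original post), a campaign URL such as
https://www.example.com/landing-page/?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale
should carry a canonical link element in its <head> pointing at the clean URL:
<link rel="canonical" href="https://www.example.com/landing-page/" />
Because every UTM-tagged variant references the same parameter-free URL, Google can consolidate those duplicates into the one canonical page.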
If you have pages with these parameters on your site, then you should use the rel="canonical" tag to specify the canonical URL that you'd like Google to rank.
I hope this information answers your question. If you consider my answer good enough, don't forget to mark it as a Good Answer.
Regards, and have a great day.
Related Questions
-
Barba Plugin and SEO
Hello, community! My client wants to use the barba.js plugin for their new site. What are the implications for SEO?
Technical SEO | SimpleSearch
-
Domain Masking SEO Impact
I hope I am explaining this correctly; if I need to provide any clarity, please feel free to ask. We currently use a domain mask on an external platform that points back to our site. We are a non-profit, and the external site allows users to create peer-to-peer fundraisers that benefit our ministry. Currently we get many meta issues related to this site, as well as broken links when fundraisers expire, etc. We do not have a need to rank for the information from this site. Is there a way to handle these pages so that they are not part of the search engine crawls as they relate to our site?
Technical SEO | SamaritansPurse
-
Static Links in Sidebar Hurting SEO?
Our website currently has a sidebar/widget area that appears on almost all pages throughout our entire site (350-page domain). In that sidebar, we have some static links and some non-static links. Right now there are:
6 Related Post Links - Non-Static
1 Call To Action - Static to a landing page
10 Calculators - Static - These calculators I think are very useful to our users (financial website).
So in total there are 17 sidebar links: 11 static links and 6 which change based on the content of the page. Do you think these static links could be hurting us from an SEO perspective? Is there some sort of best practice for sidebar links in regards to quantity as well as static vs non-static? Thanks!
Technical SEO | DemiGR
-
How long does it take to rank easy keywords?
I have an established site with low keyword rankings, and the keyword I want to rank for is rated below 10 on Moz. It has been a few days since I published the article.
Technical SEO | Begbie2006
-
SEO value of InDesign pages?
Hi there, my company is exploring creating an online magazine built with Adobe's InDesign toolset. If we proceeded with this, could we make these pages "as spiderable" as normal html/css webpages? Or are we limited to them being less spiderable, or not at all spiderable?
Technical SEO | TheaterMania
-
Exclude status codes in Screaming Frog
I have a very large ecommerce site I'm trying to spider using Screaming Frog. The problem is that the crawl keeps hanging, even though I have turned off the high memory safeguard under Configuration. The site has approximately 190,000 pages according to the results of a Google site: command. The site architecture is almost completely flat. Limiting the search by depth is a possibility, but it would take quite a bit of manual labor, as there are literally hundreds of directories one level below the root. There are many, many duplicate pages. I've been able to exclude some of them from being crawled using the exclude configuration parameters. There are also thousands of redirects. I haven't been able to exclude those from the spider because they don't have a distinguishing character string in their URLs. Does anyone know how to exclude URLs by status code? I know that would help. If it helps, the site is kodylighting.com. Thanks in advance for any guidance you can provide.
Technical SEO | DonnaDuncan
-
How long does it take for rank to return after 301?
Hello all -- just looking for those who have implemented site-wide 301 redirects due to a domain change. I am about 2 weeks into mine and am seeing drops of between 5 and 21 spots for many of my targeted terms. My pages have strong GPR, with most being 3 or higher. Has anyone out there been able to track a ballpark of when rank will return? The 301 redirects were implemented correctly, with 1-to-1 setups in most cases. Google has updated the listings with the new domain name, but I'm taking a huge rank hit. Any experience on how long I can expect (I know it's different for every situation) would be great! Thanks.
Technical SEO | Bandicoot
-
Image Size for SEO
Hi there, I have a website with some PNG images on its pages, around 300 KB - is this too much? How many KB per page is reasonable? To what extent, in your experience, does Google care about page load speed? Is every KB important? Is there a limit? Any advice much appreciated.
Technical SEO | pauledwards