Unsolved: Capturing Source Dynamically for UTM Parameters
-
Does anyone have a tutorial on how to dynamically capture the referring source to be populated in UTM parameters for Google Analytics?
We want to syndicate content and be able to see all of the websites that provided referral traffic for this specific objective. We want to set a specific utm_medium and utm_campaign but have the utm_source be dynamic and capture the referring website.
If we set a permanent utm_source, it would appear the same for all incoming traffic.
Thanks in advance!
-
@peteboyd said in Capturing Source Dynamically for UTM Parameters:
Thanks in advance!
UTM (Urchin Tracking Module) parameters are tags that you can add to the end of a URL in order to track the effectiveness of your marketing campaigns. These parameters are used by Google Analytics to help you understand how users are interacting with your website and where they are coming from.
There are five different UTM parameters that you can use:
utm_source: This parameter specifies the source of the traffic, such as "google" or "facebook".
utm_medium: This parameter specifies the medium of the traffic, such as "cpc" (cost-per-click) or "social".
utm_campaign: This parameter specifies the name of the campaign, such as "spring_sale" or "promotion".
utm_term: This parameter specifies the term or keywords used in the campaign, such as "shoes" or "dress".
utm_content: This parameter specifies the content of the ad, such as the headline or the call-to-action.
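Put together, a fully tagged URL combining these parameters might look like this (the domain and all parameter values here are placeholders):
https://www.example.com/landing-page?utm_source=facebook&utm_medium=social&utm_campaign=spring_sale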
To capture the source dynamically for UTM parameters, you can use JavaScript to read the document.referrer property. This property returns the URL of the page that linked to the current page. You can then use this value to set the utm_source parameter dynamically.
For example, you might use the following code to set the utm_source parameter based on the referring URL:
var utmSource = '';
if (document.referrer.indexOf('google') !== -1) {
  utmSource = 'google';
} else if (document.referrer.indexOf('facebook') !== -1) {
  utmSource = 'facebook';
}

// Add the utm_source parameter to the URL
var url = 'http://www.example.com?utm_source=' + utmSource;
This code will set the utm_source parameter to "google" if the user came to the page from a Google search, or to "facebook" if the user came from Facebook. If the user arrived from any other source, the utm_source parameter will be left empty. You can then use this modified URL in your marketing campaigns to track their effectiveness and understand where your traffic is coming from.
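Since the goal in the original question is to capture any referring website rather than a hard-coded list, a more general variant can take the hostname straight from the referrer. This is a minimal sketch, not production code: example.com, the utm_medium value "syndication", and the utm_campaign value "content_syndication" are all placeholder assumptions, and it relies on the standard URL / URLSearchParams browser APIs.

var referrerHost = 'direct'; // fallback when there is no referrer (bookmarks, direct visits)
if (document.referrer) {
  // e.g. "partner-site.com" when the visitor clicked through from that site
  referrerHost = new URL(document.referrer).hostname;
}

// Keep utm_medium and utm_campaign fixed; only utm_source varies
var taggedUrl = new URL('https://www.example.com/');
taggedUrl.searchParams.set('utm_source', referrerHost);
taggedUrl.searchParams.set('utm_medium', 'syndication');
taggedUrl.searchParams.set('utm_campaign', 'content_syndication');

// taggedUrl.toString() is the URL to use in the campaign link

One caveat worth noting: document.referrer can be empty or truncated depending on the referring site's referrer policy, so the 'direct' fallback matters in practice.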
-
@peteboyd you can refer to this tutorial: https://www.growwithom.com/2020/06/16/track-dynamic-traffic-google-tag-manager/
It should meet your requirements perfectly: using GTM to replace a static value with the referring URL in your utm_source.
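For reference, that kind of GTM setup typically hinges on a Custom JavaScript Variable that returns the referring hostname. A minimal sketch of such a variable follows; the try/catch and the 'direct' fallback are robustness assumptions of mine, not something the linked tutorial prescribes, and it assumes the URL API is available in the visitor's browser.

function() {
  // GTM Custom JavaScript Variables are anonymous functions that return a value
  try {
    return document.referrer ? new URL(document.referrer).hostname : 'direct';
  } catch (e) {
    return 'direct'; // malformed or missing referrer
  }
}

The variable can then be referenced (e.g. as {{Referrer Hostname}}) in whatever tag or field needs the dynamic utm_source value.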
Related Questions
-
Best redirect destination for 18k highly-linked pages
Technical SEO question regarding redirects; I appreciate any insights on the best way to handle it.

Situation: We're decommissioning several major content sections on a website, comprising ~18k webpages. This is a well-established site (10+ years), and many of the pages within these sections have high-quality inbound links from .orgs and .edus.

Challenge: We're trying to determine the best place to redirect these 18k pages. For user experience, we believe the best option is the homepage, which has a statement about the changes to the site and links to the most important remaining sections. It's also the most important page on the site, so the boost from 301-redirected links doesn't seem bad. However, someone on our team is concerned that that many new redirected pages and links pointing at our homepage will trigger a negative SEO flag, and recommends instead that they all go to our custom 404 page (which also includes links to important remaining sections). What's the right approach here to preserve the remaining SEO value of these soon-to-be-redirected pages without triggering Google penalties?
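(For implementation reference only, since the question is about the destination rather than the mechanics: a section-wide 301 on an Apache server might look like the sketch below. The /old-section/ path and example.com are placeholder assumptions, and mod_alias is assumed to be available.)

# Send everything under a decommissioned section to the homepage with a 301
RedirectMatch 301 ^/old-section/.* https://www.example.com/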
Technical SEO | davidvogel1
-
Good to use disallow or noindex for these?
Hello everyone, I am reaching out to seek your expert advice on a few technical SEO aspects related to my website. Below are the specific areas I would like to discuss:

a. Double and triple filter pages: I have identified certain URLs on my website that have a canonical tag pointing to the main /quick-ship page. These URLs are as follows:
https://www.interiorsecrets.com.au/collections/lounge-chairs/quick-ship+black
https://www.interiorsecrets.com.au/collections/lounge-chairs/quick-ship+black+fabric
Considering the need to optimize my crawl budget, I would like your advice on whether it would be advisable to disallow or noindex these pages. My understanding is that by disallowing or noindexing these URLs, search engines can avoid wasting resources on crawling and indexing duplicate or filtered content.

b. Page URLs with parameters: I have noticed that some of my page URLs include parameters such as ?variant and ?limit. Although these URLs already have canonical tags in place, I would like to understand whether it is still recommended to disallow or noindex them to further conserve crawl budget. My understanding is that by doing so, search engines can prevent the unnecessary expenditure of resources on indexing redundant variations of the same content.

Additionally, I would be delighted if you could provide any suggestions regarding internal linking strategies tailored to my website's structure and content. Thank you in advance for your time and expertise. If you require any further information or clarification, please let me know. I look forward to hearing from you. Cheers!

Technical SEO | williamhuynh
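(A hedged illustration of what crawl-blocking those URL patterns could look like in robots.txt: the rules below are assumptions inferred from the URLs in the question, Google-style * wildcards are assumed, and note that Disallow only stops crawling, so pages blocked this way can still be indexed from links, which is why noindex and Disallow answer different questions.)

User-agent: *
# Filtered collection variants such as /quick-ship+black+fabric
Disallow: /collections/*/quick-ship+*
# Parameterised duplicates of canonical pages
Disallow: /*?variant=
Disallow: /*?limit=

-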
Need some help understanding SEO - Please help before I lose [pull out] all my hair
I'm new to SEO, and am stubbornly trying to educate myself. I have a telescope shop in Canada; it's a small business that we run on the side. We're driving lots of traffic through FB and our outreach programs, but I really want to increase our presence on search. We released a new website back in January and it killed some of our rankings. We're working our way back with a very specific set of efforts on regular SEO:

Metadata and titles, although it seems that's not super relevant
Building high-quality backlinks and eliminating any spammy backlinks
Rewriting product listings so that they are original content, though I'm not sure how important this is in e-commerce
Writing high-quality articles and blog posts
Working relevant keywords into our product pages and titles

I understand that good SEO is about pushing on all the levers and trying to make sure that your site is as valuable to the end user as possible. We're making some good progress, but I'm puzzled by the #1 shop in Canada. They don't put any apparent effort into SEO and they still rank #1 on every key product we compete with them on. I've worked with two separate, highly ranked and regarded SEO firms on this, and neither has been able to tell me why this other site ranks so highly.

Here's a specific example on a popular product that we both sell, the Celestron NexStar 8SE. Here's the link to Telescopes Canada's page for their Celestron 8SE: https://telescopescanada.ca/products/celestron-nexstar-8se-computerized-telescope-11069. Here's a link to the Celestron 8SE page from the manufacturer website: https://www.celestron.com/products/nexstar-8se-computerized-telescope. Telescopes Canada has just copied and pasted; there is no original content aside from adding the shipping and return policy to the tab and having some options for selecting accessories on the page. Here is our page: https://all-startelescope.com/products/celestron-nexstar-8se.

We have higher page authority and higher domain authority, and the keyword analyzer in Moz says that our page is higher quality than the Telescopes Canada page. I can't find a single metric on any tool (Ubersuggest, Moz, Ahrefs, Semrush) that says Telescopes Canada is a better site, or has a better NexStar 8SE product page. But they keep ranking ahead of us, right at the top of Google search. Our titles are good, our metadata is good (but I don't think that's been a serious ranking factor for about ten years). Our text is original, it's relevant, and we have healthy internal links to the page. According to Moz's page ranker it's 20 points higher than Telescopes Canada's page. We have invested in some excellent blog content, and we're adding new products to the website so that we rank for more keywords. All of those things are helping, but I fundamentally don't understand why Telescopes Canada is #1 almost across the board on every key product in our market. There is something that I'm not seeing here.

Can you see any metric, any tool in your toolbox, that indicates why they rank at the top, or even higher than we do, for these search terms specific to that product:
Celestron NexStar 8SE
NexStar 8SE
Celestron NexStar 8SE Canada
NexStar 8SE Canada

I have a feeling it's something technical that I'm missing, but I'm not sure how obvious it is with two 'professional' firms not finding it. I'd really appreciate any help or insight that you can offer.

Intermediate & Advanced SEO | nkennett
-
What's Causing My Extremely Low Bounce Rate
My client's site is reporting an under-10% bounce rate for all sources; direct is the highest at 8%. I'm no expert in GA, but I'm wondering if there is a problem with the analytics/tag manager code on the site. I'm especially concerned about the GTM body script being in an iframe, which I read could be trouble.

<!-- Google Tag Manager (noscript) -->
<noscript><iframe src="https://www.googletagmanager.com/ns.html?id=GTM-MWGMNW6"
height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript>
<!-- End Google Tag Manager (noscript) -->

You can see all the source code here: view-source:https://nfinit.com/

Reporting & Analytics | bradsimonis
-
Solved: How to solve orphan pages on a job board
Working on a website that has a job board with over 4,000 active job ads. All of these ads are listed on a single "job board" page, and obviously they don't all load at the same time. They are not linked to from anywhere else, so all tools list these job ad pages as orphans.

How much of a red flag are these orphan pages? Do sites like Indeed have this same issue? Their job ads are completely dynamic; how are those pages then indexed? We use Google's Search API to handle any expired jobs, so they are not the issue. It's the active but orphaned pages we are looking to solve. The site is hosted on WordPress.

What is the best way to solve this issue? Just create a job category page and link to each individual job ad from there? Any simpler and perhaps more obvious solutions? What does the website structure need to be like for the problem to be solved? Would appreciate any advice you can share!

Reporting & Analytics | Michael_M2
-
Blocking URLs with specific parameters from Googlebot
Hi, I've discovered that Googlebots are voting on products listed on our website and, as a result, are creating negative ratings by placing votes from 1 to 5 on every product. The voting function is handled using JavaScript, as shown below, and the script prevents multiple votes, so most products end up with a vote of 1, which translates to "poor".

How do I go about using robots.txt to block a URL with specific parameters only? I'm worried that I might end up blocking the whole product listing, which would result in de-listing from Google and the loss of many highly ranked pages.

DON'T want to block: http://www.mysite.com/product.php?productid=1234
WANT to block: http://www.mysite.com/product.php?mode=vote&productid=1234&vote=2

JavaScript button code: onclick="javascript: document.voteform.submit();"

Thanks in advance for any advice given. Regards, Asim

Technical SEO | aethereal
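(A hedged sketch of what such a rule could look like: Google honors * wildcards in robots.txt, and the pattern below is inferred from the example URLs in the question, so verify it in a robots.txt tester before going live.)

User-agent: *
# Block only the vote URLs; plain product pages like
# /product.php?productid=1234 remain crawlable
Disallow: /product.php?*mode=vote

-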
CGI Parameters: should we worry about duplicate content?
Hi, my question is about CGI parameters. I was able to dig up a bit of content on this, but I want to make sure I understand the concept of CGI parameters and how they can affect the indexing of pages. Here are two pages:

No CGI parameter appended to the end of the URL: http://www.nytimes.com/2011/04/13/world/asia/13japan.html
CGI parameter appended to the end of the URL: http://www.nytimes.com/2011/04/13/world/asia/13japan.html?pagewanted=2&ref=homepage&src=mv

Questions:
Can we safely say that CGI parameters = URL parameters that append to the end of a URL? Or are they different?
And given that you have rel canonical implemented correctly on your pages, will search engines move ahead and index only the URL that is specified in that tag?

Thanks in advance for giving your insights. Look forward to your response. Best regards, Jackson
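(For reference, the canonical tag being described would sit in the head of both URL variants and point at the clean URL; this sketch reuses the question's own example URL.)

<link rel="canonical" href="http://www.nytimes.com/2011/04/13/world/asia/13japan.html" />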
Technical SEO | jackson_lo