Track PDF files downloaded from my site
-
I came across this code for tracking PDF files, using a link to http://www.example.com/files/map.pdf:

1. map.pdf is the name of the PDF file and "files" is the folder name. Am I right?
2. What will I be able to track using the code given above: the number of clicks on the link, or how many people downloaded the PDF file?
3. Where in Google will this report be visible?

Thanks a lot.
-
Hi Atul,

Following are the responses to your queries.

1. 'map.pdf' refers to the name of the PDF, and 'files' refers to the directory in which it is stored.
2. Using the above code, you would be able to track clicks on the link, not the downloads themselves. To set up event tracking for a PDF download, I would recommend placing the following attribute inside your download link: `onClick="_gaq.push(['_trackEvent', 'download', 'Pdf', 'Mypdfname', 0, true]);"` (a value of 0 is passed so the non-interaction flag lands in the correct parameter slot). The final download link would then look like `<a href="/files/map.pdf" onClick="_gaq.push(['_trackEvent', 'download', 'Pdf', 'Mypdfname', 0, true]);">Download PDF</a>`.

Now that the event tracking script is in place, you need to set the event up as a goal in Google Analytics. To do that, follow these steps:

1. Open the profile you wish to set the goal up in.
2. Click the gear icon in the upper right corner of the Google Analytics interface.
3. Click the Goals tab (in the sub-navigation just below where your profile is listed).
4. Choose the goal set you wish to add the event to.
5. Name your goal and select the Event radio button.
6. Populate the following goal details:
   - Category | that matches | download
   - Action | that matches | Pdf
   - Label | that matches | Mypdfname
   - Value | that matches | (leave blank)
7. If you added a value, leave the "Use the actual Event Value" radio button selected.
8. Click "Save" and you're ready to go!

Hope it helps! Cheers!
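As a minimal sketch (not the author's exact markup), this is what the inline onClick handler above pushes into the classic ga.js command queue; the `/files/map.pdf` path is taken from the example URL in the question:

```javascript
// What the onClick attribute pushes into the classic ga.js command queue.
// A value of 0 is supplied explicitly so the non-interaction flag (true)
// lands in the correct parameter slot.
var _gaq = _gaq || [];

_gaq.push(['_trackEvent', 'download', 'Pdf', 'Mypdfname', 0, true]);

// The full link markup would then be roughly:
// <a href="/files/map.pdf"
//    onClick="_gaq.push(['_trackEvent', 'download', 'Pdf', 'Mypdfname', 0, true]);">
//   Download PDF
// </a>

console.log(_gaq[0].join('|'));
```

With the non-interaction flag set to `true`, a visitor who lands on the page and only downloads the PDF is still counted as a bounce.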
-
It doesn't. It gives you a number of views, because there are many different PDF solutions in internet browsers and plugins, and a user can even view it in the Google Docs viewer. But for many purposes it is sufficient to know that a user viewed the document.
You could implement a dedicated JavaScript tool to count the number of downloads. The code mentioned by nvs.nim (below) probably does exactly that.
Good luck
-
Thanks.
Does the code give me the number of downloads:
how many people have downloaded the PDF file, or
how many times the PDF file was downloaded?
-
Hi,
Ad 1: It's part of the URL, i.e. the address of the file you have on the server (for download). Obviously it's an example; you can name it differently.
Ad 2: It's optional. See my answer from 31/01; you can also read about it in the Google WT help.
Marek
-
OK, let me be very specific. You gave me the code with the link http://www.example.com/files/map.pdf:

1. What does files/map.pdf mean?
2. Is the label MyMap necessary, or can I omit it?

Thanks a lot for your patience :)
-
Hi again,
First you should decide on a tracking method. Next, read the Google article, and finally ask us a specific rather than a general question. I think that is fair advice.
-
I am still not sure about the EXACT code. Can you please give me the code to track how many people have downloaded PDF files from a site?
I will appreciate it greatly.
-
Hi Atul,
Did this answer your question, or do you still have questions about this topic?
-
Hi Steve,
Could you please tell me how? What specific technical solution do you use?
Thanks, Marek
-
We track our PDF downloads via the Content reports. This way, we know how many of each particular one were downloaded.
-
That does not look right, because that is virtual pageview tracking, or a mixture of both methods.
_trackPageview() creates a virtual directory structure and _trackEvent() creates an event...
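The distinction can be sketched with the classic `_gaq` command queue (the paths and names here are illustrative):

```javascript
// Two ways to record the same PDF download in classic Google Analytics.
var _gaq = [];

// Virtual pageview: the download appears in the Content reports as if a
// page at this (non-existent) path had been viewed.
_gaq.push(['_trackPageview', '/downloads/files/map.pdf']);

// Event: the download appears under the Event Tracking reports instead,
// grouped by category/action/label.
_gaq.push(['_trackEvent', 'download', 'Pdf', 'Mypdfname']);

console.log(_gaq[0][0] + ' vs ' + _gaq[1][0]);
```

Mixing both for the same link double-counts the download, which is why the answer above flags it.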
-
Hi,
I think that it's better to use event tracking than virtual page-view tracking.

Syntax and parameter explanation:

`_trackEvent(category, action, opt_label, opt_value, opt_noninteraction)`

where:

- category (required) - e.g. Maps. The name you supply for the group of objects you want to track.
- action (required) - e.g. Download. A string that is uniquely paired with each category, and commonly used to define the type of user interaction for the web object.
- label (optional) - e.g. EuropeMap, or the filename map.pdf. An optional string to provide additional dimensions to the event data.
- value (optional) - An integer that you can use to provide numerical data about the user event. It is too complicated for a quick explanation; you should read the full article, including additions like Implicit Count.
- non-interaction (optional) - set it to `true`. A boolean that, when set to `true`, indicates that the event hit will not be used in bounce-rate calculation.

You can read the full Google help article here: http://code.google.com/apis/analytics/docs/tracking/eventTrackerGuide.html

In your case the syntax would be:

`<a href="http://www.example.com/files/map.pdf" onClick="_gaq.push(['_trackEvent', 'PDF', 'Download', 'MyMap']);">Download</a>`

Then go to the "Content" section of the Analytics reports and view Event Tracking. You will have category: PDF, action: Download, label: MyMap, and a count of how many events you have.

Wish you a huge number of downloads. M.
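As a sketch of how the optional parameters behave, here is a hypothetical `trackEvent` helper (the wrapper itself is illustrative, not part of ga.js) that builds the command array and drops trailing optionals that were not supplied:

```javascript
var _gaq = [];

// Hypothetical wrapper around the classic signature:
// _trackEvent(category, action, opt_label, opt_value, opt_noninteraction)
function trackEvent(category, action, label, value, nonInteraction) {
  var cmd = ['_trackEvent', category, action, label, value, nonInteraction];
  // Trim trailing optional parameters that were left out.
  while (cmd[cmd.length - 1] === undefined) cmd.pop();
  _gaq.push(cmd);
}

// Category PDF, action Download, label MyMap; no value, no flag:
trackEvent('PDF', 'Download', 'MyMap');

console.log(JSON.stringify(_gaq[0]));
```

Note that because the parameters are positional, you cannot set `opt_noninteraction` without also supplying a value (e.g. 0) for `opt_value`.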
-
Using this code will track all the files you offer for download:

```javascript
/**
 * Measure file downloads in Google Analytics (classic ga.js, jQuery).
 */
var extensions = ['pdf','doc','docx','xls','csv','jpg','gif',
                  'mp3','swf','txt','ppt','zip','gz','dmg','xml'];
var download_ga_folder = '/downloads/';

$('a').each(function () {
  var _self = $(this);
  var u = _self.attr('href');
  if (typeof u !== 'undefined') {
    // Strip any query string before reading the file extension.
    var uext = (u.indexOf('?') !== -1) ? u.substring(0, u.lastIndexOf('?')) : u;
    var ext = uext.split('.')[uext.split('.').length - 1];
    // Check the extension against the tracked list.
    for (var i = 0; i < extensions.length; i++) {
      if (ext === extensions[i]) {
        _self.click(function () {
          pageTracker._trackPageview(download_ga_folder + _self.attr('href'));
          return false; // note: this also cancels the navigation/download itself
        });
        break;
      }
    }
  }
});
```
-
Hi Atul,
Your event tracking anatomy does not look right to me. Go here to see how to structure it correctly:
http://code.google.com/apis/analytics/docs/tracking/eventTrackerGuide.html#Anatomy
Regarding where the data should render in Google Analytics: go to Content, sub-category Events.
Good luck!