How does a search engine bot navigate past a .PDF link?
-
We have a large number of product pages that contain links to a .pdf of the technical specs for that product. These are all set up to open in a new window when the end user clicks.
If these pages are being crawled, and a bot follows the link for the .pdf, is there any way for that bot to continue to crawl the site, or does it get stuck on that dangling page because it doesn't contain any links back to the site (it's a .pdf) and the "back" button doesn't work because the page opened in a new window?
If this situation effectively stops the bot in its tracks and it can't crawl any further, what's the best way to fix this?
1. Add a rel="nofollow" attribute
2. Don't open the link in a new window so the back button remains functional
3. Both 1 and 2
or
4. Create specs on the page instead of relying on a .pdf
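For reference, here is roughly what options 1 and 2 look like in markup (a sketch; the .pdf filename is made up, not taken from the actual site):

```html
<!-- current setup: opens the spec sheet in a new window -->
<a href="/pdf/cfx12-mkii-specs.pdf" target="_blank">Technical Specs (PDF)</a>

<!-- option 1: rel="nofollow" hints that bots should not follow the link -->
<a href="/pdf/cfx12-mkii-specs.pdf" target="_blank" rel="nofollow">Technical Specs (PDF)</a>

<!-- option 2: drop target="_blank" so the link opens in the same window -->
<a href="/pdf/cfx12-mkii-specs.pdf">Technical Specs (PDF)</a>
```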
Here's an example page: http://www.ccisolutions.com/StoreFront/product/mackie-cfx12-mkii-compact-mixer - The technical spec .pdf is located under the "Downloads" tab [the content is all on one page in the source code - the tabs are just a design element]
Thoughts and suggestions would be greatly appreciated.
Dana
-
Thanks very much Christopher. This is an excellent explanation. What do you think of Charlie and EGOL's suggestions regarding making sure that there are links embedded in these PDFs pointing either back to the product page or even to the home page?
In your opinion, is this something worth doing? If so, why?
-
Hi Dana,
" ... you are right, one of the fundamental questions I still have is how does a bot behave when it finds an orphaned page like one of these? Does it just revert back to the sitemap and move one? Does it automatically go back to the last non-dead end page and move on from there? What does it do?"
Bots are not really like a single spider crawling around the web that can get trapped when it enters an orphaned page with no back button. When a bot enters a site, it creates a list of all the internal pages that are linked from the home page. Then it visits each page on that list and keeps adding newly discovered links to the list, adding only new unique pages and skipping duplicates, while keeping track of which pages it has already visited. When every page on the list has been visited once and no new pages are being discovered, the whole site has been crawled.
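In code terms, the process looks roughly like this: a minimal sketch in Python, where extract_links stands in for fetching a page and parsing out its links (it is not any real engine's implementation):

```python
from collections import deque

def crawl(start_url, extract_links):
    """Breadth-first crawl over a site's link graph."""
    visited = set()
    frontier = deque([start_url])   # discovered, not-yet-visited pages
    while frontier:
        url = frontier.popleft()
        if url in visited:          # skip duplicates
            continue
        visited.add(url)
        # A .pdf with no outbound links simply returns no new links here;
        # the loop continues with whatever is still in the frontier.
        for link in extract_links(url):
            if link not in visited:
                frontier.append(link)
    return visited
```

Because the frontier already holds every other discovered URL, a page with no links out (and no working back button) never strands the bot; it just yields nothing new and the crawl moves on.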
Best,
Christopher
-
Hi Don,
Thanks so much for responding, and while the answers I have received so far did give me some direction, you are right, one of the fundamental questions I still have is how does a bot behave when it finds an orphaned page like one of these? Does it just revert back to the sitemap and move on? Does it automatically go back to the last non-dead-end page and move on from there? What does it do?
Thanks for chiming in. I'd love it if someone more familiar with how a bot actually crawls links like this on a page would jump in with an answer.
Dana
-
Thanks Charlie. I think this is a good suggestion. I work 9-6 too, and just so happen to be the in-house SEO strategist, so this stuff is what I'm there to do. I don't mind the mundane aspects of SEO because the payoff is usually pretty rewarding! Now I know what I'm doing on Monday (on top of a dozen other things!)
Thanks again!
-
I would spend the time needed to do an assessment of these pages.
** how many of them have external links
** how many of them pull traffic from search or other sites
** how many of them are currently useful (are people looking at them)
I would delete (and redirect the URL of) any page that answers "no" to all three items above. These are "dead weight" on your site.
Also, if these are .pdfs of print ads, then they might simply be images in a PDF. (Test this by searching for an exact phrase from one of them in quotes and including site:yourdomain.com in the query.) Keep in mind that Google can read the text in some images embedded in PDFs.
I had a lot of .pdfs with images on one of my sites and got hit with a Panda problem. I think that Google saw the .pdfs as thin content. So I used rel=canonical, applied through .htaccess, to assign each of them to the most relevant page. The Panda problem was solved after a couple of months.
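For anyone wanting to replicate that, a minimal .htaccess sketch (my assumption: Apache with mod_headers enabled; the filename and target URL are illustrative). Google accepts rel=canonical as an HTTP header for non-HTML files like PDFs:

```apache
<IfModule mod_headers.c>
  # Point one print-ad PDF at the most relevant live page
  <Files "spring-print-ad.pdf">
    Header set Link "<http://www.example.com/current-specials>; rel=\"canonical\""
  </Files>
</IfModule>
```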
Also, keep in mind that .pdfs can be used for conversions. You can embed "add to cart" buttons and links into them and they will function just as on a web page.
If any of these .pdfs are pulling in tons of traffic, I would figure out how to put the .pdf to better use, or create a webpage (and redirect the .pdf to it) to best monetize, convert, or whatever your business goals dictate.
-
Can a bot navigate via a back button?
I don't think so. They can follow links but they can't "click".
-
Hi Dana
I think your question has been dodged a tad. I was always led to understand that a .pdf, or any page that opens in a new tab and does not link back to the original site (a dangling page), is not a problem. The reason is that crawlers don't really care how a page is opened. Because the crawler forks at every link and crawls onward from each fork, when it finds an orphan or dangling page it simply stops there. This is not an issue, since the crawler has already forked at every other link.
So the question is really how a search engine treats .pdfs rather than how it treats an orphan page. Maybe somebody who works with crawlers can confirm or educate us both on how they work.
Don
-
Many thanks to both you and EGOL for excellent answers!
-
Thanks EGOL. Yes, many of these .pdfs could be and are referenced by other sites. Given that there's no link from the .pdf back to our site, we really are missing out on a huge opportunity. I thought this might be the case as I pondered the whole concept of "dangling links" that was discussed in an SEOmoz blog post this week.
I agree about the last point regarding opening in a new window being more of a usability issue than a problem for SEO. I agree with you completely that opening in the same window is way better for the end user.
Can a bot navigate via a back button?
Thanks very much to both you and Charlie for your excellent answers!
-
lol, thank heavens they aren't spammy. However, they aren't particularly helpful either. You see, about 3,000 of them are old .pdf versions of print advertising campaigns, going back as far as 2005. They contain obsolete pricing, products, etc. Unfortunately, instead of being archived off the server, they've been continuously archived in a sub-directory of our main website.
Nearly all of it is indexed. It seems to me the best thing to do for these is to include a statement that the content is an old advertisement and include a link to our current "special offers" page.
What do you think of that as a strategy for at least giving engines and humans a means to navigate to someplace current on the site?
-
I see 6000 pdfs as an amazing opportunity. Get links on those pages and it will funnel a lot of power through your site.
If that was my site, we would be on that job immediately. Could be a huge gain for some easy work.
-
Go back and rework our .pdfs so they at least contain a link back to the homepage?
Yes! Absolutely! And, link them to other relevant pages. If these are reference documents they could be pulling in a lot of links and traffic from other sites.
As well as configure the hyperlinks so they open in the same window instead of a new one?
In my opinion, this is not an SEO issue. This is a usability issue. I would have them open in the same window so the back button is available.
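If you do rework thousands of PDFs, the link insertion can be scripted. A rough sketch using the pypdf Python library (my choice, not something mentioned in this thread; filenames, coordinates, and the URL are illustrative):

```python
from pypdf import PdfReader, PdfWriter
from pypdf.annotations import Link

reader = PdfReader("tech-specs.pdf")
writer = PdfWriter()
writer.append(reader)  # copy all pages into the writer

# Add a clickable rectangle on page 1 that points back to the site;
# in practice you would place it over visible text such as the domain name.
back_link = Link(
    rect=(50, 30, 300, 50),  # (x1, y1, x2, y2) in PDF points
    url="http://www.example.com/",
)
writer.add_annotation(page_number=0, annotation=back_link)

with open("tech-specs-linked.pdf", "wb") as out:
    writer.write(out)
```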
-
Thank you Charlie. In our case, our .pdfs contain no links in them at all. There is nothing to navigate a bot (or a human) out of the .pdf... not even the back button.
Considering that, and EGOL's response below, would the best course of action be to include, at the very least, an active link back to our homepage from all of our .pdf files?
We have as many as 6,000 .pdfs.
Thanks,
Dana
-
Thanks EGOL,
Yes, I understand well that .pdf documents can be indexed. That's not my concern. My concern is that a bot that navigates to one of our many .pdf tech spec documents, which, incidentally, contain no outbound links to anything, would then become trapped and not be able to continue crawling the site. This is particularly true because we have them set up to open in a new window. In the example above, sure, there's a text reference back to the site "www.kingdom.com", but it isn't a live link in the .pdf. There are no links in any of our .pdfs.
So, what is the best way to deal with this? Go back and rework our .pdfs so they at least contain a link back to the homepage? As well as configure the hyperlinks so they open in the same window instead of a new one?
-
.pdf documents are crawled by bots and they accumulate pagerank just like .html pages.
You can include links in them to other documents on the web and bots will crawl those links and pagerank will flow through them.
.pdf documents can be given a "title tag" equivalent by opening their properties and giving the document a title. This title will display in the SERPs. .pdf documents can be hard to beat in the SERPs if they are optimized and have links from a competitive number of other web documents.
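Setting that title programmatically is scriptable as well. A sketch assuming the pypdf Python library (the filename and title text are made up; Acrobat's document properties dialog does the same thing by hand):

```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("mixer-specs.pdf")
writer = PdfWriter()
writer.append(reader)  # copy all pages into the writer

# The /Title metadata field is what shows up as the "title tag" in the SERPs
writer.add_metadata({"/Title": "CFX12 MkII Compact Mixer - Technical Specifications"})

with open("mixer-specs-titled.pdf", "wb") as out:
    writer.write(out)
```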
Lots of document formats behave this way. Excel, PowerPoint, Word for example.
In my opinion, .pdf documents can trigger a Panda problem for your site if you have a lot of them with trivial or duplicate content (as in print versions of web documents). They can be given rel=canonical through .htaccess to solve the Panda problem, but Google often takes a long, long time (sometimes months) to recognize the canonical and act on that instruction.