How does a search engine bot navigate past a .PDF link?
-
We have a large number of product pages that contain links to a .pdf of the technical specs for that product. These are all set up to open in a new window when the end user clicks.
If these pages are being crawled, and a bot follows the link for the .pdf, is there any way for that bot to continue to crawl the site, or does it get stuck on that dangling page because it doesn't contain any links back to the site (it's a .pdf) and the "back" button doesn't work because the page opened in a new window?
If this situation effectively stops the bot in its tracks and it can't crawl any further, what's the best way to fix this?
1. Add a rel="nofollow" attribute
2. Don't open the link in a new window so the back button remains functional
3. Both 1 and 2
or
4. Create specs on the page instead of relying on a .pdf
Here's an example page: http://www.ccisolutions.com/StoreFront/product/mackie-cfx12-mkii-compact-mixer - The technical spec .pdf is located under the "Downloads" tab [the content is all on one page in the source code - the tabs are just a design element]
Thoughts and suggestions would be greatly appreciated.
Dana
-
Thanks very much Christopher. This is an excellent explanation. What do you think of Charlie and EGOL's suggestions regarding making sure that there are links embedded in these PDFs pointing either back to the product page or even to the home page?
In your opinion, is this something worth doing? If so, why?
-
Hi Dana,
" ... you are right, one of the fundamental questions I still have is how does a bot behave when it finds an orphaned page like one of these? Does it just revert back to the sitemap and move one? Does it automatically go back to the last non-dead end page and move on from there? What does it do?"
Bots are not really like a single spider crawling around the web that can get trapped when it enters an orphaned page with no back button. When a bot enters a site, it creates a list of all the internal pages linked from the home page. Then it visits each page on that list and keeps adding newly discovered linked pages to the list. Each time it adds pages, it adds only new, unique pages and skips duplicates. It also keeps track of which pages it has already visited. When every page has been visited once and no new pages are being discovered that aren't already on the list, the whole site has been crawled.
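To make that concrete, here is a minimal sketch of the frontier-plus-visited-set logic described above (the fetch_links function, the URLs, and the toy site are hypothetical stand-ins, not any real crawler's code):

```python
from collections import deque

def crawl(start_url, fetch_links):
    """Breadth-first crawl: a queue of pages to visit plus a visited set."""
    visited = set()
    frontier = deque([start_url])
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue                        # never crawl a duplicate twice
        visited.add(url)
        for link in fetch_links(url):       # a link-less PDF yields nothing
            if link not in visited:
                frontier.append(link)
    return visited

# Hypothetical site: the PDF at /spec.pdf has no outbound links.
site = {
    "/": ["/products", "/spec.pdf"],
    "/products": ["/", "/spec.pdf"],
    "/spec.pdf": [],                        # the "dangling" page
}
print(crawl("/", lambda url: site.get(url, [])))
# all three pages are visited; the crawl never gets stuck
```

A dangling .pdf simply contributes an empty list of links, so that one branch ends while the crawl carries on from every page already queued; no "back button" is involved at all.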
Best,
Christopher
-
Hi Don,
Thanks so much for responding and while the answers I have received so far did give me some direction, you are right, one of the fundamental questions I still have is how does a bot behave when it finds an orphaned page like one of these? Does it just revert back to the sitemap and move on? Does it automatically go back to the last non-dead-end page and move on from there? What does it do?
Thanks for chiming in. I'd love it if someone more familiar with how a bot actually crawls links like this on a page would jump in with an answer.
Dana
-
Thanks Charlie. I think this is a good suggestion. I work 9-6 too, and just so happen to be the in-house SEO strategist, so this stuff is what I'm there to do. I don't mind the mundane aspects of SEO because the payoff is usually pretty rewarding! Now I know what I'm doing on Monday (on top of a dozen other things!)
Thanks again!
-
I would spend the time needed to do an assessment of these pages.
* how many of them have external links
* how many of them pull traffic from search or other sites
* how many of them are currently useful (are people looking at them)
I would delete (and redirect the URL of) any page that answers "no" to all three items above. These are "dead weight" on your site.
Also, if these are .pdfs of print ads then they might simply be images in a pdf. (Test this by searching for an exact phrase from one of them in quotes and including site:yourdomain.com in the query.) Keep in mind that google can read the text in some images embedded in pdfs.
I had a lot of pdfs with images on one of my sites and got hit with a panda problem. I think that Google thought that the .pdfs were thin content. So I used rel=canonical to assign them to the most relevant page using .htaccess. The panda problem was solved after a couple of months.
Also, keep in mind that .pdfs can be used for conversions. You can embed "add to cart" buttons and links into them and they will function just as on a web page.
If any of these pdfs are pulling in tons of traffic I would figure out how I can put the pdf to better use, or create a webpage (and redirect the pdf to it) to best monetize/convert or whatever your business goals dictate.
-
Can a bot navigate via a back button?
I don't think so. They can follow links but they can't "click".
-
Hi Dana
I think your question has been dodged a tad. I was always led to understand that a .pdf, or any page that opens in a new tab and does not link back to the original site (a dangling page), is not a problem. The reason is that crawlers don't really care how a page is opened. Because the crawler forks at every link and crawls each new page from each fork, when it finds an orphan or dangling page it just stops there. This is not an issue, since the crawler has already forked at every other link.
So the question is how a search engine treats .pdfs rather than how it treats an orphan page. Maybe somebody who works with crawlers can confirm or educate us both on how they work.
Don
-
Many thanks to both you and EGOL for excellent answers!
-
Thanks EGOL. Yes, many of these .pdfs could be and are referenced by other sites. Given that there's no link from the .pdf back to our site, we really are missing out on a huge opportunity. I thought this might be the case as I pondered the whole concept of "dangling links" that was discussed in an SEOmoz blog post this week.
I agree about the last point regarding opening in a new window being more of a usability issue than a problem for SEO. I agree with you completely that opening in the same window is way better for the end user.
Thanks very much to both you and Charlie for your excellent answers!
-
lol, thank heavens they aren't spammy. However, they aren't particularly helpful either. You see, about 3,000 of them are old .pdf versions of print advertising campaigns, going back as far as 2005. They contain obsolete pricing, products, etc. Unfortunately, instead of being archived off the server, they've been continuously archived in a sub-directory of our main website.
Nearly all of it is indexed. It seems to me the best thing to do for these is to include a statement that the content is an old advertisement and include a link to our current "special offers" page.
What do you think of that as a strategy for at least giving engines and humans a means to navigate to someplace current on the site?
-
I see 6000 pdfs as an amazing opportunity. Get links on those pages and it will funnel a lot of power through your site.
If that was my site, we would be on that job immediately. Could be a huge gain for some easy work.
-
Go back and rework our .pdfs so they at least contain a link back to the homepage?
Yes! Absolutely! And, link them to other relevant pages. If these are reference documents they could be pulling in a lot of links and traffic from other sites.
As well as configure the hyperlinks so they open in the same window instead of a new one?
In my opinion, this is not an SEO issue. This is a usability issue. I would have them open in the same window so the back button is available.
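Going back to the reworking question: adding a homepage link across a large batch of PDFs can be scripted rather than done by hand. One possible sketch using the open-source pypdf library (the file names, link rectangle, and URL below are placeholders, not details from Dana's site):

```python
from pypdf import PdfReader, PdfWriter
from pypdf.annotations import Link

def add_homepage_link(src_path, dst_path, homepage_url):
    """Stamp a clickable link annotation onto the first page of a PDF."""
    reader = PdfReader(src_path)
    writer = PdfWriter()
    writer.append(reader)  # copy every page into the writer
    # rect = (x0, y0, x1, y1) in PDF points from the bottom-left corner;
    # place it over visible "back to our site" text, since the
    # annotation itself draws nothing on the page.
    link = Link(rect=(36, 36, 250, 56), url=homepage_url)
    writer.add_annotation(page_number=0, annotation=link)
    with open(dst_path, "wb") as out:
        writer.write(out)

# Hypothetical usage:
add_homepage_link("spec.pdf", "spec-linked.pdf", "https://www.example.com/")
```

Wrapped in a loop over a directory, something along these lines could work through thousands of files in one pass.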
-
Thank you Charlie. In our case, our .pdfs contain no links in them at all. There is nothing to navigate a bot (or a human) out of the .pdf... not even the back button.
Considering that, and EGOL's response below, would the best course of action be to include, at the very least, an active link back to our homepage from all of our .pdf files?
We have as many as 6,000 .pdfs.
Thanks,
Dana
-
Thanks EGOL,
Yes, I understand well that .pdf documents can be indexed. That's not my concern. My concern is that a bot that navigates to one of our many .pdf tech spec documents, which, incidentally, contain no outbound links to anything, would then become trapped and not be able to continue crawling the site. This is particularly true because we have them set up to open in a new window. In the example above, sure, there's a text reference back to the site "www.kingdom.com" - but it isn't a link in the .pdf. There are no links in any of our .pdfs.
So, what is the best way to deal with this? Go back and rework our .pdfs so they at least contain a link back to the homepage? As well as configure the hyperlinks so they open in the same window instead of a new one?
-
.pdf documents are crawled by bots and they accumulate pagerank just like .html pages.
You can include links in them to other documents on the web and bots will crawl those links and pagerank will flow through them.
.pdf documents can be given a "title tag" equivalent by opening their properties and giving the document a title. This title will display in the SERPs. .pdf documents can be hard to beat in the SERPs if they are optimized and have links from a competitive number of other web documents.
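Setting that title doesn't have to be done by hand in Acrobat, either; it can be scripted. A minimal sketch with pypdf (the file names and title string are invented for illustration):

```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("spec.pdf")
writer = PdfWriter()
writer.append(reader)
# /Title is roughly the PDF equivalent of an HTML <title> tag and is
# what search engines typically show as the headline in the SERPs.
writer.add_metadata({"/Title": "CFX12 MKII Compact Mixer - Technical Specifications"})
with open("spec-titled.pdf", "wb") as out:
    writer.write(out)
```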
Lots of document formats behave this way. Excel, PowerPoint, Word for example.
In my opinion, .pdf documents can trigger a Panda problem for your site if you have a lot of them with trivial or duplicate content (as in print versions of web documents). They can be given rel=canonical through .htaccess to solve the Panda problem but Google often takes a long long time (sometimes months) to recognize the canonical and use that instruction.
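For anyone curious what the .htaccess approach looks like: Google accepts rel="canonical" delivered as an HTTP Link header, which Apache's mod_headers module can attach to a PDF response. A sketch with placeholder file and URL names:

```apache
# Requires mod_headers. Points a thin print-version PDF at the page
# that should receive its ranking signals instead.
<Files "catalog-2005.pdf">
  Header add Link '<https://www.example.com/special-offers>; rel="canonical"'
</Files>
```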