Pagination - Crawl Issue
-
Hi,
We have a site with a large number of products (6,000+) under each category, so we have made a "View all" page under each category that lists every product, paginated with an AJAX setup. The problem is that only our first page is crawlable; every page beyond the first remains hidden.
We need to make all our pagination URLs crawlable. Our requirement is that the URL never changes as the user goes to the next page; we want to show the user the same URL for every page number. Is there a perfect solution? -
Sadly, no. It's tricky, but your best bet is probably to deliver a non-AJAX version to Google (or make the AJAX crawlable, although that depends entirely on your implementation) and then use rel=prev/next on that version. Either way, Google has to be able to crawl the paginated URLs somehow.
Just so I understand - the "View All" isn't really a view all without being able to call the AJAX, right?
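For reference, a minimal sketch of the rel=prev/next tags that suggestion describes, assuming a crawlable ?page=N URL scheme (the parameter name and URLs here are illustrative assumptions, not taken from the site in question):

```python
# Sketch: generating rel="prev"/rel="next" link tags for a paginated
# category page. The ?page=N scheme and the example URL are assumptions.

def pagination_link_tags(base_url: str, page: int, total_pages: int) -> str:
    """Return the <head> link tags that connect page `page` to its neighbors."""
    tags = []
    if page > 1:  # first page has no rel="prev"
        tags.append(f'<link rel="prev" href="{base_url}?page={page - 1}">')
    if page < total_pages:  # last page has no rel="next"
        tags.append(f'<link rel="next" href="{base_url}?page={page + 1}">')
    return "\n".join(tags)

print(pagination_link_tags("https://example.com/category/view-all", 2, 5))
```

The user-facing AJAX version can keep a static URL; the crawlable version Google sees exposes one URL per page, each emitting these tags in its `<head>`.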
You might want to check out:
Related Questions
-
Are dead-end pages really an issue?
Hi all, We have many pages that are help guides to our features. These pages have no further outgoing links (internal or external). We haven't added any because these are already fourth-level pages about a specific topic, so they are technically dead-end pages. Do these pages really hurt us? Do we need to link them to some other pages? Thanks
Web Design | vtmoz
Website Server Issue?
I'm getting error messages that a website cannot be crawled: "Couldn't access the webpage because the server either timed out or refused/closed the connection before our crawler could receive a response. How to fix: Please contact your web hosting technical support team and ask them to fix the issue." Possible causes given:
1. DDoS protection system, or
2. An overloaded or misconfigured server.
They asked me to talk to my hosting company about this issue, and he's at a loss (I don't think he knows everything he needs to know, potentially). Have you seen these issues before? Where is the best place to start troubleshooting?
Web Design | PrimeMediaConsulting
How can I fix a new 4xx issue in Site Crawl?
Hi all, My recent site crawl shows 27 4xx issues on this website: http://www.rrbusinessconsultants.com/ All of them are for posts on this WordPress site. Here is an example of the issue: http://www.rrbusinessconsultants.com/rr-business-consultants-on-the-rise-of-glassdoor-and-how-companies-are-coping/void(null) The blog page seems to be creating links ending in void(null), which resolve to 404 pages. I cannot see these links on the site, so I cannot see how to remove them. Can anyone provide any insight into how to correct this issue? Many thanks in advance.
Web Design | skehoe
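One way to locate where such links come from is to scan the rendered HTML for anchors whose href ends in void(null), a typical symptom of a theme emitting javascript:void(null) fragments that crawlers resolve as relative URLs. A rough sketch (the sample HTML below is invented for illustration, not taken from the site):

```python
# Sketch: finding anchors whose href ends in "void(null)" in a page's HTML.
# Feed it the page source (e.g. fetched with urllib) to see which template
# element generates the broken links.
from html.parser import HTMLParser

class BadLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.bad_hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.endswith("void(null)"):
                self.bad_hrefs.append(href)

finder = BadLinkFinder()
finder.feed('<a href="/post/void(null)">share</a> <a href="/about/">ok</a>')
print(finder.bad_hrefs)  # ['/post/void(null)']
```

Running this over the blog page's source should reveal which widget (often a share or print button) is emitting the bad href.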
Duplicate Title Issues using # anchor tags
Our homepage navigation uses anchor links (?TabNum=1#, ?TabNum=2#, etc.) rather than linking directly to different pages, to decrease load time (and to simplify the build process, I would imagine). These anchor links are showing up as duplicate titles in Moz. I am pretty sure that using noindex or rel tags could have a negative effect on my search results. Is there any way to tackle this short of a complete redesign of the structure? Example: http://www.dedoose.com/about-us/?TabNum=2#
Web Design | sbnjl
How do I fix an issue with robots.txt?
I am receiving the following error message through Webmaster Tools for http://www.sourcemarketingdirect.com/: "Googlebot can't access your site" (Oct 26, 2012). "Over the last 24 hours, Googlebot encountered 35 errors while attempting to access your robots.txt. To ensure that we didn't crawl any pages listed in that file, we postponed our crawl. Your site's overall robots.txt error rate is 100.0%." The site has dropped out of Google search.
Web Design | skehoe
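Note that this error is about Googlebot failing to fetch robots.txt at all (a server/hosting problem), not about a blocking rule. Once the file is reachable again, its rules can be sanity-checked locally with Python's standard robots.txt parser; a minimal sketch, using an assumed "allow everything" file body:

```python
# Sketch: checking locally whether a robots.txt body lets Googlebot fetch
# a URL, without waiting for Google's next crawl. The file bodies shown
# are examples, not the site's actual robots.txt.
from urllib.robotparser import RobotFileParser

def googlebot_allowed(robots_body: str, url: str) -> bool:
    """Parse a robots.txt body and test whether Googlebot may fetch `url`."""
    rp = RobotFileParser()
    rp.parse(robots_body.splitlines())
    return rp.can_fetch("Googlebot", url)

permissive = "User-agent: *\nAllow: /\n"
blocking = "User-agent: *\nDisallow: /\n"
print(googlebot_allowed(permissive, "http://www.sourcemarketingdirect.com/"))  # True
print(googlebot_allowed(blocking, "http://www.sourcemarketingdirect.com/"))   # False
```

If the rules pass but Google still reports fetch errors, the problem is the server (timeouts, firewall, DDoS protection), not the file's contents.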
I've set up my own site, which is still fairly new, but I'm a bit concerned that there is a blockage SEO-wise somewhere, because when I try to crawl the site in SEOmoz it only crawls one page.
I'm really baffled, and none of my research has shed much light on it. My URL is www.emporiumofmanliness.co.uk. I'd really appreciate any help! Thanks
Web Design | JoshED
Google search issue with exact domain
We had a site from Feb 2011 to Nov 2011 at the domain amcoexterminating.com. The site was pure HTML/CSS, and daily unique visitors steadily increased over that time, so all was fine. We then moved the site to a CMS (Joomla) on Dec 6th, and from that day forward the daily visitors went into the tank.
Before the move, if you typed "amcoexterminating.com" or "amco exterminating" into Google search, the site would be the first result (as you'd expect, since those are the words that make up the actual domain). But we tried this yesterday and the site did not come up at all. NOT GOOD. It would work in Yahoo or Bing, but not in Google. So obviously, the problem with Google search directly affected the daily visitors.
We just checked Webmaster Tools yesterday (yes, this should have been done sooner, lesson learned) and it said "Site has severe health issues - Important page blocked by robots.txt". It listed the "important" page URL, and it was just a link to an image. Regardless, I wiped out the Joomla-created robots.txt file and replaced it with one containing just "User-agent: *" and "Allow: /" (i.e., allow everything). About 14 hours later, after the new robots.txt file was recognized by Google, the "severe health" message went away. However, if I search Google for "amcoexterminating.com", the site still doesn't show up, and the client is concerned (as they should be).
Do you think the search engines just need more time to refresh? If so, once they refresh, should the site show up first again right away? Or is it possible the robots.txt file had nothing to do with the issue? If so, what else could cause Google search to not find a site even when you search for the exact domain name? Please share any and all things I should look into, as I need to get this site showing in Google search again (as it was before the move to the CMS). Thanks!
Web Design | MarathonMS
Are my duplicate meta titles and descriptions an issue?
Hello, My website http://www.gardenbeet.com has been rebuilt using PrestaShop, and Google is reporting 158 duplicate titles and meta descriptions. My developer advised the following: "Almost all the duplicates are due to the same page being accessible both at the root and following the category heading, e.g.
/75-vegetable-patio-planter-turquoise.html
/patio-planters/75-vegetable-patio-planter-turquoise.html
This is hard-wired into PrestaShop. Was the Canonical module (now disabled) responsible for the confusion by not including the category name? Googlebot shouldn't be scanning the root versions now. I don't believe this to be a serious issue, but I'd recommend a second opinion from someone more SEO-savvy just to be sure." Opinions??
Web Design | GardenBeet
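The standard fix for this kind of duplication is a canonical tag on both versions pointing at one preferred URL. A minimal sketch using the pair from the question (which version to prefer is an assumption; PrestaShop's own Canonical module would normally handle this):

```python
# Sketch: emitting a canonical link tag for duplicate product URLs.
# The path pair is from the question; choosing the category version as
# canonical is an assumption for illustration.
from urllib.parse import urljoin

BASE = "http://www.gardenbeet.com"

# Map each duplicate root-level path to the URL chosen as canonical.
CANONICAL = {
    "/75-vegetable-patio-planter-turquoise.html":
        "/patio-planters/75-vegetable-patio-planter-turquoise.html",
}

def canonical_tag(path: str) -> str:
    """Return the canonical link tag to emit in <head> for `path`."""
    target = CANONICAL.get(path, path)  # non-duplicates are their own canonical
    return f'<link rel="canonical" href="{urljoin(BASE, target)}">'

print(canonical_tag("/75-vegetable-patio-planter-turquoise.html"))
```

With the tag in place on both URLs, Google consolidates the duplicate titles and descriptions onto the canonical version.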