JavaScript or HTML/DIVs to fix pagination issues?
-
Which is better for fixing a pagination problem: JavaScript or HTML/DIVs? I know that in one Google Webmaster Forum a Google engineer recommends JavaScript, but I've also seen people use DIVs.
-
Is there a reason why JS is better than DIVs/CSS, or does it matter?
-
JS could work in that case. Load all of the content into hidden DIVs and show each one on demand. Spiders can still read the whole article while users see one part at a time. Just remember that there is an indexation cap (100 KB, IIRC), so there are limits to how well this can work.
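A minimal sketch of that hidden-DIV approach (the IDs, class names, and three-part split here are illustrative assumptions, not prescribed markup):

```html
<!-- All parts live on one URL; only one is visible at a time. -->
<div class="article-part" id="part-1">Part one content…</div>
<div class="article-part" id="part-2" hidden>Part two content…</div>
<div class="article-part" id="part-3" hidden>Part three content…</div>

<nav>
  <button data-part="part-1">1</button>
  <button data-part="part-2">2</button>
  <button data-part="part-3">3</button>
</nav>

<script>
  // Show the requested part and hide the others. Nothing is removed
  // from the page, so the full article text stays in the HTML source.
  document.querySelectorAll('nav button').forEach(function (button) {
    button.addEventListener('click', function () {
      document.querySelectorAll('.article-part').forEach(function (part) {
        part.hidden = (part.id !== button.dataset.part);
      });
    });
  });
</script>
```

Because every part is present in the initial HTML, a crawler that doesn't execute the script still gets the whole article.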
-
So, if you have an article that you want to divide into five parts for a better user experience, what is the best way to put all of the content on one URL: DIVs and CSS, or JavaScript?
-
Can you describe the exact issue you are trying to resolve?
-
Related Questions
-
Are there any SEO issues we should be aware of on Gutenberg?
We are launching a new website and switching to WordPress 5.0 with the Gutenberg editor. Are there any SEO issues we should be aware of on the new platform?
Technical SEO | AegisLiving
-
Historic issue with incomplete indexing
Hi there

We run quite a big site in the UK in the commercial real-estate space. Historically we have always had a challenge getting our "primary" landing pages indexed; these are location-based property result pages, e.g. https://realla.co/to-rent/commercial-property/oxford

For example, for the "towns" category we have 8,549 pages submitted in our XML sitemap, with only 3,171 indexed. This is a general issue across all our sitemaps: 120k submitted, 80k indexed. Our pages are linked through breadcrumbs and nearby links, yet in the new Search Console they are reported as "crawled - currently not indexed". They all sit under the folders site:https://realla.co/to-rent/commercial-property/* and site:https://realla.co/to-rent/office/*

We have done extensive work to optimise performance, including AMP pages. Each location page has many details pages for individual properties, e.g. https://realla.co/to-rent/details/0ffbbd0a1a1147edb8847c5ce6179509, and these details pages are fully indexed. One action we have remaining is to nest the details pages under the location pages, which may help.

Any feedback much appreciated
Technical SEO | ianparryuk
-
Query string category pagination
I've been reading some posts on the merits and pitfalls of using rel=prev, rel=next, and canonical, but I just wanted to double-check the right solution.

example.com/birth-announcements
example.com/birth-announcements?p=2
example.com/birth-announcements?p=3

There is a small selection of products on each variation, and at the moment all of them carry a canonical to the base example.com/birth-announcements. The problem is that we are having difficulty getting the products within p=* indexed, and from all I have read I don't think rel=prev/rel=next is the way to go.

Would the best solution be to create a "view-all" page and set that as the canonical URL, so all product URLs are in clear focus for Google? The volume of products won't (shouldn't) have too much of an impact on page load. Or am I wrong, and rel=prev/rel=next is a feasible solution?
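For reference, a rough sketch of the two options being weighed, using the question's example.com URLs (the /view-all path is an assumed name):

```html
<!-- Option A: canonicalise every paginated variation to a view-all page.
     Placed in the <head> of /birth-announcements?p=2, ?p=3, etc. -->
<link rel="canonical" href="https://example.com/birth-announcements/view-all" />

<!-- Option B: rel=prev/next, with each page self-canonicalised.
     Placed in the <head> of /birth-announcements?p=2: -->
<link rel="canonical" href="https://example.com/birth-announcements?p=2" />
<link rel="prev" href="https://example.com/birth-announcements" />
<link rel="next" href="https://example.com/birth-announcements?p=3" />
```

The key difference: Option A concentrates indexing on one URL that contains every product, while Option B asks Google to treat the series as a sequence and index each page in it.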
Technical SEO | MickEdwards
-
On our site, some wrong links were entered by mistake and Google crawled them. We have fixed those links, but they still show up as Not Found errors. Should we just mark them as fixed, or what is the best way to deal with them?
A parameter was not sent, so the links were generated as null/city and null/country instead of cityname/city.
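If those null/ URLs still resolve to a page, one option is to make them return a clear status code so Google stops retrying them. A hedged sketch, assuming an Apache server with mod_rewrite available:

```apache
# .htaccess: answer the malformed null/ URLs with 410 Gone so Google
# treats them as permanently removed rather than recrawling 404s.
RewriteEngine On
RewriteRule ^null/(city|country)$ - [G]
```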
Technical SEO | Lybrate0606
-
How bad is it to have duplicate content across http:// and https:// versions of the site?
A lot of pages on our website are currently indexed on both their http:// and https:// URLs. I realise that this is a duplicate content problem, but how major an issue is it in practice? Also, am I right in saying that the best solution would be to use rel=canonical tags to mark the https pages as the canonical versions?
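A minimal sketch of that canonical approach (example.com standing in for the real domain): the same absolute https URL goes in the <head> of both protocol versions of each page:

```html
<!-- Served on both http://example.com/some-page and https://example.com/some-page -->
<link rel="canonical" href="https://example.com/some-page" />
```

A site-wide 301 redirect from http to https would resolve the duplication more decisively, but the canonical tag answers the question as asked.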
Technical SEO | RG_SEO
-
Noindex nofollow issue
Hi,

For some reason, two pages on my website get noindex/nofollow tags from time to time and disappear from the search engine. I have to log in to my Thesis WP theme, uncheck the boxes for "noindex" and "nofollow", and then update, and in a couple of days my website is back up. Here is a screenshot: http://screencast.com/t/A6V6tIr2Cb6

Is there something in the Thesis theme that causes the problem? Even though I unchecked the box and updated, it still stays checked: http://screencast.com/t/TnjDcYfsH4sq

Appreciate your help!
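For reference, the standard robots meta tag to look for in the source of the two affected pages (a generic illustration, not Thesis-specific output):

```html
<!-- If this is present in the <head>, the page will be dropped from search results. -->
<meta name="robots" content="noindex,nofollow">
```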
Technical SEO | tonyklu
-
RSS Hacking Issue
Hi

I checked our original RSS feed: I added it to Google Reader and all the links go to the correct pages. I have also set up the RSS feed in Feedburner, but when I click on the links in Feedburner (which should go to my own website's pages), they all go to spam sites, even though the title of the link and the excerpt are correct. This isn't a WordPress blog RSS feed either, and we are on a very secure server.

Any ideas whatsoever? There is no info online anywhere and our developers haven't seen this before. Thanks
Technical SEO | Kerry22