Server-side pagination vs client-side pagination

Server-side pagination typically occurs before the webpage is rendered to the client, often at the database level, while client-side pagination happens in the browser once all of the data has been returned. Server-side pagination typically offers improved performance over client-side pagination as a result.

Pagination is a vital UI/UX element that is often overlooked by developers. It makes navigating through data easier, but it comes at a development cost: it's usually not trivial to implement and involves various moving parts, such as extra button elements and transition effects. I have written about pagination in the past, particularly client-side pagination using JavaScript, which you can read more about here:

How to paginate through a collection in JavaScript
Add pagination to any table in JavaScript
Custom JavaScript pagination of objects

In this article, though, I will go over the biggest differences between the two kinds of pagination (client-side and server-side). Both are important, and usually both need to be implemented in order to really optimize your web pages' loading times and performance, which, as I wrote about recently, matters a great deal.

Client-side pagination

Let's start here, since this is where most developers focus their attention. You can think of client-side pagination as mainly a navigation tool. Users click on page 2, or "next", or "more data", etc., either to show another chunk of HTML and hide the current one, or to load more data in addition to what is already showing.

The typical implementation is something like the following:

1. Load all data from a data store. (Database, JSON file, text file)
2. Create only the HTML required for a particular page.
3. Create the navigation controls for that page.
4. On pagination, hide the current elements, and load the new HTML.
5. Update the pagination controls.
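The core of steps 1 and 2 can be sketched in plain JavaScript. The record data and page size here are hypothetical example values; the rendering and control-wiring steps would sit on top of these functions:

```javascript
// A minimal client-side pager: all records live in memory,
// and each "page" is just a slice of that array.
function getPage(records, pageNumber, pageSize) {
  const start = (pageNumber - 1) * pageSize;
  return records.slice(start, start + pageSize);
}

function pageCount(records, pageSize) {
  return Math.ceil(records.length / pageSize);
}

// Hypothetical example: 12 records, 5 per page
const records = Array.from({ length: 12 }, (_, i) => `record ${i + 1}`);

console.log(getPage(records, 1, 5)); // first 5 records
console.log(getPage(records, 3, 5)); // the last 2 records
console.log(pageCount(records, 5)); // 3 pages in total
```

Because the full data set is already in memory, "paginating" is just re-slicing the array and re-rendering — no further requests are made.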

Depending on whether you have an HTML table or your own custom HTML, the implementation would be different and would require its own set of controls and logic.

You also have the option of implementing numeric pagination, which again, involves a bit more work as you will have to maintain groups of page numbers to display to users. Imagine having a news feed with 100 pages for example. Most websites won't typically show you all 100 numeric links in the pager area. They will chunk it down to around 5 visible page numbers and as the user paginates more and more, the visible page numbers would update accordingly. Not a simple implementation and one that might not have the highest ROI for you overall.
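The windowing logic described above — showing around 5 page links out of 100 and sliding them as the user paginates — can be reduced to one small function. This is a sketch of one possible approach, not the only way to chunk page numbers:

```javascript
// Compute the window of visible page numbers around the current page.
// windowSize is how many page links to show at once (5 in the example above).
function visiblePages(currentPage, totalPages, windowSize) {
  // Try to center the window on the current page...
  let start = currentPage - Math.floor(windowSize / 2);
  // ...then clamp it so it never runs off either end of the page range.
  start = Math.max(1, Math.min(start, totalPages - windowSize + 1));
  const end = Math.min(totalPages, start + windowSize - 1);
  const pages = [];
  for (let p = start; p <= end; p++) pages.push(p);
  return pages;
}

console.log(visiblePages(1, 100, 5));  // [1, 2, 3, 4, 5]
console.log(visiblePages(50, 100, 5)); // [48, 49, 50, 51, 52]
console.log(visiblePages(99, 100, 5)); // [96, 97, 98, 99, 100]
```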

And lastly, infinite scrolling is probably the most popular method for pagination these days. It keeps users engaged with new content without having to reload the page. It's also the easiest to implement, as you are pretty much just grouping records together and rendering the same HTML for different data over and over again.
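The accumulation logic behind infinite scrolling looks something like the following. The `fetchPage` function here is a hypothetical stand-in for a real network request, and the trigger that calls `loadMore` would typically be a scroll event or an `IntersectionObserver` in the browser:

```javascript
// Hypothetical stand-in for a network request; pretend the
// server holds 12 records in total.
function fetchPage(pageNumber, pageSize) {
  const all = Array.from({ length: 12 }, (_, i) => `record ${i + 1}`);
  const start = (pageNumber - 1) * pageSize;
  return all.slice(start, start + pageSize);
}

const feed = { items: [], nextPage: 1, pageSize: 5 };

// Called whenever the user nears the bottom of the page.
// Unlike classic pagination, each batch is appended, not swapped in.
function loadMore(state) {
  const batch = fetchPage(state.nextPage, state.pageSize);
  state.items.push(...batch);
  if (batch.length > 0) state.nextPage += 1;
  return batch.length > 0; // false once the data is exhausted
}

loadMore(feed); // items: 5
loadMore(feed); // items: 10
loadMore(feed); // items: 12 — only 2 records were left
```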

But there is one area of contention, regardless of the pagination method you end up using.

The issue

With small data sets, only using client-side pagination is perfectly reasonable and could potentially save you a web request or two in the process. Knowing just how much data you will be loading is important though.

For example, loading a 90KB JSON file with 20 records is very different from loading a 1MB response with thousands of records. Even if you aren't showing users the entire batch of data all at once (which you shouldn't), there will still be an initial increase in time-to-first-byte (TTFB), a metric that search engines often factor into rankings. It has also been shown that when users have to wait more than 2-3 seconds for content to load, the overall bounce rate tends to increase as well.

Take the articles page on this blog for example. The pagination is currently set to load only 9 records at a time from the database on initial page load. The result is a small response of only around 5KB.

Now compare that to a larger page size of around 60 articles that are retrieved instead.

As time goes on and more content gets added, you can imagine that eventually this could become an issue with performance.

Most users probably won't look at all 1000 records regardless of how you present them. Pagination is a great tool to show more data to users, but most users will never really use it. A small percentage of site visitors will though, and pagination is for them. Search engines can also use this to crawl through all of your links as a sort of archive.

So the challenge becomes selectively loading chunks of data from the database, while at the same time maintaining the client-side pagination controls mentioned above. And that's where server-side pagination comes into play.

Server-side pagination

Most websites will pull data from some kind of database. This could be a relational database, like SQL Server or MySQL, or it could be a NoSQL database like MongoDB or Redis. Or it could be a JSON file, in which case you might have some custom Node.js functionality to act as the pager logic.

Regardless of how you store your data, you will need to find out how the database engine handles pagination. The typical pattern with most database servers is to tell the server which record you are starting from, how many records you are requesting, and the ORDER BY logic that you wish to paginate by.

Note that server-side pagination can be expensive to perform depending on how your data is stored and indexed. Particularly if you have a large amount of data to filter, sort by and then group into chunks.

For example, in SQL Server you can use the OFFSET-FETCH combination to accomplish this:

SELECT expressions
FROM tables
ORDER BY expressions [ASC|DESC]
OFFSET offset_row_count {ROW|ROWS}
[FETCH {FIRST|NEXT} fetch_row_count {ROW|ROWS} ONLY]

And if you are using a NoSQL database like MongoDB then something like the following would work:

// Page 1
db.students.find().limit(5)

// Page 2
db.students.find().skip(5).limit(5)

// Page 3
db.students.find().skip(10).limit(5)
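Whichever store you use, the pager logic on the server boils down to translating a requested page number into the two values both engines expect. A minimal sketch of that translation, assuming 1-based page numbers:

```javascript
// Translate a page request into the values that OFFSET-FETCH
// (SQL Server) or skip()/limit() (MongoDB) expect.
function toOffsetLimit(pageNumber, pageSize) {
  return {
    offset: (pageNumber - 1) * pageSize, // rows to skip
    limit: pageSize,                     // rows to fetch
  };
}

console.log(toOffsetLimit(1, 5)); // { offset: 0, limit: 5 }
console.log(toOffsetLimit(3, 5)); // { offset: 10, limit: 5 }
```

The `offset` value maps directly onto `OFFSET ... ROWS` or `.skip()`, and the `limit` value onto `FETCH NEXT ... ROWS ONLY` or `.limit()`.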

So really, it isn't a case of one versus the other, as the title might have alluded to. It's knowing when to use which in order to boost performance from a technical standpoint and to provide a strong overall UI/UX experience to your users.

Complex pagination

And lastly, there are more complex implementations for pagination that you can take. One of those is selectively pre-loading data from the database and performing client-side pagination on that data only.

To clarify further, imagine having 100 records in total in your database. Your pagination size is 5 records per page. If you notice that most of your users are okay clicking through to page 3, then you might consider initially loading 15 records but only showing 5 at a time. This ensures that no excess requests are getting made unless a user requests to go to page 4 and above.
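That buffering idea can be sketched as follows. Here `fetchRecords` is a hypothetical stand-in for a real database call, and the 100-record total matches the example above; the point is that only every third page change triggers a new request:

```javascript
// Hypothetical stand-in for a database call: pretend there
// are 100 records in total.
function fetchRecords(offset, count) {
  const all = Array.from({ length: 100 }, (_, i) => i + 1);
  return all.slice(offset, offset + count);
}

const pageSize = 5;
const pagesPerFetch = 3; // buffer 3 pages (15 records) per request
let buffer = fetchRecords(0, pageSize * pagesPerFetch);
let bufferStartPage = 1;

function showPage(pageNumber) {
  const lastBufferedPage = bufferStartPage + pagesPerFetch - 1;
  if (pageNumber < bufferStartPage || pageNumber > lastBufferedPage) {
    // Outside the buffer: go back to the server for the next block.
    bufferStartPage = pageNumber;
    buffer = fetchRecords((pageNumber - 1) * pageSize, pageSize * pagesPerFetch);
  }
  // Inside the buffer: pure client-side pagination, no request made.
  const start = (pageNumber - bufferStartPage) * pageSize;
  return buffer.slice(start, start + pageSize);
}
```

Pages 1 through 3 are served entirely from the initial buffer; only asking for page 4 causes another fetch, which then buffers pages 4 through 6.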

While that may sound like overkill (because it kind of is), many of the largest websites out there today implement some form of this complex logic in their own applications.

And there will be many more variations that are bound to spring up in the near future as new UI/UX elements are introduced and as users evolve their tastes in how they want their data presented to them.

I hope you enjoyed this quick look at pagination and its many intricacies and uses.

Walter Guevara is a Computer Scientist, software engineer, startup founder and previous mentor for a coding bootcamp. He has been creating software for the past 20 years.
