Pagination: Server Side or Client Side? - javascript

Where is it best to handle pagination? Server side, or doing it dynamically using JavaScript?
I'm working on a project which is heavy on the ajax and pulling in data dynamically, so I've been working on a javascript pagination system that uses the dom - but I'm starting to think it would be better to handle it all server side.
What are everyone's thoughts?

The right answer depends on your priorities and the size of the data set to be paginated.
Server side pagination is best for:
Large data set
Faster initial page load
Accessibility for those not running javascript
Client side pagination is best for:
Small data set
Faster subsequent page loads
So if you're paginating for primarily cosmetic reasons, it makes more sense to handle it client side. And if you're paginating to reduce initial load time, server side is the obvious choice.
Of course, client side's advantage on subsequent page load times diminishes if you utilize Ajax to load subsequent pages.

Doing it on the client side will make your user download all of the data up front, which might not be needed, and it removes the primary benefit of pagination.
For this kind of AJAX app, the best approach is to make an AJAX call to the server for the next page and update the current page with client-side script.
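A rough sketch of that approach, assuming a hypothetical /items endpoint that returns a JSON array of rows for the requested page (the field names and page size are made up):

var currentPage = 1;

// Load one page of results from the server and render it into an existing table body.
function loadPage(page) {
  fetch('/items?page=' + page + '&size=20')
    .then(function (response) { return response.json(); })
    .then(function (rows) {
      var tbody = document.querySelector('#results tbody');
      tbody.innerHTML = '';
      rows.forEach(function (row) {
        var tr = document.createElement('tr');
        ['name', 'price'].forEach(function (key) {
          var td = document.createElement('td');
          td.textContent = row[key];
          tr.appendChild(td);
        });
        tbody.appendChild(tr);
      });
    });
}

document.querySelector('#next').addEventListener('click', function () {
  loadPage(++currentPage);
});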

If you have large pages and a large number of pages, you are better off requesting pages in chunks from the server via AJAX. So let the server do the pagination, based on your request URL.
You can also pre-fetch the next few pages the user will likely view to make the interface seem more responsive.
If there are only a few pages, grabbing it all up front and paginating on the client may be a better choice.
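A minimal sketch of that pre-fetching idea, assuming the same sort of page-returning endpoint as above (renderRows stands in for whatever already draws the table):

var pages = {};   // page number -> Promise resolving to that page's rows

function getPage(page) {
  if (!pages[page]) {
    pages[page] = fetch('/items?page=' + page + '&size=20')
      .then(function (response) { return response.json(); });
  }
  return pages[page];
}

function showPage(page) {
  getPage(page).then(renderRows);   // render the requested page
  getPage(page + 1);                // pre-fetch the next page so it feels instant
}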

Even with small data sizes the best choice would be server side pagination. You will not have to worry later if your web application scales further.
And for larger data sizes the answer is obvious.

Server side - send to the client just enough content for the current view.

In a practical world of limits, I would page on the server side to conserve all the resources associated with sending the data. Also, the server needs to protect itself from a malicious/malfunctioning client asking for a HUGE page.
Once that code is happily chugging along, I would add "smarts" to the client to get the "next" and "previous" page and hold that in memory. When the user pages to the next page, update your cache.
If the client software does this sort of page caching, do consider how fast your data ages (is likely to change) and if you should check that your cached page of data is still valid. Maybe re-request it if it ages more than 2 minutes. Maybe have a "dirty" flag in it. Something like that. Hope you find this helpful. :)
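A rough sketch of that sort of client-side page cache with a simple age check and a dirty flag (the 2-minute figure and the /items endpoint are just placeholders):

var PAGE_TTL_MS = 2 * 60 * 1000;   // treat a cached page as stale after 2 minutes
var pageCache = {};                // page number -> { rows, fetchedAt, dirty }

function getPage(page) {
  var entry = pageCache[page];
  if (entry && !entry.dirty && Date.now() - entry.fetchedAt < PAGE_TTL_MS) {
    return Promise.resolve(entry.rows);
  }
  return fetch('/items?page=' + page + '&size=20')
    .then(function (response) { return response.json(); })
    .then(function (rows) {
      pageCache[page] = { rows: rows, fetchedAt: Date.now(), dirty: false };
      return rows;
    });
}

// Mark a page dirty when you know its data changed, so the next visit
// re-requests it even if it has not aged out yet.
function invalidatePage(page) {
  if (pageCache[page]) { pageCache[page].dirty = true; }
}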

Do you mean that your JavaScript has all the data in memory, and shows one page at a time? Or that it downloads each page from the server as it's needed, using AJAX?
If it's the latter, you also may need to think about sorting. If you sort using JavaScript, you'll only be able to sort one page at a time, which doesn't make much sense. So your sorting should be done on the server.

I prefer server-side pagination. However, when implementing it, you need to make sure that you're optimizing your SQL properly. For instance, I believe that in MySQL, if you paginate with LIMIT and a large OFFSET, the server still reads and discards all of the skipped rows, so you need to rewrite your SQL to use the index properly.
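One common rewrite is keyset (seek) pagination: remember the last id of the previous page instead of using a large OFFSET. A sketch, assuming Node with the mysql2 package and an indexed numeric id column (the table and column names are made up):

const mysql = require('mysql2/promise');

const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'shop' });

async function getNextPage(lastSeenId, pageSize) {
  // "WHERE id > ?" lets MySQL seek straight into the index,
  // instead of reading and throwing away OFFSET rows on every page.
  const [rows] = await pool.query(
    'SELECT id, name, price FROM items WHERE id > ? ORDER BY id LIMIT ?',
    [lastSeenId, pageSize]
  );
  return rows;
}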
G-Man

One other thing to point out here is that very rarely will you be limited to simply paging through a raw dataset.
You might have to search for certain terms in one or more of the columns you are displaying, then sort on a few columns, and then give users the ability to page through this filtered dataset.
In a situation like this you have to decide whether it is better to do the search and/or sort logic client side or server side.
Another thing to consider is that Amazon's CloudSearch API gives you some very powerful search abilities, and obviously you'll want to let CloudSearch handle searching and sorting for you if you happen to have your data hosted there.
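If you do keep the search/sort/paging logic on the client side, working over a dataset that is already in memory can look roughly like this (the name field and the return shape are just examples):

// Filter, sort, and slice one page out of a dataset that is already in memory.
function pageOf(rows, searchTerm, sortKey, page, pageSize) {
  var term = searchTerm.toLowerCase();
  var filtered = rows.filter(function (row) {
    return String(row.name).toLowerCase().indexOf(term) !== -1;
  });
  filtered.sort(function (a, b) {
    if (a[sortKey] < b[sortKey]) { return -1; }
    if (a[sortKey] > b[sortKey]) { return 1; }
    return 0;
  });
  var start = (page - 1) * pageSize;
  return {
    rows: filtered.slice(start, start + pageSize),
    totalPages: Math.ceil(filtered.length / pageSize)
  };
}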

Related

loading a web page for a fake query string

I don't even know how to phrase the title of this question, but hopefully the following description will explain my issue.
I have a web application that is made up of a single, bare search page with a search field. The search is actually performed by the client browser and results are loaded via ajax. In other words, the server does nothing but serve up the bare search page at http://server/index.html
Once the query is performed, I use history.pushState() to change the URI in the browser address bar to something more sensible like http://server/index.html?q=searchterm&page=1&size=10. Pagination is performed by prev and next links, which also go through ajax with the appropriately incremented or decremented page and size values. All is good.
But I want my application to be a good web citizen and be bookmark-able. In other words, if someone enters http://server/index.html?q=searchterm&page=1&size=10 directly in the browser address bar, I want to load the results correctly. Except, if I send that URI to the server, the server will croak unless I implement some server-side processing. And that is something I don't want to do, as it would change the complexity of my application completely. Unless I can do it with plain, vanilla nginx (my web server). In other words, I don't want to implement any server-side scripting other than what can be done with the web server itself, such as SSI.
So, how do I solve this problem?
Hi, the exact term for what you are trying to do is "client-side routing". It involves a combination of manipulating the browser's history using history.pushState() [which you are already doing] and a server-side config setting:
.htaccess if you are using Apache
the config file if you are using nginx.
The server-side setting makes your web server serve your base index.html for whatever request the browser makes (http://server/index.html?q=searchterm&page=1&size=10). Once it is loaded in the client, you have to read the query string from the window address bar and handle it accordingly (make an AJAX request).
This implementation has implications when search engines crawl your site using the URL but that is not within the scope of this question.
this SO question will give you a start
Actually, I think this is a lot easier than I thought. When I send the browser to http://server/index.html?q=searchterm&page=1&size=10, the server doesn't complain. It simply sends back http://server/index.html. Then it is just a matter of using JS to extract the query string and do my ajax bit. This should work.
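A minimal sketch of that boot-up step, reading the query string on load and replaying the search via ajax (the /search endpoint and the renderResults function are placeholders):

// On page load, check whether the address bar already carries a search,
// and if so replay it via ajax exactly as if the user had just typed it.
window.addEventListener('DOMContentLoaded', function () {
  var params = new URLSearchParams(window.location.search);
  var q = params.get('q');
  if (!q) { return; }

  var page = params.get('page') || '1';
  var size = params.get('size') || '10';

  fetch('/search?q=' + encodeURIComponent(q) + '&page=' + page + '&size=' + size)
    .then(function (response) { return response.json(); })
    .then(renderResults);   // renderResults: whatever already draws the result list
});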

best practices loading knockout js models from server side code

I've inherited a site that uses knockout js and asp.net. The site runs decently after everything has been loaded, but the initial load leaves a lot to be desired. Digging through the code, there are around 20 models, and each one calls an ajax method to get data from the server on page load. Quite a bit of data is being queried from the db, which causes the performance issue: the server sends the js, then the client sends and receives a large amount of data across 20 calls.
I want to handle all of the queries on the server side before sending anything to the client, and then load the js models from that data. I am thinking about putting this data in a hidden div on the page as JSON and loading the models from there instead of making ajax calls.
My question is, is this best practice? Is there a better way to optimize this scenario?
If you inline the data from the 20 queries in the page response, then the page response time can be significantly prolonged. It will result in the browser having to sit and wait on the previous page, or on a boring blank page.
However if you keep the solution as-is then the user will get the page initially much faster, and the data will pop-in when it is ready.
Although the total load time is probably going to be better with the data inlined, the perceived performance from the user's perspective is going to be worse. Here is a nice post on the subject: http://www.lukew.com/ff/entry.asp?1797
Another benefit of keeping the queries separate is that you avoid a weakest-link problem, where the page response time becomes that of the slowest query. That gets quite severe in query timeout conditions.
Also be aware of failure handling: if one query fails, you must still inline the successful queries, and handle the failed one separately.
I would argue that it is much better to do the queries from the browser.
There are some techniques to consider if you want to have the 20 queries executed more efficiently. Consider using something like SignalR to send all queries over a single connection and have the results stream back over that same connection. I've used this technique previously with great success; it also enabled me to stream back cached results (from a server-side cache) before the up-to-date results from a slow backend service were returned.
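As a rough illustration of the browser side with the older jQuery SignalR client; the hub name, method names, and payload shape here are all made up:

// One persistent connection instead of 20 separate ajax calls.
// The hypothetical "queryHub" runs the queries server side and pushes each
// result back to the client as soon as it completes.
var connection = $.hubConnection();
var hub = connection.createHubProxy('queryHub');

hub.on('queryResult', function (modelName, data) {
  // Apply each result to its knockout model as it streams in,
  // so fast queries are not held up by slow ones.
  // viewModels is assumed to map model names to objects that load their own data.
  viewModels[modelName].load(data);
});

connection.start().done(function () {
  hub.invoke('runAllQueries');
});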

Better way to design a web application with data persistence

For my web apps I'm always wondering which way is best to design a proper web application with data persistence. For now I always design a single HTML page, and all the content and data upload is managed with jQuery AJAX requests, following a RESTful model, to a remote server which takes care of the database. But in the end that sometimes means a lot of AJAX calls, and getting a huge amount of data can take a few seconds, which is not user-friendly.
Is there something like a guideline, or a standard way of developing, for designing web apps?
I've already looked at the WebWorkers and WebSockets JavaScript APIs, but I've never used them yet. Has anybody tried them? Do they allow better performance than AJAX exchanges?
What is your way of developing web apps?
This isn't really the place for questions like this, but I will give you a few brief pointers.
AJAX requests shouldn't take long, if they are consistently being slow then the problem is most likely your server-side code and inefficiencies there. Websockets aren't going to give you any benefit over AJAX if your server is slow.
A common design is to load the minimal dataset required for the page to function, AJAXing any other required data to get the page responsive as quickly as possible.
Caching and pre-fetching are great ways to speed up your site. For instance, if you are running a MySQL query over and over, run it once, put the results in a caching service like memcached or MongoDB with an expiration of an hour (or something), and serve the cached response; this will speed up your server response times. Pre-fetching is anticipating what your user is going to do next and loading that data in the background without any user interaction.
Consider using localStorage or IndexedDB if your users are loading the same data repeatedly.
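A small sketch of that last suggestion, caching an ajax response in localStorage with an expiry so repeat visits skip the network (the one-hour TTL is arbitrary):

var CACHE_TTL_MS = 60 * 60 * 1000;   // one hour; tune to how fast your data changes

function fetchWithLocalStorageCache(url) {
  var cached = localStorage.getItem(url);
  if (cached) {
    var entry = JSON.parse(cached);
    if (Date.now() - entry.savedAt < CACHE_TTL_MS) {
      return Promise.resolve(entry.data);
    }
  }
  return fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (data) {
      localStorage.setItem(url, JSON.stringify({ savedAt: Date.now(), data: data }));
      return data;
    });
}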

Best way to pass a query result to another page

Quick question:
I have a page that makes an AJAX call to my SQL server and receives an XML response. I parse the XML and display the relevant data in a table.
I have a button on the page that is used to display the table data in a simple line graph on a new page. Currently, I just re-query the database to re-get the data and create a new array of that data for my graph.
The GET can take up to 2.5 seconds, with the final graph time to render being about 8-9 seconds. I am investigating any alternatives to re-GET'ting the data.
So far I have:
localStorage (HTML5)
php pass me the data instead of querying the DB
jquery plugin (DOMCache)
Any thoughts on this or best practices??
thanks for any input!
You might do best to just make sure the response is cached in your users' browsers. There are a variety of ways to make this happen (varying with the framework your server is running, the browsers your clients are using, etc.), but the long story short is that relying upon caching will save you from having to jump through performance hoops by making modifications to your codebase.
IE8+ is actually a kingpin in this area (much as I hate to admit it). Its aggressive caching of ajax responses is usually a serious pain in the arse, but in this case would prove valuable to your scenario.
Edit -
You mentioned SQL Server, so I'm making the assumption that you're running through an ASP.NET middle tier. If that's the case, here's an interesting article on caching ajax requests on the server and the client with the .NET framework.
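If caching headers turn out not to be an option, the storage idea from the question could look roughly like this, using sessionStorage to hand the already-parsed rows to the graph page (graph.html, drawGraph, and fetchDataAndDraw are placeholders for whatever the pages already do):

// On the table page: stash the rows that were already parsed out of the XML,
// then open the graph page.
function openGraph(rows) {
  sessionStorage.setItem('graphData', JSON.stringify(rows));
  window.location.href = 'graph.html';
}

// On the graph page: read the rows back instead of re-running the GET.
var stored = sessionStorage.getItem('graphData');
if (stored) {
  drawGraph(JSON.parse(stored));   // drawGraph: whatever already renders the line graph
} else {
  fetchDataAndDraw();              // fall back to the existing GET for direct visits
}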

Client-side caching (with JavaScript)

I have an API that I'm querying via JS on the client side and then displaying the result on a page (again via JS).
I have a limit of 5 queries per second. In real life I can send a maximum of 11 API calls in one loop.
What i need:
I need to somehow get around the 11-call limit, because usually I need to make about 50 calls in one loop.
I need to make sure that I'm not sending the same API requests on every page refresh.
The obvious solution is caching. To comply with the speed requirements, ideally I would like to cache data on the client side.
The question:
How? I don't think cookies are a good solution because of the 4KB size limit. I heard about Google Gears (which was used for offline Gmail), but recent searches show that it no longer exists.
You can use localStorage, but only if you need the cache to survive browser refreshes. If you don't, you can just keep it in memory, e.g. in an array or object.
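A tiny sketch of the in-memory variant, keyed by request URL so repeated calls in the same page view never hit the API twice (the /api/data URL and render function are placeholders):

var responseCache = {};   // url -> Promise resolving to the parsed response

function cachedApiCall(url) {
  if (!responseCache[url]) {
    responseCache[url] = fetch(url).then(function (response) {
      return response.json();
    });
  }
  return responseCache[url];
}

// Repeated calls for the same URL reuse the first request, which also helps
// stay under the 5-queries-per-second limit.
cachedApiCall('/api/data?item=1').then(render);
cachedApiCall('/api/data?item=1').then(render);   // no second network request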
