How to optimize Ajax requests in jQuery? [closed] - javascript

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I am building a mobile app using jQuery Mobile, jQuery, and a PHP back end.
My problem is that on certain pages I am sending and receiving multiple Ajax requests, which visibly slow down performance and cause memory leaks, eventually crashing the app on slow network connections.
What are the possible ways of optimizing the Ajax requests?
Note: some Ajax requests are sent periodically (about once per second); others are sent in response to events.

First off, correctly written Ajax code does not leak memory. So, if you actually have a leak, that is likely caused by incorrectly written code, not by using Ajax.
Then, there are a number of optimizations you can consider.
Combine multiple requests to your server into one larger request. This saves roundtrips to the server, saves bandwidth, saves battery and increases performance.
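For example, here is a minimal sketch of batching, assuming a hypothetical /api/batch endpoint that accepts a list of operations in one POST:

function sendBatch(operations) {
  // One POST carries every pending operation; the endpoint name and
  // payload shape are assumptions for illustration.
  return $.ajax({
    url: "/api/batch",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify({ operations: operations })
  });
}

sendBatch([
  { type: "saveScore", score: 120 },
  { type: "logEvent", name: "level-complete" }
]).done(function (results) {
  console.log(results); // one response for all batched operations
});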
Don't poll every second. Be smarter about how often you poll your server.
Lengthen the polling interval, vary it based on likely activity, switch to "long polling", or switch to a WebSocket so you can do server push with no polling.
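For illustration, a minimal long-polling client, assuming a hypothetical /api/updates endpoint that the server holds open until it has news:

function longPoll() {
  $.ajax({ url: "/api/updates", timeout: 35000 }) // endpoint is an assumption
    .done(function (data) {
      render(data); // assumed app-specific handler
      longPoll();   // reconnect immediately; the server holds each request open
    })
    .fail(function () {
      setTimeout(longPoll, 5000); // back off briefly on errors
    });
}
longPoll();

Because the next request is only issued after the previous one settles, there is never more than one outstanding XHR.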
Analyze what is causing excessive memory consumption and fix bugs or redesign to curtail that. There is no reason that lots of Ajax calls should "leak" memory. It will chew up battery life, but need not leak memory if coded correctly.
Oh, and don't hesitate to pull up some high scale social web apps that already exist, open the debugger, switch to the network tab and study the heck out of what they are doing. You may as well learn from high scale apps that already have solved this issue.

Long Polling Is Better Than Polling
If you poll at a set interval, it is probably better to set a timeout on the server side and hold off returning a message until a given number of loops has passed. This makes especially good sense for SPAs.
However, if you do poll, you should increase the time interval with every empty response you get.
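A minimal sketch of that backoff, with an assumed /api/messages endpoint and showMessages handler:

var delay = 1000;     // start at one second
var maxDelay = 30000; // never wait more than 30 s
function poll() {
  $.getJSON("/api/messages") // endpoint is an assumption
    .done(function (messages) {
      if (messages.length === 0) {
        delay = Math.min(delay * 2, maxDelay); // empty response: back off
      } else {
        delay = 1000;           // activity: return to the fast interval
        showMessages(messages); // assumed app-specific handler
      }
    })
    .always(function () {
      setTimeout(poll, delay); // schedule the next poll, never overlapping
    });
}
poll();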
Group Requests By Events, Not By Entity
Let's say you poll every 10 seconds to retrieve new messages, and you also poll every 10 seconds to retrieve new notifications. Instead of making two requests, you should make a single Ajax request and return one big JSON response, which you then use to modify DOM elements on the client side.
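A sketch of that combined poll; the /api/poll endpoint, the response shape, and the handler names are assumptions:

var lastSyncTime = 0;
setInterval(function () {
  // One combined poll instead of separate message and notification polls.
  $.getJSON("/api/poll", { since: lastSyncTime }, function (res) {
    updateMessages(res.messages);           // assumed handlers that
    updateNotifications(res.notifications); // modify the DOM
    lastSyncTime = res.serverTime;
  });
}, 10000);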
Use Server-Sent Events Or Sockets If Possible
Sockets or Server-Sent Events will result in far fewer requests and will only notify you when something has actually happened on the server side. Here is a good comparison.
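On the client, SSE amounts to very little code. A minimal sketch, assuming a hypothetical /api/events endpoint and updateUI handler:

var source = new EventSource("/api/events");
source.onmessage = function (e) {
  var payload = JSON.parse(e.data); // the server pushes data; no polling
  updateUI(payload);                // assumed app-specific handler
};
source.onerror = function () {
  // EventSource reconnects automatically; log for visibility.
  console.warn("SSE connection lost, retrying...");
};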

Here are a few things you can do.
Firstly
Get rid of all the memory leaks. Put your application into a sandbox and give it the toughest time you can to surface any memory leaks. Investigate them and correct your code.
Then
Reduce the number of requests. Carefully research the best thresholds in your application and only trigger your requests at those points. Try to batch your requests into a single request whenever possible.
Use GET requests whenever possible. They are comparatively simple and fast. This step is important for an application which triggers a lot of requests in its operation.
Choose the data format carefully. It could be plain text, JSON, or XML. Figure out the one that has the lowest impact on your application and switch to that. Configure your server to use data compression if possible.
Optimize the server. Use proper Expires or Cache-Control headers for the content being served. Use ETags if possible.
Try putting your content on a CDN. This may place the resources geographically closer to the user and make your application work faster.

Also, if you want to build a chat server or exchange messages with the server quickly, you should use WebSockets, e.g. via socket.io.

There are mainly three ways to optimize (minimize) jQuery Ajax calls:
Pass the done callback function as an argument when calling your function;
return the jqXHR object from the function and assign the done callback to it;
or switch to using jQuery.ajax() instead and pass the entire options object.
Please refer to this link: How to optimize (minimize) jQuery AJAX calls
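A sketch of the second option, returning the jqXHR so several consumers can share a single request (the endpoint and handler names are assumptions):

function fetchUser(id) {
  // Return the jqXHR so every caller can attach its own done callback.
  return $.getJSON("/api/users/" + id);
}

var request = fetchUser(42);
request.done(function (user) { renderProfile(user); }); // assumed handler
request.done(function (user) { console.log(user.name); });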

You can use caching for your Ajax calls: fetch all the data during the first request and store it in a cache, so that subsequent data manipulation is done from the cached data with no further Ajax requests.
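A rough sketch of that idea, a tiny in-memory cache wrapped around $.getJSON:

var cache = {};
function cachedGet(url) {
  if (cache[url]) {
    // Serve repeat requests from memory: no network traffic at all.
    return $.Deferred().resolve(cache[url]).promise();
  }
  return $.getJSON(url).then(function (data) {
    cache[url] = data; // remember the response for next time
    return data;
  });
}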
You can also optimize Ajax request and response times by reducing the amount of data requested and sent with each call, i.e. by removing unwanted data from the exchange.
Other than that, optimization depends on your code and database logic. Optimized code and database logic will increase performance and reduce app crashes.

Create each new Ajax request from the previous one's callback
Each second you create a new XHR request instance, which stays in memory until its done callback executes. A better way is to create the new Ajax request in the previous request's complete callback; that way you have only one XHR instance alive at a time.
After the data has been sent to the server, clean up your object data, for example:
array.length = 0;  // empty the array in place
object = null;     // drop the reference so it can be garbage collected
image.src = "";    // release the image data
file = null;

As @jfriend00 said, Ajax requests don't cause memory leak issues by themselves...
it is probably your code that does!
The best way to do what you need is to open a socket layer to your data layer service, so that instead of polling you are notified whenever a domain-model change occurs.
Check out Socket.io...
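A minimal Socket.IO client sketch (the URL, event names, and handler are assumptions):

// Requires the Socket.IO client script to be loaded on the page.
var socket = io("https://example.com");
socket.emit("subscribe", { channel: "orders" });
socket.on("model-changed", function (change) {
  applyChange(change); // assumed handler; runs only on real server-side changes
});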
If you don't want to implement a socket layer, I don't think there are many ways to increase performance...
One thing you can do, in addition of course to code improvement work (refactoring, etc.), is to implement a queue management system...
Study how promises work (ES6, jQuery, Q, Bluebird);
create a QueueService, a simple JavaScript class that knows how to serialize all (Ajax) tasks as promises (see the sketch after this list);
interpose the QueueService between controllers and data layer access services;
implement a simple, lightweight cache system that prevents unnecessary Ajax requests!
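A tiny sketch of such a QueueService, serializing tasks on a jQuery promise chain (endpoints are assumptions):

function QueueService() {
  // The chain starts out already resolved so the first task runs at once.
  this.tail = $.Deferred().resolve().promise();
}
QueueService.prototype.enqueue = function (task) {
  // task is a function returning a promise (e.g. a $.ajax call); it is
  // passed as both the done and fail filter so the chain keeps going
  // even if an earlier task failed.
  this.tail = this.tail.then(task, task);
  return this.tail;
};

var queue = new QueueService();
queue.enqueue(function () { return $.post("/api/a"); });
queue.enqueue(function () { return $.post("/api/b"); }); // runs only after /api/a settles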

Related

What are some good use cases for Server Sent Events

I discovered SSE (Server-Sent Events) pretty late, and I can't seem to figure out use cases where it would be more efficient than using setInterval() and Ajax.
I guess that if we had to update the data multiple times per second, a single persistent connection would produce less overhead. But apart from that case, when would one really choose SSE?
I was thinking of this scenario:
A new user comment from the website is added to the database.
The server periodically queries the DB for changes. If it finds a new comment, it sends a notification to the client with SSE.
Also, this SSE question came to mind after having to implement a simple "live" website feature (when someone posts a comment, notify everybody who is on the site). Is there really another way of doing this without periodically querying the database?
Nowadays web technologies are used to implement all sorts of applications, including those which need to fetch constant updates from the server.
As an example, imagine having a graph on your web page which displays real-time data. Your page must refresh the graph any time there is new data to display.
Before Server Sent Events the only way to obtain new data from the server was to perform a new request every time.
Polling
As you pointed out in the question, one way to look for updates is to use setInterval() and an Ajax request. With this technique, our client performs a request once every X seconds whether or not there is new data. This technique is known as polling.
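In its simplest form (endpoint and handler are assumptions):

setInterval(function () {
  // Fires every 10 s even when nothing has changed on the server.
  $.getJSON("/api/comments/new", function (comments) {
    if (comments.length) showComments(comments); // assumed handler
  });
}, 10000);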
Events
Server-Sent Events, on the contrary, are asynchronous: the server itself notifies the client when there is new data available.
In the scenario of your example, you would implement SSE in such a way that the server sends an event immediately after adding the new comment, rather than by polling the DB.
Comparison
Now the question may be when is it advisable to use polling vs SSE. Aside from compatibility issues (not all browsers support SSE, although there are some polyfills which essentially emulate SSE via polling), you should focus on the frequency and regularity of the updates.
If you are uncertain about the frequency of the updates (how often new data should be available), SSE may be the solution because they avoid all the extra requests that polling would perform.
However, it is wrong to say in general that SSE produces less overhead than polling, because SSE requires an open TCP connection to work. This essentially means that some resources on the server (e.g. a worker and a network socket) are allocated to one client until the connection is closed. With polling, the connection may be reset after each request is answered.
Therefore, I would not recommend using SSE if the average number of connected clients is high, because this could create significant overhead on the server.
In general, I advise using SSE only if your application requires real-time updates. As a real-life example, I developed data acquisition software in the past and had to provide a web interface for it. In this case, a lot of graphs were updated every time a new data point was collected. That was a good fit for SSE: the number of connected clients was low (essentially one), the user interface had to update in real time, and the server was not flooded with requests as it would have been with polling.
Many applications do not require real time updates, and thus it is perfectly acceptable to display the updates with some delay. In this case, polling with a long interval may be viable.

best practices loading knockout js models from server side code

I've inherited a site that uses Knockout.js and ASP.NET. The site runs decently after everything has loaded, but the initial load leaves a lot to be desired. Digging through the code, there are around 20 models, each of which calls an Ajax method to get data from the server on page load. Quite a bit of data is queried from the DB, which causes the performance issue: the server sends the JS, then the client sends and receives a large amount of data over 20 method calls.
I want to run all of the queries on the server side before sending the page to the client, and then load the JS models from that data. I am thinking about embedding this data in the page as JSON, in a hidden div, and loading the models from there instead of via Ajax calls.
My question is, is this best practice? Is there a better way to optimize this scenario?
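For reference, a sketch of the inlining idea the question describes, bootstrapping Knockout from server-rendered JSON; the element id and PageViewModel are assumptions, and a script tag of type application/json is used rather than a hidden div so the JSON is never parsed as HTML:

// The server renders the query results into
// <script type="application/json" id="initial-data">...</script>;
// the client then builds every model from it with zero Ajax calls.
var initialData = JSON.parse(
  document.getElementById("initial-data").textContent
);
ko.applyBindings(new PageViewModel(initialData)); // assumed view model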
If you inline the data from the 20 queries in the page response, the page response time can be significantly prolonged: the browser has to sit and wait on the previous page, or on a blank page, until everything is ready.
However, if you keep the solution as-is, the user gets the initial page much faster, and the data pops in when it is ready.
Although the total load time is probably going to be better with the data inlined, the perceived performance from the user's perspective is going to be worse. Here is a nice post on the subject: http://www.lukew.com/ff/entry.asp?1797
Another benefit is that you avoid a weakest-link problem: with inlining, the page response time is that of the slowest query, which becomes quite severe under query timeout conditions.
Also be aware of failure handling: if one query fails, you must still inline the successful queries and handle the failed one.
I would argue that it is much better to do the queries from the browser.
There are some techniques to consider if you want the 20 queries to execute more efficiently. Consider using something like SignalR to send all queries over a single connection and have the results stream back over that same connection. I've used this technique previously with great success; it also enabled me to stream back cached results (from a server-side cache) before the up-to-date results from a slow backend service were returned.

Operational transformation and collaboration in real time

After reading this post (you can probably get the gist by looking at the images, no need to read the whole text), I'm having a hard time deciding at which point comet-type technologies are actually needed.
It looks to me (naively) that all of this can be accomplished by using Ajax requests and a database to retrieve several versions. Is that true?
Probably I'm missing something, so a clarification would be great.
UPDATE:
Given Andrew's helpful answer saying that an Ajax polling approach to this issue is not timely, I was wondering why; that is, at which stage does the response sent by the server to the client incur a delay?
Comet IS Ajax requests.
In order for the server to be able to push notifications to users' browsers (i.e. any time you see the server sending a change in the diagrams), the user needs to already have a connection with the server. The technique of maintaining that connection, using Ajax long polling or the like, is what the term comet refers to.
Yes, you could implement this by sending an Ajax request every x seconds. But that is wasteful, and it is not timely.
[Edit]
When I say it's not timely, what I am saying is that, using an ajax call to update on an interval will have a delay of whatever that interval is.
The server CANNOT send an update to the client. It can only answer requests from the client. So if the server gets new information, it has to sit on it until all the clients come back and ask for an update. In a scenario like this people can edit the same information and commit it at the same time, which needs to be handled by the server, and which is what the article is addressing. Using a comet framework will just reduce the chances of this happening because the different clients will be better synced.

Quick AJAX responses from Rails application

I have a need to send alerts to a web-based monitoring system written in RoR. The brute force solution is to frequently poll a lightweight controller with javascript. Naturally, the downside is that in order to get a tight response time on the alerts, I'd have to poll very frequently (every 5 seconds).
One idea I had was to have the AJAX-originated polling thread sleep on the server side until an alert arrived on the server. The server would then wake up the sleeping thread and get a response back to the web client that would be shown immediately. This would have allowed me to cut the polling interval down to once every 30 seconds or every minute while improving the time it took to alert the user.
One thing I didn't count on was that mongrel/rails doesn't launch a thread per web request as I had expected it to. That means that other incoming web requests block until the first thread's sleep times out.
I've tried tinkering around with calling "config.threadsafe!" in my configuration, but that doesn't seem to change the behavior to a thread per request model. Plus, it appears that running with config.threadsafe! is a risky proposition that could require a great deal more testing and rework on my existing application.
Any thoughts on the approach I took or better ways to go about getting the response times I'm looking for without the need to deluge the server with requests?
You could use Rails Metal to improve the controller performance or maybe even separate it out entirely into a Sinatra application (Sinatra can handle some serious request throughput).
Another idea is to look into a push solution using Juggernaut or similar.
One approach you could consider is to have (some or all of) your requests create deferred monitoring jobs in an external queue which would in turn periodically notify the monitoring application.
What you need is Juggernaut which is a Rails plugin that allows your app to initiate a connection and push data to the client. In other words your app can have a real time connection to the server with the advantage of instant updates.

is it better to group my ajax requests or send every request separately?

I'm developing an Ajax project, but I'm confused about something.
I have 3 functions that send data to the server; each function has a certain job.
Is it better to group the Ajax requests from each function and send them as one big request, which will reduce my request count and time, or to send each request separately, which will reduce the execution time of my server-side code but make many requests?
P.S.: my application works in both cases.
If you can consolidate them neatly into a single request, I'd suggest you do that. Browsers restrict how many simultaneous requests you can make (it can be as low as 2), as do many servers.
Of course, as always, there are exceptions. If you have some data that you need to query every 10 seconds, and other data you need to query every 10 minutes, it's really not necessary to query the 10-minute data along with the 10-second data every 10 seconds.
Perhaps use a single request with options that control what is sent back: you can request large or small amounts of data from the same endpoint, depending on the options you pass along with the request itself.
Just theory.
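A rough sketch of that option-driven request; the endpoint and the "include" flag are assumptions:

function fetchData(include) {
  // The client names the sections it wants; the server returns only those.
  return $.getJSON("/api/data", { include: include.join(",") });
}

fetchData(["stats"]);                    // small, frequent request
fetchData(["stats", "history", "logs"]); // large, occasional request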
There are several factors involved here.
Putting them together on the client has the advantage of the client doing more of the work, which is good.
All but the newest browsers only allow 2 requests to the same domain at a time. If you have a lot of latency, this would make you want to group requests together when they're going to the same domain. However, newer browsers allow 8 requests at a time, so this problem will gradually go away.
But on the other hand
Do the requests need to all be made at the same time? Keeping the requests separate will allow you to maintain flexibility.
Probably the most important: keeping the requests separate will probably result in more maintainable, straightforward code.
I would suggest doing whatever keeps the code on the browser straightforward. Javascript (or frameworks that do ajax for you) is difficult enough to maintain and coordinate. Keep it simple!
