UI unresponsive during AJAX calls - javascript

I have a dashboard screen that needs to make about 20 AJAX requests on load, each returning a different statistic. In total, it takes about 10 seconds for all the requests to come back, but during those 10 seconds the UI is pretty much locked.
I recall reading a JS book by Nick Zakas that described techniques for maintaining UI responsiveness during intensive operations (using timers). I'm wondering if there is a similar technique for dealing with my situation?
*I'm trying to avoid combining the AJAX calls for a number of reasons
$(".report").each(function(){
var container = $(this)
var stat = $(this).attr('id')
var cache = db.getItem(stat)
if(cache != null && cacheOn)
{
container.find(".value").html(cache)
}
else
{
$.ajax({
url: "/admin/" + stat,
cache: false,
success: function(value){
container.find(".value").html(value.stat)
db.setItem(stat, value.stat);
db.setItem("lastUpdate", new Date().getTime())
}
});
}
})

If you have access to jQuery, you can use the $.Deferred object (together with $.when) to make multiple async calls simultaneously and run a callback once they have all resolved.
http://api.jquery.com/category/deferred-object/
http://api.jquery.com/deferred.promise/
If each of these callbacks makes modifications to the DOM, you should store the changes in some temporary location (such as in-memory DOM objects) and then append them all at once; DOM manipulation calls are very time-consuming.
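For example, here is a minimal sketch of that approach, assuming the same .report markup and /admin/<stat> endpoints as the question; stashing the results with .data() is just one way to defer the DOM writes:

// Collect one $.ajax promise per report element; don't touch the DOM yet
var requests = $(".report").map(function () {
    var container = $(this);
    var stat = this.id;
    return $.ajax({ url: "/admin/" + stat, cache: false })
        .done(function (value) {
            // stash the result for later instead of writing it immediately
            container.data("pending", value.stat);
        });
}).get();

// When every call has resolved, write all the values in one pass
$.when.apply($, requests).done(function () {
    $(".report").each(function () {
        var container = $(this);
        container.find(".value").html(container.data("pending"));
    });
});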

I've had similar problems working heavily with SharePoint web services - you often need to pull data from multiple sources to generate input for a single process.
To solve it I embedded this kind of functionality into my AJAX abstraction library. You can easily define a request which will trigger a set of handlers when complete; each request, however, can be made up of multiple HTTP calls. Here's the component (and detailed documentation):
DPAJAX at DepressedPress.com
This simple example creates one request with three calls and then passes that information, in the call order, to a single handler:
// The handler function
function AddUp(Nums) { alert(Nums[1] + Nums[2] + Nums[3]) };
// Create the pool
myPool = DP_AJAX.createPool();
// Create the request
myRequest = DP_AJAX.createRequest(AddUp);
// Add the calls to the request
myRequest.addCall("GET", "http://www.mysite.com/Add.htm", [5,10]);
myRequest.addCall("GET", "http://www.mysite.com/Add.htm", [4,6]);
myRequest.addCall("GET", "http://www.mysite.com/Add.htm", [7,13]);
// Add the request to the pool
myPool.addRequest(myRequest);
Note that, unlike many of the other solutions provided, this method does not force single-threading of the calls being made - each will still run as quickly (or as slowly) as the environment allows, but the single handler will only be called when all are complete. It also supports the setting of timeout values and retry attempts if your service is a little flaky.
In your case you could make a single request (or group related requests - for example a quick "most needed" request and a longer-running "nice to have" request) to fetch all your data and display it all at the same time (or in chunks, if you use multiple requests) when complete. You can also explicitly set the number of background objects/threads to use, which might help with your performance issues.
I've found it insanely useful (and incredibly simple to understand from a code perspective). No more chaining, no more counting calls and saving output. Just "set it and forget it".
Oh - concerning your lockups - are you, by any chance, testing this on a local development platform (running the requests against a server on the same machine as the browser)? If so it may simply be that the machine itself is working on your requests and not at all indicative of an actual browser issue.

Related

How to make API abstraction layer code cleaner

Introduction
I'm developing a React application that has to communicate with a REST API. Currently the API isn't fully implemented yet, so I'm making mock-ups, and to avoid wasted code I'm adding an abstraction layer between the mock-up/API and the application.
Current situation
Currently I have classes representing the components (like 'a user') in the API. A GET request to a URL obj1/obj2/obj3/ is translated into JavaScript as server.get("obj1").get("obj2").get("obj3").fetch(...args).then(onsuccess, onerror).
fetch would return a promise; the other methods (push, set, update, pop) would work the same way.
The question
My question has 2 parts.
First, is there a way to clean up this part: .get("obj1").get("obj2").get("obj3")? (I don't think React supports Proxies.)
Secondly, if you have recursive requests
server.get("user").get(<id>).fetch(
(user)=>{
update_ui(user);
user.books.fetch(
(books)=>{
update_ui(books);
},(error)=>{}
)
},(error)=>{}
)
they can get ugly quickly. Is there a way, similar to .then(...).then(...) for promises, to flatten them - or something completely different that would result in better code?
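For reference, this is the kind of flattening I mean, sketched with the hypothetical API above (assuming fetch returns a promise when no callbacks are passed; id stands in for the <id> placeholder):

server.get("user").get(id).fetch()
    .then(function (user) {
        update_ui(user);
        // returning the next promise flattens the chain
        return user.books.fetch();
    })
    .then(function (books) {
        update_ui(books);
    })
    .catch(function (error) {
        // handle errors from either step in one place
    });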
What get().get()...fetch() does
The gets would construct a path from which the fetch (and similar) operations will be executed. For an actual API these would be URLs; for a mock-up this could be a hard-coded dictionary.
for example, get("users").get(<userid>), would correspond to an object of the form
{
path:"users.<userid>" //(or any other seperator)
fetch: function(...args) //GET specified in the api
push: null // api doesn't specify a POST request for this url
...
}
The translation of HTTP requests to JavaScript is as follows:
GET to fetch
POST to push
PUT to set
PATCH to update
DELETE to pop
The implementation of these methods (fetch, push, ...) would then use the path and the specified arguments to GET, POST, ... the data.
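To make the idea concrete, here is a minimal sketch of such a path-building wrapper; every name in it (makeNode, mockData) is hypothetical and only illustrates the structure described above:

// makeNode builds an object whose get() extends the path and whose
// fetch() resolves data for that path. A real API implementation would
// issue a GET to "/" + path.join("/"); a mock-up can walk a dictionary.
function makeNode(path, mockData) {
    return {
        path: path.join("."),
        get: function (segment) {
            return makeNode(path.concat(segment), mockData);
        },
        fetch: function () {
            var value = path.reduce(function (obj, key) {
                return obj && obj[key];
            }, mockData);
            return Promise.resolve(value);
        },
        push: null // no POST specified for this url
    };
}

// usage: server.get("users").get("42").fetch().then(...)
var server = makeNode([], { users: { "42": { name: "jane" } } });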

Ajax server side setup and teardown

I have the following situation:
Server-side, I have a relational database, and I am doing some rather computationally intensive requests on it. Each request involves a resource-intensive setup task. Many of my requests require this same setup step, which I don't want to repeat, which leads me to want a single call. However, each request takes a lot of time, which makes me want to issue them asynchronously as several calls, so that the user gets results as they become available. I could try to just "keep the setup step around", but I don't really want the server to guess when the client is done, and it can't rely on the client to tell it when to clean up.
Here's what I would LIKE to happen:
- Client-side, I gather up the requests that I want to make, A,B and C. They all require the same setup step.
- So that I don't have to repeat the setup step, I issue one ajax call for A,B and C simultaneously.
- So that the user doesn't have to wait forever for results, I return answers for A,B and C asynchronously, handing them back to the client as the results become available.
Something like:
$.ajax({
    type: "GET",
    url: "/?A=1&B=2&C=3",
    partialSuccess: function (data) {
        if (partialSuccess.which == "A") {
            doStuffForTaskA();
        }
    },
    success: function (data) {
        console.log("all tasks complete!");
    }
});
I'm pretty sure the sort of thing I have in the code above is not possible. Any thoughts on the best way to accomplish what I'm trying to do? (I am also the author of the server-side code. It happens to be C#, but I'm somewhat more interested in this as a 'which protocol, and how does it work' question.)

understanding of node js performance

I recently discovered Node.js, and I've read in various articles that Node.js is fast and can handle more requests than a Java server even though Node.js uses a single thread.
I understood that Node is based on an event loop; each call to a remote API or a database is made asynchronously, so the main thread is never blocked and the server can continue to handle other client requests.
If I understood correctly, each portion of code that can take time should be executed asynchronously; otherwise the server will be blocked and won't be able to handle other requests?
var server = http.createServer(function (request, response) {
    //CALL A METHOD WHICH CAN TAKE A LONG TIME TO EXECUTE
    slowSyncMethod();
    //WILL THE SERVER STILL BE ABLE TO HANDLE OTHER REQUESTS ??
    response.writeHead(200, {"Content-Type": "text/plain"});
    response.end("");
});
So if my understanding is correct, the above code is bad because the synchronous call to the slow method will block the Node.js main thread? Is Node.js fast only on the condition that all code that can take time is executed in an async manner?
Node.js is as fast as your hardware (VM) and the V8 engine that is running it. That being said, any heavy-duty task, like processing media files (music, images, video, etc.), will definitely lock your application, as will computation on large collections; that's why the async model is leveraged through events and deferred invocations. That being said, nothing stops you from spawning child processes to relegate heavy-duty work to and asynchronously getting back the result - a sketch follows below. But if you find yourself needing to do this for many tasks, maybe you should revisit your architecture.
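A minimal sketch of that child-process idea, assuming a hypothetical worker.js that wraps the slow call from the question (the message format here is made up for illustration):

// main.js - fork the slow work out so the event loop stays free
var http = require("http");
var child_process = require("child_process");

var server = http.createServer(function (request, response) {
    var worker = child_process.fork("./worker.js"); // hypothetical worker script
    worker.send({ task: "slow" });
    worker.on("message", function (result) {
        response.writeHead(200, { "Content-Type": "text/plain" });
        response.end(String(result));
    });
});
server.listen(8080);

// worker.js - does the blocking work and reports back to the parent
process.on("message", function (msg) {
    var result = slowSyncMethod(); // the long-running call from the question
    process.send(result);
    process.exit(0);
});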
I hope this helps.

Using worker/background processes in node.js vs async call

I want to know if there is any benefit in passing off db or other async calls to a worker process or processes. Specifically I'm using heroku and postgres. I've read up a good bit on node.js and how to structure your server so that the event loop isn't blocked and that smart architecture doesn't leave incoming requests hanging longer than 300ms or so.
Say I have the following:
app.get('/getsomeresults/:query', function(request, response){
var foo = request.params.query;
pg.connect(process.env.DATABASE_URL, function(err, client, done) {
client.query("SELECT * FROM users WHERE cat=$1", [foo],
function(err, result){
//do some stuff with result.rows that may take 1000ms
response.json({some:data})
});
});
});
Given that postgresql is async by nature, is there any real benefit to creating a worker process to handle the processing of the result set from the initial db call?
You don't gain any benefit from running async functions in another process, because the real work (running the SQL query) already happens in another process (postgres). Basically, the async/event-oriented design pattern is a lightweight process manager for things that run outside your process.
However, I noticed in your comment that the processing in the callback function does indeed take up a lot of CPU time (if that's really the case). That portion of code does benefit from being run in another process - it frees the main process to accept incoming requests.
There are two ways to structure such code. Either run the async function in a separate process (so that the callback doesn't block) or just run the relevant portion of the callback as a function in a separate process.
Calling client.query from a separate process won't give you a real benefit here, as sending queries to the server is already an asynchronous operation in node-pg. The real problem is the long execution time of your callback function: the callback runs synchronously in the main event loop and blocks other operations, so it would be a good idea to make it non-blocking.
Option 1: Fork a child process
Creating a new process every time the callback is executed is not a good idea, since each Node.js process needs its own environment, which is time-consuming to set up. Instead it would be better to create multiple server processes when the server is started and let them handle requests concurrently.
Option 2: Use Node.js clusters
Luckily, Node.js offers the cluster interface to achieve exactly this. Clusters give you the ability to handle multiple worker processes from one master process. It even supports connection pooling, so you can simply create an HTTP server in each child process and the incoming requests will be distributed among them automatically (node-pg supports pooling as well).
The cluster solution is also nice, because you don't have to change a lot in your code for that. Just write the master process code and start your existing code as workers.
The official documentation on Node.js clusters explains all aspects of clusters very well, so I won't go into details here. Just a short example of a possible master code:
var cluster = require("cluster");
var os = require("os");
var http = require("http");

if (cluster.isMaster)
    master();
else
    worker();

function master() {
    console.info("MASTER " + process.pid + " starting workers");

    //Create a worker for each CPU core
    var numWorkers = os.cpus().length;
    for (var i = 0; i < numWorkers; i++)
        cluster.fork();
}

function worker() {
    //Put your existing code here
    console.info("WORKER " + process.pid + " starting http server");

    var httpd = http.createServer();
    //...
}
Option 3: Split the result processing
I assume that the reason for the long execution time of the callback function is that you have to process a lot of result rows and that there is no chance to process the results in a faster way.
In that case it might also be a good idea to split the processing into several chunks using process.nextTick(). The chunks will run synchronously in several event-loop frames, but other operations (like event handlers) can be executed between these chunks. Here's a rough (and untested) sketch of how the code could look:
function(err, result) {
    var s, i;
    s = 0;
    processChunk();

    // process 100 rows in one frame
    function processChunk() {
        i = s;
        s += 100;
        while (i < result.rows.length && i < s) {
            //do some stuff with result.rows[i]
            i++;
        }
        if (i < result.rows.length) {
            process.nextTick(processChunk);
        } else {
            //go on (send the response)
        }
    }
}
I'm not 100% sure, but I think node-pg offers some way to receive a query result not as a whole, but split into several chunks. This would simplify the code a lot, so it might be an idea to search in that direction...
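For what it's worth, one add-on module that appears to provide this is pg-cursor, which reads a result set in batches; a rough, untested sketch, reusing the query and surrounding variables from the question:

var pg = require("pg");
var Cursor = require("pg-cursor");

pg.connect(process.env.DATABASE_URL, function (err, client, done) {
    var cursor = client.query(new Cursor("SELECT * FROM users WHERE cat=$1", [foo]));

    (function readChunk() {
        cursor.read(100, function (err, rows) {
            if (err || rows.length === 0) {
                done();
                return response.json({some: data}); // finish up as before
            }
            //do some stuff with this chunk of rows
            readChunk(); // then ask for the next chunk
        });
    })();
});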
Final conclusion
I would use option 2 in the first place and option 3 additionally, if new requests still have to wait too long.

Is google apps script synchronous?

I'm a Java developer learning JavaScript and Google Apps Script simultaneously. Being a newbie, I learned the syntax of JavaScript, not how it actually works, and I happily hacked away in Google Apps Script, writing code sequentially and synchronously, just like Java. All my code resembles this: (grossly simplified to show what I mean)
function doStuff() {
    var url = 'https://myCompany/api/query?term<term&search';
    var json = getJsonFromAPI(url);
    Logger.log(json);
}

function getJsonFromAPI(url) {
    var response = UrlFetchApp.fetch(url);
    var json = JSON.parse(response);
    return json;
}
And it works! It works just fine! If I hadn't kept on studying JavaScript, I'd say it works like clockwork. But JavaScript isn't clockwork, it's gloriously asynchronous, and from what I understand this should not work at all: it would "compile", but logging the json variable should log undefined. Yet it logs the JSON with no problem.
NOTE:
The code is written and executed in the Google Sheet's script editor.
Why is this?
While Google Apps Script implements a subset of ECMAScript 5, there's nothing forcing it to be asynchronous.
While it is true that JavaScript's major power is its asynchronous nature, the Google developers appear to have given that up in favor of a simpler, more straightforward API.
UrlFetchApp methods are synchronous. They return an HttpResponse object, and they do not take a callback. That, apparently, is an API decision.
Please note that this hasn't really changed since the introduction of the V8 runtime for Google Apps Script.
While we are on the latest and greatest version of ECMAScript, running Promise.all() over two functions I can see that the code in the second function is not executed until the first one has completed.
Also, there is still no setTimeout() global function to use in order to branch the order of execution, nor do any of the APIs provide callback functions or promise-like results. It seems the going philosophy in GAS is to make everything synchronous.
I'm guessing that, from Google's point of view, processing two tasks in parallel (for example, two that simply call Utilities.sleep(3000)) would require multiple threads on the server CPU, which may not be manageable and may be easy to abuse.
Whereas parallel processing on the client or on other companies' servers (e.g., Node.js) is up to that developer or user. (If they don't scale well, it's not Google's problem.)
However, there are some things that do use parallelism:
UrlFetchApp.fetchAll
UrlFetchApp.fetchAll() will asynchronously fetch many URLs. Although this is not what you're truly looking for, fetching URLs is a major reason to seek parallel processing.
I'm guessing Google reasons this is OK since fetchAll uses a web client and its own resources are already protected by quota.
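A short sketch of fetchAll (the URLs here are placeholders):

function fetchStatsInParallel() {
    // fetchAll takes an array of URLs (or request objects), issues the
    // requests in parallel, and returns one HTTPResponse per URL
    var urls = [
        "https://example.com/api/statA",
        "https://example.com/api/statB",
        "https://example.com/api/statC"
    ];
    var responses = UrlFetchApp.fetchAll(urls);
    responses.forEach(function (response) {
        Logger.log(response.getContentText());
    });
}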
FirebaseApp getAllData
Firebase, I have found, is very fast compared to using a spreadsheet for data storage. You can get many things from the database at once using FirebaseApp's getAllData:
function myFunction() {
    var baseUrl = "https://samplechat.firebaseio-demo.com/";
    var secret = "rl42VVo4jRX8dND7G2xoI";
    var database = FirebaseApp.getDatabaseByUrl(baseUrl, secret);

    // paths of 3 different user profiles
    var path1 = "users/jack";
    var path2 = "users/bob";
    var path3 = "users/jeane";

    Logger.log(database.getAllData([path1, path2, path3]));
}
HtmlService - IFrame mode
HtmlService - IFrame mode allows full multi-tasking by going out to client script where promises are truly supported and making parallel calls back into the server. You can initiate this process from the server, but since all the parallel tasks' results are returned in the client, it's unclear how to get them back to the server. You could make another server call and send the results, but I'm thinking the goal would be to get them back to the script that called HtmlService in the first place, unless you go with a beginRequest and endRequest type architecture.
tanaikech/RunAll
This is a library for running concurrent processing using only native Google Apps Script (GAS). It claims full support via a RunAll.Do(workers) method.
I'll update my answer if I find any other tricks.
