I have read a handful of setTimeout questions and none appear to relate to the question I have in mind. I know I could use setInterval(), but it is not my preferred option.
I have a web app that could in theory be running a day or more without a page refresh, making more than one check a minute. Am I likely to get tripped up if my function calls itself several hundred (or more) times via setTimeout? Will I reach "stack overflow", for example?
If I use setInterval there is a remote possibility of two requests being run at the same time especially on slow connections (where a second one is raised before the first is finished). I know I could create flags to test if a request is already active, but I'm afraid of false positives.
My solution is to have a function call my jQuery ajax code, do its thing, and then as part of ajaxComplete, do a setTimeout to call itself again in X seconds. This method also allows me to alter the duration between calls, so if my server is busy (slow), one reply can set a flag to increase the time between ajax calls.
Sample code of my idea...
function ServiceOrderSync()
{
    // 1. Sync stage changes from client with the server
    // 2. Sync new orders from server to this client
    $.ajax({
        "data": dataurl,
        "success": function(data)
        {
            // process my data
        },
        "complete": function(data)
        {
            // Queue the next sync
            setTimeout( ServiceOrderSync, 15000 );
        }
    });
}
You won't get a stack overflow, since the call isn't truly recursive (I call it "pseudo-recursive").
JavaScript is an event-driven language. When you call setTimeout, it just queues an event in the list of pending events; code execution continues from where you are, and the call stack gets completely unwound before the next event is pulled from that list.
p.s. I'd strongly recommend using Promises if you're using jQuery to handle async code.
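For illustration, a rough sketch of the same polling loop written against the promise-like jqXHR that $.ajax returns (dataurl and the 15-second delay come from the question; the success handling is only a placeholder):
function ServiceOrderSync() {
    $.ajax({ data: dataurl })
        .done(function (data) {
            // process the data here (placeholder for the question's success handling)
        })
        .always(function () {
            // queue the next sync whether the request succeeded or failed
            setTimeout(ServiceOrderSync, 15000);
        });
}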
I have a script I'm using to schedule resources using an API and a list in CSV format. The script currently loops through the CSV and fires off a call to a function that has the API calls in it. The AJAX calls are nested (Create a Reservation->Take the reservation number and add a resource->Validate the reservation->Submit the reservation). The problem is that after the original AJAX call, it seems to hang until all of the AJAX calls have completed. There doesn't seem to be any asynchronicity going on.
for(line in CSV)
{
makeAPICalls(line)
}
function makeAPICalls(line)
{
$.ajax("Create Reservation").then(function(){
$.ajax("Add Resource to Reservation").then(function(){
$.ajax("Validate Reservation").then(function(){
$.ajax("Confirm Reservation")
})
})
})
}
The first API call ("Create Reservation") completes, and then waits for all of the other lines in the CSV to make that call; then they ALL move on to the next step ("Add Resource to Reservation"). I was wondering if the system was just moving too quickly, so that there wasn't a chance for everything to get "out of sync", so I added a delay before makeAPICalls(), but it still waited. Once the CSV loop finished, all the AJAX calls moved from "Create Reservation" to the .then() step ("Add Resource to Reservation").
Is this as expected? Ideally I'd like each call to makeAPICalls() to finish as quickly as possible, with no regard for any other calls (which I kind of thought was what async was all about, but it doesn't seem to be happening here).
This is happening because you are chaining the requests. If your requests are not dependent on each other, you can call them without using .then().
The behaviour is quite correct. I don't know how you are putting in the delay, but it probably won't help, since JavaScript is single-threaded.
If you want all the steps to complete for a specific CSV line, you could have your function process the list one by one. You could even have the last step call back into the function with the next index to process.
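A rough sketch of that approach, reusing the placeholder requests from the question (linesFromCSV here stands for the parsed CSV rows):
function processLine(lines, index) {
    if (index >= lines.length) {
        return; // every CSV line has been handled
    }
    $.ajax("Create Reservation").then(function () {
        return $.ajax("Add Resource to Reservation");
    }).then(function () {
        return $.ajax("Validate Reservation");
    }).then(function () {
        return $.ajax("Confirm Reservation");
    }).then(function () {
        // the last step for this line finished, so move on to the next one
        processLine(lines, index + 1);
    });
}

processLine(linesFromCSV, 0);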
When are callbacks executed, for example the callback of a setTimeout() or a click event?
Do they pause code that is already running, or do they wait until it has finished?
Example
I have a data structure (incrementalChanges) that records state changes caused by user interactions, for example mouse clicks. If I want to send all changes to another peer, I send him this data structure.
Another possibility is a full synchronisation (makeFullSync()), meaning I send him my complete current state, after which I must empty the incremental changes (deleteIncrementalChanges()). That is what you can see in the code. However, I am not sure what happens if a user clicks something exactly between these two function calls. If this event fires immediately, then an item would be added to the incrementalChanges structure but then deleted straight away by the second call, so it would never be sent and the other peer's state would become invalid.
makeFullSync();
/* what if between these 2 calls a new change is made, that is saved in the
changes data structure, that will be deleted by deleteIncrementalChanges()?
Then this change would be lost? If I change the order it is not better ...
*/
deleteIncrementalChanges();
Some good links would be welcome, and, in case the first scenario (it pauses running code) is true, solutions as well.
JavaScript is single threaded, and keeps an event queue of stuff it needs to get to once it's done running the current code it's working on. It will not start the next event in the queue until the current one is finished.
If you make multiple asynchronous calls, such as calls for a server to update data on another client, you need to structure your code to handle the case where they don't necessarily reach the second client in the same order.
If you're sending changes one at a time to another user, you can time stamp the changes to track what order they were made on the first client.
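For instance, a hypothetical shape for such a time-stamped change record (the field names are made up for illustration; only incrementalChanges comes from the question):
var change = {
    type: 'click',              // what kind of user interaction happened
    target: 'someElementId',    // placeholder for whatever the change describes
    madeAt: Date.now()          // millisecond timestamp from the originating client
};
incrementalChanges.push(change);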
Do they pause code that is already running, or do they wait until it has finished?
They wait until it has finished. JavaScript is single threaded: no two pieces of code can run at once. JS uses an event loop to handle asynchronous stuff. If an event such as a click handler or timer firing happens while another piece of code is running, that event is queued up and runs after the currently running code finishes executing.
Assuming makeFullSync(); and deleteIncrementalChanges(); are called in the same chunk of code they will be executed one after another without any click events being processed until after they have both run.
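A minimal, self-contained demonstration of that ordering, using a zero-delay timer in place of a click event:
setTimeout(function () {
    console.log('queued callback');   // queued, runs only after the code below finishes
}, 0);

console.log('first synchronous statement');   // think makeFullSync()
console.log('second synchronous statement');  // think deleteIncrementalChanges()
// Output order: the two synchronous logs, then "queued callback".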
One near-exception to the "nothing runs in parallel" rule in JS is Web Workers. You can send data off to a worker for processing, which will happen in another thread. Even though they run in parallel, their results are inserted back into the event loop like any other event.
I have an AJAX request that takes a long time to complete executing. So long that it usually times out and just continues running on the server. So I was thinking that if there was some way to just start it and occasionally poll the same request for completeness, that would be ideal.
So my question is: can you run a server script, let it run, and poll occasionally to see if it has completed? I'm using a web method to run on the server, if that helps/matters.
I don't want to store the data in a database, so I can't just poll the database; can I poll the script itself?
My fallback, however, is to create a temp table that I can query to see progress. Or is there some better way?
Firstly, if you do not want your Ajax request to time out, you can use the 'timeout' attribute.
The XMLHttpRequest object used for Ajax calls has a 'timeout' attribute. Check out the documentation at http://www.w3.org/TR/XMLHttpRequest/#the-timeout-attribute. If you are making the Ajax call using jQuery, you can simply do something like
$.ajax({
    url: 'your url',
    timeout: 100000, // this sets the timeout as 100 sec
    // ...your other attributes for the call
});
If you are interested in tracking your request's progress, that might be a little difficult unless you only want to track completeness (complete), error, success or a particular status code. jQuery provides different properties for these (complete, error, success and statusCode). If you write a native Ajax call (without jQuery), the onreadystatechange event can track the following readyState values:
0 - UNSENT, 1 - OPENED, 2 - HEADERS_RECEIVED, 3 - LOADING (response is being received), 4 - DONE. More at http://www.w3.org/TR/XMLHttpRequest/#event-handlers.
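For reference, a bare-bones native XMLHttpRequest sketch that logs those readyState values as they change (the URL is a placeholder):
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
    // readyState moves from 0 (UNSENT) through 4 (DONE) as the request progresses
    console.log('readyState:', xhr.readyState);
    if (xhr.readyState === 4 && xhr.status === 200) {
        console.log('response:', xhr.responseText);
    }
};
xhr.open('GET', '/your-long-running-endpoint'); // placeholder URL
xhr.timeout = 100000;                           // same 100-second timeout as the jQuery example
xhr.send();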
Tracking anything besides this would be for you to develop, maintain and monitor in your code I think.
I am working with ExtJS 4.2, and at one place I am loading the store object like this:
var userDetailStore = Ext.create('Ext.data.Store', {
model : 'Person.DetailsModel',
autoLoad : true,
proxy : {
type : 'ajax',
method : 'POST',
url : 'getValueAction.action',
reader : {
type : 'json',
root : 'details'
},
writer : {
type : 'json',
root : 'details'
}
},
fields : ['id','loginName','referenceId' ,'name']
});//Here I load the store which will definitely contain a list of values.
and in the very next line I want to get the referenceId of the first value from the store object, like this:
var empId = userDetailStore.getAt(0).get('referenceId')
and I am getting an error, because at this point getCount() on the store object userDetailStore is giving me zero. But if I write an alert statement like alert('loading data'); before the line where I get the referenceId, then the code works fine: the line userDetailStore.getCount() gives me the exact value.
So I think some kind of delay is required between loading the store and then using the store, but I don't want an alert to show. I have even used a sleep() method in place of the alert statement, but that is also not working. (BTW I don't even want to freeze the browser by executing sleep().)
Am I doing anything wrong while loading the store? Is there any general way to execute the code that uses the store only after the store is completely loaded?
Somebody please help me out here...
Vijay's answer is correct, but I thought I'd expand on the concept so that it's clear how this answer fits into what you're doing.
It's important to understand that when you make an AJAX request, the request is asynchronous. What this means in practical terms is that (as you found out) the remainder of your calling script does not wait for the asynchronous process to complete. Rather, the moment that you make an asynchronous request, your script is going to continue on its merry way, executing the very next line of code.
So if you think about it, this makes perfect sense why you were not seeing a "count" in your store. While your async request was in the process of going to the server, getting the result, and then returning it to your request, the rest of your code kept right on executing, oblivious to what was happening in the async request (and this is precisely why async requests are powerful and awesome).
This is also why adding the alert seemed to "fix" your problem. When you call alert(), you literally halt execution of your script at the point of the alert. However, since your request for data was asynchronous, the time it took you to click the "OK" button of the alert (and hence resume processing of your script) gave the async request enough time to complete its lifecycle and update the original calling object.
In light of this, it's understandable why it would seem that a "delay" would be a desirable way to go, since the "delay" (or really, "halting") of the alert fixed your issue (at least on the surface). However, with async requests, you can never really know how long it's going to take to complete. If you have a large response, or there is unusual network latency, or any other number of issues... the hard-coded delay might work, but it also might not. Most maddening of all is that you'd never get consistent results, and would constantly be increasing the "delay" in order to accommodate all the things that could contribute to your async request taking longer and longer.
This is why the load() event of the store (and callbacks in general) is such a critical concept to understand and implement. By listening for the load() event, and then executing what code you need only within the context of that event firing, you can know for sure that the store's async request for data has completed.
If you've not used callbacks and event handling before, it does take a bit of getting used to in order to break out of the linear, procedural mindset. However, when dealing with AJAX requests in general, and event-driven frameworks like ExtJS 4 in particular, it's a concept you need to embrace in order to build effective and consistent applications.
Use the load event to get the count after the store is fully loaded:
userDetailStore.on('load', function(){
    // the data has arrived, so getCount()/getAt() are safe to use here
    alert("Fully loaded");
});
Alternatively, set autoLoad to false, and on some action you can use load() to load your store, passing a callback:
store.load({
callback: function(records, operation, success) {
// do something after the load finishes
},
scope: this
});
I'm having an issue with some asynchronous JavaScript code that fetches values from a database using ajax.
Essentially, what I'd like to do is refresh a page once a list has been populated. For that purpose, I tried inserting the following code into the function that populates the list:
var timer1;
timer1 = setTimeout([refresh function], 1000);
As you might imagine, this works fine when the list population takes less than 1 second, but causes issues when it takes longer. So I had the idea of inserting this code into the function called on the success of each ajax call:
clearTimeout(timer1);
timer1 = setTimeout([refresh function], 1000);
So in theory, every time a list element is fetched the timer should reset, meaning that the refresh function should only ever be called 1 second after the final list element is successfully retrieved. However, in execution all that happens is that timer1 is reset once, the first time the second block of code is reached.
Can anybody see what the problem might be? Or if there's a better way of doing this? Thanks.
==========
EDIT: To clear up how my ajax calls work: one of the issues with the code's structure is that the ajax calls are actually nested; the callback method of the original ajax call is itself another ajax call, whose callback method contains a database transaction (incorrect - see below). In addition, I have two such methods running simultaneously. What I need is a way to ensure that ALL calls at all levels have completed before refreshing the page. This is why I thought that giving both methods one timer, and resetting it every time one of the callback methods was called, would keep pushing its execution back until all threads were complete.
Quite honestly, the code is very involved-- around 140 lines including auxiliary methods-- and I don't think that posting it here is feasible. Sorry-- if no one can help without code, then perhaps I'll bite the bullet and try copying it here in a format that makes some kind of sense.
==========
EDIT2: Here's a general workflow of what the methods are trying to do. The function is a 'synchronisation' function, one that both sends data to and retrieves data from the server.
I. Function is called which retrieves items from the local database
i. Every time an item is fetched, it is sent to the server (ajax)
a. When the ajax calls back, the item is updated locally to reflect its success/failure
II. A (separate) list of items is retrieved from the local database
i. Every time an item is fetched, an item matching that item's ID is fetched from the server (ajax)
a. On successful fetch from server, the items are compared
b. If the server-side item is more recent, the local item is updated
So the places I inserted the second code block above are in the 'i.' sections of each method, in other words, where the ajax should be calling back (repeatedly). I realize that I was in error in my comments above; there is actually never a nested ajax call, but rather a database transaction inside an ajax call inside a database transaction.
You're doing pretty well so far. The trick you want to use is to chain your events together, something like this:
function refresh()
{
invokeMyAjaxCall(param1, param2, param3, onSuccessCallback, onFailureCallback);
}
function onSuccessCallback()
{
// Update my objects here
// Once all the objects have been updated, trigger another ajax call
setTimeout(refresh, 1000);
}
function onFailureCallback()
{
// Notify the user that something failed
// Once you've dealt with the failures, trigger another call in 1 sec
setTimeout(refresh, 1000);
}
Now, the difficulty with this is: what happens if a call fails? Ideally, it sounds like you want to ensure that you are continually updating information from the server, and even if a temporary failure occurs you want to keep going.
I've assumed here that your AJAX library permits you to do a failure callback. However, I've seen some cases when libraries hang without either failing or succeeding. If necessary, you may need to use a separate set of logic to determine if the connection with the server has been interrupted and restart your callback sequence.
EDIT: I suspect that the problem you've got is a result of queueing the next call before the first call is done. Basically, you're setting up a race condition: can the first call finish before the next call is triggered? It may work most times, or it may work once, or it may work nearly all the time; but unless the setTimeout() is the very last statement in your "response-processing" code, this kind of race condition will always be a potential problem.
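If several independent callback chains are running at once, one possible way around that race is to count outstanding requests and only schedule the refresh when the count drops back to zero; the names beginRequest, endRequest and refreshPage below are placeholders for your own code:
var pendingRequests = 0;

function beginRequest() {
    pendingRequests++;
}

function endRequest() {
    pendingRequests--;
    if (pendingRequests === 0) {
        // every callback, at every nesting level, has completed
        refreshPage();
    }
}

// Call beginRequest() just before each $.ajax() or transaction is started,
// and endRequest() as the very last statement of its success/error callback.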