js/ajax with onkeypress and throttling - javascript

Working on a dynamic request that fires as a person types.
I would like to throttle it so that not EVERY key press fires off a call.
My first thought was to do a setTimeout of 1s and clear the timeout with each keypress, so the request only goes out after a 1s lag in typing.
Wondering if there are any cleaner suggestions.

Underscore.js offers a throttle function that creates a version of a function that executes at most once every x milliseconds. You might want to look into that.
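For the "check as they type" case specifically, Underscore's debounce (wait until the user pauses) is usually a better fit than throttle. A minimal sketch, assuming an input element and a checkUsername() function that are placeholders rather than anything from the question:
var throttledCheck = _.throttle(checkUsername, 300);  // at most once every 300ms while typing
var debouncedCheck = _.debounce(checkUsername, 300);  // only after 300ms of no typing

document.getElementById('username').addEventListener('keyup', debouncedCheck);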

That solution will work well if you don't have a lot of other scripts running at the same time.
Otherwise you can set up a counter with an if statement that triggers the call on every 5th keypress, for example.
Something like this:
if (i == 5) {
    // Execute your call
}
And then you increase the value of i each time a key is pressed.
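A fuller sketch of that counter idea, under a couple of assumptions: the input has a hypothetical id of username, and sendRequest() stands in for whatever fires the AJAX call:
var i = 0;
document.getElementById('username').addEventListener('keypress', function () {
    i++;
    if (i == 5) {
        i = 0;          // reset so the next request fires after 5 more keypresses
        sendRequest();  // placeholder for the AJAX call
    }
});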
But of course there are even more solutions as well.

Related

Difference between running immediately and delaying 0 millisecond [duplicate]

I've recently run into a rather nasty bug, wherein the code was loading a <select> dynamically via JavaScript. This dynamically loaded <select> had a pre-selected value. In IE6, we already had code to fix the selected <option>, because sometimes the <select>'s selectedIndex value would be out of sync with the selected <option>'s index attribute, as below:
field.selectedIndex = element.index;
However, this code wasn't working. Even though the field's selectedIndex was being set correctly, the wrong index would end up being selected. However, if I stuck an alert() statement in at the right time, the correct option would be selected. Thinking this might be some sort of timing issue, I tried something random that I'd seen in code before:
var wrapFn = (function() {
    var myField = field;
    var myElement = element;
    return function() {
        myField.selectedIndex = myElement.index;
    }
})();
setTimeout(wrapFn, 0);
And this worked!
I've got a solution for my problem, but I'm uneasy that I don't know exactly why this fixes my problem. Does anyone have an official explanation? What browser issue am I avoiding by calling my function "later" using setTimeout()?
In the question, there existed a race condition between:
The browser's attempt to initialize the drop-down list, ready to have its selected index updated, and
Your code to set the selected index
Your code was consistently winning this race and attempting to set drop-down selection before the browser was ready, meaning that the bug would appear.
This race existed because JavaScript has a single thread of execution that is shared with page rendering. In effect, running JavaScript blocks the updating of the DOM.
Your workaround was:
setTimeout(callback, 0)
Invoking setTimeout with a callback and zero as the second argument will schedule the callback to be run asynchronously, after the shortest possible delay - which will be around 10ms when the tab has focus and the JavaScript thread of execution is not busy.
The OP's solution, therefore, was to delay the setting of the selected index by about 10ms. This gave the browser an opportunity to initialize the DOM, fixing the bug.
Every version of Internet Explorer exhibited quirky behaviors and this kind of workaround was necessary at times. Alternatively it might have been a genuine bug in the OP's codebase.
See Philip Roberts' talk "What the heck is the event loop?" for a more thorough explanation.
Preface:
Some of the other answers are correct but don't actually illustrate what the problem being solved is, so I created this answer to present that detailed illustration.
As such, I am posting a detailed walk-through of what the browser does and how using setTimeout() helps. It looks longish but is actually very simple and straightforward - I just made it very detailed.
UPDATE: I have made a JSFiddle to live-demonstrate the explanation below: http://jsfiddle.net/C2YBE/31/ . Many thanks to @ThangChung for helping to kickstart it.
UPDATE2: Just in case JSFiddle web site dies, or deletes the code, I added the code to this answer at the very end.
DETAILS:
Imagine a web app with a "do something" button and a result div.
The onClick handler for "do something" button calls a function "LongCalc()", which does 2 things:
Makes a very long calculation (say takes 3 min)
Prints the results of calculation into the result div.
Now, your users start testing this, click "do something" button, and the page sits there doing seemingly nothing for 3 minutes, they get restless, click the button again, wait 1 min, nothing happens, click button again...
The problem is obvious - you want a "Status" DIV, which shows what's going on. Let's see how that works.
So you add a "Status" DIV (initially empty), and modify the onclick handler (function LongCalc()) to do 4 things:
Populate the status "Calculating... may take ~3 minutes" into status DIV
Makes a very long calculation (say takes 3 min)
Prints the results of calculation into the result div.
Populate the status "Calculation done" into status DIV
And, you happily give the app to users to re-test.
They come back to you looking very angry. And explain that when they clicked the button, the Status DIV never got updated with "Calculating..." status!!!
You scratch your head, ask around on StackOverflow (or read docs or google), and realize the problem:
The browser places all its "TODO" tasks (both UI tasks and JavaScript commands) resulting from events into a single queue. And unfortunately, re-drawing the "Status" DIV with the new "Calculating..." value is a separate TODO which goes to the end of the queue!
Here's a breakdown of the events during your user's test, contents of the queue after each event:
Queue: [Empty]
Event: Click the button. Queue after event: [Execute OnClick handler(lines 1-4)]
Event: Execute first line in OnClick handler (e.g. change Status DIV value). Queue after event: [Execute OnClick handler(lines 2-4), re-draw Status DIV with new "Calculating" value]. Please note that while the DOM changes happen instantaneously, to re-draw the corresponding DOM element you need a new event, triggered by the DOM change, that went at the end of the queue.
PROBLEM!!! PROBLEM!!! Details explained below.
Event: Execute second line in handler (calculation). Queue after: [Execute OnClick handler(lines 3-4), re-draw Status DIV with "Calculating" value].
Event: Execute 3rd line in handler (populate result DIV). Queue after: [Execute OnClick handler(line 4), re-draw Status DIV with "Calculating" value, re-draw result DIV with result].
Event: Execute 4th line in handler (populate status DIV with "DONE"). Queue: [Execute OnClick handler, re-draw Status DIV with "Calculating" value, re-draw result DIV with result; re-draw Status DIV with "DONE" value].
Event: execute implied return from onclick handler sub. We take the "Execute OnClick handler" off the queue and start executing next item on the queue.
NOTE: Since we already finished the calculation, 3 minutes already passed for the user. The re-draw event didn't happen yet!!!
Event: re-draw Status DIV with "Calculating" value. We do the re-draw and take that off the queue.
Event: re-draw Result DIV with result value. We do the re-draw and take that off the queue.
Event: re-draw Status DIV with "Done" value. We do the re-draw and take that off the queue.
Sharp-eyed viewers might even notice the Status DIV with the "Calculating" value flashing for a fraction of a microsecond - AFTER THE CALCULATION FINISHED.
So, the underlying problem is that the re-draw event for "Status" DIV is placed on the queue at the end, AFTER the "execute line 2" event which takes 3 minutes, so the actual re-draw doesn't happen until AFTER the calculation is done.
To the rescue comes setTimeout(). How does it help? Because by calling long-executing code via setTimeout, you actually create 2 events: the setTimeout execution itself, and (due to the 0 timeout) a separate queue entry for the code being executed.
So, to fix your problem, you modify your onClick handler to be TWO statements (in a new function or just a block within onClick):
Populate the status "Calculating... may take ~3 minutes" into status DIV
Execute setTimeout() with 0 timeout and a call to LongCalc() function.
The LongCalc() function is almost the same as last time, but obviously no longer has the "Calculating..." status DIV update as its first step; it starts the calculation right away.
So, what does the event sequence and the queue look like now?
Queue: [Empty]
Event: Click the button. Queue after event: [Execute OnClick handler(status update, setTimeout() call)]
Event: Execute first line in OnClick handler (e.g. change Status DIV value). Queue after event: [Execute OnClick handler(which is a setTimeout call), re-draw Status DIV with new "Calculating" value].
Event: Execute second line in handler (setTimeout call). Queue after: [re-draw Status DIV with "Calculating" value]. The queue has nothing new in it for 0 more seconds.
Event: Alarm from the timeout goes off, 0 seconds later. Queue after: [re-draw Status DIV with "Calculating" value, execute LongCalc (lines 1-3)].
Event: re-draw Status DIV with "Calculating" value. Queue after: [execute LongCalc (lines 1-3)]. Please note that this re-draw event might actually happen BEFORE the alarm goes off, which works just as well.
...
Hooray! The Status DIV just got updated to "Calculating..." before the calculation started!!!
Below is the sample code from the JSFiddle illustrating these examples: http://jsfiddle.net/C2YBE/31/ :
HTML code:
<table border=1>
    <tr><td><button id='do'>Do long calc - bad status!</button></td>
        <td><div id='status'>Not Calculating yet.</div></td>
    </tr>
    <tr><td><button id='do_ok'>Do long calc - good status!</button></td>
        <td><div id='status_ok'>Not Calculating yet.</div></td>
    </tr>
</table>
JavaScript code: (Executed on onDomReady and may require jQuery 1.9)
function long_running(status_div) {
    var result = 0;
    // Use 1000/700/300 limits in Chrome,
    //     300/100/100 in IE8,
    //     1000/500/200 in FireFox.
    // I have no idea why identical runtimes fail on diff browsers.
    for (var i = 0; i < 1000; i++) {
        for (var j = 0; j < 700; j++) {
            for (var k = 0; k < 300; k++) {
                result = result + i + j + k;
            }
        }
    }
    $(status_div).text('calculation done');
}
// Assign events to buttons
$('#do').on('click', function () {
    $('#status').text('calculating....');
    long_running('#status');
});

$('#do_ok').on('click', function () {
    $('#status_ok').text('calculating....');
    // This works on IE8. Works in Chrome.
    // Does NOT work in FireFox 25 with timeout = 0 or = 1.
    // DOES work in FF if you change timeout from 0 to 500.
    window.setTimeout(function () { long_running('#status_ok'); }, 0);
});
Take a look at John Resig's article about How JavaScript Timers Work. When you set a timeout, it actually queues the asynchronous code until the engine executes the current call stack.
setTimeout() buys you some time until the DOM elements are loaded, even if it is set to 0.
Check this out: setTimeout
There are conflicting upvoted answers here, and without proof there is no way to know whom to believe. Here is proof that @DVK is right and @SalvadorDali is incorrect. The latter claims:
"And here is why: it is not possible to have setTimeout with a time
delay of 0 milliseconds. The Minimum value is determined by the
browser and it is not 0 milliseconds. Historically browsers sets this
minimum to 10 milliseconds, but the HTML5 specs and modern browsers
have it set at 4 milliseconds."
The 4ms minimum timeout is irrelevant to what is happening. What really happens is that setTimeout pushes the callback function to the end of the execution queue. If after setTimeout(callback, 0) you have blocking code which takes several seconds to run, the callback will not be executed for several seconds, until the blocking code has finished. Try this code:
function testSettimeout0 () {
    var startTime = new Date().getTime()
    console.log('setting timeout 0 callback at ' + sinceStart())
    setTimeout(function () {
        console.log('in timeout callback at ' + sinceStart())
    }, 0)
    console.log('starting blocking loop at ' + sinceStart())
    while (sinceStart() < 3000) {
        continue
    }
    console.log('blocking loop ended at ' + sinceStart())
    return // functions below
    function sinceStart () {
        return new Date().getTime() - startTime
    } // sinceStart
} // testSettimeout0
Output is:
setting timeout 0 callback at 0
starting blocking loop at 5
blocking loop ended at 3000
in timeout callback at 3033
Browsers have a process called the "main thread" that is responsible for executing JavaScript tasks and UI updates, e.g. painting, redraw, reflow, etc.
JavaScript tasks are queued to a message queue and then are dispatched to the browser's main thread to be executed.
When UI updates are generated while the main thread is busy, tasks are added into the message queue.
Both of the top-rated answers are wrong. Check out the MDN description of the concurrency model and the event loop, and it should become clear what's going on (that MDN resource is a real gem). And simply using setTimeout can add unexpected problems to your code in addition to "solving" this little problem.
What's actually going on here is not that "the browser might not be quite ready yet because concurrency," or something based on "each line is an event that gets added to the back of the queue".
The jsfiddle provided by DVK indeed illustrates a problem, but his explanation for it isn't correct.
What's happening in his code is that he's first attaching an event handler to the click event on the #do button.
Then, when you actually click the button, a message is created referencing the event handler function, which gets added to the message queue. When the event loop reaches this message, it creates a frame on the stack, with the function call to the click event handler in the jsfiddle.
And this is where it gets interesting. We're so used to thinking of Javascript as being asynchronous that we're prone to overlook this tiny fact: Any frame has to be executed, in full, before the next frame can be executed. No concurrency, people.
What does this mean? It means that whenever a function is invoked from the message queue, it blocks the queue until the stack it generates has been emptied. Or, in more general terms, it blocks until the function has returned. And it blocks everything, including DOM rendering operations, scrolling, and whatnot. If you want confirmation, just try to increase the duration of the long running operation in the fiddle (e.g. run the outer loop 10 more times), and you'll notice that while it runs, you cannot scroll the page. If it runs long enough, your browser will ask you if you want to kill the process, because it's making the page unresponsive. The frame is being executed, and the event loop and message queue are stuck until it finishes.
So why this side-effect of the text not updating? Because while you have changed the value of the element in the DOM — you can console.log() its value immediately after changing it and see that it has been changed (which shows why DVK's explanation isn't correct) — the browser is waiting for the stack to deplete (the on handler function to return) and thus the message to finish, so that it can eventually get around to executing the message that has been added by the runtime as a reaction to our mutation operation, and in order to reflect that mutation in the UI.
This is because we are actually waiting for code to finish running. We haven't said "someone fetch this and then call this function with the results, thanks, and now I'm done so imma return, do whatever now," like we usually do with our event-based asynchronous Javascript. We enter a click event handler function, we update a DOM element, we call another function, the other function works for a long time and then returns, we then update the same DOM element, and then we return from the initial function, effectively emptying the stack. And then the browser can get to the next message in the queue, which might very well be a message generated by us by triggering some internal "on-DOM-mutation" type event.
The browser UI cannot (or chooses not to) update the UI until the currently executing frame has completed (the function has returned). Personally, I think this is rather by design than restriction.
Why does the setTimeout thing work then? It does so, because it effectively removes the call to the long-running function from its own frame, scheduling it to be executed later in the window context, so that it itself can return immediately and allow the message queue to process other messages. And the idea is that the UI "on update" message that has been triggered by us in Javascript when changing the text in the DOM is now ahead of the message queued for the long-running function, so that the UI update happens before we block for a long time.
Note that a) The long-running function still blocks everything when it runs, and b) you're not guaranteed that the UI update is actually ahead of it in the message queue. On my June 2018 Chrome browser, a value of 0 does not "fix" the problem the fiddle demonstrates — 10 does. I'm actually a bit stifled by this, because it seems logical to me that the UI update message should be queued up before it, since its trigger is executed before scheduling the long-running function to be run "later". But perhaps there're some optimisations in the V8 engine that may interfere, or maybe my understanding is just lacking.
Okay, so what's the problem with using setTimeout, and what's a better solution for this particular case?
First off, the problem with using setTimeout on any event handler like this, to try to alleviate another problem, is that it is prone to mess with other code. Here's a real-life example from my work:
A colleague, with a misinformed understanding of the event loop, tried to "thread" JavaScript by having some template rendering code use setTimeout 0 for its rendering. He's no longer here to ask, but I can presume that perhaps he inserted timers to gauge the rendering speed (which would be the return immediacy of functions) and found that using this approach would make for blisteringly fast responses from that function.
The first problem is obvious: you cannot thread JavaScript, so you win nothing here while adding obfuscation. Secondly, you have now effectively detached the rendering of a template from the stack of possible event listeners that might expect that very template to have been rendered, while it may very well not have been. The actual behaviour of that function was now non-deterministic, as was, unknowingly so, any function that would run it or depend on it. You can make educated guesses, but you cannot properly code for its behaviour.
The "fix" when writing a new event handler that depended on its logic was to also use setTimeout 0. But that's not a fix; it is hard to understand, and it is no fun to debug errors that are caused by code like this. Sometimes there's no problem ever, other times it consistently fails, and then again, sometimes it works and breaks sporadically, depending on the current performance of the platform and whatever else happens to be going on at the time. This is why I personally would advise against using this hack (it is a hack, and we should all know that it is), unless you really know what you're doing and what the consequences are.
But what can we do instead? Well, as the referenced MDN article suggests, either split the work into multiple messages (if you can) so that other messages that are queued up may be interleaved with your work and executed while it runs, or use a web worker, which can run in tandem with your page and return results when done with its calculations.
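As a rough sketch of the web-worker route (the file name worker.js, the element id, and the message shapes are assumptions for illustration, not anything from this answer):
// main page: hand the heavy calculation to a worker so the UI stays responsive
var worker = new Worker('worker.js');          // hypothetical worker file
worker.onmessage = function (e) {
    document.getElementById('status').textContent = 'Done: ' + e.data;
};
document.getElementById('status').textContent = 'Calculating...';
worker.postMessage(1000);                      // whatever input the calculation needs

// worker.js: runs on its own thread, so looping here does not block the page
onmessage = function (e) {
    var result = 0;
    for (var i = 0; i < e.data * 1e6; i++) {
        result += i;
    }
    postMessage(result);
};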
Oh, and if you're thinking, "Well, couldn't I just put a callback in the long-running function to make it asynchronous?," then no. The callback doesn't make it asynchronous, it'll still have to run the long-running code before explicitly calling your callback.
If you don't want to watch a whole video, here's a simple explanation of the things one needs to understand, in order to be able to understand the answer to this question:
JavaScript is single-threaded meaning it does only one thing at a time when running.
But the environments in which the JavaScript is running, can be multi-threaded. E.g., browsers are often multi-threaded creatures, i.e., are able to do multiple things at a time. So they can run JavaScript and at the same time keep track of dealing with other stuff too.
From this point on, we're talking about JavaScript "in browsers". Things like setTimeout are indeed browser things, and are not part of the JavaScript itself.
The thing that allows JavaScript to run asynchronously is the multi-threaded browser! Other than the main space JavaScript uses (called the call stack) to put each line of code on and run them one by one, browsers also provide JavaScript with another space to put things on.
Now let's call that other space the second space.
Let's assume fn is a function. The important thing to understand here is that fn(); call is not equal to the setTimeout(fn, 0); call as will be explained further below.
Instead of a 0 delay, let's assume another delay first, e.g., 5000 milliseconds: setTimeout(fn, 5000);. It's important to note that this is still a "function call", so it has to be put on the main space and removed from it when it's done. But wait! We don't want a whole lengthy and boring 5-second delay. That would block the main space and not allow JavaScript to run ANYTHING else in the meantime.
Thankfully this is not how the browser designers designed it to work. Instead, this call (setTimeout(fn, 5000);) completes instantly. This is very important: even with the 5000 milliseconds delay, this function call is complete in an instant! What will happen next? It gets removed from the main space. Where will it be put? (Because we don't want to lose it.) You might have guessed right: the browser hears this call and puts it on the second space.
The browser keeps track of the 5-second delay and once it has passed, it looks at the main space, and "WHEN IT'S EMPTY", puts the fn(); call back on it. That is how setTimeout works.
So, back to setTimeout(fn, 0): even though the delay is zero, this is still a call to the browser. The browser hears it instantly, picks it up, puts it on the second space, and puts it back on the main space only when the main space is empty again - not really 0 milliseconds later.
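A tiny sketch of that ordering (a generic illustration, not code from this answer):
console.log('one');
setTimeout(function () { console.log('two'); }, 0);
console.log('three');
// Logs: one, three, two - "two" has to wait until the main space is empty.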
I really recommend watching that video as well, since he explains it really well and opens up the technical details more.
One reason to do that is to defer the execution of code to a separate, subsequent event loop. When responding to a browser event of some kind (mouse click, for example), sometimes it's necessary to perform operations only after the current event is processed. The setTimeout() facility is the simplest way to do it.
Edit: now that it's 2015, I should note that there's also requestAnimationFrame(), which isn't exactly the same but is sufficiently close to setTimeout(fn, 0) that it's worth mentioning.
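A minimal, hedged sketch of the two side by side (updateUi() is just a placeholder name):
// Defer work until just before the next repaint:
requestAnimationFrame(function () {
    updateUi();
});

// Roughly comparable deferral to a later turn of the event loop:
setTimeout(function () {
    updateUi();
}, 0);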
This is an old question with old answers. I wanted to add a new look at this problem and to answer why this happens, not why it is useful.
So you have two functions:
var f1 = function () {
    setTimeout(function () {
        console.log("f1", "First function call...");
    }, 0);
};

var f2 = function () {
    console.log("f2", "Second call...");
};
and then call them in the following order - f1(); f2(); - only to see that the second one executes first.
And here is why: it is not possible to have setTimeout with a time delay of 0 milliseconds. The Minimum value is determined by the browser and it is not 0 milliseconds. Historically browsers sets this minimum to 10 milliseconds, but the HTML5 specs and modern browsers have it set at 4 milliseconds.
If nesting level is greater than 5, and timeout is less than 4, then
increase timeout to 4.
Also from mozilla:
To implement a 0 ms timeout in a modern browser, you can use
window.postMessage() as described here.
P.S. information is taken after reading the following article.
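For reference, a hedged sketch of the window.postMessage() trick mentioned above - a commonly circulated pattern, not code from the linked page, and the message name is arbitrary:
var timeouts = [];
var messageName = 'zero-timeout-message';

// Like setTimeout(fn, 0), but without the clamped minimum delay
function setZeroTimeout(fn) {
    timeouts.push(fn);
    window.postMessage(messageName, '*');
}

window.addEventListener('message', function (event) {
    if (event.source === window && event.data === messageName) {
        event.stopPropagation();
        if (timeouts.length > 0) {
            timeouts.shift()();
        }
    }
}, true);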
Since it is being passed a duration of 0, I suppose it is in order to remove the code passed to the setTimeout from the flow of execution. So if it's a function that could take a while, it won't prevent the subsequent code from executing.
The other thing this does is defer the function invocation to a later turn of the event loop instead of growing the call stack, preventing a stack overflow if you are calling a function recursively. This has the effect of a while loop but lets the JavaScript engine fire other asynchronous timers.
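A hedged sketch of that recursion pattern (the countdown is just an illustrative placeholder):
// Naive recursion: a very large n can blow the call stack
function countDown(n) {
    if (n <= 0) return;
    countDown(n - 1);
}

// Rescheduled recursion: each step runs in a fresh stack frame,
// and other timers and events get a chance to run in between
function countDownDeferred(n) {
    if (n <= 0) return;
    setTimeout(function () {
        countDownDeferred(n - 1);
    }, 0);
}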
By calling setTimeout you give the page time to react to whatever the user is doing. This is particularly helpful for functions run during page load.
Some other cases where setTimeout is useful:
You want to break a long-running loop or calculation into smaller components so that the browser doesn't appear to 'freeze' or say "Script on page is busy".
You want to disable a form submit button when clicked, but if you disable the button in the onClick handler the form will not be submitted. setTimeout with a time of zero does the trick, allowing the event to end, the form to begin submitting, then your button can be disabled.
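A hedged sketch of the first case - chunking a long loop so the browser can repaint between chunks (the names and the chunk size of 100 are placeholders):
function processInChunks(items, handleItem) {
    var index = 0;
    function doChunk() {
        var end = Math.min(index + 100, items.length);  // 100 items per chunk
        for (; index < end; index++) {
            handleItem(items[index]);
        }
        if (index < items.length) {
            setTimeout(doChunk, 0);  // yield so the UI can update, then continue
        }
    }
    doChunk();
}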
The problem was that you were trying to perform a JavaScript operation on an element that did not exist yet. The element had yet to be loaded, and setTimeout() gives the element more time to load in the following ways:
setTimeout() causes the callback to be asynchronous, and therefore to be executed after all the synchronous code, giving your element more time to load. Asynchronous callbacks like the callback in setTimeout() are placed in the event queue and put on the stack by the event loop after the stack of synchronous code is empty.
The value 0 for ms as the second argument to setTimeout() often ends up slightly higher in practice (4-10 ms depending on the browser). This slightly higher time needed for executing the setTimeout() callbacks is caused by the number of 'ticks' (where a tick is pushing a callback onto the stack if the stack is empty) of the event loop. For performance and battery-life reasons, the number of ticks in the event loop is restricted to somewhat less than 1000 times per second.
The answers about execution loops and rendering the DOM before some other code completes are correct. Zero-second timeouts in JavaScript help make the code pseudo-multithreaded, even though it is not.
I want to add that the BEST value for a cross-browser / cross-platform zero-second timeout in JavaScript is actually about 20 milliseconds instead of 0 (zero), because many mobile browsers can't register timeouts smaller than 20 milliseconds due to clock limitations on AMD chips.
Also, long-running processes that do not involve DOM manipulation should be sent to Web Workers now, as they provide true multithreaded execution of JavaScript.
setTimeout with 0 is also very useful in the pattern of setting up a deferred promise, which you want to return right away:
myObject.prototype.myMethodDeferred = function() {
    var deferredObject = $.Deferred();
    var that = this; // Because setTimeout won't work right with "this"
    setTimeout(function() {
        return myMethodActualWork.call(that, deferredObject);
    }, 0);
    return deferredObject.promise();
}
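A hedged usage sketch, assuming myObject is a constructor and that myMethodActualWork() eventually calls deferredObject.resolve(result):
var obj = new myObject();
obj.myMethodDeferred().done(function (result) {
    console.log('work finished with', result);
});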
setTimeout(fn, 0) is also useful when you need the "new a", i.e. when you need to wait for some later action before a function is redefined. Example:
var a = function () { console.log('a'); };
var b = function () { setTimeout(b, 100); }; // placeholder; will be overridden below

// Without setTimeout:
console.log('no setTimeout: b.toString():', b.toString());
b(); // "b" is still the old function
console.log('no setTimeout: a.toString(): ', a.toString());
a(); // and "a" is not overridden

// But with setTimeout(fn, 0):
setTimeout(
    function () {
        console.log('After timeout 0, b.toString(): ', b.toString());
        b(); // "b" is the new function by now
        console.log('After timeout 0, a.toString(): ', a.toString());
        a(); // and "a" has been overridden
    },
    0
);

// Override "b", which was defined above
b = function () {
    a = function () { console.log('new a'); };
};
JavaScript is single-threaded, so it doesn't allow functions to run concurrently; the event loop is used to work around this. What setTimeout(fn, 0) does exactly is push fn onto the task queue, which is executed when your call stack is empty. I know this explanation is pretty boring, so I recommend you go through this video; it will help you understand how things work under the hood in the browser.
Check out this video:- https://www.youtube.com/watch?time_continue=392&v=8aGhZQkoFbQ

onkeyup Event Firing delay

I have an input textbox for people to enter a username; it has an onkeyup event attribute attached which fires off an AJAX request to check if the username is taken.
What is the delay between the first key and the second? Meaning, if I type a 5-letter word one letter after the other, does the AJAX fire 5 times?
If there is no delay, would this be "computationally expensive", since there would be so many database queries? Or would there be no noticeable difference?
If there is a difference in performance, what methods could I use in JavaScript to check whether the user is actually "done" typing? I still want it to check automatically after typing, thus ruling out onblur attributes etc.
My Code: http://pastebin.com/hXfgk7nL
(code indentation wasn't working for me on stack overflow)
Yes, it will fire EVERY TIME you type a character, and you're not going to like that in terms of page performance. You can implement a delay on executing the callback if you like, i.e. it will not be executed until the user has stopped typing. Here's an example:
$(document).ready(function () {
    var timer; // declared outside the handler so clearTimeout can see it
    $('#txtboxid').keyup(function () {
        clearTimeout(timer);
        timer = setTimeout(function () {
            // call your function here
        }, 250); // raise 250 if you want a longer delay; 250 means a quarter of a second
    });
});
Following are my replies/suggestions for your queries:
Yes, it will fire 5 times, because the key-up event will be triggered 5 times.
It will be a performance issue, resulting in slow responses from the server. Also, since you're making multiple AJAX requests in sequence, the responses may not come back in order, so your code may suffer from logic issues.
Instead of making the AJAX call on key-up, you can make it on the blur event.
Suggestion: before making an AJAX call, validate the field for basic errors like a blank string, numbers, etc. (depending on your requirements).
Yes it will fire multiple times, one for each keystroke. You are triggering multiple AJAX calls, so you are wasting network and server resources.
Additionally, you are not guaranteed the order in which the calls will return, so if for some network reason the first call issued returns last, it will overwrite the results of the most recent request.
To tackle the problem, you are looking for something like this plugin: https://github.com/cowboy/jquery-throttle-debounce.
From the plugin usage example:
function ajax_lookup( event ) {
    // Perform an AJAX lookup on $(this).val();
}

// The AJAX lookup happens on keyup, for every single key
// pressed, which is WAAAY more often than you want it to.
$('input:text').keyup( ajax_lookup );

// The AJAX lookup happens on keyup, but only after
// the user has stopped typing for 250ms.
$('input:text').keyup( $.debounce( 250, ajax_lookup ) );
Note that despite the name, the plugin can also be used standalone, without jQuery.
Yes, it will fire on every key-up event. You can reduce the performance hit using the following approaches:
I am sure you must have a minimum character length for the username. Wait until the user has typed that many characters before querying the database.
You could also fetch all the usernames starting with what the user has typed and process them in local memory. This may not be real-time but it would reduce the number of database queries; whether it is feasible depends on the size of your current user list.
Double-check the existence of the username before saving it to the database.
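A hedged sketch of the first suggestion - only querying once a minimum length is reached (the field id, URL, and minimum of 3 characters are placeholders):
$('#username').on('keyup', function () {
    var value = $(this).val();
    if (value.length < 3) {
        return; // too short - don't hit the database yet
    }
    $.get('check_username.php', { username: value }, function (response) {
        // update the page with the availability result here
    });
});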
Yes, it will fire. If you don't want that, you have to check a condition with a flag: compare the given value with the previous value, which you keep stored in a variable.
You can try checking whether your input has been modified within 500 ms. If it has not, make a single AJAX request. If the input is modified again, then you must repeat the process.
I suggest using setInterval for that.

jQuery setInterval from getScript

I am trying to get a script from one page and get it again after a certain period of time. But, the time varies each time. On the main page I have
function varcontent() {
    $.getScript("custom.php");
}
varcontent();
Then on the custom.php I have my script and
setInterval(varcontent, 20000);
at the end. The delay may not be 20 seconds each time. It seems to work at first, but then the old intervals fire again too. I don't know how to get out of this loop, and they keep multiplying.
You probably want setTimeout, not setInterval. setInterval will keep invoking the callback (until you cancel it), whereas setTimeout will only invoke the callback once after the specified delay.
So in your custom.php, you should have instead:
setTimeout(varcontent, 20000);
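A hedged sketch of the whole cycle with that change, so each fetched copy of custom.php schedules only the next fetch (the 20000 stands for whatever delay that particular copy of the script wants):
// main page
function varcontent() {
    $.getScript("custom.php");
}
varcontent();

// at the end of the script returned by custom.php
setTimeout(varcontent, 20000); // one follow-up fetch, not a repeating interval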

In JavaScript, is there a universal approach to know when asynchronous actions are finished?

In my work, I frequently encounter the following situations:
Situation A:
I need to make multiple AJAX calls in one function to retrieve data from the server. So I have to write callback functions and define a counter to determine whether all the calls are done.
For example, in each of the sub-functions (with AJAX calls), at the end of the done handler, I call back to check the counter's value:
ajax.done(function (jsResponse) {
    .......
    base.ajaxLoaded++;
    base.dataLoaded();
});
then in the base function, I check the value:
dataLoaded: function ()
{
    var _this = this;
    if (_this.ajaxLoaded == 4)
    {
        // all loaded
        _this.render();
        _this.afterInit();
    }
}
Situation B:
There is a modal popup triggered by the completion of an animation. So I have the following choices:
1) make a setTimeout function with a timer (estimating the length of the animation),
something like:
setTimeout(function () {
    window.MYAPP.triggerMymodal();
}, 1000);
2) set up an interval function to check repeatedly whether a flag's value has changed; I embed this flag in the animation's finish function, setting it true or false.
If this flag is true, I make my next move and kill the interval.
3) change the animation div's attributes and check them using an interval function.
Situation C:
Use the window.print() function to print something and then detect when it has finished. I have asked about this myself here:
how to detect window.print() finish
My Question:
In JavaScript, is there a certain kind of method or technique for dealing with functions that have unknown execution times? Or is this just a case-by-case thing based on the technology you use?
Yes, the modern approach to dealing with this is called Promises. Various libraries implement the concept, which is basically that you can chain together groups of things that need to happen, either in parallel or in series, and then define success and failure outcomes for each of them.
It takes a bit of time to get your head around it but once you do the code is much more straightforward.
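A hedged sketch for Situation A using jQuery's own deferreds (the URLs are placeholders): $.when resolves once all the AJAX calls have finished, which replaces the manual counter.
$.when(
    $.ajax('data1.php'),
    $.ajax('data2.php'),
    $.ajax('data3.php'),
    $.ajax('data4.php')
).done(function (res1, res2, res3, res4) {
    // each resN is [data, statusText, jqXHR]; all requests have finished here
}).fail(function () {
    // at least one request failed
});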

Displaying a visual indicator while a synchronous function is running

I've got a function called compute() that does a bunch of computations and then updates the values of an HTML table, triggered by a click() event. This is not an ajax call, just several for loops.
What I'd like to do is provide a visual indicator that the computation is running. Something like modifying the opacity of the table. I thought that something like the following could work:
$("#mytable").css("opacity", "0.5");
compute();
$("#mytable").css("opacity", "1");
But when I use this code the opacity does not seem to be modified.
Any hint on how to do this?
Many thanks in advance!
That's because the UI is not updated between the modification of the opacity and the compute() function. The UI is updated once in a while, not after every line of code (that would slow everything down).
You can use a timeout to bypass that: setTimeout(compute, 0);.
That way your UI gets updated before compute() runs. You do have to move the third line, where you set the opacity back to 1, into that function, because otherwise it would run before compute() is done.
$("#mytable").css("opacity", "0.5");
setTimeout(compute, 0);
function compute() {
...
$("#mytable").css("opacity", "1");
}
It might look dirty at first, but it is a genuine way to make sure your UI is updated!
Most likely this is because your compute() function executes so quickly that you don't perceive the opacity changes.
Try placing a break-point in compute() and let us know whether the opacity changed.
All client-side implementations of JavaScript are single-threaded, so you can't really expect the function call compute to carry on running while the next statement is being executed. If you want, you could use a web worker, though, to sort of spawn a background thread.
After reading your question a second time, I must say that @tomdemuyt could very well be right: it's very likely that compute executes so quickly that the opacity is changed twice in a split second, so fast that you hardly notice it changing. Also, since this is an event handler, you might want to consider this:
function compute(e)
{
    $(this).css({opacity: '.5'});
    // rest of your code
    $(this).css({opacity: '1'});
}
$('#mytable').on('click', compute);
which is just a bit more tidy IMO - it could be that this is not applicable to your code, but just in case...
