My application is a local server that receives about 2-3 requests per second.
On each request it stores and updates data, runs some calculations, updates the view (React), and so on.
I would like to know which is faster when I have to use closures:
Simply create the function where I need it:
var parentValue = 'ok';
randomAsyncFunction(function() {
console.log(parentValue);
});
Create a "global" function and then bind the callback with needed values:
function testCallback(value) {
console.log(value);
}
var parentValue = 'ok';
randomAsyncFunction(testCallback.bind(undefined, parentValue));
Note: these snippets will be executed 2-3 times per second. In the second example, the testCallback function is created once and only bind is called on each request, instead of re-creating the whole function.
So, is it better or worse to use the second example?
Both bind and the closure function expression do create a new function object. Their difference in performance will be negligible. If you really care enough, run a benchmark with your actual code and real data to see which solution is faster.
In your case, you should only care about which solution is more readable and maintainable. Neither is strictly better or worse than the other; you have to decide yourself which one you like better.
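If you do decide to measure it, here is a minimal sketch of such a benchmark using console.time (the iteration count is arbitrary, and micro-benchmarks like this are only indicative, results vary a lot by engine):
// Rough sketch: compares creating a closure vs. creating a bound function.
function testCallback(value) { console.log(value); }

console.time('closure creation');
for (var i = 0; i < 1e6; i++) {
  var parentValue = 'ok';
  var f = function () { console.log(parentValue); }; // created, never called
}
console.timeEnd('closure creation');

console.time('bind creation');
for (var j = 0; j < 1e6; j++) {
  var g = testCallback.bind(undefined, 'ok'); // created, never called
}
console.timeEnd('bind creation');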
I'll start with the exact nature of the problem and then give some background. I am trying to name a function, threadTimer, and give it a random unique identifier, such as 'threadTimer'+ID. A randomly generated ID would work fine. Then I need to use setInterval on it to make it fire repeatedly, and therein lies my coding problem. I have tried every variation of new, function, and function-as-an-object, and I just can't get my head around it. You'll notice that the function I have created is an object, and perhaps this is where I'm going in circles.
OK, the background I mentioned: threadTimer is fired by a master timer coordinating several threads. That's why you'll see I have generated a 'global' object for reference elsewhere. Similar HTML entities can fire threadTimer at the same time, hence my requirement to make each instance unique.
window['GlblThreadExe'+ID]=setInterval(function(){threadTimer(elid,parent,lft,top,diameter,point,bStyle,color,grp,startTime,size,ID,counter,div,divwth,divht,wthIncrement,htIncrement,lftStart,topStart,lftIncrement,topIncrement)},interval);
function threadTimer(elid,parent,lft,top,diameter,point,bStyle,color,grp,startTime,size,ID,counter,div,divwth,divht,wthIncrement,htIncrement,lftStart,topStart,lftIncrement,topIncrement){
// more code
}
In truth, I think it's the volume of parameters that I'm passing that's confusing my syntax. Any help appreciated.
Avoid polluting window
Generally, instead of polluting the global namespace, you can store your setInterval ids in some variable:
let intervalIds = {}
intervalIds['GlblThreadExe'+ID] = setInterval(function()...)
If really necessary, you can then store intervalIds on window:
window.intervalIds = intervalIds;
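Keeping the ids in one object also makes cleanup easy later; a small usage sketch, assuming the same 'GlblThreadExe'+ID keys as above:
// Stop one timer and forget its id
clearInterval(intervalIds['GlblThreadExe' + ID]);
delete intervalIds['GlblThreadExe' + ID];

// Or stop all of them at once
Object.keys(intervalIds).forEach(function (key) {
  clearInterval(intervalIds[key]);
});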
Wrap your anonymous function
When you create the "clock", do not call setInterval directly. Here, createTimerWithId will return a function which calls threadTimer.
Dirty id generation
Use a timestamp and mix it with some random stuff, or better, use a UUID:
setInterval(createTimerWithId(), 1000)
function createTimerWithId(){
let id = Date.now()+Math.random(); //no lib, oneliner. good enough to debug
return function(){//the same function you gave to setInterval in your example
threadTimer(id, ...)
}
}
We can do better
In the snippet above we generated an id on the fly, and thus:
- your code is not testable (the id will always change, except if you mock Math and Date...)
- your id is ugly (a float...)
- it will be hard to know which setInterval you came from
Instead, give it the ID:
function createTimerWithId(ID){
return function(){//the same function you gave to setInterval in your example
threadTimer(ID, ...)
}
}
window['..'+ID] = setInterval(createTimerWithId(ID), 1000);
A shorter version being:
window['..'+ID] = setInterval((id=>{
return function(){
threadTimer(id, ...)
}
})(ID),1000);
I have a set of JavaScript functions that handle certain objects. All these objects have the following flexibility:
Fields can be accessed like this: data[prop][sub-prop][etc.], OR
Like this (with a type sub-structure): data[TYPE][prop][sub-prop][etc.].
The object is accessed in many places, and the condition (let's call it is_mixed) is relevant everywhere.
I thought of the following alternatives:
Always access data like this: (is_mixed ? data[TYPE] : data)[prop][sub-prop][etc.]
Have a function called getData and always access data like this: getData()[prop][sub-prop][etc.].
The function code would be:
function getData() { return is_mixed ? data[TYPE] : data; }
Run the following on every new input: if (is_mixed) { data = data[TYPE]; }
It seems to me that options 2 and 3 might be copying the object data (which might be big) and performance is important here (I didn't find the literature to support this guess), but option 1 will make the code big and ugly.
Is there a better option? What's the best way to achieve this in terms of performance, code quality and, basically, best practices?
It seems to me that options 2 and 3 might be copying the JSON content
No, they won't. They both just copy an object reference, which is quick and cheap (like copying a boolean). #2 is of course slightly slower, since it's a function call, but if it's used a lot, any decent JavaScript engine will inline the function anyway, giving you the benefit of modularity at the source level. (It can take thousands of calls to the function in a shortish period of time to make that kick in, though; e.g., a modern engine only bothers with optimization when it looks likely to matter.)
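You can see that no copy is made with a quick check; both names end up pointing at the same object (illustrative sketch, the property names are made up):
var data = { typeA: { prop: { sub: 1 } } };

var view = data['typeA'];        // copies only the reference, not the object
view.prop.sub = 2;

console.log(data['typeA'].prop.sub); // 2 — both variables refer to the same object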
If I have multiple instances of the following lines of code through out my js file:
document.querySelector('#IdName').play();
document.querySelector('#IdName').pause();
Is it a good idea to create a function and pass it the IdName (IdName will change in various parts of the code)? I know what it does, but I'm really just curious whether it's good practice to call document.querySelector() a bunch of times in the file, or to put it in a function where I only call it twice to perform the play and pause actions.
If you constantly need the same element, change the function to take a DOM node, and store the element in a variable instead
function doStuff(elem) {
elem.play();
}
function stopStuff(elem) {
elem.pause();
}
var element = document.querySelector('#IdName');
doStuff( element );
// later
stopStuff( element );
That way you only get the element once, and avoid unnecessary DOM lookups.
The best approach is to cache that query in a variable so you don't need to search the DOM each time.
For an ID selector the time saving is likely minimal, but for more complex selectors it can help:
var $el = document.querySelector('#IdName');
$el.play();
$el.pause();
It is good practice to write code that is reusable, so in that case a function is better practice. Even if the function only contains one line of code and you call it many times, it is still preferable, because if you ever decide to update that line of code or add more code, it's centralized and you change it in one place only.
As far as actual execution is concerned, these are the same:
document.querySelector('#IdName1').play();
document.querySelector('#IdName1').pause();
document.querySelector('#IdName2').play();
document.querySelector('#IdName2').pause();
document.querySelector('#IdName3').play();
document.querySelector('#IdName3').pause();
vs
playpause("#IdName1");
playpause("#IdName2");
playpause("#IdName3");
function playpause(idname){
document.querySelector(idname).play();
document.querySelector(idname).pause();
}
In addition to Steve's answer, also note that if you are using the same one twice in a row:
document.querySelector('#IdName').play();
document.querySelector('#IdName').pause();
then it is a better practice to do:
var thing_with_play_and_pause = document.querySelector('#IdName');
thing_with_play_and_pause.play();
thing_with_play_and_pause.pause();
This reduces the number of queries you have to make. Some IDEs (PyCharm for instance) will complain if you don't because it is less efficient.
I was doing this test case to see how much using the this selector speeds up a process. While doing it, I decided to try out pre-saved element variables as well, assuming they would be even faster. Using an element variable saved before the test appears to be the slowest, quite to my confusion. I thought only having to "find" the element once would immensely speed up the process. Why is this not the case?
Here are my tests from fastest to slowest, in case anyone can't load it:
1
$("#bar").click(function(){
$(this).width($(this).width()+100);
});
$("#bar").trigger( "click" );
2
$("#bar").click(function(){
$("#bar").width($("#bar").width()+100);
});
$("#bar").trigger( "click" );
3
var bar = $("#bar");
bar.click(function(){
bar.width(bar.width()+100);
});
bar.trigger( "click" );
4
par.click(function(){
par.width(par.width()+100);
});
par.trigger( "click" );
I'd have assumed the order would go 4, 3, 1, 2, based on how often each one has to use the selector to "find" the element.
UPDATE: I have a theory, though I'd like someone to verify this if possible. I'm guessing that on click, it has to reference the variable, instead of just the element, which slows it down.
Fixed test case: http://jsperf.com/this-vs-thatjames/10
TL;DR: Number of click handlers executed in each test grows because the element is not reset between tests.
The biggest problem with testing for micro-optimizations is that you have to be very very careful with what you're testing. There are many cases where the testing code interferes with what you're testing. Here is an example from Vyacheslav Egorov of a test that "proves" multiplication is almost instantaneous in JavaScript because the testing loop is removed entirely by the JavaScript compiler:
// I am using Benchmark.js API as if I would run it in the d8.
Benchmark.prototype.setup = function() {
function multiply(x,y) {
return x*y;
}
};
var suite = new Benchmark.Suite;
suite.add('multiply', function() {
var a = Math.round(Math.random()*100),
b = Math.round(Math.random()*100);
for(var i = 0; i < 10000; i++) {
multiply(a,b);
}
})
Since you're already aware there is something counter-intuitive going on, you should pay extra care.
First of all, you're not testing selectors there. Your testing code is doing several things: zero or more selector lookups (depending on the test), a function creation (which in some cases is a closure and in others is not), assignment as the click handler, and triggering of the jQuery event system.
Also, the element you're testing on is changing between tests. It's obvious that the width in one test is more than the width in the test before. That isn't the biggest problem though. The problem is that the element in one test has X click handlers associated. The element in the next test has X+1 click handlers.
So when you trigger the click handlers for the last test, you also trigger the click handlers associated in all the tests before, making it much slower than tests made earlier.
I fixed the jsPerf, but keep in mind that it still doesn't test just the selector performance. Still, the most important factor that skews the results is eliminated.
Note: There are some slides and a video about doing good performance testing with jsPerf, focused on common pitfalls that you should avoid. Main ideas:
don't define functions in the tests, do it in the setup/preparation phase
keep the test code as simple as possible
compare things that do the same thing or be upfront about it
test what you intend to test, not the setup code
isolate the tests, reset the state after/before each test
no randomness; mock it if you need it
be aware of browser optimizations (dead code removal, etc.)
You aren't really testing the performance difference between the techniques.
If you look at the output of the console for this modified test:
http://jsperf.com/this-vs-thatjames/8
You will see how many event listeners are attached to the #bar object.
And you will see that they are not removed at the beginning for each test.
So the following tests will always become slower as the previous ones because the trigger function has to call all the previous callbacks.
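One way to isolate the tests is to detach the handlers (and reset the width) between tests, e.g. in the jsPerf setup/teardown. A sketch, assuming a jQuery version that has .off() (older versions would use .unbind()):
// Reset state between tests so earlier handlers don't pile up
$("#bar").off("click");   // remove click handlers added by previous tests
$("#bar").width(100);     // undo the width growth the tests cause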
Some of this increase in slowness is because the object reference is already found in memory, so the compiler doesn't have to go looking in memory for the variable
$("#bar").click(function(){
$(this).width($(this).width()+100); // Only has to check the function call
}); // each time, not search the whole memory
as opposed to
var bar = $("#bar");
...
bar.click(function(){
bar.width(bar.width()+100); // Has to search the memory to find it
}); // each time it is used
As zerkms said, dereferencing (having to look up the memory reference, as I describe above) has some, but little, effect on performance.
Thus the main source of the difference between the tests you have performed is the fact that the DOM is not reset between each function call. In actuality, a saved selector performs just about as fast as this.
Looks like the performance results you're getting have nothing to do with the code. If you look at these edited tests, you can see that having the same code in two of the tests (first and last) yields totally different results.
I don't know, but if I had to guess I would say it is due to concurrency and multithreading.
When you do $(...) you call the jQuery constructor and create a new object that gets stored in the memory. However, when you reference to an existing variable you do not create a new object (duh).
Although I have no source to quote, I believe that every JavaScript event gets called in its own thread so events don't interfere with each other. By this logic the compiler would have to get a lock on the variable in order to use it, which might take time.
Once again, I am not sure. Very interesting test btw!
I'm learning lots of javascript these days, and one of the things I'm not quite understanding is passing functions as parameters to other functions. I get the concept of doing such things, but I myself can't come up with any situations where this would be ideal.
My question is:
When do you want to have your JavaScript functions take another function as a parameter? Why not just assign that function's return value to a variable and pass that variable to the other function, like so:
// Why not do this
var foo = doStuff(params);
callerFunction(foo);
//instead of this
callerFunction(doStuff);
I'm confused as to why I would ever choose to do things as in my second example.
Why would you do this? What are some use cases?
Thanks!!
There are several use cases for this:
1. "Wrapper" functions.
Let's say you have a bunch of different bits of code. Before and after every bit of code, you want to do something else (e.g. log, or try/catch exceptions).
You can write a "wrapper" function to handle this. E.g.:
function putYourHeadInTheSand(otherFunc) {
try{
otherFunc();
} catch(e) { } // ignore the error
}
....
putYourHeadInTheSand(function(){
// do something here
});
putYourHeadInTheSand(function(){
// do something else
});
2. Callbacks.
Let's say you load some data somehow. Rather than locking up the system waiting for it to load, you can load it in the background and do something with the result when it arrives.
Now how would you know when it arrives? You could use something like a signal or a mutex, which is hard to write and ugly, or you could just make a callback function. You can pass this callback to the Loader function, which can call it when it's done.
Every time you do an XmlHttpRequest, this is pretty much what's happening. Here's an example.
function loadStuff(callback) {
// Go off and make an XHR or a web worker or somehow generate some data
var data = ...;
callback(data);
}
loadStuff(function(data){
alert('Now we have the data');
});
3. Generators/Iterators
This is similar to callbacks, but instead of only calling the callback once, you might call it multiple times. Imagine your load-data function doesn't just load one bit of data; maybe it loads 200.
This ends up being very similar to a for/foreach loop, except it's asynchronous. (You don't wait for the data, it calls you when it's ready).
function forEachData(callback) {
// generate some data in the background with an XHR or web worker
callback(data1);
// generate some more data in the background with an XHR or web worker
callback(data2);
//... etc
}
forEachData(function(data){
alert('Now we have the data'); // this will happen 2 times with different data each time
});
4. Lazy loading
Let's say your function does something with some text. But it only needs the text maybe one time out of five, and the text might be very expensive to load.
So the code looks like this:
var text = "dsakjlfdsafds"; // imagine we had to calculate lots of expensive things to get this.
var result = processingFunction(text);
The processing function only actually needs the text 20% of the time! We wasted all that effort loading it those extra times.
Instead of passing the text, you can pass a function which generates the text, like this:
var textLoader = function(){ return "dsakjlfdsafds"; }; // imagine we had to calculate lots of expensive things to get this.
var result = processingFunction(textLoader);
You'd have to change your processingFunction to expect another function rather than the text, but that's really minor. What happens now is that the processingFunction will only call the textLoader the 20% of the time that it needs it. The other 80% of the time, it won't call the function, and you won't waste all that effort.
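A sketch of what that processingFunction might look like after the change (the 20% branch is just for illustration):
function processingFunction(getText) {
  // Only the branch that actually needs the text pays for the expensive load
  if (Math.random() < 0.2) {
    return 'processed: ' + getText();
  }
  return 'no text needed';
}

var result = processingFunction(function () { return "dsakjlfdsafds"; });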
4a. Caching
If you've got lazy loading happening, then the textLoader function can privately store the result text in a variable once it gets it. The second time someone calls the textLoader, it can just return that variable and avoid the expensive calculation work.
The code that calls textLoader doesn't know or care that the data is cached, it's transparently just faster.
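A sketch of such a caching textLoader (a closure memoizing its result):
var textLoader = (function () {
  var cached;                      // private to the closure
  return function () {
    if (cached === undefined) {
      cached = "dsakjlfdsafds";    // imagine the expensive calculation happening here, once
    }
    return cached;                 // later calls are a cheap lookup
  };
})();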
There are plenty more advanced things you can do by passing around functions, this is just scratching the surface, but hopefully it points you in the right direction :-)
One of the most common usages is as a callback. For example, take a function that runs a function against every item in an array and re-assigns the result to the array item. This requires that the function call the user's function for every item, which is impossible unless it has the function passed to it.
Here is the code for such a function:
function map(arr, func) {
for (var i = 0; i < arr.length; ++i) {
arr[i] = func(arr[i]);
}
}
An example of usage would be to multiply every item in an array by 2:
var numbers = [1, 2, 3, 4, 5];
map(numbers, function(v) {
return v * 2;
});
// numbers now contains 2, 4, 6, 8, 10
You would do this if callerFunction wants to call doStuff later, or if it wants to call it several times.
The typical example of this usage is a callback function, where you pass a callback to a function like jQuery.ajax, which will then call your callback when something finishes (such as an AJAX request)
EDIT: To answer your comment:
function callFiveTimes(func) {
for(var i = 0; i < 5; i++) {
func(i);
}
}
callFiveTimes(alert); //Alerts numbers 0 through 4
Passing a function as a parameter to another function is useful in a number of situations. The simplest is a function like setTimeout, which takes a function and a time and after that time has passed will execute that function. This is useful if you want to do something later. Obviously, if you called the function itself and passed the result in to the setTimeout function, it would have already happened and wouldn't happen later.
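A tiny example of that difference (illustrative):
function sayHi() { alert('hi'); }

setTimeout(sayHi, 1000);   // passes the function itself: alerts after one second
setTimeout(sayHi(), 1000); // calls it immediately; setTimeout only gets its return value (undefined)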
Another situation this is nice is when you want to do some sort of setup and teardown before and after executing some blocks of code. Recently I had a situation where I needed to destroy a jQuery UI accordion, do some stuff, and then recreate the accordion. The stuff I needed to do took a number of different forms, so I wrote a function called doWithoutAccordion(stuffToDo). I could pass in a function that got executed in between the teardown and the setup of the accordion.
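A sketch of that wrapper, assuming jQuery UI's accordion widget (the selector is made up):
function doWithoutAccordion(stuffToDo) {
  var $acc = $('#accordion');
  $acc.accordion('destroy');   // tear the widget down
  stuffToDo();                 // run whatever the caller needs while it's gone
  $acc.accordion();            // recreate it afterwards
}

doWithoutAccordion(function () {
  // reorder panels, change markup, etc.
});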
Callbacks. Say you're doing something asynchronous, like an AJAX call.
doSomeAjaxCall(callbackFunc);
And in doSomeAjaxCall(), you store the callback in a variable, like var ajaxCallback. Then, when the server returns its result, you can call the callback function to process the result:
ajaxCallback();
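A stripped-down sketch of that pattern with a plain XMLHttpRequest (the URL is a placeholder):
function doSomeAjaxCall(callbackFunc) {
  var ajaxCallback = callbackFunc;       // store the callback for later
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/some/endpoint');
  xhr.onload = function () {
    ajaxCallback(xhr.responseText);      // call it once the server has answered
  };
  xhr.send();
}

doSomeAjaxCall(function (result) {
  console.log('got', result);
});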
This probably won't be of much practical use to you as a web programmer, but there is another class of uses for functions as first-class objects that hasn't come up yet. In most functional languages, like Scheme and Haskell, passing functions around as arguments is, along with recursion, the meat-and-potatoes of programming, rather than something with an occasional use. Higher-order functions (functions that operate on functions) like map and fold enable extremely powerful, expressive, and readable idioms that are not as readily available in imperative languages.
Map is a function that takes a list of data and a function and returns a list created by applying that function to each element of the list in turn. So if I wanted to update the positions of all the bouncing balls in my bouncing ball simulator, instead of
for(ball : ball_list) {
ball.update();
ball.display();
}
I would instead write (in Scheme)
(display (map update ball-list))
or in Python, which offers a few higher-order functions and a more familiar syntax,
display( map(update, ball_list) )
Fold takes a two-place function, a default value, and a list, and applies the function to the default and the first element, then to the result of that and the second element, and so on, finally returning the last value returned. So if my server is sending in batches of account transactions, instead of writing
for(transaction t : batch) {
account_balance += t;
}
I would write
(fold + (current-account-balance) batch)
These are just the simplest uses of the most common HOFs.
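For what it's worth, JavaScript has both of these built in as Array.prototype.map and Array.prototype.reduce; the fold above would look roughly like this:
var batch = [10, -4, 25];
var currentAccountBalance = 100;

var newBalance = batch.reduce(function (balance, t) {
  return balance + t;          // plays the role of the two-place function given to fold
}, currentAccountBalance);     // the default/seed value

console.log(newBalance);       // 131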
I will illustrate this with a sorting scenario.
Let's assume that you have an object to represent an Employee of the company. Employee has multiple attributes: id, age, salary, work experience, etc.
Now, you want to sort a list of employees - in one case by employee id, in another case by salary and in yet another case by age.
Now the only thing that you wish to change is how to compare.
So, instead of having multiple sort methods, you can have a single sort method that takes a reference to a function that does the comparison.
Example code:
function compareByID(l, r) { return l.id - r.id; }
function compareByAge(l, r) { return l.age - r.age; }
function compareByEx(l, r) { return l.ex - r.ex; }
function sort(emps, cmpFn) {
  // simple in-place exchange sort; i and j are indices into emps
  for (var i = 0; i < emps.length - 1; i++) {
    for (var j = i + 1; j < emps.length; j++) {
      // swap when the pair is out of order according to cmpFn
      if (cmpFn(emps[i], emps[j]) > 0) { var tmp = emps[i]; emps[i] = emps[j]; emps[j] = tmp; }
    }
  }
}
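Usage then just swaps the comparator, which is also exactly how the built-in Array.prototype.sort works:
var employees = [
  { id: 3, age: 41, ex: 15 },
  { id: 1, age: 29, ex: 4 }
];

sort(employees, compareByAge);   // ordered by age
sort(employees, compareByID);    // ordered by id
employees.sort(compareByEx);     // the built-in sort takes the same kind of comparator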