Why is "this" more effective than a saved selector? - javascript

I was doing this test case to see how much using the this selector speeds up a process. While doing it, I decided to try out pre-saved element variables as well, assuming they would be even faster. Using an element variable saved before the test appears to be the slowest, much to my confusion. I thought only having to "find" the element once would immensely speed up the process. Why is this not the case?
Here are my tests from fastest to slowest, in case anyone can't load it:
1
$("#bar").click(function(){
    $(this).width($(this).width()+100);
});
$("#bar").trigger( "click" );
2
$("#bar").click(function(){
    $("#bar").width($("#bar").width()+100);
});
$("#bar").trigger( "click" );
3
var bar = $("#bar");
bar.click(function(){
    bar.width(bar.width()+100);
});
bar.trigger( "click" );
4
// par is the element variable saved before the tests run
par.click(function(){
    par.width(par.width()+100);
});
par.trigger( "click" );
I'd have assumed the order would go 4, 3, 1, 2, based on how often each one has to use the selector to "find" the element.
UPDATE: I have a theory, though I'd like someone to verify this if possible. I'm guessing that on click, it has to reference the variable, instead of just the element, which slows it down.

Fixed test case: http://jsperf.com/this-vs-thatjames/10
TL;DR: Number of click handlers executed in each test grows because the element is not reset between tests.
The biggest problem with testing for micro-optimizations is that you have to be very very careful with what you're testing. There are many cases where the testing code interferes with what you're testing. Here is an example from Vyacheslav Egorov of a test that "proves" multiplication is almost instantaneous in JavaScript because the testing loop is removed entirely by the JavaScript compiler:
// I am using Benchmark.js API as if I would run it in the d8.
Benchmark.prototype.setup = function() {
    function multiply(x,y) {
        return x*y;
    }
};

var suite = new Benchmark.Suite;

suite.add('multiply', function() {
    var a = Math.round(Math.random()*100),
        b = Math.round(Math.random()*100);
    for(var i = 0; i < 10000; i++) {
        multiply(a,b);
    }
});
Since you're already aware there is something counter-intuitive going on, you should pay extra care.
First of all, you're not testing selectors there. Each test is doing several things: zero or more selector lookups (depending on the test), a function creation (which in some cases is a closure and in others is not), assignment of that function as the click handler, and triggering of the jQuery event system.
Also, the element you're testing on is changing between tests. It's obvious that the width in one test is more than the width in the test before. That isn't the biggest problem though. The problem is that the element in one test has X click handlers associated. The element in the next test has X+1 click handlers.
So when you trigger the click handlers for the last test, you also trigger the click handlers associated in all the tests before, making it much slower than tests made earlier.
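For illustration (a sketch, not part of the original test case), one way to keep state from leaking between tests is to reset the element before each run. It assumes jQuery 1.7+ for .off() and a starting width of 100px, neither of which the original test specifies:
// Hypothetical reset step, run before each test: drop the click handlers
// added by earlier tests and restore a known width so every test starts
// from the same element state. Older jQuery would use .unbind('click').
var $bar = $("#bar");
$bar.off("click");
$bar.width(100); // assumed original width; use whatever the markup defines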
I fixed the jsPerf, but keep in mind that it still doesn't test just the selector performance. Still, the most important factor that skews the results is eliminated.
Note: There are some slides and a video about doing good performance testing with jsPerf, focused on common pitfalls that you should avoid. Main ideas:
don't define functions in the tests, do it in the setup/preparation phase
keep the test code as simple as possible
compare things that do the same thing or be upfront about it
test what you intend to test, not the setup code
isolate the tests, reset the state after/before each test
no randomness. mock it if you need it
be aware of browser optimizations (dead code removal, etc)

You aren't really testing the performance difference between the different techniques.
If you look at the output of the console for this modified test:
http://jsperf.com/this-vs-thatjames/8
You will see how many event listeners are attached to the #bar object.
And you will see that they are not removed at the beginning of each test.
So each subsequent test will always be slower than the previous ones, because the trigger function has to call all the previously attached callbacks as well.
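As a rough debugging aid (not part of the original answer), you can log how many click callbacks jQuery has stored for the element after each test. $._data is an internal jQuery API (available since 1.8; older versions exposed the same data via $("#bar").data("events")), so treat this as inspection code only:
// Count the click handlers jQuery currently has attached to #bar.
var events = $._data(document.getElementById("bar"), "events");
var clickCount = events && events.click ? events.click.length : 0;
console.log(clickCount + " click handlers attached to #bar");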

Some of this difference comes from the fact that with this the object reference is already at hand, so the engine doesn't have to go looking through memory for the variable
$("#bar").click(function(){
$(this).width($(this).width()+100); // Only has to check the function call
}); // each time, not search the whole memory
as opposed to
var bar = $("#bar");
...
bar.click(function(){
    bar.width(bar.width()+100); // Has to search the memory to find it
});                             // each time it is used
As zerkms said, dereferencing (having to look up the memory reference, as I describe above) has some effect on performance, but only a small one.
Thus the main source of the difference between the tests you have performed is the fact that the DOM is not reset between each function call. In actuality, a saved selector performs just about as fast as this.

Looks like the performance results you're getting have nothing to do with the code. If you look at these edited tests, you can see that having the same code in two of the tests (first and last) yields totally different results.

I don't know, but if I had to guess I would say it is due to concurrency and multithreading.
When you do $(...) you call the jQuery constructor and create a new object that gets stored in memory. However, when you reference an existing variable you do not create a new object (duh).
Although I have no source to quote, I believe that every JavaScript event gets called in its own thread so events don't interfere with each other. By this logic the compiler would have to get a lock on the variable in order to use it, which might take time.
Once again, I am not sure. Very interesting test btw!

Related

Ok to call document.querySelector( ) a bunch

If I have multiple instances of the following lines of code throughout my JS file:
document.querySelector('#IdName').play();
document.querySelector('#IdName').pause();
Is it a good idea to create a function and pass it the IdName (IdName will change in various parts of the code)? I know what it does, but I'm really just curious whether it's good practice to call document.querySelector() a bunch of times in the file, or to put it in a function where I only call it twice to perform the play and pause actions.
If you constantly need the same element, change the function to take a DOM node, and store the element in a variable instead
function doStuff(elem) {
    elem.play();
}

function stopStuff(elem) {
    elem.pause();
}

var element = document.querySelector('#IdName');
doStuff( element );
// later
stopStuff( element );
That way you only get the element once, and avoid unnecessary DOM lookups.
The best approach is to cache that query in a variable so you don't need to search the DOM each time.
For an ID selector this time saving is likely minimal, but for more complex collections it can help:
var $el = document.querySelector('#IdName');
$el.play();
$el.pause();
It is good practice to write code that is reusable, so in that case a function is better practice. If the function only contains 1 line of code and you call it many times, it is still preferable, because then if you ever decide to update that line of code or add more code, it's centralized and you change it in one place only.
As far as actual execution is concerned, these are the same:
document.querySelector('#IdName1').play();
document.querySelector('#IdName1').pause();
document.querySelector('#IdName2').play();
document.querySelector('#IdName2').pause();
document.querySelector('#IdName3').play();
document.querySelector('#IdName3').pause();
vs
playpause("#IdName1");
playpause("#IdName2");
playpause("#IdName3");
function playpause(idname){
    document.querySelector(idname).play();
    document.querySelector(idname).pause();
}
In addition to Steve's answer, also note that if you are using the same one twice in a row:
document.querySelector('#IdName').play();
document.querySelector('#IdName').pause();
then it is a better practice to do:
var thing_with_play_and_pause = document.querySelector('#IdName');
thing_with_play_and_pause.play();
thing_with_play_and_pause.pause();
This reduces the number of queries you have to make. Some IDEs (PyCharm for instance) will complain if you don't because it is less efficient.

Rx.js fromEvent + flatMapLatest broken?

Well, the problem itself is kind of hard to describe briefly, so here's a live example to demonstrate. It seems like I'm misunderstanding something about how Rx.js works; otherwise, what's happening here comes from a bug.
What I tried to do was a simple reactive rendering setup, where what you see on the screen, and what events happen are both described in terms of Observables. The problem is that, for some indiscernible reason, the events are dropped entirely when the code is written one way, yet work fine with code that should theoretically be equivalent.
So, let's start with the first case in the example code above:
var dom = makeBox('one');
var clicks = Rx.Observable.fromEvent(dom, 'click');
If you create a DOM fragment, then you can simply use fromEvent to get an Observable for whatever event it emits. So far, so good. You can click this box and see a bunch of lines written to the log.
Now, the next step would be to make the DOM reactive, to express how it changes over time.
var domStream = Rx.Observable.return(makeBox('two'));
var clicks = domStream.flatMapLatest(function(dom) {
    return Rx.Observable.fromEvent(dom, 'click');
});
That would make it an Observable, using return here to produce the simplest, constant case. The events you're interested in would be the ones emitted by the latest version of the dom, and that's exactly what the flatMapLatest operator does. This variant still works.
Ultimately, the goal would be to generate the current DOM state based on some application state. That is, map it from one Observable to another. Let's go with the simplest version for now, have a single constant value as the state, and then map it to the same fixed output we used previously:
var updates = Rx.Observable.return(1);
var domStream = updates.map(function (update) {
    return makeBox('three');
});
var clicks = domStream.flatMapLatest(function(dom) {
    return Rx.Observable.fromEvent(dom, 'click');
});
This should not be any different from the previous version. However, this outputs no events, no matter what you do.
What exactly is going on here? Did I misunderstand some fundamental concept of Rx, or what? I've run into some issues with hot vs cold Observables, but that seems unrelated in this minimal case. So, I'm kind of out of ideas. Can anyone enlighten me?
Sorry to tell you but it is a Hot vs Cold issue.
It is a subtle issue, but the difference between
Rx.Observable.return(makeBox('two'))
and
Rx.Observable.return(1).map(function() {return makeBox('three'); })
is that the first returns a constant every time you subscribe to it, that is, the box that you created initially. The second returns a new box every time the Observable is subscribed to. This causes a problem because you actually subscribe to the domStream variable twice, so you are creating two instances of box three: one which has event handlers but isn't shown, and one which is shown but has no event handlers.
The fix is that you either need to use approach 2, or you need to convert the third into a hot stream, either by using:
domStream.replay(1).refCount()
or by using
domStream.publish()
and then, after all subscriptions are made:
domStream.connect()
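As a rough sketch of the hot-stream fix using the replay(1).refCount() variant (RxJS 4 API as in the question; makeBox and the appendChild render step are assumptions carried over from the question's example, not part of this answer):
// Share the mapped DOM stream so the renderer and the click subscription
// both see the same box instance instead of each creating their own.
var updates = Rx.Observable.return(1);
var domStream = updates.map(function (update) {
    return makeBox('three');
}).replay(1).refCount(); // hot: one box, replayed to late subscribers

var clicks = domStream.flatMapLatest(function (dom) {
    return Rx.Observable.fromEvent(dom, 'click');
});

// Hypothetical wiring: render the current DOM and log the clicks.
domStream.subscribe(function (dom) {
    document.body.appendChild(dom);
});
clicks.subscribe(function () {
    console.log('clicked box three');
});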

How should I use Variables and jQuery Dom navigation?

I was just wondering which is the correct or most efficient way of navigating through the DOM using variables.
For example, can I concatenate selectors
var $container = '.my-container';
$($container).addClass('hidden');
$($container + ' .button').on('click', function(){
//something here
});
or should I use the jQuery traversal functions
var $container = $('.my-container');
$container.addClass('hidden');
$container.children('.button').on('click', function(){
//something here
});
Is there a different approach, is one best, or can you use them at different times?
The $ is usually used only when working with an actual jQuery object. You generally shouldn't prefix anything with that unless it's really something from jQuery.
Beyond that little bit though, performance-wise, your second bit of code is going to be faster. I made an example jsperf here: http://jsperf.com/test-jquery-select
The reason the second bit of code is faster is because (if I remember correctly) jQuery caches the selection, and then any actions performed on that selection are scoped. When you use .find (which is really what you meant in your code, not .children), instead of trying to find elements through the entire document, it only tries to find them within the scope of whatever my-container is.
The time when you wouldn't want to use the second pattern is when you expect the dom to change frequently. Using a previous selection of items, while efficient, is potentially a problem if more buttons are added or removed. Granted, this isn't a problem if you're simply chaining up a few actions on an item, then discarding the selection anyway.
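If the buttons do come and go, a delegated handler bound once on the cached container sidesteps that problem. This is a sketch, not from the original answer; it assumes jQuery 1.7+ for the delegated form of .on(), and the toggleClass call is just a placeholder action:
// One handler on the container reacts to clicks on any current or future
// .button inside it, so newly added buttons need no re-binding.
var $container = $('.my-container');
$container.on('click', '.button', function () {
    // "this" is the .button that was actually clicked
    $(this).toggleClass('active'); // hypothetical action
});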
Besides all of that, who really wants to continuously type $(...)? It's awkward.

Is $(document.body) and document.body the same? Cleaning garbage and binding in class? - MooTools 1.3

I am building a MooTools class and I have this in my initialize function:
this.css = null;
window.addEvent('domready', function(){
this.document = $(document);
this.body = $(document.body);
this.head = $(document.head);
}.bind(this));
Ok and now to the questions ...
Should I declare this.css = null or any other empty variable in the init:
this.css = null; // Maybe this.css = '' - empty string?
Next thing is about window and document ... Should I put them into $() or not? It works both ways, so I just want to know which way is preferred. So to summarize:
window.addEvent() // or should I use $(window).addEvent()
this.document = $(document) // or this.document = document
this.body = $(document.body) // or this.body = document.body
I stored these values in the object to avoid multiple DOM queries. Is this OK, or would it be better to call $(selector) / $$(selector) every time?
Two more things left ... About binding ... Is it OK to use .bind(this) every time, or would it be better to use .bind(this.myDiv) and use it inside the function as e.g. this.setStyle(...); instead of this.myDiv.setStyle(...)?
(function(){
    this.setStyle('overflow-y', 'visible');
}.bind(this.body)).delay(100);
or
(function(){
    this.body.setStyle('overflow-y', 'visible');
}.bind(this)).delay(100);
And the last thing is about garbage collection ... Do I have to collect garbage myself, and how do I do it (as far as I know MooTools does it on its own on unload)? The confusing part is that I found this function in the MooTools docs:
myElement.destroy();
They say: Empties an Element of all its children, removes and garbages the Element. Useful to clear memory before the pageUnload.
So I have to clean up on my own? How do I do that? When should I use .destroy()? I am working on a huge project and I notice that IE gets slow over multiple executions of the script (so how do I handle that? probably some cleaning is needed, memory leaks?).
pff, this is a bit long.
First, initial variables. this.css = null... the only times I'd set 'empty' variables are: for typecasting; when it's a property of an object I may reference and don't want to be undefined; when it's a string I will concatenate with, or a number I will increment/decrement. null is not really useful at this point.
A common good practice when writing a MooTools class is to use the Options class as a mixin. This allows you to set a default options object with your default settings that can be overridden upon instantiation. Similarly, Object.merge({ var: val }, useroptions); can override a default val if supplied.
Now, IIRC, there are times when you'd have to use $(document.body), and it's not because document.body does not work; it's because applying $() also applies Element prototypes in IE (since the Element prototype is not extended there, the methods are applied to the elements directly instead, which happens when you $ them). Also, $ assigns an internal UID to the target element and allows element storage to be used for that element. I don't see a point in using $(document) or $(window) - they are 'extended' as much as needed by default. In any case, even in IE, you only need to $(something) the one time and can continue using it as just 'something' afterwards. Check my document.getElementById("foo").method() example - you can just run $("foo"); on its own and then try document.getElementById("foo").method() again - it will work in IE too.
window.addEvent(); // is fine.
document.body.adopt(new Element("div")); // not fine in IE.
new Element("div").inject(document.body); // fine.
and on their own:
$(document.body).adopt(new Element("div")); // fine.
document.body.adopt(new Element("span")); // now fine, after first $.
See this in IE8: http://www.jsfiddle.net/AePzD/1/ - the first attempt to set the background fails but the second one works. Subsequently, document.body.methods() calls are going to work fine.
http://www.jsfiddle.net/AePzD/2/ - this shows how the element (which $ also returns) can have methods in WebKit/Mozilla and not in Trident. However, replace that with $("foo") and it will start working. Rule of thumb: $ elements you don't dynamically create before applying methods to them.
Storing selectors can be a good performance practice, for sure, but it can also fill your scope chain with many variables, so be careful. If you will use a selector two or more times, it's good to cache it. Failing to do so is not a drama; selector engines like Sizzle and Slick are so fast these days that it does not matter (unless you are animating at the time and it impacts your FPS).
As for binding, whichever way you like.
Keep in mind that delay has a second argument, bind:
(function(){
    this.setStyle('background', 'blue');
}).delay(100, $("foo"));
So do quite a few functions. This particular bind is not very useful, but in a class you may want to do:
(function(){
    this.element.setStyle('background', 'blue');
    this.someMethod();
}).delay(100, this);
GC: MooTools does its own GC, sure. However, .destroy() is a very good practice, IMO. If you don't need something in the DOM, use element.dispose(). If you won't attach it to the DOM again, use .destroy() - it removes all child nodes and cleans up. More memory \o/
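A small illustration of that distinction, using the MooTools Element API (myTemp and oldWidget are hypothetical element IDs):
// Element we don't need on the page right now but may re-insert later:
var myTemp = $('myTemp');
myTemp.dispose();          // detached from the DOM, still usable in memory

// Element we are completely done with:
$('oldWidget').destroy();  // children emptied, element removed and garbaged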
Advice on IE... dodgy. You can use Drip, if you can, to trace memory leaks, and there are tools like dynaTrace that can be very good for profiling. In terms of practices... make sure you don't use inline JS, always clean up what you don't need (events, elements) and, generally, be careful, especially when you are stacking up events and dealing with Ajax (which brings in new elements that need events - consider event delegation instead...). Using fewer DOM nodes also helps...

Performance: Which of these examples of code is faster and why?

$('#element').method();
or
var element = $('#element');
element.method();
Without using a profiler, everyone is just guessing. I would suspect that the difference is so small it isn't worth worrying about. There are small costs to the second above the first, like having to perform a lookup to find 'var element' to call the method on, but I would have thought finding '#element' and then calling the method is far more expensive.
However, if you then went on to do something else with element, the second would be faster
//Bad:
$('#element').foo();
$('#element').bar();
//Good:
var e = $('#element');
e.foo();
e.bar();
If you were using a loop where the value of $('#element') was used a lot, then caching it as in the 2nd version before the loop would help a lot.
For just this small snippet, it makes little difference.
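To illustrate the loop case (a sketch; the width-bumping loop body and the 100-iteration count are made up for illustration):
// Re-querying inside the loop: a fresh DOM lookup on every use.
for (var i = 0; i < 100; i++) {
    $('#element').width($('#element').width() + 1);
}

// Caching before the loop: one DOM lookup in total.
var $element = $('#element');
for (var j = 0; j < 100; j++) {
    $element.width($element.width() + 1);
}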
Lookups via id (#) are pretty fast. I just tested your scenario on a small page with 2 div tags. Here is the code I used:
var x = $("#div1");
var y = $("#div2");
var z = $("#div1");
Every lookup took about 0.3 ms on my laptop. The second lookup for div1 executed the same internal jQuery methods as the first - indicating that there is no caching of already looked-up objects.
Performance becomes a bigger problem when you use other selectors like class names or more advanced jQuery selectors. I did some analysis on jQuery Selector Performance - check it out - hope it is helpful.
If you run only this code, neither should really be faster. The second one might need more memory (because of the additional variable created).
If you want to be sure, why not test it yourself using a small self-written benchmark?
I think $('#element').method(); does not need as much memory as
var element = $('#element');
... because you bind #element to a variable.
Just for fun:
//Indifferent:
$('#element').foo().bar();
