I tried to figure out what differs between two versions of a small code snippet in execution. Don't try to understand what this is for; this is the final code after deleting everything else to isolate the performance problem.
function test(){
    var start = new Date(), times = 100000;
    var l = ["a", "a"];
    for(var j = 0; j < times; j++){
        var result = document.getElementsByTagName(l[0]), rl = result.length;
        for(var i = 0; i < rl; i++){
            l[0] = result[i];
        }
    }
    var end = new Date();
    return "by=" + (end - start);
}
For me this snippet takes 236ms in Firefox, but if you change l[0]=result[i]; to l[1]=result[i]; it only takes 51ms. The same happens if I change document.getElementsByTagName(l[0]) to document.getElementsByTagName(l[1]). And if both are changed, the snippet is slow again.
After profiling in Google Chrome with DevTools/Profiles, I see that a toString function is called when executing the slow code, but I can't tell which toString this is or why it is needed in that case.
Can you please tell me what the difference is for the browser, such that one version takes five times longer than the other?
Thanks
If you only change one of the indexes to 0 or 1, the code doesn't do the same thing anymore. If you change both indexes, the performance remains identical.
When using the same index for reading and writing, what happens is that the value stored in l[0] is used in the next call to getElementsByTagName, which has to call toString on it.
In the code above you use l[0] to search for an element. Then you change l[0] and search again, and so on. If you now change only one of the two uses (not both!) to l[1], you no longer change what you are searching for, which boosts the performance.
That is why when you change both it is slow again.
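For illustration, here is a minimal sketch of the conversion involved (an assumption based on the profile, not part of the original test; note that an anchor element's toString typically returns its href):
var l = ["a", "a"];
var result = document.getElementsByTagName(l[0]); // l[0] is the string "a"
l[0] = result[0];                                 // l[0] is now a DOM element
// On the next call, getElementsByTagName expects a string, so the element
// must be converted first - this is the extra toString in the profile:
document.getElementsByTagName(l[0]);              // implies String(l[0])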
I'm really stuck on this JavaScript question!
So I'm making a web page that will be completely animated (so it can be used as a display, for example on a television). The animation will be configurable by the user (stored in a database).
Right now I've already written the code to store the configuration and to fetch it (I do an AJAX call and save the configuration in an array of JSON objects), and everything works as it should.
The problem is in the animation, in which I go through the array and use the setTimeout function to create the animations. To iterate through the array I rotate it (I use array.push(array.shift()), following the answer here).
The first time the intervalmaster function is used, everything goes according to plan, but when the function is called again I need to rotate the array once more (because of the last animation), and the array just doesn't rotate!
Below I've left a portion of the code I'm using that reproduces the problem. I've also added the array jsonanima with some possible values (in reality the array is probably much bigger and with higher values).
I really don't understand what is happening. I've also considered that this could be a problem with the multiple setTimeout calls, because I've read somewhere (couldn't find the link, sorry!) that using multiple setTimeout calls is not really advised.
If that's the case, is there any other way to do this?
Thank you in advance!
EDIT: Thanks to the comment from mplungjan, I've realized that if I change console.log(jsonanima) to console.log(JSON.stringify(jsonanima)), it outputs the correct values (the JSON array, rotated). This got me even more confused! Why do I need JSON.stringify to get the array in the correct order?!
Anyway, I can't test this with the full code now as I'm not in the office; tomorrow I'll give more feedback. Thank you, mplungjan.
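For anyone hitting the same confusion, a minimal sketch of the likely cause: console.log receives a live reference to the array, and some consoles render it lazily, while JSON.stringify takes a snapshot at call time.
var arr = [1, 2, 3];
console.log(arr);                  // the console may show the array's *later* state
console.log(JSON.stringify(arr)); // "[1,2,3]" - fixed at the moment of the call
arr.push(arr.shift());            // mutating after logging: arr is now [2, 3, 1]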
EDIT2: Finally solved my problem! The issue was the call recursivegroup(0): it was made before I rotated the array, so when the animation restarted, the array still had the incorrect values and every subsequent value was wrong.
A special thanks to mplungjan and trincot for the comments that helped me debug this problem.
Below I leave the corrected code so anybody with the same problem can check it out.
jsonanima = [{"VD":5,"A":10,"diff":0.25}, {"L":7,"IE":8,"diff":0.25}];

function intervalmaster(flag){
    function recursivegroup(index){
        if(index==0){
            //animateeach(jsonanima,0);
        }
        setTimeout(function(){
            // HERE IT WORKS
            jsonanima.push(jsonanima.shift());
            console.log(JSON.stringify(jsonanima));
            //animateeach(jsonanima,0);
            // removed the if statement, since it was irrelevant as mplungjan noted
            recursivegroup(index+1);
        }, (jsonanima[0]['diff'])*60*1000);
    }

    // Changed this:
    //recursivegroup(0);
    var mastertime = 0;
    for(var key in jsonanima){
        mastertime += jsonanima[key]['diff'];
    }
    console.log(mastertime, flag);
    console.log(JSON.stringify(jsonanima));
    if(flag==true){
        jsonanima.push(jsonanima.shift());
        console.log(JSON.stringify(jsonanima));
    }
    // changed to here:
    recursivegroup(0);
    masterinterval = setTimeout(function(){ intervalmaster(true); }, mastertime*60*1000);
}
intervalmaster(false);
I was doing this test case to see how much using the this selector speeds up a process. While doing it, I decided to try out pre-saved element variables as well, assuming they would be even faster. Using an element variable saved before the test appears to be the slowest, much to my confusion. I thought only having to "find" the element once would immensely speed up the process. Why is this not the case?
Here are my tests from fastest to slowest, in case anyone can't load it:
1
$("#bar").click(function(){
    $(this).width($(this).width()+100);
});
$("#bar").trigger("click");

2
$("#bar").click(function(){
    $("#bar").width($("#bar").width()+100);
});
$("#bar").trigger("click");

3
var bar = $("#bar");
bar.click(function(){
    bar.width(bar.width()+100);
});
bar.trigger("click");

4
// par is the element variable saved beforehand (in the jsPerf preparation code)
par.click(function(){
    par.width(par.width()+100);
});
par.trigger("click");
I'd have assumed the order would go 4, 3, 1, 2, in order of how often each one has to use the selector to "find" the element.
UPDATE: I have a theory, though I'd like someone to verify this if possible. I'm guessing that on click, it has to reference the variable, instead of just the element, which slows it down.
Fixed test case: http://jsperf.com/this-vs-thatjames/10
TL;DR: Number of click handlers executed in each test grows because the element is not reset between tests.
The biggest problem with testing for micro-optimizations is that you have to be very very careful with what you're testing. There are many cases where the testing code interferes with what you're testing. Here is an example from Vyacheslav Egorov of a test that "proves" multiplication is almost instantaneous in JavaScript because the testing loop is removed entirely by the JavaScript compiler:
// I am using the Benchmark.js API as if I would run it in d8.
Benchmark.prototype.setup = function() {
    function multiply(x, y) {
        return x * y;
    }
};

var suite = new Benchmark.Suite;
suite.add('multiply', function() {
    var a = Math.round(Math.random()*100),
        b = Math.round(Math.random()*100);
    for (var i = 0; i < 10000; i++) {
        multiply(a, b);
    }
});
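One common way to keep the compiler from removing such a loop (a sketch, not part of Egorov's original example) is to make the result observable:
suite.add('multiply', function() {
    var a = Math.round(Math.random()*100),
        b = Math.round(Math.random()*100),
        sum = 0;
    for (var i = 0; i < 10000; i++) {
        sum += multiply(a, b); // the result is consumed...
    }
    return sum;                // ...and escapes, so the loop is no longer dead code
});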
Since you're already aware there is something counter-intuitive going on, you should pay extra care.
First of all, you're not testing selectors there. Your testing code does: zero or more selector lookups (depending on the test), a function creation (which in some cases is a closure, in others not), assignment as the click handler, and triggering of the jQuery event system.
Also, the element you're testing on changes between tests. It's obvious that the width in one test is greater than the width in the test before. That isn't the biggest problem, though. The problem is that the element in one test has X click handlers attached; the element in the next test has X+1.
So when you trigger a click for the last test, you also trigger the click handlers attached in all the tests before, making it much slower than the earlier tests.
I fixed the jsPerf, but keep in mind that it still doesn't test just the selector performance. Still, the most important factor that skews the results is eliminated.
Note: There are some slides and a video about doing good performance testing with jsPerf, focused on common pitfalls that you should avoid. Main ideas:
don't define functions in the tests, do it in the setup/preparation phase
keep the test code as simple as possible
compare things that do the same thing or be upfront about it
test what you intend to test, not the setup code
isolate the tests, reset the state after/before each test (see the sketch after this list)
no randomness. mock it if you need it
be aware of browser optimizations (dead code removal, etc)
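For the test at hand, resetting the state could look something like this (a sketch; it assumes jQuery's off() is available and that #bar has no other inline width to preserve):
Benchmark.prototype.teardown = function() {
    // Drop the click handlers accumulated by the previous test run
    // and clear the inline width those handlers added.
    $("#bar").off("click").css("width", "");
};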
You aren't really testing the performance difference between the techniques.
If you look at the output of the console for this modified test:
http://jsperf.com/this-vs-thatjames/8
You will see how many event listeners are attached to the #bar object.
And you will see that they are not removed at the beginning of each test.
So each subsequent test will always be slower than the previous ones, because the trigger function has to call all the previously attached callbacks.
Some of the difference in speed is because with this, the object reference is already at hand, so the engine doesn't have to go looking up the variable:
$("#bar").click(function(){
$(this).width($(this).width()+100); // Only has to check the function call
}); // each time, not search the whole memory
as opposed to
var bar = $("#bar");
...
bar.click(function(){
bar.width(bar.width()+100); // Has to search the memory to find it
}); // each time it is used
As zerkms said, dereferencing (having to look up the memory reference, as I describe above) has some effect on performance, but a small one.
Thus the main source of the difference between the tests you performed is the fact that the DOM is not reset between each function call. In actuality, a saved selector performs just about as fast as this.
It looks like the performance results you're getting have nothing to do with the code. If you look at these edited tests, you can see that having the same code in two of the tests (the first and the last) yields totally different results.
I don't know, but if I had to guess I would say it is due to concurrency and multithreading.
When you do $(...) you call the jQuery constructor and create a new object that gets stored in memory. However, when you reference an existing variable, you do not create a new object (duh).
Although I have no source to quote, I believe that every JavaScript event gets called in its own thread, so events don't interfere with each other. By this logic the compiler would have to get a lock on the variable in order to use it, which might take time.
Once again, I am not sure. Very interesting test btw!
I have a JS function named "GetListColumnValue". This function causes some problems with IE6. Is there any way to avoid the problem? (I think the problem occurs because of the concat.) Here is the code sample. The last line is my workaround, which I am not sure works well. Any suggestions? Thanks.
function GetListColumnValue(listName, columnName) {
    return document.getElementById(listName + "_" + columnName).value;
}

var DISCOUNT_QUANTITY = GetListColumnValue("lstRecords", "DISCOUNT_QUANTITY");
var DISCOUNT_QUANTITY = document.getElementById("lstRecords_DISCOUNT_QUANTITY");
IE6 has many many problems, but simple JS string concat isn't one of them. I don't think that's your problem.
You didn't specify what exactly the problem is, but looking at the two code samples you provided, they will do different things:
The first one (i.e. the function) returns object.value, whereas with the second one (i.e. the direct call) you've just got the object itself.
So the two code blocks set DISCOUNT_QUANTITY to different things. If you remove the .value from the function, it should work exactly the same as the other code block.
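In other words, either of these changes makes the two code blocks equivalent (a sketch):
// Option 1: drop .value inside the function, so both return the element:
function GetListColumnValue(listName, columnName) {
    return document.getElementById(listName + "_" + columnName);
}

// Option 2: keep the function as-is and add .value to the direct lookup:
var DISCOUNT_QUANTITY = document.getElementById("lstRecords_DISCOUNT_QUANTITY").value;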
Hope that helps.
I have a project using JavaScript to parse a JSON string and put the data into div content.
In this case, every itemname variable is also a div's id.
With i around 900, this code runs in Firefox 3 in under 10ms, but in IE 7 it takes over 9s; IE processes this code about 100 times slower than Firefox.
I don't know what is happening with IE.
If I remove the line document.getElementById(itemname), the speeds seem about the same.
As far as I can tell, the main problem is the document.getElementById() function?
Could you show me how to solve this problem and make this code faster in IE?
Thanks in advance.
var i = obj.items.length - 2;
hnxmessageid = obj.items[i+1].v;
do {
    itemname = obj.items[i].n;
    itemvalue = obj.items[i].v;
    document.getElementById(itemname);
    i--;
} while (i >= 0);
Are you really noticing any latency?
gEBI is natively very, very fast; I don't think you can avoid it anyway for what you're doing. Could you give a low-down of what you're doing, precisely? It looks like you're using a loop, but can you post exactly what you're doing inside it, and what the overall goal of the script is?
document.getElementById(itemname) is the fastest way to get a single element from the DOM. In any real application you should not see any problems with using it; if you do see a problem, you need to rethink your code a little, since it is possible to accomplish almost any task with just a handful of calls to this method. You can present your full problem if you like, and I could show you an example.
At least cache the reference to document:
var doc = document;
for (;;) {
    doc.getElementById(..);
}
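Applied to the loop from the question, that might look like this (a sketch; it also caches obj.items, which is an extra step beyond caching document):
var doc = document,
    items = obj.items,
    i = items.length - 2;
hnxmessageid = items[i + 1].v;
do {
    itemname = items[i].n;
    itemvalue = items[i].v;
    doc.getElementById(itemname); // cached document reference
    i--;
} while (i >= 0);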
$('#element').method();
or
var element = $('#element');
element.method();
Without using a profiler, everyone is just guessing. I would suspect the difference is so small it isn't worth worrying about. There are small costs to the second above the first, like having to perform a lookup to find var element to call the method on, but I would have thought finding '#element' and then calling the method is far more expensive.
However, if you then went on to do something else with element, the second would be faster:
//Bad:
$('#element').foo();
$('#element').bar();
//Good:
var e = $('#element');
e.foo();
e.bar();
If you were using a loop where the value of $('#element') was used a lot, then caching it as in the 2nd version before the loop would help a lot.
For just this small snippet, it makes little difference.
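For example (a sketch of the loop case):
// Without caching, the selector runs on every iteration:
for (var i = 0; i < 1000; i++) {
    $('#element').width($('#element').width() + 1);
}

// With caching, the DOM lookup happens only once:
var $el = $('#element');
for (var j = 0; j < 1000; j++) {
    $el.width($el.width() + 1);
}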
Lookups via id (#) are pretty fast. I just tested your scenario on a small page with 2 div tags. Here is the code I used:
var x = $("#div1");
var y = $("#div2");
var z = $("#div1");
Every lookup took about 0.3ms on my laptop. The second lookup for div1 executed the same internal jQuery methods as the first, indicating that there is no caching of already-looked-up objects.
Performance becomes a bigger problem when you use other selectors like class names or more advanced jQuery selectors. I did some analysis on jQuery selector performance - check it out - hope it is helpful.
If you run only this code, neither one should really be faster. The second one might need slightly more memory (because of the additional variable created).
If you want to be sure, why not test it yourself using a small self-written benchmark?
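For instance, a minimal self-written benchmark might look like this (a sketch; the iteration count is arbitrary):
function time(label, fn, n) {
    var start = new Date();
    for (var i = 0; i < n; i++) fn();
    console.log(label + ": " + (new Date() - start) + "ms");
}

time("fresh selector", function () { $('#element').width(); }, 10000);

var element = $('#element');
time("cached selector", function () { element.width(); }, 10000);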
I think $('#element').method(); does not need as much memory as
var element = $('#element');
... because you bind #element to a variable.
Just for fun:
//Indifferent:
$('#element').foo().bar();