I want to do some speed tests on very basic stuff like variable declaration. Now I have a function that executes X times to get a more significant time difference.
http://jsfiddle.net/eTbsv/ (you need to open your console, and it takes a few seconds to execute)
This is the code:
var doit = 10000000,
    i = 0,
    i2 = 0;

//testing var with comma
console.time('timer');
function test(){
    var a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z;
}
while (i<=doit){
    test();
    i++;
}
console.timeEnd('timer');

//testing individual var declarations
console.time('timer2');
function test2(){
    var a; var b; var c; var d; var e; var f; var g; var h; var i; var j; var k; var l; var m; var n; var o; var p; var q; var r; var s; var t; var u; var v; var w; var x; var y; var z;
}
while (i2<=doit){
    test2();
    i2++;
}
console.timeEnd('timer2');
Now I have three questions:
1. Is this an accurate way of testing the speed of variable declarations?
2. How could I test more cycles without making Firefox crash? If I set doit to 1000000000, for example, Firefox wants to stop the script.
3. Why are my results (from my script and on jsPerf) so different each time? Sometimes the individual variable declarations are faster than the grouped one :/
Edit: I just made a jsPerf test case: http://jsperf.com/testing-js-variable-declaration-speed It would be nice if some of you with different browsers and configurations could participate. But I'm still interested to know whether this way of testing is accurate.
Is this an accurate way of testing the speed of variable declarations?
It's good enough to get a rough idea, but it's not perfect. Like most things, it relies on the CPU. If the CPU spikes during testing due to another application, such as a virus scanner, or another action from the browser, such as a phishing check, the JavaScript execution can be slowed. Even when the CPU is idle, it's not an exact science and you will have to run it many times to get a good average.
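For example, a small harness can repeat the whole test and report an average (a sketch; the helper name and the wrapped test function are illustrative):

function benchmark(fn, runs) {
    var total = 0;
    for (var r = 0; r < runs; r++) {
        var start = Date.now();
        fn();                          // run the whole test once
        total += Date.now() - start;
    }
    return total / runs;               // average over all runs
}
// e.g. wrap the while loop from the question in a function and run:
// console.log(benchmark(runDeclarationTest, 20) + ' ms on average');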
How could I test more cycles without making Firefox crash? If I set doit to 1000000000, for example, Firefox wants to stop the script.
Firefox limits a script's execution to a maximum of 10 seconds before offering to stop it. I'm not sure if there's a workaround, short of raising the dom.max_script_run_time preference on your own machine.
Why are my results (from my script and on jsPerf) so different each time? Sometimes the individual variable declarations are faster than the grouped one :/
Because there's probably no real difference between the two. All variable declarations are "hoisted", and it's likely that this is done at parse-time instead of run-time as an optimization, so the internal representation of the functions after they have been parsed would be identical. The only difference is the subtle factors affecting the time it takes to initialize the undefined variables and execute the otherwise empty functions.
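You can observe the hoisting directly; both declaration styles end up identical after parsing:

function demo() {
    console.log(a, b); // logs "undefined undefined" - both declarations were hoisted
    var a = 1;
    var b = 2;
}
demo();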
With regard to 2: interrupting your loop for user input is the only way I can think of to easily avoid the unresponsive-script dialogs.
So display an alert every n iterations (obviously stop your timer for this duration).
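Alternatively, a chunked loop lets the browser breathe without any dialog at all (a sketch; the chunk size is arbitrary, and only time spent inside chunks is counted):

var total = 1e9, chunk = 1e7, done = 0, elapsed = 0;
function runChunk() {
    var start = Date.now();
    for (var i = 0; i < chunk; i++) {
        test();
    }
    elapsed += Date.now() - start;   // only measure the work itself
    done += chunk;
    if (done < total) {
        setTimeout(runChunk, 0);     // yield to the browser, then continue
    } else {
        console.log(elapsed + ' ms of measured work');
    }
}
runChunk();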
Have you considered doing this in SpiderMonkey etc., or are you specifically interested in the browser implementations?
If you look at the code below, you will see a measurement of the performance of a very simple for loop.
var res = 0;
function runs(){
    var a1 = performance.now();
    var x = 0;
    for(var i=0;i<10**9;i++) {
        x++;
    }
    var a2 = performance.now();
    res += (a2-a1);
}
for(var j=0;j<10;j++){
    runs();
}
console.log(`=${res/10}`);
Additionally, just for good measure, this runs 10 times and averages the results. The issue is that this is not reliable: it depends heavily on your CPU, your memory, and the other programs running on your device. The first run may take 9s, the second 23s, and a subsequent call may take 8s.
Is there a way to measure performance regardless of CPU, memory and everything else?
I am after something that will give a relative number of FLOPS, or any other measure such that, when you compare two pieces of code, you know exactly which one executes faster.
For instance, a for loop with 1005 iterations should always show as slower than one with 1000 iterations.
Note: saying FLOPS is wrong in this context, as it means floating-point operations per second.
I would like to exclude time completely; I only need FPOs (floating-point operations). Meaning I do not care about seconds, just about reliably knowing that, regardless of the device, the same code will always take, let's say, 2000 FPOs to execute.
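(One way to get a device-independent count is to instrument the code and tally abstract operations yourself; a sketch, with a made-up accounting scheme:)

let ops = 0;
let x = 0;
for (let i = 0; i < 1000; i++) {
    x++;
    ops++;       // one increment of x
    ops += 2;    // loop bookkeeping: one comparison, one increment of i
}
console.log(`abstract operations: ${ops}`); // deterministic on every device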
I have been playing around with the Riemann zeta function.
I want to optimize execution time as much as possible here, so I put the intermediate results in temporary variables. But testing revealed that I get no performance boost from this, at least not noticeably.
function zeta(z, limit){
    var zres = new Complex(0, 0);
    for(var x = 1; x <= limit; x++){
        var ii = z.imaginary * Math.log(1/x);
        var pp = Math.pow(1/x, z.real);
        zres.real += pp * Math.cos(ii);
        zres.imaginary += pp * Math.sin(ii);
    }
    return zres;
}
My question is: Even though I couldn't measure a difference in execution time, what's theoretically faster? Calculating ii and pp once and handing them over as variables, or calculating them twice and not wasting time with the declaration?
Putting things in (local) variables on its own will usually not have a major effect on performance. If anything it could increase the pressure on the register allocator (or equivalent) and slightly reduce performance.
Avoiding calculating expressions multiple times by putting the result into a local variable can improve performance if the just-in-time compiler (or runtime) isn't smart enough to do the equivalent optimization (i.e. compute the value only once and re-use the computation result each time the expression is used).
There's really no universally applicable rule here. You need to benchmark and optimize on the specific system you want best performance on.
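For instance, a minimal comparison harness for this case might look like the following (Complex comes from the question; zetaInline is a hypothetical variant with the Math.log/Math.pow expressions written out at each use site):

function timeIt(label, fn) {
    var start = performance.now();
    fn();
    console.log(label + ': ' + (performance.now() - start).toFixed(1) + ' ms');
}
timeIt('with temporaries', function(){ zeta(new Complex(0.5, 14.13), 1000000); });
timeIt('recomputed inline', function(){ zetaInline(new Complex(0.5, 14.13), 1000000); });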
Instantiating a variable is surely faster than a math operation (like Math.log or Math.pow), so it is better to store the results. If you want to keep the for loop's local scope from wasting a tiny bit of extra time on variable initialization and collection, you can declare pp and ii outside the loop; this is a negligible amount of time compared to all the other operations.
function zeta(z, limit){
    var zres = new Complex(0, 0);
    var ii, pp;
    for(var x = 1; x <= limit; x++){
        ii = z.imaginary * Math.log(1/x);
        pp = Math.pow(1/x, z.real);
        zres.real += pp * Math.cos(ii);
        zres.imaginary += pp * Math.sin(ii);
    }
    return zres;
}
As part of a little game I am making, I have an enemy object which fires projectiles at the character object, controlled by the player. The enemy has an hp attribute with a value of 10000, and as this value depletes, I want the projectile-firing patterns to change. This is my current situation:
this.fireOnce = function(){ ... }
this.fireRandomly = function(){ ... }
this.fireAtTarget = function(){ ... }
this.fireWave = function(){ ... }

this.beginFire = function(){
    if(hp<3000){
        this.fireWave();
    }
    else if(hp<5000){
        this.fireAtTarget();
    }
    else if(hp<9000){
        this.fireRandomly();
    }
    else{
        this.fireOnce();
    }
    setTimeout(this.beginFire.bind(this), 500); // bind keeps "this" correct in the callback
}
The main loop already has enough complexity, and things get laggy when many projectiles are on screen. My concern about if-else statements derives from something my professor said about them being fairly expensive (I can't remember the context, though, so I could be wrong).
During the creation of this little game, I've used the above structure several times for different purposes, and considering the functions get called several times each second, I assume it takes its toll on the game's performance.
One possibility in another situation would be to use an object containing the functions, but since we are talking about integer ranges, I can't think of anything to use as a key.
"fairly expensive" is a relative term. Yes, a conditional branch, if the condition's value is not predicted, can easily cost dozens of clock cycles, meaning that a single CPU can only execute millions of if statements per second.
To verify this, run the following script in the java script runtime you target:
let odd = 0;
for (let i = 0; i < 1000000000; i++) {
    if (i % 2) odd++;
}
odd;
This code executes one billion if statements. In Chrome, it takes about 3 seconds on my machine. Firefox is slower, but still executes one million if statements in about 0.2 seconds, and IE does so in about 0.1 seconds.
To conclude, there is no modern JavaScript runtime where a few if statements per second would result in a measurable, let alone human-perceptible, degradation of performance. Whatever the source of your performance problem, it's not your use of if statements.
You should always beware of premature optimisation; however, you might want to make your code a bit clearer (though there is nothing wrong with the OP's). You can create more concise code if you can reduce the logic to generate a single value, then use that value as a key to select a method.
E.g. if the break points were 3000, 6000 and 9000, then the method can be derived from hp / 3000:
function Bot(){
    // sorted by value
    this.fireWave = function(){ // hp <= 3000
        console.log('fireWave');
    }
    this.fireAtTarget = function(){ // hp <= 6000
        console.log('fireAtTarget');
    }
    this.fireRandomly = function(){ // hp <= 9000
        console.log('fireRandomly');
    }
    this.fireOnce = function(){ // hp > 9000
        console.log('fireOnce');
    }
    // Method names in order
    var methods = ['fireOnce','fireWave','fireAtTarget','fireRandomly'];
    this.beginFire = function(){
        this[methods[Math.ceil(hp / 3000)] || methods[0]]();
        // Don't do this for testing
        // setTimeout(beginFire, 500);
    }
}
var bot = new Bot();
var hp = 0;
for (var i=0; i<10; i++) {
    // Randomly set hp to a value between 0 and 12000
    hp = Math.random()*12000 | 0;
    console.log('hp: ' + hp);
    bot.beginFire();
}
So as long as you can determine a simple mathematical expression to compute the key, you can easily determine the method to call.
You could also have a helper method (possibly on the constructor) that does the logic to determine the method to call, like:
function getFireMethod() {
    return hp < 3000? 'fireWave' :
           hp < 5000? 'fireAtTarget' :
           hp < 9000? 'fireRandomly' :
           'fireOnce';
}
Which is clear and concise, but again may not have any perceptible impact on performance either way.
In any case, you will need to do testing across a variety of clients to determine whether there is any useful performance gain. Also, include comments in the code to describe the logic.
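If the break points don't fit a single formula, a threshold table keeps the same data-driven shape without nested conditionals (a sketch using the question's break points; meant to run inside the constructor):

// sorted ascending; Infinity catches everything above the last break point
var thresholds = [
    { below: 3000, method: 'fireWave' },
    { below: 5000, method: 'fireAtTarget' },
    { below: 9000, method: 'fireRandomly' },
    { below: Infinity, method: 'fireOnce' }
];
this.beginFire = function(){
    for (var i = 0; i < thresholds.length; i++) {
        if (hp < thresholds[i].below) {
            this[thresholds[i].method]();
            break;
        }
    }
};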
So I was curious which would be faster for iterating through an array, the normal for loop or forEach, so I executed this code in the console:
var arr = [];
arr.length = 10000000;
//arr.fill(1);
for (var i_1 = 0; i_1 < arr.length; i_1++) {
    arr[i_1] = 1;
}

//////////////////////////////////

var t = new Date();
var sum = 0;
for (var i = 0; i < arr.length; i++) {
    var a = arr[i];
    if (a & 1) {
        sum += a;
    }
    else {
        sum -= a;
    }
}
console.log(new Date().getTime() - t.getTime());
console.log(sum);

t = new Date();
sum = 0;
arr.forEach(function (value, i, array) {
    var a = value;
    if (a & 1) {
        sum += a;
    }
    else {
        sum -= a;
    }
});
console.log(new Date().getTime() - t.getTime());
console.log(sum);
Now the results in Chrome are 49ms for the for loop and 376ms for the forEach loop, which is OK, but the results in Firefox and IE (and Edge) are very different.
In both of the other browsers the first loop takes ~15 seconds (yes, seconds) while the forEach takes "only" ~4 seconds.
My question is: can someone tell me the exact reason Chrome is so much faster?
I tried all kinds of operations inside the loops; the results were always in favor of Chrome by a mile.
Disclaimer: I do not know the specifics of V8 in Chrome or the interpreter of Firefox / Edge, but there are some very general insights. Since V8 compiles Javascript to native code, let's see what it potentially could do:
Very crudely: variables like your var i can be modelled as a very general Javascript variable, so that it can take any type of value from numbers to objects (modelled as a pointer to a struct Variable for instance), or the compiler can deduce the actual type (say an int in C++ for instance) from your JS and compile it like that. The latter uses less memory, exploits caching, uses less indirection, and can potentially be as fast as a for-loop in C++. V8 probably does this.
The above holds for your array as well: maybe it compiles to a memory efficient array of ints stored contiguously in memory; maybe it is an array of pointers to general objects.
Temporary variables can be removed.
The second loop could be optimized by inlining the function call; maybe this is done, maybe it isn't.
The point being: all JS interpreters / compilers can potentially exploit these optimizations. This depends on a lot of factors: the trade-off between compilation and execution time, the way JS is written, etc.
V8 seems to optimize a lot; Firefox / Edge maybe don't in this example. Knowing precisely why requires in-depth understanding of the interpreter / compiler.
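One way to nudge every engine toward the memory-efficient contiguous representation described above is a typed array, which fixes the element type up front (a sketch, not a guaranteed win in every engine):

var arr = new Int32Array(10000000); // contiguous 32-bit integers in any engine
for (var i = 0; i < arr.length; i++) {
    arr[i] = 1;
}
var t = new Date();
var sum = 0;
for (var j = 0; j < arr.length; j++) {
    sum += (arr[j] & 1) ? arr[j] : -arr[j]; // same odd/even logic as the question
}
console.log(new Date().getTime() - t.getTime(), sum);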
The for loop is the fastest iteration construct in every browser, but when comparing browsers, IE is the slowest at iterating for loops. My best recommendation for optimization work is to try jsperf.com. The V8 engine implementation is the reason for Chrome's speed: after Chrome split from WebKit, it stripped out more than 10k lines of code in the first few days.
I've tried to prove that plus (+) conversion is faster than parseInt with the following jsperf, and the results surprised me:
Parse vs Plus
Preparation code
<script>
    Benchmark.prototype.setup = function() {
        var x = "5555";
    };
</script>
Parse Sample
var y = parseInt(x); //<---80 million loops
Plus Sample
var y = +x; //<--- 33 million loops
The difference has to do with my using "Benchmark.prototype.setup" to declare the variable, but I don't understand why.
See the second example:
Parse vs Plus (local variable)
<script>
    Benchmark.prototype.setup = function() {
        x = "5555";
    };
</script>
Parse Sample
var y = parseInt(x); //<---89 million loops
Plus Sample
var y = +x; //<--- 633 million loops
Can someone explain the results?
Thanks
In the second case + is faster because V8 actually moves it out of the benchmarking loop, making the benchmarking loop empty.
This happens due to certain peculiarities of the current optimization pipeline, but before we get to the gory details I would like to recall how Benchmark.js works.
To measure the test case you wrote, it takes the Benchmark.prototype.setup you provided and the test case itself, and dynamically generates a function that looks approximately like this (I am skipping some irrelevant details):
function (n) {
    var start = Date.now();
    /* Benchmark.prototype.setup body here */
    while (n--) {
        /* test body here */
    }
    return Date.now() - start;
}
Once the function is created, Benchmark.js calls it to measure your op for a certain number of iterations n. This process is repeated several times: generate a new function, call it to collect a measurement sample. The number of iterations is adjusted between samples to ensure that the function runs long enough to give a meaningful measurement.
The important things to notice here are that:
both your test case and Benchmark.prototype.setup are textually inlined;
there is a loop around the operation you want to measure.
Essentially we are discussing why the code below, with a local variable x,
function f(n) {
    var start = Date.now();
    var x = "5555";
    while (n--) {
        var y = +x;
    }
    return Date.now() - start;
}
runs slower than the code with a global variable x
function g(n) {
    var start = Date.now();
    x = "5555";
    while (n--) {
        var y = +x;
    }
    return Date.now() - start;
}
(Note: this case is called local variable in the question itself, but that's not so; x is global.)
What happens when you execute these functions with a large enough value of n, for example f(1e6)?
The current optimization pipeline implements on-stack replacement (OSR) in a peculiar fashion. Instead of generating an OSR-specific version of the optimized code and discarding it later, it generates a version that can be used for both OSR and normal entry, and that can even be reused if we need to perform OSR at the same loop. This is done by injecting a special OSR entry block into the right spot in the control flow graph.
The OSR entry block is injected while the SSA IR for the function is built, and it eagerly copies all local variables out of the incoming OSR state. As a result, V8 fails to see that the local x is actually a constant and even loses all information about its type. To subsequent optimization passes, the copy of x (call it x2) looks like it could be anything.
As x2 can be anything, the expression +x2 can also have arbitrary side-effects (e.g. it could be an object with valueOf attached to it). This prevents the loop-invariant code motion (LICM) pass from moving +x2 out of the loop.
Why is g faster, then? V8 pulls a trick here: it tracks global variables that contain constants. In this benchmark the global x always contains "5555", so V8 just replaces the x access with its value and marks the optimized code as dependent on the value of x; if somebody replaces x's value with something different, all dependent code will be deoptimized. Global variables are also not part of the OSR state and do not participate in SSA renaming, so V8 is not confused by "spurious" φ-functions merging OSR and normal entry states. That's why when V8 optimizes g it ends up generating the following IR in the loop body [IR graph from the original answer, not reproduced here; the red stripe on the left marks the loop]:
Note: +x is compiled to x * 1, but this is just an implementation detail.
Later LICM would just take this operation and move it out of the loop, leaving nothing of interest in the loop itself. This becomes possible because now V8 knows that both operands of the * are primitives, so there can be no side-effects.
And that's why g is faster: an empty loop is quite obviously faster than a non-empty one.
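Conceptually, after constant propagation and LICM, g behaves as if it had been written like this (a sketch of the effect, not actual V8 output):

function gOptimized(n) {
    var start = Date.now();
    var y = 5555;     // the global x was folded to a constant and +x hoisted out
    while (n--) { }   // nothing measurable is left inside the loop
    return Date.now() - start;
}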
This also means that the second version of the benchmark does not actually measure what you would like it to measure. The first version did actually capture some of the difference between parseInt(x) and +x performance, though that was more by luck: you hit a limitation in V8's current optimization pipeline (Crankshaft) that prevented it from eating the whole microbenchmark away.
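A common defence when microbenchmarking against such optimizers is to make each iteration's result observable, so the loop body cannot be eliminated entirely (a sketch; the global sink is illustrative):

var sink = 0; // global accumulator: stores to it cannot be optimized away
function h(n) {
    var start = Date.now();
    var x = "5555";
    while (n--) {
        sink += +x; // the conversion's result is consumed every iteration
    }
    return Date.now() - start;
}
console.log(h(1e8) + ' ms');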
I believe the reason is that parseInt looks for more than just a conversion to an integer. It also strips any remaining text off the string, like when parsing a pixel value:
var width = parseInt(element.style.width); // returns width as an integer
whereas the plus sign cannot handle this case:
var width = +element.style.width; // returns NaN
The plus sign does an implicit conversion from string to number, and only that conversion. parseInt tries to make sense of the string first (like an integer tagged with a unit of measurement).
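A few concrete cases where the two differ, easy to verify in any console:

parseInt("100px"); // 100  - parses digits, ignores trailing text
+"100px";          // NaN  - the whole string must be numeric
parseInt("12.9");  // 12   - integer parse stops at the dot
+"12.9";           // 12.9 - full numeric conversion
parseInt("");      // NaN  - nothing to parse
+"";               // 0    - empty string converts to zero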