JavaScript nested function call optimization

EDIT: I changed the accepted answer to my own after a lot of follow-up research showed that there isn't a simple answer to my question. See below!
So, in a follow-up to my last question, I'm trying to get a better handle on JavaScript best practices for optimizing performance. For the following example, I'm testing in Chrome 28.0.1500.70 using the in-browser profiler.
I've got some math functions encapsulated in an object that are getting called a few hundred thousand times a second, and I'm trying to shave a bit off the execution time.
I've already done some optimization by making local copies of the parent object's properties as locals in the called functions themselves, and got a decent (~16%) performance boost. However, when I did the same for calling another function from the parent object, I got a huge (~100%) performance increase.
The original setup was calcNeighbors calling the sibling function cirInd on the parent object via this.cirInd.
Making a local var copy of cirInd and calling that instead gave a huge performance gain: less than half the previous execution time for calcNeighbors.
However, making cirInd an inline function in calcNeighbors returned performance to the same slower level as calling it from the parent object.
I'm really perplexed by this. I suppose it could be a quirk in Chrome's profiler (cirInd doesn't show up at all in the second case), but there is definitely a noticeable performance gain in the application when I use case 2.
Can someone explain why case 2 is so much faster than case 1, but more importantly, why case 3 seems to give no performance gain?
The functions in question are here:
calling from parent object:
window.bgVars = {
   <snip>
   "cirInd": function(index, mod){
      // returns modulus, array-wrapping value to implement circular array
      if(index < 0){ index += mod; }
      return index % mod;
   },
   "calcNeighbors": function(rep){
      var foo = this.xBlocks;
      var grid = this.cGrid;
      var mod = grid.length;
      var cirInd = this.cirInd; // local copy present but unused; every call below still goes through this.cirInd
      var neighbors = grid[this.cirInd(rep-foo-1, mod)] + grid[this.cirInd(rep-foo, mod)] +
                      grid[this.cirInd(rep-foo+1, mod)] + grid[this.cirInd(rep-1, mod)] +
                      grid[this.cirInd(rep+1, mod)] + grid[this.cirInd(rep+foo-1, mod)] +
                      grid[this.cirInd(rep+foo, mod)] + grid[this.cirInd(rep+foo+1, mod)];
      return neighbors;
   },
   <snip>
}
calling via local variable:
window.bgVars = {
   <snip>
   "cirInd": function(index, mod){
      // returns modulus, array-wrapping value to implement circular array
      if(index < 0){ index += mod; }
      return index % mod;
   },
   "calcNeighbors": function(rep){
      var foo = this.xBlocks;
      var grid = this.cGrid;
      var mod = grid.length;
      var cirInd = this.cirInd;
      var neighbors = grid[cirInd(rep-foo-1, mod)] + grid[cirInd(rep-foo, mod)] +
                      grid[cirInd(rep-foo+1, mod)] + grid[cirInd(rep-1, mod)] +
                      grid[cirInd(rep+1, mod)] + grid[cirInd(rep+foo-1, mod)] +
                      grid[cirInd(rep+foo, mod)] + grid[cirInd(rep+foo+1, mod)];
      return neighbors;
   },
   <snip>
}
calling inline:
window.bgVars = {
   <snip>
   "calcNeighbors": function(rep){
      var foo = this.xBlocks;
      var grid = this.cGrid;
      var mod = grid.length;
      function cirInd(index, mod){
         // returns modulus, array-wrapping value to implement circular array
         if(index < 0){ index += mod; }
         return index % mod;
      }
      var neighbors = grid[cirInd(rep-foo-1, mod)] + grid[cirInd(rep-foo, mod)] +
                      grid[cirInd(rep-foo+1, mod)] + grid[cirInd(rep-1, mod)] +
                      grid[cirInd(rep+1, mod)] + grid[cirInd(rep+foo-1, mod)] +
                      grid[cirInd(rep+foo, mod)] + grid[cirInd(rep+foo+1, mod)];
      return neighbors;
   },
   <snip>
}

Perhaps seeing #2 and #3 in a simplified view will help illustrate the object-creation side effects.
I believe this should make it obvious:
var alls1 = [];
var alls2 = [];

function inner1(){}
function outer1(){
   if(alls1.indexOf(inner1) === -1){ alls1.push(inner1); }
}

function outer2(){
   function inner2(){}
   if(alls2.indexOf(inner2) === -1){ alls2.push(inner2); }
}

for(var i = 0; i < 10; i++){
   outer1();
   outer2();
}

alert([ alls1.length, alls2.length ]); // shows: 1, 10
Functions are objects, and making new objects is never free.
EDIT: expanding on #1 vs. #2
Again, a simplified example will help illustrate:
function y(a, b){ return a + b; }

var out = { y: y };
var ob = {
   y: y,
   // note: x1/x2/x3 deliberately read the global loop variable i
   x1: function(a){ return this.y(i, a); },
   x2: function(a){ return y(i, a); },
   x3: function(a){ return out.y(i, a); }
};

var mx = 999999, times = [], d2, d3, d1 = +new Date;
for(var i = 0; i < mx; i++){ ob.x1(-i); }
times.push((d2 = +new Date) - d1);
for(var i = 0; i < mx; i++){ ob.x2(-i); }
times.push((d3 = +new Date) - d2);
for(var i = 0; i < mx; i++){ ob.x3(-i); }
times.push((+new Date) - d3);

alert(times); // my Chrome's typical result: [1000, 1149, 1151]
Understand that there is more noise in a simple example, and closure is a big chunk of the overhead in all 3, but the diffs between them are what's important.
In this demo you won't see the huge gain observed in your dynamic system, but you do see how closely y and out.y profile compared to this.y, all else being equal.
The main point is that it's not the extra dot resolution per se that slows things down, as some have alluded to; it's specifically the "this" keyword in V8 that matters, otherwise out.y() would profile closer to this.y()...
Firefox is a different story.
Tracing allows this.whatever to be predicted, so all three profile within a bad dice roll of each other on the same computer as the Chrome test: [2548, 2532, 2545]...
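One caveat worth adding here: the local-variable trick in case 2 only works because cirInd never uses this internally. If the extracted method did depend on this, one workaround is Function.prototype.bind, though the bound wrapper adds call overhead of its own. A sketch (cirIndBound is a made-up name for illustration):

// Hypothetical variant for methods that DO use "this" internally:
// bind fixes the receiver so the method can be called as a plain function,
// at the cost of an extra wrapper function created by bind().
var cirIndBound = window.bgVars.cirInd.bind(window.bgVars);
cirIndBound(-3, 10); // same result as window.bgVars.cirInd(-3, 10)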

The reason number 1 involves relatively more time should be obvious: you access the object, and then have to look the property up on it.
Numbers 2 and 3 are both direct references to a function, so there is no lookup.
A very good resource for testing these types of situations is jsPerf, and I would highly recommend recreating the scenario there and running the test to see the exact differences and whether or not they are significant to you.

OK, I've been researching this issue for a while now, and TL;DR: it's complicated.
Turns out that many performance questions really depend on the platform, the browser, and even the minor browser revision number. And not by a little, either. There are many examples on jsPerf that show things such as 'for vs. while' or 'typed arrays vs. standard arrays' wildly swinging back and forth in terms of favorable execution speed with different browser releases. Presumably this is due to JIT optimization trade-offs.
Short answer to the general performance questions: just test everything in jsPerf. None of the suggestions I got in this thread were helpful in all cases. The JIT makes things complicated. This is particularly important if you have a background like mine and are used to C programs having certain rule-of-thumb coding patterns that tend to speed things up. Don't assume anything; just test it.
NOTE: many of the weird issues I listed in the original question were due to using the default Chrome profiler (i.e., the profiler you get from the Ctrl+Shift+I menu). If you are doing a lot of really fast loops (such as in graphics rendering), DO NOT USE THIS PROFILER. It has a time resolution of 1 ms, which is much too coarse for proper performance debugging.
In fact, the ENTIRE issue I had with case 2 being so much faster than the others is due to the profiler simply not 'seeing' many of the function calls and improperly reporting CPU percentages. In the heat map, I could clearly see huge stretches where inner-loop functions were firing but not being recorded by the profiler.
Solution: http://www.html5rocks.com/en/tutorials/games/abouttracing/#
Chrome has a less obvious and much more powerful profiler built into about:tracing. It's got microsecond resolution, the ability to read code tags for sub-function resolution, and is generally much more kickass. As soon as I started using this profiler, the results fell into line with what I saw on jsPerf and helped me reduce my rendering time by nearly half. How did I do that? Again, it wasn't simple. In some cases, calling out to subroutines helped; in others it didn't. Refactoring the whole rendering engine from an object literal to the module pattern seemed to help a bit. Precalculating any multiplication operations in for loops did seem to have big effects. Etc., etc.
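To illustrate two of those refactors, here is a rough sketch (the shapes below are my assumptions, not the actual rendering engine). The module pattern keeps helpers like cirInd in closure scope, so calls to them avoid "this" entirely, and hoisting multiplications out of inner loops avoids recomputing the same product:

// Module pattern sketch: cirInd lives in closure scope, so calcNeighbors
// calls it directly with no "this" or property lookup involved.
var bgVars = (function(){
   function cirInd(index, mod){
      if(index < 0){ index += mod; }
      return index % mod;
   }
   function calcNeighbors(rep, xBlocks, grid){
      var mod = grid.length;
      return grid[cirInd(rep - xBlocks - 1, mod)] + grid[cirInd(rep - xBlocks, mod)] /* ...remaining six neighbors... */;
   }
   return { calcNeighbors: calcNeighbors };
})();

// Precalculation sketch: hoist y * width out of the inner loop
// (width, height, grid, and process are assumed to exist).
for(var y = 0; y < height; y++){
   var rowOffset = y * width;        // computed once per row
   for(var x = 0; x < width; x++){
      process(grid[rowOffset + x]);  // instead of grid[y * width + x]
   }
}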
Quick notes about the about:tracing profiler: zooming and panning is done with the WASD keys; that took me a while to figure out. Also, it profiles all tabs and operates in a tab outside the page being analyzed, so minimize the number of extraneous tabs you have open, since they will clutter up the profiler view. Also, if testing Canvas apps, be sure to switch tabs over to the app, since requestAnimationFrame generally doesn't fire when a tab is not active and visible.

Related

Is it better in practice to make sure the algorithm only initiates the function if there is a need to do so or to just initiate the function in jquery [duplicate]

CPU Cycles, Memory Usage, Execution Time, etc.?
Added: Is there a quantitative way of testing performance in JavaScript besides just perception of how fast the code runs?
Profilers are definitely a good way to get numbers, but in my experience, perceived performance is all that matters to the user/client. For example, we had a project with an Ext accordion that expanded to show some data and then a few nested Ext grids. Everything was actually rendering pretty fast, no single operation took a long time, there was just a lot of information being rendered all at once, so it felt slow to the user.
We 'fixed' this, not by switching to a faster component, or optimizing some method, but by rendering the data first, then rendering the grids with a setTimeout. So, the information appeared first, then the grids would pop into place a second later. Overall, it took slightly more processing time to do it that way, but to the user, the perceived performance was improved.
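A minimal sketch of that deferred-rendering trick (renderData and renderGrids are placeholder names, not the actual project code):

// Paint the cheap part immediately, then yield to the browser before
// the heavy part, so the user sees something right away.
renderData();              // information appears first
setTimeout(function(){
   renderGrids();          // heavy grids pop into place a moment later
}, 0);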
These days, the Chrome profiler and other tools are universally available and easy to use, as are
console.time() (mozilla-docs, chrome-docs)
console.profile() (mozilla-docs, chrome-docs)
performance.now() (mozilla-docs)
Chrome also gives you a timeline view which can show you what is killing your frame rate, where the user might be waiting, etc.
Finding documentation for all these tools is really easy, you don't need an SO answer for that. 7 years later, I'll still repeat the advice of my original answer and point out that you can have slow code run forever where a user won't notice it, and pretty fast code running where they do, and they will complain about the pretty fast code not being fast enough. Or that your request to your server API took 220ms. Or something else like that. The point remains that if you take a profiler out and go looking for work to do, you will find it, but it may not be the work your users need.
I do agree that perceived performance is really all that matters. But sometimes I just want to find out which method of doing something is faster. Sometimes the difference is HUGE and worth knowing.
You could just use JavaScript timers. But I typically get much more consistent results using the native Chrome (now also in Firefox and Safari) DevTools methods console.time() & console.timeEnd().
Example of how I use it:
var iterations = 1000000;

console.time('Function #1');
for(var i = 0; i < iterations; i++){
   functionOne();
}
console.timeEnd('Function #1');

console.time('Function #2');
for(var i = 0; i < iterations; i++){
   functionTwo();
}
console.timeEnd('Function #2');
Update (4/4/2016):
Chrome Canary recently added line-level profiling to the dev tools Sources tab, which lets you see exactly how long each line took to execute!
You can always measure the time taken by any function with a simple Date object.
var start = +new Date(); // log start timestamp
function1();
var end = +new Date(); // log end timestamp
var diff = end - start;
Try jsPerf. It's an online javascript performance tool for benchmarking and comparing snippets of code. I use it all the time.
Most browsers are now implementing high resolution timing in performance.now(). It's superior to new Date() for performance testing because it operates independently from the system clock.
Usage
var start = performance.now();
// code being timed...
var duration = performance.now() - start;
References
https://developer.mozilla.org/en-US/docs/Web/API/Performance.now()
http://www.w3.org/TR/hr-time/#dom-performance-now
JSLitmus is a lightweight tool for creating ad-hoc JavaScript benchmark tests
Let's examine the performance of a function expression versus the Function constructor:
<script src="JSLitmus.js"></script>
<script>
JSLitmus.test("new Function ...", function() {
   return new Function("for(var i=0; i<100; i++) {}");
});
JSLitmus.test("function() ...", function() {
   return (function() { for(var i=0; i<100; i++) {} });
});
</script>
What I did above is create a function expression and a Function constructor performing the same operation. The result is as follows:
(Screenshots of the Firefox and IE performance results omitted.)
Some people are suggesting specific plug-ins and/or browsers. I would not, because they're only really useful for that one platform; a test run on Firefox will not translate accurately to IE7. Considering 99.999999% of sites have more than one browser visiting them, you need to check performance on all the popular platforms.
My suggestion would be to keep this in JS. Create a benchmarking page with all your JS tests on it and time the execution. You could even have it AJAX-post the results back to you to keep it fully automated.
Then just rinse and repeat over different platforms.
Here is a simple function that displays the execution time of a passed-in function:
var perf = function(testName, fn) {
   var startTime = new Date().getTime();
   fn();
   var endTime = new Date().getTime();
   console.log(testName + ": " + (endTime - startTime) + "ms");
};
I have a small tool where I can quickly run small test-cases in the browser and immediately get the results:
JavaScript Speed Test
You can play with code and find out which technique is better in the tested browser.
I think simple JavaScript performance (time) testing is often enough. I found a very handy article about JavaScript performance testing here.
You could use this: http://getfirebug.com/js.html. It has a profiler for JavaScript.
I was looking for something similar and found this.
https://jsbench.me/
It allows a side-by-side comparison, and you can then also share the results.
performance.mark (Chrome 87+)
performance.mark('initSelect - start');
initSelect();
performance.mark('initSelect - end');
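The marks alone only record timestamps; to get a duration out of them, pair them with performance.measure and read the entry back from the performance timeline (a sketch using the standard API):

// Create a named measure spanning the two marks, then read its duration.
performance.measure('initSelect', 'initSelect - start', 'initSelect - end');
var entry = performance.getEntriesByName('initSelect', 'measure')[0];
console.log('initSelect took ' + entry.duration + ' ms');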
Quick answer
On jQuery (more specifically on Sizzle), we use this (check out master and open speed/index.html in your browser), which in turn uses benchmark.js. This is used to performance-test the library.
Long answer
If the reader doesn't know the difference between benchmarks, workloads and profilers, first read some performance-testing foundations in the "readme 1st" section of spec.org. This is for system testing, but understanding these foundations will help with JS perf testing as well. Some highlights:
What is a benchmark?
A benchmark is "a standard of measurement or evaluation" (Webster’s II Dictionary). A computer benchmark is typically a computer program that performs a strictly defined set of operations - a workload - and returns some form of result - a metric - describing how the tested computer performed. Computer benchmark metrics usually measure speed: how fast was the workload completed; or throughput: how many workload units per unit time were completed. Running the same computer benchmark on multiple computers allows a comparison to be made.
Should I benchmark my own application?
Ideally, the best comparison test for systems would be your own application with your own workload. Unfortunately, it is often impractical to get a wide base of reliable, repeatable and comparable measurements for different systems using your own application with your own workload. Problems might include generation of a good test case, confidentiality concerns, difficulty ensuring comparable conditions, time, money, or other constraints.
If not my own application, then what?
You may wish to consider using standardized benchmarks as a reference point. Ideally, a standardized benchmark will be portable, and may already have been run on the platforms that you are interested in. However, before you consider the results you need to be sure that you understand the correlation between your application/computing needs and what the benchmark is measuring. Are the benchmarks similar to the kinds of applications you run? Do the workloads have similar characteristics? Based on your answers to these questions, you can begin to see how the benchmark may approximate your reality.
Note: A standardized benchmark can serve as reference point. Nevertheless, when you are doing vendor or product selection, SPEC does not claim that any standardized benchmark can replace benchmarking your own actual application.
Performance testing JS
Ideally, the best perf test would be using your own application with your own workload, switching only what you need to test: different libraries, machines, etc.
If this is not feasible (and usually it is not), the first important step is to define your workload. It should reflect your application's workload. In this talk, Vyacheslav Egorov talks about bad workloads you should avoid.
Then you can use tools like benchmark.js to help you collect metrics, usually speed or throughput; a minimal example is sketched below. On Sizzle, we're interested in comparing how fixes or changes affect the systemic performance of the library.
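For reference, a minimal benchmark.js suite looks roughly like this (the two compared snippets are just placeholders):

// Compare two ways of doing the same thing; benchmark.js handles
// warm-up, sampling, and statistical significance for you.
var suite = new Benchmark.Suite;
suite
   .add('String#indexOf', function() {
      'Hello World!'.indexOf('o') > -1;
   })
   .add('RegExp#test', function() {
      /o/.test('Hello World!');
   })
   .on('cycle', function(event) {
      console.log(String(event.target)); // e.g. "String#indexOf x 12,345,678 ops/sec"
   })
   .on('complete', function() {
      console.log('Fastest is ' + this.filter('fastest').map('name'));
   })
   .run({ async: true });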
If something is performing really badly, your next step is to look for bottlenecks.
How do I find bottlenecks? Profilers
What is the best way to profile javascript execution?
I find execution time to be the best measure.
You could use console.profile in firebug
I usually just test JavaScript performance by how long the script runs. jQuery Lover gave a good article link for testing JavaScript code performance, but the article only shows how to test how long your JavaScript code runs. I would also recommend reading the article "5 tips on improving your jQuery code while working with huge data sets".
Here is a reusable class for timing performance. An example is included in the code:

/*
Helps track time lapse - tells you the time difference between each "check()" and since the "start()"
*/
var TimeCapture = function () {
   var start = new Date().getTime();
   var last = start;
   var now = start;
   this.start = function () {
      start = new Date().getTime();
   };
   this.check = function (message) {
      now = new Date().getTime();
      console.log(message, 'START:', now - start, 'LAST:', now - last);
      last = now;
   };
};

//Example:
var time = new TimeCapture();
// begin tracking time
time.start();
// ...do stuff
time.check('say something here'); // look at your console for output
// ...do more stuff
time.check('say something else');
// ...do more stuff
time.check('say something else one more time');
UX Profiler approaches this problem from the user's perspective. It groups all the browser events, network activity, etc. caused by some user action (a click) and takes into consideration all aspects like latency, timeouts, etc.
Performance testing has become something of a buzzword of late, but that's not to say that performance testing is not an important process in QA, or even after the product has shipped. While I develop an app I use many different tools, some of them mentioned above like the Chrome profiler, but I usually look for a SaaS or open-source option that I can set up and forget about until I get an alert saying that something went belly up.
There are lots of awesome tools that will help you keep an eye on performance without making you jump through hoops just to get some basic alerts set up. Here are a few that I think are worth checking out for yourself.
Sematext.com
Datadog.com
Uptime.com
Smartbear.com
Solarwinds.com
To try and paint a clearer picture, here is a little tutorial on how to set up monitoring for a react application.
You could use https://github.com/anywhichway/benchtest which wraps existing Mocha unit tests with performance tests.
The golden rule is to NOT, under ANY circumstances, lock your user's browser. After that, I usually look at execution time, followed by memory usage (unless you're doing something crazy, in which case memory could be a higher priority).
This is a very old question, but I think we can contribute a simple ES6-based solution for quickly testing your code.
This is a basic bench for execution time. We use performance.now() to improve the accuracy:
/**
 * Figure out how long it takes for a method to execute.
 *
 * @param {Function} method the method to test
 * @param {Array} list sets of args to pass in
 * @param {number} iterations number of executions
 * @param {Object} context the context to call the method in
 * @return {number} the time it took, in milliseconds, to execute
 */
const bench = (method, list, iterations, context) => {
   let start = 0
   const timer = action => {
      const time = performance.now()
      switch (action) {
         case 'start':
            start = time
            return 0
         case 'stop':
            const elapsed = time - start
            start = 0
            return elapsed
         default:
            return time - start
      }
   }
   const result = []
   timer('start')
   list = [...list]
   for (let i = 0; i < iterations; i++) {
      for (const args of list) {
         result.push(method.apply(context, args))
      }
   }
   const elapsed = timer('stop')
   console.log(`Called method [${method.name}]
Mean: ${elapsed / iterations}
Exec. time: ${elapsed}`)
   return elapsed
}
const fnc = () => {}
const isFunction = (f) => f && f instanceof Function
const isFunctionFaster = (f) => f && 'function' === typeof f
class A {}
function basicFnc(){}
async function asyncFnc(){}
const arrowFnc = ()=> {}
const arrowRFnc = ()=> 1
// Not functions
const obj = {}
const arr = []
const str = 'function'
const bol = true
const num = 1
const a = new A()
const list = [
[isFunction],
[basicFnc],
[arrowFnc],
[arrowRFnc],
[asyncFnc],
[Array],
[Date],
[Object],
[Number],
[String],
[Symbol],
[A],
[obj],
[arr],
[str],
[bol],
[num],
[a],
[null],
[undefined],
]
const e1 = bench(isFunction, list, 10000)
const e2 = bench(isFunctionFaster, list, 10000)
const rate = e2/e1
const percent = Math.abs(1 - rate)*100
console.log(`[isFunctionFaster] is ${(percent).toFixed(2)}% ${rate < 1 ? 'faster' : 'slower'} than [isFunction]`)
This is a good way of collecting performance information for a specific operation.

var start = new Date().getTime();
for (var n = 0; n < maxCount; n++) {
   // perform the operation to be measured
}
var elapsed = new Date().getTime() - start;
assert(true, "Measured time: " + elapsed);

Performance difference between toString() and +"" [duplicate]


How to manage arguments

I apologise in advance if I'm too bad at using the search engine and this has already been answered. Please point me in the right direction in that case.
I've recently begun to use the arguments variable in functions, and now I need to slice it. Everywhere I look people are doing things like:
function getArguments(args, start) {
   return Array.prototype.slice.call(args, start);
}
And according to MDN this is bad for performance:
You should not slice on arguments because it prevents optimizations in JavaScript engines (V8 for example).
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/arguments
Is there a reason why I don't see anyone doing things like this:
function getArguments(args, start) {
   var i, p = 0;
   var len = args.length;
   var params = [];
   for (i = start; i < len; ++i) {
      params[p] = args[i];
      p += 1;
   }
   return params;
}
You get the arguments you want, and no slicing is done. So from my point of view, you don't lose anything with this; well, maybe it uses a little extra memory and is slightly slower, but not to the point where it really makes a difference, right?
Just wanted to know if my logic here is flawed.
Here is a discussion, and here is an introduction; e.g., here the inline slice is used.
It appears from the discussion that @Eason posted (here) that the debate falls in the "micro-optimization" category, i.e., most of us will never hit those performance bumps because our code isn't being run through the kind of iterations needed to even appear on the radar.
Here's a good quote that sums it up:
Micro-optimizations like this are always going to be a trade-off
between the code's complexity/readability and its performance.
In many cases, the complexity/readability is more important. In this case, the
very slowest method that was tested netted a runtime of 4.3
microseconds. If you're writing a webservice and you're slicing args
two times per request and then doing 100 ms worth of other work, an
extra 0.0086 ms will not be noticeable and it's not worth the time or
the code pollution to optimize.
These optimizations are most helpful in really hot loops that you're hitting a gajillionty times. Use a
profiler to find your hot code, and optimize your hottest code first,
until the performance you've achieved is satisfactory.
I'm satisfied, and will use Array.prototype.slice.call() unless I detect a performance blip that points to that particular piece of code not hitting the V8 optimizer.
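As an aside: on modern engines the whole question can be sidestepped with ES6 rest parameters, which hand you a real array without ever touching arguments (a sketch, assuming ES6 support):

// Rest parameters collect trailing arguments into a genuine Array,
// so there is nothing to slice and no arguments-object deoptimization.
function getArguments(start, ...rest) {
   return rest; // already a proper Array
}
getArguments(0, 'a', 'b'); // ['a', 'b']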

!! (not not) vs Boolean() performance [duplicate]


Performance Memory Management in JavaScript

Let me start with the questions, and then fill in the reasons/background.
Question: Are there any memory profiling tools for JavaScript?
Question: Has anybody tested performance memory management in JavaScript already?
I would like to experiment with performance memory management in JavaScript. In C/C++/Assembly I was able to allocate a region of memory in one giant block, then map my data structures to that area. This had several performance advantages, especially for math heavy applications.
I know I cannot allocate memory and map my own data structures in JavaScript (or Java for that matter). However, I can create a stack/queue/heap with some predetermined number of objects, for example Vector objects. When crunching numbers I often need just a few such objects at any one time, but generate a large number over time. By reusing the old vector objects I can avoid the create/delete time, unnecessary garbage collection time, and potentially large memory footprint while waiting for garbage collection. I also hypothesize that they will all stay fairly close in memory because they were created at the same time and are being accessed frequently.
I would like to test this, but I am coming up short for memory profiling tools. I tried Firebug, but it does not tell you how much memory the JavaScript engine is currently allocating.
I was able to code a simple test for CPU performance (see below). I compared a queue of 10 "Vector" objects to using new/delete each time. To make certain I wasn't just using empty data, I gave the Vector six floating-point properties, a three-value array (floats), and an 18-character string. Each time I created a vector, using either method, I would set all the values to 0.0.
The results were encouraging. The explicit management method was initially faster, but the JavaScript engine had some caching, and it caught up after running the test a couple of times. The most interesting part was that Firebug crashed when I tried to run standard new/delete on 10 million objects, but worked just fine with my queue method.
If I can find memory profiling tools, I would like to test this on different structures (array, heap, queue, stack). I would also like to test it on a real application, perhaps a super simple ray tracer (quick to code, can test very large data sets with lots of math for nice profiling).
And yes, I did search before creating this question. Everything I found was either a discussion of memory leaks in JavaScript or a discussion of GC vs. Explicit Management.
Thanks,
JB
Standard Method
function setBaseVectorValues(vector) {
   vector.x = 0.0;
   vector.y = 0.0;
   vector.z = 0.0;
   vector.theta = 0.0;
   vector.phi = 0.0;
   vector.magnitude = 0.0;
   vector.color = [0.0, 0.0, 0.0];
   vector.description = "a blank base vector";
}

function standardCreateObject() {
   var vector = new Object();
   setBaseVectorValues(vector);
   return vector;
}

function standardDeleteObject(obj) {
   delete obj; // note: delete on a plain variable is a no-op; the object is simply left for the GC
}

function testStandardMM(count) {
   var start = new Date().getTime();
   for(var i = 0; i < count; i++) {
      var obj = standardCreateObject();
      standardDeleteObject(obj);
   }
   var end = new Date().getTime();
   return "Time: " + (end - start);
}
Managed Method
I used the JavaScript queue from http://code.stephenmorley.org/javascript/queues/
var queue; // the object pool, created by newInitObjects()

function newCreateObject() {
   var vector = allocateVector();
   setBaseVectorValues(vector);
   return vector;
}

function newDeleteObject(obj) {
   queue.enqueue(obj); // return the object to the pool instead of discarding it
}

function newInitObjects(bufferSize) {
   queue = new Queue();
   for(var i = 0; i < bufferSize; i++) {
      queue.enqueue(standardCreateObject());
   }
}

function allocateVector() {
   var vector;
   if(queue.isEmpty()) {
      vector = new Object();
   } else {
      vector = queue.dequeue();
   }
   return vector;
}

function testNewMM(count) {
   var start = new Date().getTime();
   newInitObjects(10);
   for(var i = 0; i < count; i++) {
      var obj = newCreateObject();
      newDeleteObject(obj);
      obj = null;
   }
   var end = new Date().getTime();
   return "Time: " + (end - start) + " Vectors Available: " + queue.getLength();
}
The Chrome inspector has a decent JavaScript profiling tool. I'd try that...
I have never seen such a tool, but in actuality, JavaScript [almost] never runs independently; it is [almost] always hosted within another application (e.g. your browser). It does not really matter how much memory is associated with your specific data structures; what matters is how the overall memory consumption of the host application is affected by your scripts.
I would recommend finding a generic memory profiling tool for your OS and pointing it at your browser. Run a single page and profile the browser's change in memory consumption before and after triggering your code.
The only exception to what I said above that I can think of right now is Node.js... If you are using Node, then you can use process.memoryUsage().
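For example, a quick heap check in Node might look like this sketch (the vector shape is a placeholder):

// Sample heap usage before and after allocating a batch of objects.
const before = process.memoryUsage().heapUsed;
const vectors = [];
for (let i = 0; i < 100000; i++) {
   vectors.push({ x: 0.0, y: 0.0, z: 0.0 }); // placeholder vector shape
}
const after = process.memoryUsage().heapUsed;
console.log('heap grew by ' + ((after - before) / 1024 / 1024).toFixed(1) + ' MB');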
Edit: Oooo... After some searching, it appears that Chrome has some sweet tools as well (+1 for Michael Berkompas). I still stand by my original statement that it is actually more important to see how the memory usage of the browser process itself is affected, but the elegance of the Chrome tools is impressive.
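For a quick in-page reading in Chrome specifically, there is also the non-standard performance.memory object; it is Chrome-only and its values are coarse, so treat it as a rough indicator:

// Chrome-only, non-standard API; values are in bytes and coarsely bucketed.
if (performance.memory) {
   console.log('used JS heap:', performance.memory.usedJSHeapSize);
   console.log('total JS heap:', performance.memory.totalJSHeapSize);
   console.log('heap limit:', performance.memory.jsHeapSizeLimit);
}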
