Measuring JavaScript performance

I have a JavaScript file that takes an input, does some calculations with it, and returns a result. Now I'd like to measure its performance, checking, for example, how long it takes to run 1,000 inputs. The problem is that I have nearly no knowledge of JavaScript (the code isn't mine, either), so I have no idea how to do this. Searching Stack Overflow I found some similar questions, but they are about "how long does it take for the script to run once" rather than "how long does it take for the script to process 1,000 inputs".
If it can help, this is the script.

I would do something like this (depending on whether window.console exists and has a time method):
if ('console' in window && 'time' in window.console) {
    console.time('time');
    for (var k = 0; k < 1000; k++) {
        derp(input);
    }
    console.timeEnd('time');
} else {
    var d = new Date();
    for (var k = 0; k < 1000; k++) {
        derp(input);
    }
    // the subtraction must be parenthesized; otherwise string concatenation
    // runs first and the whole expression evaluates to NaN
    console.log('result: ' + (new Date().getTime() - d.getTime()) + 'ms');
}

If you want to measure and analyze this script, you'll need to grow your JavaScript knowledge at least a little bit.
Then you can use a benchmarking tool like Benchmark.js. You can use it in your browser or within Node.js.
jsperf.com uses Benchmark.js, and you can set up a test case there within minutes. It is mainly designed to compare two scripts, but you can put your script into both tests to get a first indication.
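To give a rough idea, here is a minimal Benchmark.js sketch for Node.js; derp and input are stand-ins for the script's actual entry point, which isn't shown in the question:

var Benchmark = require('benchmark');

var suite = new Benchmark.Suite();
suite
    .add('derp x1000', function () {
        for (var k = 0; k < 1000; k++) {
            derp(input); // placeholder for the real function under test
        }
    })
    .on('cycle', function (event) {
        console.log(String(event.target)); // e.g. "derp x1000 x 52.31 ops/sec ..."
    })
    .run();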

Related

Javascript shorthand loading time

I was playing around and ran into JS shorthands. I know they, of course, don't change what the code does, but do they lower loading time, since there is less data?
Testing code such as the snippets below in the Chrome DOM inspector did not give me an answer (probably because they are one-liners, so any difference is too small to see).
if (x == 0) {x=1} else {x=2}
x == 0 ? x = 1 : x = 2;
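For reference, a rough console harness along these lines can at least time the two forms; this is an assumed sketch (the loop count is arbitrary, and modern engines may well compile both variants to the same code):

function timeIt(label, fn) {
    var t0 = performance.now(), sink = 0;
    for (var i = 0; i < 1e7; i++) {
        sink += fn(i % 2); // accumulate a result so the engine can't drop the calls
    }
    console.log(label + ': ' + (performance.now() - t0).toFixed(1) + 'ms (sink=' + sink + ')');
}

timeIt('if/else', function (x) { if (x == 0) { x = 1; } else { x = 2; } return x; });
timeIt('ternary', function (x) { x == 0 ? x = 1 : x = 2; return x; });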
If your goal is to optimize how fast your page loads by minimizing the size of your JS payload, there are lots of tools that will automatically rebuild your files into a single compressed bundle (i.e., all unnecessary whitespace removed, variables/functions renamed to shorter names, etc.). When it comes to writing code, you should always value readability first.
Write code that other people can easily understand. Then, when you're ready to deploy, look into a tool like UglifyJS2, which will enable you to take code like this:
function square(numToSquare) {
var squareProduct = numToSquare * numToSquare;
return squareProduct;
}
square(15);
..and turn it into this:
function square(r){return r*r}square(15);
The fewer characters and whitespace in a file, the smaller the download.
Readability is also of utmost importance, though, and ternary operators can be confusing in certain scenarios.
For codebases you expect to grow over time, I would recommend sticking to the more readable constructs and using a minification/uglification step to reduce file size.
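As an assumed sketch of that minification step, the uglify-js package can also be driven programmatically from Node.js (this uses the v3-style API; the older UglifyJS2 call was minify(code, { fromString: true })):

var UglifyJS = require('uglify-js');

var source =
    'function square(numToSquare) {\n' +
    '    var squareProduct = numToSquare * numToSquare;\n' +
    '    return squareProduct;\n' +
    '}\n' +
    'square(15);';

var result = UglifyJS.minify(source); // compresses and mangles by default
if (result.error) throw result.error;
console.log(result.code); // roughly: function square(n){return n*n}square(15);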

Javascript Int vs BigInt libraries

From the JavaScript documentation we see that, because numbers use the double-precision floating-point format, a library is needed to go beyond 9007199254740991 (2^53 - 1).
You'll find a handy list of libraries to achieve that here.
While searching on homomorphic encryption I came across this question. There you can find this link to an in-browser implementation of Paillier.
After inspecting the code I saw the source included jsbn.js, which made total sense (as you are going to need BigInts for the crypto). However, the way it deals with the numbers to be encrypted looked a bit odd to me.
$('#btn_encrypt').click(function (event) {
    var valA = parseInt($('#inputA').val()),
        valB = parseInt($('#inputB').val()),
        startTime,
        elapsed;

    startTime = new Date().getTime();
    encA = keys.pub.encrypt(nbv(valA));
    elapsed = new Date().getTime() - startTime;
    $('#encA').html(encA.toString());
    $('#encAtime').html(elapsed);
});
From the use of parseInt and nbv (function nbv(i) { var r = nbi(); r.fromInt(i); return r; }) it seems clear that they are relying on plain integers to create the BigInt that will then be encrypted.
Does that make any sense at all? Even less so when jsbn has a function to create BigInts directly from strings: // (protected) set from string and radix
function bnpFromString(s,b) { ... }. That link has been referenced in several other answers, both here and on the crypto site, and as I said I am a newbie at JS, so I wanted to check whether this is indeed a contraindicated way of implementing Paillier or whether I have misunderstood something.
Thanks a lot for helping out!
It appears that the Paillier implementation you linked to is intended as a proof-of-concept demo rather than as a full-fledged implementation. Presumably, the author intended it for use only on toy examples such as (36+14)*7 or (97+5)*11 rather than "real" examples involving hundreds of digits.
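To illustrate why the parseInt route limits the demo to toy inputs, here is a small assumed sketch (it presumes jsbn's BigInteger is loaded, as in the demo): constructing from a string keeps every digit, while going through a Number is limited by the 53-bit mantissa.

var s = '9007199254740993';          // 2^53 + 1, not representable as a Number
console.log(parseInt(s, 10));        // 9007199254740992, already off by one
var big = new BigInteger(s, 10);     // exact; uses bnpFromString internally
console.log(big.toString());         // "9007199254740993"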

JavaScript performance for Array

I tried to figure out what is different, in execution, between two versions of a small code snippet. Don't try to understand what it is for; this is the final code left after deleting everything else while hunting the performance problem.
function test() {
    var start = new Date(), times = 100000;
    var l = ["a", "a"];
    for (var j = 0; j < times; j++) {
        var result = document.getElementsByTagName(l[0]), rl = result.length;
        for (var i = 0; i < rl; i++) {
            l[0] = result[i];
        }
    }
    var end = new Date();
    return "by=" + (end - start);
}
For me this snippet takes 236ms in Firefox, but if you change l[0]=result[i]; to l[1]=result[i]; it only takes 51ms. The same happens if I change document.getElementsByTagName(l[0]) to document.getElementsByTagName(l[1]). And if both are changed, the snippet is slow again.
Profiling in Google Chrome's DevTools I can see that a toString function shows up when executing the slow code, but I have no way of telling which toString this is and why it is needed in that case.
Can you please tell me what the difference is for the browser, such that one version takes five times longer than the other?
Thanks
If you only change one of the indexes to 0 or 1, the code doesn't do the same thing anymore; if you change both indexes, the behavior (and the slow performance) is identical again.
When you use the same index for reading and writing, the value stored in l[0] after the first pass is a DOM element, and it is used in the next call to getElementsByTagName, which has to call toString on it.
In the code above you use l[0] to search for elements. Then you overwrite l[0] with an element and search again, and so on. If you now change only one of the two uses (not both!) to l[1], the search argument stays the string "a", which boosts the performance.
That is why it is slow again when you change both.
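A small assumed illustration of the coercion described above (using div rather than a, so the stringified form is predictable): once l[0] holds a DOM element, every subsequent lookup has to stringify it before searching.

var el = document.getElementsByTagName('div')[0];
console.log(String(el));             // "[object HTMLDivElement]", the toString cost
document.getElementsByTagName(el);   // the element is coerced to that string on every call,
                                     // and no tag matches it, so the inner loop never runs again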

Is it a good idea to cache jQuery selections on a page level?

I am working on an intranet website where there is a lot of interaction with the main page. The site uses jQuery extensively. So I thought about caching the jQuery selections the first time they are requested and pull them from the cache for each subsequent request. (I also have built in an override if I want to force a re-selection).
Here is the basic snippet of code I am wondering about:
(function (window, $, undefined) {
    var mp = {};
    mp.cache = {};

    mp.get = function (selector, force) {
        // re-select when forced, or when this selector isn't cached yet;
        // otherwise return the cached jQuery object
        return ( force || !mp.cache[ selector ] )
            ? mp.cache[ selector ] = $( selector )
            : mp.cache[ selector ];
    };

    /* more functions and code */

    window.mp = mp;
    window.$$ = mp.get;
})(window, jQuery);
It would be used just like a normal jQuery selection, but it checks whether that selector already exists in the cache and, if it does, just returns the result from there.
UPDATED EXAMPLE
$('#mainmenu').on('click', '.menuitem', highlightSelectedMenuItem);

function highlightSelectedMenuItem() {
    var menuitems = $$('.menuitem'); // selected the first time, from cache every other time
    // .filter() rather than .find(), since the 'current' class sits on the menu items themselves
    menuitems.filter('.current').removeClass('current');
    $(this).addClass('current');
}
There are about 20 - 30 different selections in the code. So each would be cached on first call. This code will only be run on desktop clients (no phones/tablets).
Good idea? Any issues this might cause? Is there a good way to test if this helps/hurts performance? Any pros/cons you can come up with would be appreciated.
Let me know if more information is required.
Here is a fiddle to "fiddle" with.
Thanks!
Bottom line: Probably a bad idea.
You're introducing complexity that is likely unnecessary and is likely just going to eat up memory.
If you're trying to speed up selections that are repeated, then store the selection in a variable. When it's no longer needed, the garbage collector will remove it. For 99% of applications out there, that's as much caching as you need.
A big problem with your example is that the items you've selected will stick around indefinitely. Think about the people who may work on this code later. Everyone may just start using $$('....') because they see it in a few spots and it does the same thing as $('....'). Suddenly you're storing thousands of DOM elements in memory for no reason at all.
Beyond that, if the DOM changes and you don't know about it, the elements you have in the cache are stale, but unless you force a re-selection you'll keep getting the cached ones, which of course introduces bugs. To prevent that from happening, you'd have to force-reload the cache constantly, which pretty much negates its usefulness.
You're going to be better off just following good, solid programming patterns that are already out there.. rather than programming up a caching mechanism that is going to be a bug magnet.
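A hedged sketch of that variable-scoping approach, reusing the menu markup from the question (#mainmenu, .menuitem, .current): select once in the enclosing scope, and let the garbage collector reclaim it when the scope goes away.

var $menu = $('#mainmenu'); // one lookup, held only as long as this scope lives
$menu.on('click', '.menuitem', function () {
    $menu.find('.menuitem.current').removeClass('current');
    $(this).addClass('current');
});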
I made your question into a jsPerf, and I found that un-cached jQuery appears to run faster:
http://jsperf.com/jquery-cache-array-test
Because of this I would recommend just using plain jQuery.
I think it's a bad idea. If you need to use a selection in several places, then save the selection to a variable:
var mySelection = $('#myId');
It will be garbage-collected when it's no longer needed.

Testing localStorage/sessionStorage max data amount

I'm writing a page to test how much data localStorage and sessionStorage can save in the browser.
It works somewhat, but the browser becomes unresponsive, and the progress bar/text is not updated progressively but mostly all at once at the end, or when the "unresponsive script" dialog box appears.
One of the reasons for the unresponsiveness is when I create a really large string:
blablaantal might be 1048576, to create a string of 1048576 x characters.
var data = '';
for (var i = 0; i < blablaantal; i++) {
    data += 'x';
}
Code and Demo : http://netkoder.dk/netkoder/eksempler/eksempel0008.html
localStorage.remainingSpace will tell you how many bytes you can still store (note that remainingSpace is IE-specific).
EDIT: In a more general case, try this:
blablaantal = 1048576;
data = new Array(blablaantal+1).join("x");
Yeah, I played around a few years ago to see if I could find a safe, performant way to test this in all browsers. I didn't. The best I achieved at the time is here:
https://github.com/nbubna/store/blob/master/src/store.measure.js
It is a plugin for my store2.js wrapper. It'll work faster than your method, but it will still crash/freeze things up sometimes. Though I haven't tried in a few years; things may be better or, I suppose, worse.
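For what it's worth, one way to keep the probe fast is to binary-search for the largest storable string instead of growing it one character at a time. This is an assumed sketch (the upper bound is arbitrary, and real quotas vary by browser):

function maxStorableChars(storage) {
    var lo = 0, hi = 32 * 1024 * 1024; // assumes the quota is below 32M chars
    while (lo < hi) {
        var mid = Math.ceil((lo + hi) / 2);
        try {
            storage.setItem('__probe__', new Array(mid + 1).join('x'));
            lo = mid;      // it fit: search upward
        } catch (e) {
            hi = mid - 1;  // quota exceeded: search downward
        }
    }
    storage.removeItem('__probe__');
    return lo;
}

console.log(maxStorableChars(localStorage) + ' characters fit');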
