Timing in JavaScript acting strange

I am working in JavaScript to compare the performance of a brute-force O(n^2) algorithm and Barnes-Hut O(n log n).
In my code right now I am running the same thing on the same data five times,
like this:
let data = [/* array of 100 data points */];
let count = 5;
while (count > 0) {
    count--;
    console.time('brute force time ' + count);
    bruteForce(data);
    console.timeEnd('brute force time ' + count);
    console.time('BH time ' + count);
    barnesHut(data);
    console.timeEnd('BH time ' + count);
}
Even though the code is identical on each iteration, console.time is showing different and puzzling results.
The relative difference between Barnes-Hut and brute force is not consistent across iterations,
and it is not predictable from one run to the next.
One more thing to notice: on every run, in the first iteration the brute-force and Barnes-Hut timings are almost the same, whereas after the first iteration Barnes-Hut shows up as much faster.
PS: everything in both functions is scoped locally to each function, and both use the same data in each iteration, so the code executed in every iteration is identical!
Can anyone help me understand why I am getting these results?

I'd guess these times are so tiny that random noise affects the results - your machine doing other processing, perhaps. I'd increase the data size, and therefore the processing time; you'd likely see more consistent results then.
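A minimal sketch of that idea, assuming the two functions are called bruteForce and barnesHut (names taken from the question's pseudocode) and that generateData(n) is a hypothetical helper that builds an input of size n: run each algorithm many times on a larger input and report the average, so one-off noise matters much less.

// generateData, bruteForce and barnesHut are assumed names, not a real API.
const data = generateData(5000);      // larger input than 100 points
const runs = 20;

let bruteTotal = 0;
let bhTotal = 0;

for (let i = 0; i < runs; i++) {
    let t0 = performance.now();
    bruteForce(data);
    bruteTotal += performance.now() - t0;

    t0 = performance.now();
    barnesHut(data);
    bhTotal += performance.now() - t0;
}

console.log('brute force avg (ms):', bruteTotal / runs);
console.log('Barnes-Hut avg (ms):', bhTotal / runs);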

Related

Is there a reliable way to measure performance of code in JavaScript?

If you look at the code below you will see a performance measurement of a very simple for loop.
var res = 0;
function runs() {
    var a1 = performance.now();
    var x = 0;
    for (var i = 0; i < 10**9; i++) {
        x++;
    }
    var a2 = performance.now();
    res += (a2 - a1);
}
for (var j = 0; j < 10; j++) {
    runs();
}
console.log(`=${res/10}`);
Additionally, just for good measure, this runs 10 times and averages the results. Now, the issue is that this is not reliable: it depends heavily on your CPU, memory and whatever other programs are running on your device.
The first run may take 9 s, the second 23 s, and a subsequent call might take 8 s.
Is there a way to measure performance regardless of CPU, memory and everything else?
I am after something that gives a relative number of FLOPS, or any other measure, such that when you compare two pieces of code you know for certain that one executes faster than the other.
For instance, a for loop with 1005 iterations should always show as slower than one with 1000 iterations.
Note: saying FLOPS is wrong in this context, as it means floating-point operations per second.
I would like to exclude time completely; I only need FPOs. Meaning I do not care about seconds, just about reliably knowing that, regardless of the device, the same code will always take, let's say, 2000 FPOs to execute.
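A minimal sketch of what such a device-independent count could look like, assuming you can instrument the code yourself (names here are illustrative): count abstract operations with a plain counter instead of the clock. The count is the same for a given input on any machine, although it only reflects the operations you choose to count, not what the engine or CPU actually executes.

// Count "abstract operations" instead of measuring time; the counter is the metric.
function countedLoop(iterations) {
    let ops = 0;
    let x = 0;
    for (let i = 0; i < iterations; i++) {
        x++;
        ops += 2;           // by our own convention: one loop step plus one body increment
    }
    return ops;
}

console.log(countedLoop(1000));   // same number on every device
console.log(countedLoop(1005));   // always strictly larger than the 1000-iteration run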

Accurately measure a JavaScript function's performance while displaying the output to the user

As you can see in the code below, even when I increase the size of the string it still reports a difference of 0 milliseconds. Moreover, the results are inconsistent as the string keeps getting longer.
Am I doing something wrong here?
let stringIn = document.getElementById('str');
let button = document.querySelector('button');
button.addEventListener('click', () => {
    let t1 = performance.now();
    functionToTest(stringIn.value);
    let t2 = performance.now();
    console.log(`time taken is ${t2 - t1}`);
});

function functionToTest(str) {
    let total = 0;
    for (const c of str) {
        total++;
    }
    return total;
}
<input id="str">
<button type="button">Test string</button>
I tried using await too, but the result is the same (see the code snippet below). The function enclosing the code below is async:
let stringArr = this.inputString.split(' ');
let longest = '';
const t1 = performance.now();
let length = await new Promise(resolve => {
    stringArr.map((item, i) => {
        longest = longest.length < item.length ? item : longest;
        i === stringArr.length - 1 ? resolve(longest) : '';
    });
});
const diff = performance.now() - t1;
console.log(diff);
this.result = `The time taken in milliseconds is ${diff}`;
I've also tried this answer, but it is also inconsistent.
As a workaround I tried using the console.time feature, but it doesn't allow the time to be rendered to the page and isn't accurate either.
Update: I want to build an interface like jsPerf, which will be quite similar to it but for a different purpose. Mostly I would like to compare different functions, which will depend on user inputs.
There are three things which may help you understand what is happening:
Browsers reduce performance.now() precision to mitigate the Meltdown and Spectre attacks, so Chrome gives at most 0.1 ms precision, Firefox 1 ms, etc. This makes it impossible to measure very small timeframes; if a function is extremely quick, 0 ms is an understandable result. (Thanks @kaiido.) Source: paper, additional links here.
Any code (including JS) in a multithreaded environment will not execute with constant performance (at least because of OS thread switching), so getting consistent values from single runs is an unreachable goal. To get a meaningful number, the function should be executed many times and the average taken. This even works with the low-precision performance.now(). (Boring explanation: if the function is much faster than 0.1 ms, the browser will usually report 0 ms, but from time to time a run will win the lottery and the browser will return 0.1 ms; longer functions win this lottery more often.)
Most JS engines have an optimizing compiler. Optimization is expensive, so engines only optimize functions that are used often. This explains the performance increase after several runs: at first the function is executed in the slowest way, and after several executions it is optimized and performance increases. (So it is worth adding warm-up runs; see the sketch after the rounding table below.)
I was able to get non-zero numbers in your code snippet by copy-pasting a 70 kB file into the input. After the 3rd run the function got optimized, but even after that, performance is not constant:
time taken is 11.49999990593642
time taken is 5.100000067614019
time taken is 2.3999999975785613
time taken is 2.199999988079071
time taken is 2.199999988079071
time taken is 2.099999925121665
time taken is 2.3999999975785613
time taken is 1.7999999690800905
time taken is 1.3000000035390258
time taken is 2.099999925121665
time taken is 1.9000000320374966
time taken is 2.300000051036477
Explanation of the 1st point
Let's say two events happened, and the goal is to find the time between them.
The 1st event happened at time A and the second at time B. The browser rounds the precise values A and B and returns the rounded values.
Several cases to look at:

A        B        B-A     floor(A)   floor(B)   floor(B)-floor(A)
12.001   12.003   0.002   12         12         0
12.999   13.001   0.002   12         13         1

So the same real interval of 0.002 ms can be reported as either 0 ms or a whole millisecond, depending on where the two timestamps fall relative to the rounding boundary.
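Putting the 2nd and 3rd points together, a minimal sketch of a fairer measurement (reusing functionToTest from the question): do a number of untimed warm-up calls so the optimizing compiler has a chance to kick in, then time a large batch of calls and report the average per call.

// A sketch, not a benchmarking library: warm up first, then average many timed runs.
function measureAverage(fn, input, warmupRuns, timedRuns) {
    for (let i = 0; i < warmupRuns; i++) {
        fn(input);                          // untimed warm-up calls, lets the JIT optimize
    }
    const t0 = performance.now();
    for (let i = 0; i < timedRuns; i++) {
        fn(input);                          // timed calls
    }
    return (performance.now() - t0) / timedRuns;   // average ms per call
}

console.log('avg ms per call:', measureAverage(functionToTest, 'some long test string', 50, 1000));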
Browsers are smarter than we think; there are lots of improvements and caching techniques in place for memory allocation, repeated code execution, on-demand CPU allocation and so on. For instance V8, the JavaScript engine that powers Chrome and Node.js, caches code execution cycles and results. Furthermore, your results may be affected by the resources your browser uses during code execution, so your results may vary even across multiple execution cycles.
Since you have mentioned that you are trying to create a jsPerf clone, take a look at Benchmark.js; this library is used by the jsPerf development team.
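Roughly following the usage example in the Benchmark.js documentation (the library also needs its lodash dependency loaded when used in the browser), a comparison of two candidate functions might look like this; the two test bodies below are the documentation's toy examples, not code from the question:

// Assumes Benchmark.js (and lodash) have already been loaded on the page.
new Benchmark.Suite()
    .add('RegExp#test', function () {
        /o/.test('Hello World!');
    })
    .add('String#indexOf', function () {
        'Hello World!'.indexOf('o') > -1;
    })
    .on('cycle', function (event) {
        console.log(String(event.target));   // e.g. "RegExp#test x 1,234,567 ops/sec ±1.5%"
    })
    .on('complete', function () {
        console.log('Fastest is ' + this.filter('fastest').map('name'));
    })
    .run({ async: true });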
Running performance analysis tests is really hard, and I would suggest running them in a Node.js environment with predefined and preallocated resources in order to obtain your results.
You might want to take a look at https://github.com/anywhichway/benchtest, which reuses Mocha unit tests. Note that performance testing at the unit level should only be one part of your performance testing; you should also use simulators that emulate real-world conditions and test your code at the application level to assess network impacts, module interactions, etc.

Do I get the number of operations per second this way?

Look at this code:
function wait(time) {
    let i = 0;
    let a = Date.now();
    let x = a + (time || 0);
    let b;
    while ((b = Date.now()) <= x) ++i;
    return i;
}
If I run it in a browser (particularly Google Chrome, but I don't think it matters) as wait(1000), the machine will obviously freeze for a second and then return the value i counted up to.
Let's say it is 10,000,000 (I'm getting values close to that). The value varies every time, so let's take an average.
Did I just get the current number of operations per second of the processor in my machine?
Not at all.
What you get is the number of loop cycles completed by the Javascript process in a certain time. Each loop cycle consists of:
Creating a new Date object
Comparing two Date objects
Incrementing a Number
Incrementing the Number variable i is probably the least expensive of these, so the function is not really reporting how long the increment itself takes.
Aside from that, note that the machine is doing a lot more than running a Javascript process. You will see interference from all sorts of activity going on in the computer at the same time.
When running inside a Javascript process, you're simply too far away from the processor (in terms of software layers) to make that measurement. Beneath Javascript, there's the browser and the operating system, each of which can (and will) make decisions that affect this result.
No. You can get the number of language-level operations per second, but the actual number of machine operations per second on a whole processor is more complicated.
Firstly, the processor is not dedicated to the browser alone, so it is actually switching back and forth between prioritized processes. On top of that, memory access is abstracted away and the processor uses extra operations to manage memory (page flushing, etc.), and this is not transparent to you at any given time. And physically, the real clock rate of the processor is dynamic... You can see it's pretty complicated already ;)
To really calculate the number of machine operations per second you need to measure the clock rate of the processor and multiply it by the number of instructions per cycle the processor can perform. Again, this varies, but the manufacturer's specs will likely be a good enough estimate :P
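For a rough sense of scale (purely illustrative numbers, not a measurement): a core running at 3 GHz that retires 4 instructions per cycle would peak at about 3,000,000,000 x 4 = 12 billion machine instructions per second.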
If you wanted to use a program to measure this, you'd need to somehow dedicate 100% of the processor to your program and have it run a predictable set of instructions with no other hang-ups (like memory management). Then you'd need to include the number of instructions it takes to load the program's instructions into the code caches. This is not really feasible, however.
As others have pointed out, this will not help you determine the number of operations the processor does per second, for the reasons given in the other answers. I do however think that a similar experiment could be set up to estimate the number of operations executed by the JavaScript interpreter running in your browser. For example, given a function factorial(n) that runs in O(n), you could execute factorial(100) repeatedly over the course of a minute.
function factorial(n) {          // minimal O(n) helper so the snippet is self-contained
    let result = 1;
    for (let i = 2; i <= n; i++) result *= i;
    return result;
}

function test() {
    let start = Date.now();
    let end = start + 60 * 1000;
    let numberOfExecutions = 0;
    while (Date.now() < end) {
        factorial(100);
        numberOfExecutions++;
    }
    // factorial(100) ~ 100 operations, counted over 60 seconds
    return (numberOfExecutions * 100) / 60;   // approximate operations per second
}
The idea here is that factorial is by far the most time-consuming function in the code, and since factorial runs in O(n), we know factorial(100) is approximately 100 operations. Note that this will not be exact, and larger numbers will give better approximations. Also remember that this estimates the number of operations executed by your interpreter, not by your processor.
There is a lot of truth in all the previous comments, but I want to invert the reasoning a little bit, because I believe it is easier to understand it that way.
I believe the fairest way to calculate it is with the most basic loop, not relying on any dates or functions, and instead calculating the values afterwards.
You will see that the smaller the run, the bigger the initial overhead is. That means it takes a small amount of time to start and finish each call, but at a certain point they all converge on a number that can reasonably be treated as how many operations per second JavaScript can run.
My example:
const oneMillion = 1_000_000;
const tenMillion = 10_000_000;
const oneHundredMillion = 100_000_000;
const oneBillion = 1_000_000_000;
const tenBillion = 10_000_000_000;
const oneHundredBillion = 100_000_000_000;
const oneTrillion = 1_000_000_000_000;
function runABunchOfTimes(times) {
    console.time('timer');
    for (let i = 0; i < times; ++i) {}
    console.timeEnd('timer');
}
I've tried this on a 2020 MacBook that already had a lot of load on it, with many processes running.
At the very end I take the time the console showed the run took and divide the number of iterations by it. The oneTrillion and oneBillion runs come out virtually the same, whereas for oneMillion and 1000 iterations you can see they are not as performant, due to the initial overhead of setting up the for loop in the first place.
We usually try to stay away from O(n^2) and slower functions exactly because we do not want to hit that maximum. If you were to perform a find inside a map over an array of all the cities in the world (around 10,000 according to Google, I haven't counted) you would already reach 100,000,000 iterations, and they would certainly not be as simple as iterating over an empty body like in my example. Your code would then take minutes to run, but I am sure you are aware of this, and that is why you posted the question in the first place.
Calculating how long it would take is tricky, not only because of the above, but also because you cannot predict which device will run your function. Nowadays the same code might run on my TV, my watch, or a Raspberry Pi, and none of them would be nearly as fast as the computer I was using when writing these functions. But if I were to benchmark a device, I would use something like the function above, since it is the simplest loop operation I could think of.

How can I make my setTimeout functions run at the same speed?

Preface: I have a demo of the problem on my personal site (I hope this is ok; if not, I can try to set it up on jsFiddle). I intend this question to be a little fun, while also trying to understand the time functions take in JavaScript.
I'm incrementing the value of progress bars on a timeout. Ideally (if functions ran instantaneously) they should fill at the same speed, but in the real world they do not. The code is this:
function setProgress(bar, myPer) {
    bar.progressbar({ value: myPer })
       .children('.ui-progressbar-value')
       .html(myPer.toPrecision(3) + '%')
       .attr('align', 'center');
    myPer++;
    if (myPer == 100) { myPer = 0; }
}

function moveProgress(bar, myPer, inc, delay) {
    setProgress(bar, myPer);
    if (myPer >= 100) { myPer = 0; }
    setTimeout(function() { moveProgress(bar, myPer + inc, inc, delay); }, delay);
}

$(function() {
    moveProgress($(".progressBar#bar1"), 0, 1, 500);
    moveProgress($(".progressBar#bar2"), 0, 1, 500);
    moveProgress($(".progressBar#bar3"), 0, .1, 50);
    moveProgress($(".progressBar#bar4"), 0, .01, 5);
});
Naively, one would think they should all run (fill the progress bar) at the same speed.
However, in the first two bars (if we call "setting the progress bar" a single operation) I'm performing one operation every 500 ms, for a total of 100 operations to fill the bar; in the third, one operation every 50 ms for a total of 1,000 operations; and in the fourth, one operation every 5 ms for a total of 10,000 operations.
Which part of my code takes the longest and causes these speed differences, and how could it be changed so the bars still behave as they do now (the fourth bar moves in smaller increments) but fill at the same speed?
The biggest problem with using setTimeout for things like this is that your code's execution happens between timeouts and is not accounted for in the delay you pass to setTimeout. If your delay is 5 ms and your code takes 5 ms to execute, you're essentially doubling your time.
Another factor is that once your timeout fires, if another piece of code is already executing, it will have to wait for that to finish, delaying execution further.
This is very similar to the problems people run into when trying to use setTimeout for a clock or stopwatch. The solution is to compare the current time with the time the program started and calculate the progress from that. You could do something similar: check how long it has been since you started and set the percentage based on that.
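A minimal sketch of that idea, reusing setProgress and the jQuery setup from the question (totalMs is an assumed parameter for how long a full fill should take, e.g. 50,000 ms for every bar): the percentage is derived from elapsed wall-clock time, so timer drift and execution time no longer accumulate.

// Drive each bar from elapsed time instead of counting ticks.
function moveProgressByTime(bar, totalMs, delay) {
    var start = Date.now();
    function tick() {
        var elapsed = Date.now() - start;
        var myPer = Math.min((elapsed / totalMs) * 100, 100);
        setProgress(bar, myPer);
        if (elapsed < totalMs) {
            setTimeout(tick, delay);
        }
    }
    tick();
}

$(function() {
    // Both bars finish after ~50 s, regardless of how often they repaint.
    moveProgressByTime($(".progressBar#bar1"), 50000, 500);
    moveProgressByTime($(".progressBar#bar4"), 50000, 5);
});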
Two things cause the speed difference: first, the fact that you are executing more code to fill the bottom bar (as you allude to in the second-to-last paragraph). Also, every time you set a timeout, your browser queues it up; the actual delay may be longer than what you specify, depending on how much is already in the queue (see MDN on window.setTimeout).
Love the question. I don't have a very precise answer, but here are my 2 cents:
JavaScript is a very fast language that deals very well with its event loop, and therefore eats setTimeouts and setIntervals for breakfast.
There are limits though, and they depend on a large number of factors, such as browser and computer speed, the number of functions on the event loop, the complexity of the code to execute, and the timeout values...
In this case, I think it's obvious that executing a function every 500 ms is going to behave a lot better than executing it every 50 ms, let alone every 5 ms. If you take into account that you are running them all on top of each other, you can predict that the performance will not be optimal.
You can try this exercise:
Take the 500 ms one and run it alone. Note the total time it took to fill the bar (right here you will see that it takes a little longer than predicted).
Then try executing two 500 ms timeouts at the same time, and see that the total time gets a bit longer.
If you then add the 50 ms one, and then the 5 ms one, you will see that you lose performance every time...

How to tell what's causing slow HTML5 Canvas performance?

How can I tell whether the canvas's slow performance is caused by the drawing itself, or by the underlying logic that calculates what should be drawn and where?
The second part of my question is: how do I calculate the canvas FPS? Here's how I did it; it seems logical to me, but I could be absolutely wrong. Is this the right way to do it?
var fps = 0;
setInterval(draw, 1000/30);
setInterval(checkFps, 1000);

function draw() {
    //...
    fps++;
}

function checkFps() {
    $("#fps").html(fps);
    fps = 0;
}
Edit: I replaced the above with the following, based on Nathan's comments:
var lastTimeStamp = new Date().getTime();
function draw() {
    //...
    var now = new Date().getTime();
    $("#fps").html(Math.floor(1000/(now - lastTimeStamp)));
    lastTimeStamp = now;
}
So how's this one? You could also report only the difference in ms since the last update; performance differences can be seen that way too. By the way, I also did a side-by-side comparison of the two, and they usually moved pretty much together (a difference of 2 at most); however, the latter one had bigger spikes when performance was extraordinarily low.
Your FPS code is definitely wrong
setInterval(checkFps, 1000);
Nothing guarantees this function will be called exactly every second (it could be more than 1000 ms, or less, but probably more), so
function checkFps() {
    $("#fps").html(fps);
    fps = 0;
}
is wrong (if fps is 32 at that moment, it is possible that those 32 frames actually took 1.5 s, in an extreme case).
Better is to look at the real time that has passed since the last update and calculate frames / realtimepassed (JavaScript can give you the current time via Date.now() with millisecond resolution, or performance.now() with finer resolution).
fps is, by the way, not a good name for the variable: it holds the number of frames since the last update, not frames per second, so frames would be a better name.
In the same way,
setInterval(draw, 1000/30);
is problematic if you want to achieve 30 FPS: setInterval is not very accurate (it will probably wait longer than you ask for), so you will end up with a lower FPS even if the CPU is able to handle the load.
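A minimal sketch of that suggestion (variable names are mine, not from the question): count frames and, roughly once per second, divide the frame count by the real elapsed time measured with performance.now().

// FPS from frames counted over the real elapsed time, not an assumed 1000 ms interval.
var frames = 0;
var lastFpsUpdate = performance.now();

function draw() {
    // ... actual canvas drawing ...
    frames++;
    var now = performance.now();
    if (now - lastFpsUpdate >= 1000) {
        var fps = frames / ((now - lastFpsUpdate) / 1000);
        $("#fps").html(Math.round(fps));
        frames = 0;
        lastFpsUpdate = now;
    }
}

setInterval(draw, 1000/30);   // same scheduling as in the question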
WebKit and Firebug both provide profiling tools to see where CPU cycles are being spent in your JavaScript code. I'd recommend starting there.
For the FPS calculation, I don't think your code is going to work, but I don't have any good recommendation :(
The reason being: most (all?) browsers use a dedicated thread for running JavaScript and a different thread for UI updates. If the JavaScript thread is busy, the UI thread won't be triggered.
So, you can run some JavaScript looping code that "updates" the UI 1000 times in succession (for instance, setting the color of some text), but unless you add a setTimeout to allow the UI thread to paint the change, you won't see any changes until the 1000 iterations are finished.
That said, I don't know if you can confidently increment your fps counter at the end of the draw() routine. Sure, your JavaScript function has finished, but did the browser actually draw?
Check whether you are using some innerHTML method to debug your project. This can slow your project down in a way you can't imagine, especially if you do concatenation like innerHTML += newDebugValues;
Or, as desau said, profile your CPU usage with Firebug or the WebKit inspector.
