I am testing my JavaScript's speed using the console.time() method, so that it logs how long a function takes to run on page load.
if (window.devicePixelRatio > 1) {
    var images = $('img');
    console.time('testing');
    var imagesObj = images.length;
    for (var i = 0; i < imagesObj; i++) {
        var lowres = images.eq(i).attr('src'),
            highres = lowres.replace(".", "_2x.");
        images.eq(i).attr('src', highres);
    }
    console.timeEnd('testing');
}
But every time I reload the page it gives me a noticeably different value. Is this behaviour expected? Shouldn't it give me a consistent value?
I have loaded it 5 times in a row and the values are the following:
5.051 ms
4.977 ms
8.009 ms
5.325 ms
6.951 ms
I am running this on XAMPP and in Chrome btw.
Thanks in advance
console.time/timeEnd is working correctly, and the timing does indeed fluctuate by a small amount.
However, when dealing with such small numbers (the timings are all less than 1/100 of a second!) the deviation is irrelevant and can be influenced by a huge number of factors.
There is always some variation; it can be caused by a number of things:
the server responding slightly slower (that can also block other parts of the browser)
your processor doing something else in the meantime
your processor clocking down to save power
random latency in the network
a browser extension doing something in the background
Also, Firefox has a system that tries to intelligently optimize JavaScript execution; in most cases it performs better, but the effect is somewhat random.
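Given that noise, a single run says very little; averaging many runs smooths it out. A minimal sketch (not from the original post; averageRuntime and the swapToRetina stand-in are hypothetical names):

// Hypothetical helper: time fn over many runs and average, so one-off
// hiccups (GC pauses, power states, extensions) wash out.
function averageRuntime(fn, runs) {
    var total = 0;
    for (var i = 0; i < runs; i++) {
        var start = performance.now();
        fn();
        total += performance.now() - start;
    }
    return total / runs;
}

// Usage sketch, where swapToRetina would wrap the image-replacement loop above:
// console.log(averageRuntime(swapToRetina, 100) + ' ms per run');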
If you look at the code below you will see a measurement of the performance of a very simple for loop.
var res = 0;

function runs() {
    var a1 = performance.now();
    var x = 0;
    for (var i = 0; i < 10**9; i++) {
        x++;
    }
    var a2 = performance.now();
    res += (a2 - a1);
}

for (var j = 0; j < 10; j++) {
    runs();
}

console.log(`=${res/10}`);
Additionally, for good measure, this runs 10 times and averages the results. Now, the issue is that the measurement is not reliable: it depends heavily on your CPU, memory, and the other programs running on your device.
The first time it may run in 9 s, the second in 23 s, and a subsequent call may take 8 s.
Is there a way to measure performance regardless of CPU, memory and everything else?
I am after something that gives a relative number, FLOPS or any other measure, such that when you compare two pieces of code you know for certain that one executes faster than the other.
For instance, a for loop with 1005 iterations should always show as slower than one with 1000 iterations.
Note: saying FLOPS is wrong in this context, as it means floating-point operations per second.
I would like to exclude time completely; I only need FPOs. Meaning I do not care about seconds, just about reliably knowing that, regardless of the device, the same code will always take, let's say, 2000 FPOs to execute.
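A hedged sketch of what the question is asking for (countOps and the one-operation-per-iteration convention are my assumptions, not a standard measure):

// Hypothetical: count abstract operations ("FPOs") instead of wall-clock
// time. The count is deterministic, so the same code gives the same number
// on any device.
function countOps(iterations) {
    let ops = 0;
    for (let i = 0; i < iterations; i++) {
        ops++; // convention: one "FPO" per loop iteration
    }
    return ops;
}

console.log(countOps(1000)); // always 1000
console.log(countOps(1005)); // always 1005, so reliably "slower" than 1000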
As you can see in the code below, even increasing the size of the string can lead to a 0 millisecond difference. Moreover, the results are inconsistent as the string length increases.
Am I doing something wrong here?
let stringIn = document.getElementById('str');
let button = document.querySelector('button');

button.addEventListener('click', () => {
    let t1 = performance.now();
    functionToTest(stringIn.value);
    let t2 = performance.now();
    console.log(`time taken is ${t2 - t1}`);
});

function functionToTest(str) {
    let total = 0;
    for (const i of str) { // declare the loop variable (it leaked as a global)
        total++;
    }
    return total;
}
<input id="str">
<button type="button">Test string</button>
I tried using await too, but the result is the same (see the code snippet below). The function enclosing this code is async:
let stringArr = this.inputString.split(' ');
let longest = '';
const t1 = performance.now();
let length = await new Promise(resolve => {
    stringArr.map((item, i) => {
        // keep the longer of the two strings
        longest = longest.length < item.length ? item : longest;
        i === stringArr.length - 1 ? resolve(longest) : '';
    });
});
const diff = performance.now() - t1;
console.log(diff);
this.result = `The time taken in milliseconds is ${diff}`;
I've tried this answer as well, but it is also inconsistent.
As a workaround I tried the console.time feature, but it doesn't allow the time to be rendered and isn't accurate either.
Update: I want to build an interface like jsPerf, quite similar to it but for a different purpose. Mostly I would like to compare different functions that depend on user inputs.
There are 3 things that may help you understand what is happening:
Browsers reduce performance.now() precision to mitigate the Meltdown and Spectre attacks, so Chrome gives at most 0.1 ms precision, Firefox 1 ms, etc. This makes it impossible to measure small timeframes: if a function is extremely quick, 0 ms is an understandable result. (Thanks #kaiido; see the Spectre paper for details.)
Any code (including JS) in a multithreaded environment will not execute with constant performance (at least due to OS thread switching), so getting consistent values from several single runs is an unreachable goal. To get a precise number, the function should be executed multiple times and the average taken. This even works with the low-precision performance.now(). (Boring explanation: if a function is much faster than 0.1 ms, the browser will usually report 0 ms, but from time to time a run will win the lottery and the browser will return 0.1 ms; longer functions win this lottery more often.)
There is "optimizing compiler" in most JS engines. It optimizes often used functions. Optimization is expensive, so JS engines optimize only often used functions. This explains performance increase after several runs. At first function is executed in slowest way. After several executions, it is optimized, and performance increases. (should add warmup runs?)
I was able to get non-zero numbers in your code snippet by copy-pasting a 70 KB file into the input. After the 3rd run the function was optimized, but even after that the performance is not constant:
time taken is 11.49999990593642
time taken is 5.100000067614019
time taken is 2.3999999975785613
time taken is 2.199999988079071
time taken is 2.199999988079071
time taken is 2.099999925121665
time taken is 2.3999999975785613
time taken is 1.7999999690800905
time taken is 1.3000000035390258
time taken is 2.099999925121665
time taken is 1.9000000320374966
time taken is 2.300000051036477
Explanation of the 1st point
Let's say two events happened, and the goal is to find the time between them. The 1st event happened at time A and the second at time B. The browser rounds the precise values A and B and returns the rounded values.
Several cases to look at:
A        B        B-A     floor(A)   floor(B)   floor(B)-floor(A)
12.001   12.003   0.002   12         12         0
11.999   12.001   0.002   11         12         1
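The same effect as a runnable toy (the timestamps are the hypothetical values from the table, not real performance.now() readings):

// Rounding both timestamps before subtracting turns the same true
// 0.002 ms gap into either 0 or 1, depending on where the values fall.
const cases = [[12.001, 12.003], [11.999, 12.001]];
for (const [a, b] of cases) {
    console.log(`true: ${(b - a).toFixed(3)}, rounded: ${Math.floor(b) - Math.floor(a)}`);
}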
Browsers are smarter than we think; lots of improvements and caching techniques are in place for memory allocation, repeated code execution, on-demand CPU allocation and so on. For instance, V8, the JavaScript engine that powers Chrome and Node.js, caches code execution cycles and results. Furthermore, your results may be affected by the resources your browser uses during execution, so your results may vary even across multiple execution cycles.
As you have mentioned that you are trying to create a jsPerf clone, take a look at Benchmark.js; this library is used by the jsPerf development team.
Running performance analysis tests is really hard, and I would suggest running them in a Node.js environment with predefined and preallocated resources in order to obtain your results.
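For reference, a minimal Benchmark.js suite looks like this (standard usage from the library's documentation):

// Compare two implementations with Benchmark.js (npm install benchmark).
const Benchmark = require('benchmark');
const suite = new Benchmark.Suite();

suite
    .add('RegExp#test', () => /o/.test('Hello World!'))
    .add('String#indexOf', () => 'Hello World!'.indexOf('o') > -1)
    .on('cycle', event => console.log(String(event.target)))
    .on('complete', function () {
        console.log('Fastest is ' + this.filter('fastest').map('name'));
    })
    .run({ async: true });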
You might want to take a look at https://github.com/anywhichway/benchtest, which simply reuses Mocha unit tests. Note that performance testing at the unit level should only be one part of your performance testing; you should also use simulators that emulate real-world conditions and test your code at the application level to assess network impacts, module interactions, etc.
Look at this code:
function wait(time) {
    let i = 0;
    let a = Date.now();
    let x = a + (time || 0);
    let b;
    while ((b = Date.now()) <= x) ++i;
    return i;
}
If I run it in a browser (particularly Google Chrome, but I don't think it matters) as wait(1000), the machine will obviously freeze for a second and then return the accumulated value of i.
Let it be 10 000 000 (I'm getting values close to that). The value varies every time, so let's take the average.
Did I just get the current number of operations per second of the processor in my machine?
Not at all.
What you get is the number of loop cycles completed by the Javascript process in a certain time. Each loop cycle consists of:
Creating a new Date object
Comparing two Date objects
Incrementing a Number
Incrementing the Number variable i is probably the least expensive of these, so the function is not really reporting how long the increment alone takes.
Aside from that, note that the machine is doing a lot more than running a Javascript process. You will see interference from all sorts of activity going on in the computer at the same time.
When running inside a Javascript process, you're simply too far away from the processor (in terms of software layers) to make that measurement. Beneath Javascript, there's the browser and the operating system, each of which can (and will) make decisions that affect this result.
No. You can get the number of language operations per second, though the actual number of machine operations per second on a whole processor is more complicated.
Firstly, the processor is not wholly dedicated to the browser; it is actually likely switching back and forth between prioritized processes. On top of that, memory access is obscured, and the processor uses extra operations to manage memory (page flushing, etc.), which is not transparent to you at any given time. On top of that, physical properties mean that the real clock rate of the processor is dynamic. You can see it's pretty complicated already ;)
To really calculate the number of machine operations per second, you need to measure the clock rate of the processor and multiply it by the number of instructions per cycle the processor can perform; for example, a 2.0 GHz core that can retire 4 instructions per cycle peaks at about 8 billion instructions per second. Again this varies, but the manufacturer's specs will likely be a good enough estimate :P.
If you wanted to use a program to measure this, you'd need to somehow dedicate 100% of the processor to your program and have it run a predictable set of instructions with no other hangups (like memory management). Then you would need to include the number of instructions it takes to load the program into the code caches. This is not really feasible, however.
As others have pointed out, this will not help you determine the number of operations the processor does per second, for the reasons the prior answers give. I do, however, think a similar experiment could be set up to estimate the number of operations executed by the JavaScript interpreter running in your browser. For example, given a function factorial(n) that runs in O(n), you could execute factorial(100) repeatedly over the course of a minute.
// O(n) function used as the unit of work: factorial(100) is ~100 operations.
function factorial(n) {
    let result = 1;
    for (let i = 2; i <= n; i++) result *= i;
    return result;
}

function test() {
    let start = Date.now();
    let end = start + 60 * 1000;
    let numberOfExecutions = 0;
    while (Date.now() < end) {
        factorial(100);
        numberOfExecutions++;
    }
    // ~100 operations per execution, spread over 60 seconds:
    return numberOfExecutions * 100 / 60; // operations per second
}
The idea here is that factorial is by far the most time-consuming function in the code. And since factorial runs in O(n), we know factorial(100) is approximately 100 operations. Note that this will not be exact, and larger inputs will give better approximations. Also remember that this estimates the number of operations executed by your interpreter, not by your processor.
There is a lot of truth in all the previous comments, but I want to invert the reasoning a little, because I believe it is easier to understand that way.
I believe the fairest way to calculate it is with the most basic loop, not relying on any dates or functions, and instead calculating the values afterwards.
You will see that the smaller the run, the bigger the initial overhead is. That means it takes a small amount of time to start and finish each call, but at a certain point they all approach a number that can reasonably be considered how many operations per second JavaScript can run.
My example:
const oneMillion = 1_000_000;
const tenMillion = 10_000_000;
const oneHundredMillion = 100_000_000;
const oneBillion = 1_000_000_000;
const tenBillion = 10_000_000_000;
const oneHundredBillion = 100_000_000_000;
const oneTrillion = 1_000_000_000_000;
function runABunchOfTimes(times) {
    console.time('timer');
    for (let i = 0; i < times; ++i) {}
    console.timeEnd('timer');
}
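Usage would look something like this (the calls and the ops-per-second arithmetic below are my sketch; the original answer reported its timings from the console):

// console.time prints the elapsed ms for each size.
runABunchOfTimes(oneMillion);
runABunchOfTimes(oneBillion); // larger sizes amortize the startup cost

// Operations per second is then iterations / elapsed seconds, e.g.
// oneBillion / (elapsedMs / 1000).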
I tried this on a 2020 MacBook that already had a lot of load on it, with many processes running. At the very end I take the time the console shows and divide the number of runs by it. The oneTrillion and oneBillion runs are virtually the same; however, when you go down to oneMillion and 1000 you can see that they are not as performant, due to the initial cost of creating the for loop in the first place.
We usually try to stay away from O(n^2) and slower functions precisely because we do not want to hit that maximum. If you were to perform a find inside a map over an array of all the cities in the world (around 10_000 according to Google; I haven't counted), you would already reach 100_000_000 iterations, and they would certainly not be as simple as iterating over nothing as in my example. Your code would then take minutes to run, but I am sure you are aware of this, and that is why you posted the question in the first place.
Calculating how long it would take is tricky, not only because of the above, but also because you cannot predict which device will run your function. Nowadays I can open it on my TV, my watch, or a Raspberry Pi, and none of them would be nearly as fast as the computer I was using when writing these functions. But if I were to benchmark a device, I would use something like the function above, since it is the simplest loop operation I could think of.
My processor runs at 2.0 GHz.
I have a new computer with as much software removed as possible except my development tools. This system is clean with no malware.
When I run the code below I get about 2M loops per second; that is about 2 MHz.
Suppose doing an addition and doing a compare takes 10x the simplest operation; then I get about 20 MHz.
Why do I not get more utilization of my processor?
var Utility = {
    time: function () {
        var end_time,
            start_time,
            index = 0;
        start_time = new Date().getTime();
        while (index <= 1000000) {
            index++;
        }
        end_time = new Date().getTime();
        return (end_time - start_time); // elapsed milliseconds
    }
};
This is not really about JavaScript; it's more about the browser and the way it handles JavaScript.
Each browser does this differently, but most modern browsers won't let JavaScript take 100% of the resources, to keep the client machine from becoming unresponsive.
Bottom line: you can't do such a thing with client-side scripting; you'd have to use a "real" application with full access to the computer.
Suppose doing an addition, reading a clock, and doing a compare takes 100x the simplest operation (very conservative); I get (0.1 MHz * 100) = 10 MHz.
This is not how computers work, and measuring speed like this is not going to get you anywhere. Besides, it depends a lot on the JavaScript engine being used. I have heard a lot of good things about the V8 engine that Chrome uses; Opera's seems to be pretty fast too.
So try it in different browsers to get a real comparison. But if you want to measure how much time some operation took (pseudo code):
var start = get_current_time();
// do the complex operation
var end = get_current_time();
var time_it_took = end - start;
The time functions should have as small a granularity as possible.
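A concrete JavaScript version of the pseudocode, using performance.now() for its finer granularity (doComplexOperation is a placeholder name):

// performance.now() returns a sub-millisecond, monotonic timestamp,
// unlike Date-based clocks, which only have ~1 ms granularity.
var start = performance.now();
doComplexOperation(); // placeholder for the code being measured
var timeItTook = performance.now() - start;
console.log('took ' + timeItTook + ' ms');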
Well, for one thing, you are creating a new Date object every time through the loop. Why not create the Date object for comparison before you enter the loop?
How can I tell if the canvas's slow performance is caused by the drawing itself, or the underlying logic that calculates what should be drawn and where?
The second part of my question is: how do I calculate the canvas FPS? Here's how I did it; it seems logical to me, but I may be absolutely wrong. Is this the right way to do it?
var fps = 0;

setInterval(draw, 1000/30);
setInterval(checkFps, 1000);

function draw() {
    //...
    fps++;
}

function checkFps() {
    $("#fps").html(fps);
    fps = 0;
}
Edit:
I replaced the above with the following, according to Nathan's comments:
var lastTimeStamp = new Date().getTime();

function draw() {
    //...
    var now = new Date().getTime();
    $("#fps").html(Math.floor(1000 / (now - lastTimeStamp)));
    lastTimeStamp = now;
}
So how's this one? You could also calculate only the difference in milliseconds since the last update; performance differences can be seen that way too. By the way, I also ran a side-by-side comparison of the two, and they usually moved pretty much together (a difference of 2 at most); however, the latter had bigger spikes when performance was extraordinarily low.
Your FPS code is definitely wrong
setInterval(checkFps, 1000);
Nothing assures that this function will be called exactly every second (it could be more than 1000 ms, or less, but probably more), so
function checkFps() {
    $("#fps").html(fps);
    fps = 0;
}
is wrong (if fps is 32 at that moment, it is possible that you had 32 frames in 1.5 s, in an extreme case).
Better is to check how much real time has passed since the last update and calculate frames / realTimePassed (JavaScript has functions to get the time; I'm not sure whether they are accurate enough, i.e. ms or better). See the sketch at the end of this answer.
By the way, fps is not a good name: it contains the number of frames (since the last update), not the number of frames per second, so frames would be a better name.
In the same way
setInterval(draw, 1000/30);
is wrong: you want to achieve an FPS of 30, but since setInterval is not very accurate (it will probably wait longer than you ask), you will end up with a lower FPS even if the CPU can handle the load.
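Putting both corrections together, a minimal sketch (assuming jQuery and the #fps element from the question):

var frames = 0;
var last = new Date().getTime();

function draw() {
    //...
    frames++; // count frames; name says what it holds
}

function checkFps() {
    var now = new Date().getTime();
    // divide by the real time that passed instead of assuming exactly 1 s
    $("#fps").html((frames / ((now - last) / 1000)).toFixed(1));
    frames = 0;
    last = now;
}

setInterval(draw, 1000 / 30); // the browser may deliver fewer ticks than 30/s
setInterval(checkFps, 1000);  // the interval may drift; we measure it anyway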
Webkit and Firebug both provide profiling tools to see where CPU cycles are being spent in your javascript code. I'd recommend starting there.
For the FPS calculation, I don't think your code is going to work, but I don't have any good recommendation :(
Reason being: Most (all?) browsers use a dedicated thread for running javascript and a different thread for running UI updates. If the Javascript thread is busy, the UI thread won't be triggered.
So, you can run some javascript looping code that'll "update" the UI 1000 times in succession (for instance, setting the color of some text) - but unless you add a setTimeout to allow the UI thread to paint the change, you won't see any changes until the 1000 iterations are finished.
That said, I don't know if you can assertively increment your fps counter at the end of the draw() routine. Sure, your javascript function has finished, but did the browser actually draw?
Check whether you use some innerHTML method to debug your project. This can slow your project down in ways you can't imagine, especially if you do concatenation like innerHTML += newDebugValues;
Or, like desau said, profile your CPU usage with Firebug or WebKit's built-in debugger.