performance.now() timing erratic in Chrome?

I am using performance.now() to get a method's start and end time. I have found the results in Firefox to be a fairly consistent 10-12 ms. In Chrome, the timings vary wildly from 30-70 ms.
I am not concerned about the fact that Firefox runs faster (as browser JS implementations will vary) so much as the wide spread of results encountered in Chrome that makes getting an accurate result impossible. Comments in this article seem to imply that the Chrome implementation is only accurate to 1ms in any event - is this correct?
Does anyone have any suggestions as to what is going on, or how to produce more accurate and consistent performance evaluations?

This could be one of two problems: either performance.now() is recording the wrong times (which seems unlikely), or Chrome is taking a variable amount of time to execute.
What you should do is test which one it is: use Date.now() instead of performance.now() to find the execution time. If the results stabilise, the problem is in performance.now(); otherwise it's Chrome.
The main advantage of performance.now() is documented as being that it can record with up to a microsecond's accuracy. However, this is unlikely to be required for the vast majority of problems, so using Date instead may well sort you out.
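As a quick sanity check, here's a minimal sketch of timing the same work with both clocks (someMethod is a placeholder for your own function):

function someMethod() {
  // Placeholder workload; substitute the method you're measuring.
  var s = 0;
  for (var i = 0; i < 1e6; i++) s += i;
  return s;
}

var d0 = Date.now();
someMethod();
var d1 = Date.now();
console.log("Date.now(): " + (d1 - d0) + " ms");

var p0 = performance.now();
someMethod();
var p1 = performance.now();
console.log("performance.now(): " + (p1 - p0) + " ms");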

I know this is late, but performance.now() is accurate. However, Chrome's V8 engine does aggressive optimization on the code, and that's where the inconsistencies come from. The variation in time is case-specific, since V8 optimizes as it goes, and if you force it to de-optimize (e.g. by changing a variable's type from string to number) it will cost a lot performance-wise.
Additionally, calling the same pure function multiple times will make the V8 engine optimize it, so it becomes almost instantaneous over several repetitions.
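For illustration, here is a minimal sketch of the effect; whether a de-optimization actually triggers, and how big the difference is, depends on your machine and Chrome version:

function add(a, b) {
  return a + b;
}

// Warm up with numbers only; V8 optimizes for this type profile.
for (var i = 0; i < 1e6; i++) add(i, i);

var t0 = performance.now();
for (var i = 0; i < 1e6; i++) add(i, i);
console.log("numbers only: " + (performance.now() - t0) + " ms");

// Calling with strings changes the type feedback and can force
// V8 to de-optimize the hot function.
add("a", "b");

t0 = performance.now();
for (var i = 0; i < 1e6; i++) add(i, i);
console.log("after type change: " + (performance.now() - t0) + " ms");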
If you share the code with me, I can help you out with your specific case.

Related

Variation in JavaScript's Date accuracy

TLDR: Is there data on variation of JS's Date accuracy?
I'm looking into doing some research online, gathering reaction data for experiments.
As a contrived example, let's say a user clicks a button and a new image is displayed on the screen. For the purposes of the question, imagine that this takes somewhere between 50 and 100 ms.
I need to measure the delay between an interaction (e.g. a button click) and the displaying of the new DOM state, ideally to millisecond accuracy.
I've looked into it (including through SO with questions like this) and so far it doesn't really seem like using JS's built-in Dates will cut it, since a delay in the execution thread can push the time out of sync. This seems a bit odd to me, as dates are measured to millisecond precision, yet their accuracy appears to be much worse than that.
I'm also aware that there are other latencies associated, such as screen refresh rate. This question is purely about the execution inaccuracies.
My question is this: Is there any data on the error rates/variations etc. of the Date object across browsers/operating systems? Although it would be good to get an idea of the overall variation across systems what I'm really after is the repeat trial variation (doing the same thing on the same system over and over).
I'm looking for a solution that can be delivered entirely using a client-side browser, so no extensions or other executables that a user would need to download.
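One common client-side approach, sketched below with hypothetical button and showNewImage names, is to timestamp the interaction with performance.now() and read the clock again after the next paint via requestAnimationFrame:

button.addEventListener("click", function () {
  var start = performance.now();
  showNewImage(); // the DOM update being measured

  // The first rAF callback runs just before the next paint;
  // nesting a second one lands just after that paint.
  requestAnimationFrame(function () {
    requestAnimationFrame(function () {
      console.log((performance.now() - start).toFixed(1) +
                  " ms from click to paint");
    });
  });
});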

Would calling Performance API frequently be causing a performance issue?

I want to measure the memory usage of my web SPA using performance.memory, and the purpose is to detect whether there is any problem, i.e. a memory leak, during the webapp's lifetime.
For this reason I would need to call this API at some specific interval - it could be every 3 seconds, every 30 seconds, or every minute, ... Then I have a dilemma: to detect any issue quickly and effectively I would have to make the interval as short as I could, but that raises a concern about performance. The measuring itself could affect the performance of the webapp if measuring is an expensive task (hopefully that is not the case, though).
With this background above, I have the following questions:
Is performance.memory a method that affects the browser's main thread's performance, such that I should care about how frequently I call it?
Is there a proper way or procedure to determine whether a (JavaScript) task is affecting the performance of a device? If question 1 is uncertain, then I would have to find another way to work out the proper interval for the memory measurement.
(V8 developer here.)
Calling performance.memory is pretty fast. You can easily verify that in a quick test yourself: just call it a thousand times in a loop and measure how long that takes.
[EDIT: Thanks to @Kaiido for highlighting that this kind of microbenchmark can in general be very misleading; for example, the first operation could be much more expensive, or the benchmark scenario could be so different from the real application's scenario that the results don't carry over. Do keep in mind that writing useful microbenchmarks always requires some understanding/inspection of what's happening under the hood!
In this particular case, knowing a bit about how performance.memory works internally, the results of such a simple test are broadly accurate; however, as I explain below, they also don't matter.
--End of edit]
However, that observation is not enough to solve your problem. The reason why performance.memory is fast is also the reason why calling it frequently is pointless: it just returns a cached value, it doesn't actually do any work to measure memory consumption. (If it did, then calling it would be super slow.) Here is a quick test to demonstrate both of these points:
function f() {
  if (!performance.memory) {
    console.error("unsupported browser");
    return;
  }
  let objects = [];
  for (let i = 0; i < 100; i++) {
    // We'd expect heap usage to increase by ~1MB per iteration.
    objects.push(new Array(256000));
    let before = performance.now();
    let memory = performance.memory.usedJSHeapSize;
    let after = performance.now();
    console.log(`Took ${after - before} ms, result: ${memory}`);
  }
}
f();
(You can also see that browsers clamp timer granularity for security reasons: it's not a coincidence that the reported time is either 0ms or 0.1ms, never anything in between.)
(Second) however, that's not as much of a problem as it may seem at first, because the premise "to detect any issue quickly and effectively I would have to make the interval as short as I could" is misguided: in garbage-collected languages, it is totally normal that memory usage goes up and down, possibly by hundreds of megabytes. That's because finding objects that can be freed is an expensive exercise, so garbage collectors are carefully tuned for a good compromise: they should free up memory as quickly as possible without wasting CPU cycles on useless busywork. As part of that balance they adapt to the given workload, so there are no general numbers to quote here.
Checking memory consumption of your app in the wild is a fine idea, you're not the first to do it, and performance.memory is the best tool for it (for now). Just keep in mind that what you're looking for is a long-term upwards trend, not short-term fluctuations. So measuring every 10 minutes or so is totally sufficient, and you'll still need lots of data points to see statistically-useful results, because any single measurement could have happened right before or right after a garbage collection cycle.
For example, if you determine that all of your users have higher memory consumption after 10 seconds than after 5 seconds, then that's just working as intended, and there's nothing to be done. Whereas if you notice that after 10 minutes, readings are in the 100-300 MB range, and after 20 minutes in the 200-400 MB range, and after an hour they're 500-1000 MB, then it's time to go looking for that leak.
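As a concrete sketch of that cadence (sendToAnalytics is a hypothetical reporting function):

// performance.memory is non-standard and Chrome-only, so feature-detect it.
var SAMPLE_INTERVAL_MS = 10 * 60 * 1000; // every 10 minutes

setInterval(function () {
  if (!performance.memory) return;
  sendToAnalytics({
    timestamp: Date.now(),
    usedJSHeapSize: performance.memory.usedJSHeapSize,
  });
}, SAMPLE_INTERVAL_MS);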

Complexity of Array methods

In a team project we need to remove the first element of an array, so I called Array.prototype.shift(). Now a guy saw Why is pop faster than shift? and suggested first reversing the array, popping, and reversing again, with Array.prototype.pop() and Array.prototype.reverse().
Intuitively this should be slower (?), since my approach takes O(n), I think, while the other needs O(n) plus O(n) for the two reversals. Of course in asymptotic notation these are the same. However, notice the verb I used: think!
Of course I could write some code, use jsPerf and benchmark, but this takes time (in contrast with deciding via time complexity, e.g. an O(n³) algorithm vs an O(n) one).
However, convincing someone with my opinion alone is much harder than pointing them to the Standard (if it referred to complexity).
So how to find the Time Complexity of these methods?
For example in C++ std::reverse() clearly states:
Complexity
Linear in half the distance between first and last: Swaps elements.
how to find the Time Complexity of these methods?
You cannot find them in the Standard.
ECMAScript is a Standard for scripting languages. JavaScript is such a language.
The ECMA specification does not specify a bounding complexity. Every JavaScript engine is free to implement its own functionality, as long as it is compatible with the Standard.
As a result, you have to benchmark with jsPerf, or even look at the source code of a specific JavaScript engine, if you like.
Or, as robertklep's comment mentioned:
"The Standard doesn't mandate how these methods should be implemented. Also, JS is an interpreted language, so things like JIT and GC may start playing a role depending on array sizes and how often the code is called. In other words: benchmarking is probably your only option to get an idea on how different JS engines perform".
There is further evidence for this claim ([0], [1], [2]).
An indisputable method is to do a comparative benchmark (provided you do it correctly and the other party is not arguing in bad faith). Don't worry about theoretical complexities.
Otherwise you will have a hard time convincing someone who doesn't see the obvious: a shift moves every element once, while two reversals move each of them twice.
By the way, a shift is optimal, as every element has to move at least once. And if properly implemented as a memmove, it is very fast (while a single reversal cannot be as fast).
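If you do want numbers, a rough comparative benchmark along these lines will settle it for a given engine (results vary by engine and array size):

function bench(label, fn) {
  // Fresh array each run so both approaches see identical input.
  var arr = [];
  for (var i = 0; i < 100000; i++) arr.push(i);
  var t0 = performance.now();
  fn(arr);
  console.log(label + ": " + (performance.now() - t0).toFixed(3) + " ms");
}

bench("shift", function (a) { a.shift(); });
bench("reverse/pop/reverse", function (a) {
  a.reverse();
  a.pop();
  a.reverse();
});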

Does obfuscated javascript slow a browser down?

I have a script which is obfuscated and begins like this:
var _0xfb0b=["\x48\x2E\x31\x36\x28\x22\x4B\x2E
...it continues like that for more than 435,000 characters (the file is 425 kB), and at the end comes this:
while(_0x8b47x3--){if(_0x8b47x4[_0x8b47x3]){_0x8b47x1=_0x8b47x1[_0xfb0b[8]](
new RegExp(_0xfb0b[6]+_0x8b47x5(_0x8b47x3)+_0xfb0b[6],_0xfb0b[7]),
_0x8b47x4[_0x8b47x3]);} ;} ;return _0x8b47x1;}
(_0xfb0b[0],62,2263,_0xfb0b[3][_0xfb0b[2]](_0xfb0b[1])));
My question is: isn't it much harder for a browser to execute that compared to a non-obfuscated script, and if so, how much time am I probably losing because of the obfuscation? Especially older browsers like IE6, which are really not that performant in JS, must spend a lot more time on that, right?
It certainly does slow the browser down more significantly on older browsers (specifically when initializing), but it definitely slows things down even afterwards. I had a heavily obfuscated file that took about 1.2 seconds to initialize; unobfuscated, the same file in the same browser and PC took about 0.2 seconds. So, significant.
It depends on what the obfuscator does.
If it primarily simply renames identifiers, I would expect it to have little impact on performance unless the identifier names it used were artificially long.
If it scrambles control or data flow, it could have arbitrary impact on code execution.
Some control flow scrambling can be done with only constant overhead.
You'll have to investigate the method of obfuscation to know the answer to this. Might be easier to just measure the difference.
The obfuscator you're using seems to just store all string constants in one array and reference that array in the code where the strings originally were. The strings are encoded into the array but still come out as plain strings. (Try console.log(_0xfb0b) to see what I mean.)
It does, definitely, slow down the code INITIALIZATION. However, once that array has been initialized, the impact on the script is negligible.
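To illustrate the pattern (with a made-up array name):

// The hex escapes hide the strings from casual readers, but the engine
// decodes them once, when the array literal is created.
var _0xdemo = ["\x6C\x6F\x67", "\x68\x65\x6C\x6C\x6F"]; // ["log", "hello"]
console[_0xdemo[0]](_0xdemo[1]); // same as console.log("hello")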

Javascript: Optimizing details for Critical/Highly Processed Javascript code

I've been looking through a lot of JavaScript optimization articles, and most of them talk about string concatenation and a few other big ones found here, but I figured there had to be more details you can optimize for when speed is critical and a piece of code is processed very heavily.
Say you run this code for some reason: (unlikely, I know, but bear with me)
for( var i = 0; i < 100000000000; i++ ) {
  //Do stuff
}
And there's no way of getting around having a loop that big... You're going to want to make sure that all the stuff you're doing in that loop is optimized to the point that you can't optimize it anymore... or your website will hang.
Edit: I'm not necessarily talking about a loop; what about a function that's repeatedly called, such as onmousemove? Although in most cases we shouldn't need to use onmousemove, there are some cases that do. This question is for those cases.
Using jQuery as our JS library.
So what I would like is tips for optimizing, but only the more uncommon ones
- e.g. speed differences between switch and if-else
If you'd like to see the more common ones, you can find them here:
Optimizing Javascript for Execution Speed
Javascript Tips and Tricks; Javascript Best Practices
Optimize javascript pre-load of images
How do you optimize your Javascript
Object Oriented Javascript best practices
"And there's no way of getting around having a loop that big... "
In the real world of RIA, you HAVE to get around the big loops. As important as optimization is learning how to break large loops into small loops, and giving time to the browser to deal with its UI. Otherwise you'll give your users a bad experience and they won't come back.
So I'd argue that BEFORE you learn funky JS optimizations, you should know how to break a large loop into chunks called by setTimeout() and display a progress bar (or let animated GIFs loop).
Perceived speed is often more important than actual speed. The world of the client is different from the world of the server.
When animating, learn how to find out if you're running on a lame browser (usually IE) and aim for a lower framerate (or just don't animate). I can get some animations to go 90fps in a good browser but just 15fps in IE. You can test for the browser, but it's usually better to use timeouts and the clock to see how animations are performing.
Also, for genuine speedups, learn how to use web workers in Gears and in newer browsers.
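A minimal sketch of that chunking pattern (all names here are illustrative):

function processInChunks(items, chunkSize, workFn, doneFn) {
  var i = 0;
  function nextChunk() {
    var end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) {
      workFn(items[i]);
    }
    if (i < items.length) {
      setTimeout(nextChunk, 0); // yield so the browser can update the UI
    } else if (doneFn) {
      doneFn();
    }
  }
  nextChunk();
}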
You can speed up this mofo thus:
for (var i = 100000000; i--;) {
  //Do stuff
}
This reverses the loop so that each iteration checks only the truthiness of i--, instead of evaluating i < 100000000 and then testing that the comparison is true. Performance gain of around 50% in most browsers.
Saw this in a great Google Code talk at http://www.youtube.com/watch?v=mHtdZgou0qU
The talk contains some other great tricks.
Good luck!
If it doesn't need to be synchronous, convert the loops into a recursive implementation with setTimeout calls
for( var i = 0; i < 100000000000; i++ ) {
  //Do stuff
}
Can probably be written as
function doSomething(n)
{
  if (n === 0) return;
  // Do one iteration's worth of stuff here.
  setTimeout(function () { doSomething(n - 1); }, 0);
}
OK, this might not be a good example, but you get the idea. This way, you convert long synchronous operations into an asynchronous operation that doesn't hang the browser. Very useful in certain scenarios where something doesn't need to be done right away.
Using split & join instead of replace (note that split/join replaces every occurrence, whereas replace with a string pattern only replaces the first match):
//str.replace("needle", "hay");
str.split("needle").join("hay");
Store long reference chains in local variables:
function doit() {
  //foo.bar.moo.goo();
  //alert(foo.bar.moo.x);
  var moo = foo.bar.moo;
  moo.goo();
  alert(moo.x);
}
After seeing a few good answers by the people here, I did some more searching and found a few to add:
These are tips on JavaScript optimization for when you're looking to get down to the very little details: things that in most cases won't matter, but in some will make all the difference:
Switch vs. Else If
A commonly used tactic to wring whatever overhead might be left out of a large group of simple conditional statements is replacing If-Then-Else's with Switch statements.
In case you want to see benchmarking, you can find it here.
Loop Unrolling
To unroll a loop, you have it do more than one of the same step per iteration and increment the counter variable accordingly. This helps a lot because you then decrease the number of times you are checking the condition for the loop overall. You must be careful when doing this, though, because you may end up overshooting bounds.
See details and benchmarking here.
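A sketch of 4x unrolling (assuming, for brevity, a length that is a multiple of 4; real code must handle the remainder):

var data = new Array(1000);
for (var j = 0; j < data.length; j++) data[j] = 1;

var sum = 0;
// Four additions per iteration means a quarter of the condition checks.
for (var i = 0; i < data.length; i += 4) {
  sum += data[i];
  sum += data[i + 1];
  sum += data[i + 2];
  sum += data[i + 3];
}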
Reverse Loop Counting
Reverse your loop so that it counts down instead of up. I have also seen in various documents about optimization that comparing a number to zero is much quicker than comparing it to another number, so if you decrement and compare to zero it should be faster.
See more details and benchmarking here.
Duff's Device
It's simple, but complicated to grasp at first. Read more about it here.
Make sure to check out the improved version further down that page.
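A minimal JavaScript sketch of the idea, with process and values as placeholders; the remainder is handled up front, then the main loop runs eight steps per iteration:

function duffLoop(values, process) {
  var i = 0;
  var leftover = values.length % 8;
  var iterations = Math.floor(values.length / 8);

  while (leftover-- > 0) {
    process(values[i++]);
  }
  while (iterations-- > 0) {
    process(values[i++]); process(values[i++]);
    process(values[i++]); process(values[i++]);
    process(values[i++]); process(values[i++]);
    process(values[i++]); process(values[i++]);
  }
}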
The majority of this information was quoted directly from here: JavaScript Optimization. It's interesting: since it's such an old site, it looks at optimization from the perspective of the browser processing power available back then. Although the benchmarks recorded there are for IE 5.5 and Netscape 4.73, their benchmarking tools give accurate results for the browser you're using.
For the people who think these details don't matter, I think it says a bit about the way people perceive the power of advancing technologies. Just because our browsers process many times faster than they used to doesn't necessarily mean that we should abuse that processing power.
I'm not suggesting you spend hours optimizing two lines of code for 0.005 ms, but if you keep some of these techniques in mind and implement them where appropriate, it will contribute to a faster web. After all, there are still many people using IE6, so it would be wrong to assume everyone's browser can handle the same processing.
Which JavaScript engine are we supposed to be targeting? If you're talking about such extreme optimisation, it makes a big difference. For starters, I'll point out that the array.join() trick for string concatenation is only really applicable to Microsoft's JScript engine; it can actually give worse performance on other JS engines.
