console.time shows different time running the same function - javascript

I use console.time to measure how long a function takes, but I found that it reports different running times for the same function.
I have simplified my function as below:
const findIP = (res) => {
  let arr = []
  arr = res.split(',')
}
console.time('1')
findIP('1,2,3,4,5,6,7,8,9,0')
console.timeEnd('1')
console.time('2')
findIP('1,2,3,4,5,6,7,8,9,0')
console.timeEnd('2')
The time difference between the two is very large. I have tried running it several times, and the times still differ.

To quote the answer at the following link:
If you run shorten multiple times, the V8 engine has a JIT compiler that will optimize that piece of code so it runs faster the next time.
https://stackoverflow.com/a/54601440

Try changing the argument value, for example
console.time('1')
findIP('1,2,3,4,5,6,7,8,9,0')
console.timeEnd('1')
console.time('2')
findIP('1,2,3,4,43,6,7,8,9,4')
console.timeEnd('2')
and you will see approximately equal times.
The reason for the difference is the browser cache.
Simple definition: a browser cache is a temporary storage area in memory or on disk that holds the most recently downloaded web pages and/or calculated results.
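As a rough illustration of the JIT effect quoted above, warming the function up before timing it should make the measurements converge. A minimal sketch (the iteration count is arbitrary):
// Warm-up: let the engine optimize findIP before we measure it
for (let i = 0; i < 10000; i++) {
  findIP('1,2,3,4,5,6,7,8,9,0')
}
console.time('warmed')
findIP('1,2,3,4,5,6,7,8,9,0')
console.timeEnd('warmed')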

Related

How do I round console.time logs?

When I use console.time and console.timeEnd to measure the execution speed of a function or code snippet in JavaScript it prints this to the console:
timer: 14657.580078125 ms
How do I round this to the nearest integer or some other digit? I've looked at the documentation for both of these functions and neither gives me a clue how to do this.
You'd have a better shot with the Performance API, since console timers are not exposed programmatically.
There are a couple of things you could do with the API. You could use a high resolution timestamp:
let ts = performance.now();
//some code later
let measure = performance.now() - ts;
That would get you the time in milliseconds with the same decimal precision you see from console timers. If you need it in seconds, you can just as well do:
console.log(Math.round(measure/1000));
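If you want the value rounded but still in milliseconds, the same idea applies; a small sketch:
// Nearest integer, or a fixed number of decimal places
console.log(`${Math.round(measure)} ms`);
console.log(`${measure.toFixed(2)} ms`); // note: toFixed returns a string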
The Performance API has several other ways to mark timestamps and measure the differences between them. Take a look at it.
An example taken from MDN:
const markerNameA = "example-marker-a";
const markerNameB = "example-marker-b";
// Run some nested timeouts, and create a PerformanceMark for each.
performance.mark(markerNameA);
setTimeout(function() {
  performance.mark(markerNameB);
  setTimeout(function() {
    // Create a variety of measurements.
    performance.measure("measure a to b", markerNameA, markerNameB);
    performance.measure("measure a to now", markerNameA);
    performance.measure("measure from navigation start to b", undefined, markerNameB);
    performance.measure("measure from navigation start to now");
    // Pull out all of the measurements.
    console.log(performance.getEntriesByType("measure"));
    // Finally, clean up the entries.
    performance.clearMarks();
    performance.clearMeasures();
  }, 1000);
}, 1000);
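Each measure entry also exposes a numeric duration property, so you can round it however you like; for instance:
// Read back a named measure and round its duration yourself
const [entry] = performance.getEntriesByName("measure a to b");
console.log(`${Math.round(entry.duration)} ms`);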
Unfortunately, you cannot get console.time and console.timeEnd (methods of the console object) to produce anything other than the output you've already seen, and only in the console.
Perhaps it is too bad that console.timeEnd returns undefined instead of returning the duration it prints to the console. But even if it did return that duration, and you stored it in a variable and rounded it, your console would still show the un-rounded values in the log. There's no way to change this behavior without hacking each JavaScript engine your code runs on, and that's not practical.
For rounded values, you'll have to forget about console.time and console.timeEnd. They won't help.

JS String concatenation explodes memory consumption

I was extensively profiling my code until I found that the following function allocates more than 1GB of RAM on the latest Chrome version (in private mode) when the size of "array" is about 33MB. The size doesn't really matter; that just happens to be the size of the file I was running my tests with.
I don't know how to generate such a big Uint8Array in code for you to test, so the snippet below cannot be run as is, but maybe you can understand it anyway and help me with this.
const bytesToString = function (array) {
  let uint8Array = new Uint8Array(array);
  let length = uint8Array.byteLength;
  let stringToEncode = "";
  for (let i = 0; i < length; i++) {
    stringToEncode += String.fromCharCode(uint8Array[i]);
  }
  return stringToEncode;
}
When the for loop is commented out, RAM consumption stays at the same level while my code runs; as soon as the for loop is active, consumption explodes to over 1GB. This memory is of course garbage-collected at some point, but I have a general memory problem where the browser eventually crashes because of excessive memory consumption, and I am trying to figure out whether this function is the cause.
Chrome's performance analyzer shows that GC is called many times. I don't know how Chrome's GC works: the timeline shows many "Minor GC" entries and, near the end, a "Major GC", and I was wondering whether "Minor GC" does not actually free RAM but only "collects" garbage, with the "Major GC" releasing memory later. If that is the case, I suppose that between calling this function and the "Major GC", my code runs something that also needs more RAM than usual, and then the browser crashes. If so, the question is whether there is a better implementation of my function, or whether I can manipulate the GC. As far as I could read, I cannot.
Strings in JS are immutable, so every time you append a character, a new string is created that is one character longer than the previous one. The GC will not run until everything is done, so you're stuck with tons of strings of various lengths.
You need other ways of combining strings. In this case your whole function could be written as String.fromCharCode(...array) (though if you actually want to build a string from binary data, you should consider using TextDecoder instead, which supports various encodings; check that it is available in your target environment).
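A minimal sketch of the TextDecoder approach, assuming the bytes are UTF-8 encoded text:
// Decode the raw bytes as UTF-8 without building intermediate strings
const text = new TextDecoder("utf-8").decode(new Uint8Array(array));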
Update: String.fromCharCode doesn't seem to work for very large arrays (there is a limit on the number of arguments to any function), so instead you could map the array to 1-character strings and then join them together:
Array.prototype.map.call(uint8Array, c => String.fromCharCode(c)).join("")
(Note the use of Array.prototype.map instead of uint8Array.map, since the latter will truncate your results to Uint8)
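Another common workaround, offered here only as a sketch (bytesToStringChunked and the 0x8000 chunk size are my own illustration, not from the answer above): convert the array in slices small enough to stay under the engine's argument-count limit.
// Hypothetical chunked variant: apply String.fromCharCode to slices
const bytesToStringChunked = (uint8Array, chunkSize = 0x8000) => {
  let result = "";
  for (let i = 0; i < uint8Array.length; i += chunkSize) {
    // subarray creates a view, not a copy
    result += String.fromCharCode.apply(null, uint8Array.subarray(i, i + chunkSize));
  }
  return result;
};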
I think TextDecoder is probably the proper solution. But if you insist, you could also try creating a blob and then reading from it.
let blob = new Blob([arrayBuffer], {type: 'application/octet-stream'});
let reader = new FileReader();
reader.onload = function (event) {
  console.log(event.target.result);
};
// Use this if you want the UTF-8 decoded version:
reader.readAsText(blob);
// Or this if you need the result to work with "window.btoa", as it was in my case:
reader.readAsBinaryString(blob);

Allow the window to calculate before continuing Javascript (GAS)

I have seen a lot of duplicates on this subject, but I don't see how to actually get done what needs to be done.
I have a list of URLs in one sheet tab and an IMPORTXML() function in another. I'm writing a script to copy each URL to the second tab then perform an action based on the output of the IMPORTXML(). For this to work, I need a slight delay in the script to ensure the IMPORTXML() has calculated before continuing. setTimeout() doesn't seem appropriate here, because I need the other parameters of the script (which row it's checking, etc) to be calculated based on outputs. Help!
function test() {
  var sh = SpreadsheetApp.getActiveSpreadsheet();
  var list = sh.getSheetByName("Dec 2018").getRange(row, 3, sh.getSheetByName("Dec 2018").getLastRow() - row).getValues();
  var check = sh.getSheetByName("Check");
  for (var row = 2; row < 500; row++) {
    check.getRange(1, 1).setValue(list[row - 2][0]);
    // wait right here
    // other code to run based on the output of the =IMPORTXML() formula on the Check sheet
  }
}
To insert a slight delay, use Utilities.sleep(milliseconds) with a milliseconds value big enough to cover the slowest recalculation time (I think that is 30000 ms for a single formula, because that is the execution-time limit for custom functions). If you want to optimize this time, you may want to use a technique like exponential back-off.
Note: The Window object isn't available in Google Apps Script server-side code execution, so setTimeout() can't be used.
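As a rough sketch of how this could slot into the loop above (the 5000 ms delay and the cell holding the formula output are assumptions, not from the question):
// Hypothetical loop body: write the URL, flush pending changes, wait, then read
for (var row = 2; row < 500; row++) {
  check.getRange(1, 1).setValue(list[row - 2][0]);
  SpreadsheetApp.flush();  // push the pending write so IMPORTXML can recalculate
  Utilities.sleep(5000);   // assumed wait; tune to your slowest recalculation
  var output = check.getRange(1, 2).getValue(); // assumed location of the formula result
  // ...act on output
}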

Does this for loop iterate multiple times?

I have been discussing some code with colleagues:
for (const a of arr) {
  if (a.thing)
    continue;
  // do a thing
}
A suggestion was to filter this and use a forEach
arr.filter(a => !a.thing)
.forEach(a => /* do a thing */);
There was a discussion about iterating more than necessary. I've looked this up, and I can't find anything. I also tried to figure out how to view the optimized output, but I don't know how to do that either.
I would expect that the filter and forEach turn into code that is very much like the for of with the continue, but I don't know how to be sure.
How can I find out? The only thing I've tried so far is google.
Your first example (the for...of loop) is O(n), which will execute n times (n being the size of the array).
Your second example (the filter forEach) is O(n+m), which will execute n times in the filter (n being the size of the array), and then m times (m being the size of the resulting array after the filter takes place).
As such, the first example is faster. However, in this type of example without an exceedingly large sample set the difference is probably measured in microseconds or nanoseconds.
With regard to compiler optimization, that is essentially all memory-access optimization. The major interpreters and engines all analyze how code accesses functions, variables, and properties, such as how often the access happens and what the shape of the access graph looks like, and then use that information to optimize their hidden structures for faster access. Essentially no optimization as far as loop replacement or process analysis goes is done on the code ahead of time; for the most part it is optimized while it is running (if a specific part of code does start taking excessively long, its code may be optimized).
When first executing the JavaScript code, V8 leverages full-codegen which directly translates the parsed JavaScript into machine code without any transformation. This allows it to start executing machine code very fast. Note that V8 does not use intermediate bytecode representation this way removing the need for an interpreter.
When your code has run for some time, the profiler thread has gathered enough data to tell which method should be optimized.
Next, Crankshaft optimizations begin in another thread. It translates the JavaScript abstract syntax tree to a high-level static single-assignment (SSA) representation called Hydrogen and tries to optimize that Hydrogen graph. Most optimizations are done at this level.
Source: https://blog.sessionstack.com/how-javascript-works-inside-the-v8-engine-5-tips-on-how-to-write-optimized-code-ac089e62b12e
Note: while continue may cause execution to skip to the next iteration, it still counts as an iteration of the loop.
The right answer is "it really doesn't matter". A previously posted answer states that the second approach is O(n+m), but I beg to differ: the same exact "m" operations will also run in the first approach. Even if you count the second batch of operations as "m" (which doesn't really make much sense; we're talking about the same n elements given as input, and that's not how complexity analysis works), in the worst case m == n and the complexity is O(2n), which is just O(n) in the end anyway.
To directly answer your question, yes, the second approach will iterate over the collection twice while the first one will do it only once. But that probably won't make any difference to you. In cases like these, you probably want to improve readability over efficiency. How many items does your collection have? 10? 100? It's better to write code that will be easier to maintain over time than to strive for maximum efficiency all the time - because most of the time it just doesn't make any difference.
Moreover, iterating the same collection more than once doesn't mean your code runs slower. It's all about what's inside each loop. For instance:
for (const item of arr) {
  // do A
  // do B
}
Is virtually the same as:
for (const item of arr) {
  // do A
}
for (const item of arr) {
  // do B
}
The for loop itself doesn't add any significant overhead to the CPU. Although you would probably want to write a single loop anyway, if your code readability is improved when you do two loops, go ahead and do it.
Efficiency is about picking the right algorithm
If you really need to be efficient, you don't want to iterate through the whole collection, not even once. You want some smarter way to do it: either divide and conquer (O(log n)) or use hash maps (O(1)). A hash map a day keeps the inefficiency away :-)
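To make the hash-map idea concrete, here is a minimal sketch; the id property and someId are hypothetical, just to illustrate keyed lookup:
// Index the items once, then look them up in O(1) instead of scanning the array
const byId = new Map(arr.map(item => [item.id, item]));
const found = byId.get(someId); // no iteration at lookup time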
Do things only once
Now, back to your example, if I find myself iterating over and over and doing the same operation every time, I'd just run the filtering operation only once, at the beginning:
// during initialization
const things = [];
const notThings = [];
for (const item of arr) {
  item.thing ? things.push(item) : notThings.push(item);
}
// now every time you need to iterate through the items...
for (const a of notThings) { // replaced arr with notThings
  // if (a.thing) // <- no need to check this anymore
  //   continue;
  // do a thing
}
And then you can freely iterate over notThings, knowing that unwanted items were already filtered out. Makes sense?
Criticism of "for of is faster than calling methods"
Some people like to state that for of will always be faster than calling forEach(). We just cannot say that: there are lots of JavaScript interpreters out there, and for each one there are different versions, each with its own ways of optimizing things. To prove my point, I was able to make filter() + forEach() run faster than for of in Node.js v10 on macOS Mojave:
const COLLECTION_SIZE = 10000;
const RUNS = 10000;
const collection = Array.from(Array(COLLECTION_SIZE), (e, i) => i);

function forOf() {
  for (const item of collection) {
    if (item % 2 === 0) {
      continue;
    }
    // do something
  }
}

function filterForEach() {
  collection
    .filter(item => item % 2 === 0)
    .forEach(item => { /* do something */ });
}

const fns = [forOf, filterForEach];

function timed(fn) {
  if (!fn.times) fn.times = [];
  const i = fn.times.length;
  fn.times[i] = process.hrtime.bigint();
  fn();
  fn.times[i] = process.hrtime.bigint() - fn.times[i];
}

for (let r = 0; r < RUNS; r++) {
  for (const fn of fns) {
    timed(fn);
  }
}

for (const fn of fns) {
  const times = fn.times;
  // the times are BigInts, so convert the comparator result to a Number for sort
  times.sort((a, b) => Number(a - b));
  const median = times[Math.floor(times.length / 2)];
  console.info(`${fn.name}: ${median}`);
}
Times (in nanoseconds):
forOf: 81704
filterForEach: 32709
for of was consistently slower in all the tests I ran, always around 50% slower. That's the main point of this answer: do not rely on an interpreter's implementation details, because they can (and will) change over time. Unless you're developing for embedded or high-efficiency/low-latency systems, where you need to be as close to the hardware as possible, get to know your algorithm complexities first.
An easy way to see how many times each part of that statement is called would be to add log statements like so and run it in the Chrome console
var arr = [1, 2, 3, 4];
arr.filter(a => { console.log("hit1"); return a % 2 != 0; })
   .forEach(a => { console.log("hit2"); });
"Hit1" should print to the console 4 times regardless in this case. If it were to iterate too many times, we'd see "hit2" output 4 times, but after running this code it only outputs twice. So your assumption is partially correct, that the second time it iterates, it doesn't iterate over the whole set. However it does iterate over the whole set once in the .filter and then iterates again over the part of the set that matches the condition again in the .filter
Another good place to look is the MDN documentation for Array.prototype.filter, especially the "Polyfill" section, which outlines the exact equivalent algorithm: you can see that .filter() returns the variable res, which is what .forEach() is then performed on.
So while the pair iterates over the set twice overall, the .forEach only iterates over the part of the set that matched the .filter condition.

javascript performance for Array

I tried to figure out what is different in the execution of two versions of a small code snippet. Don't try to understand what this is for; this is the final code after deleting all the other stuff to isolate the performance problem.
function test() {
  var start = new Date(), times = 100000;
  var l = ["a", "a"];
  for (var j = 0; j < times; j++) {
    var result = document.getElementsByTagName(l[0]), rl = result.length;
    for (var i = 0; i < rl; i++) {
      l[0] = result[i];
    }
  }
  var end = new Date();
  return "by=" + (end - start);
}
For me this snippet takes 236ms in Firefox, but if you change l[0]=result[i]; to l[1]=result[i]; it only takes 51ms. The same happens if I change document.getElementsByTagName(l[0]) to document.getElementsByTagName(l[1]). And if both are changed, the snippet is slow again.
Using the profiler in Google Chrome's DevTools, I can see that a toString function appears when executing the slow code, but I have no way to tell which toString this is and why it is needed in that case.
Can you please tell me what the difference is for the browser, such that one version takes 5 times longer than the other?
Thanks
If you change only one of the indexes to 0 or 1, the code no longer does the same thing. If you change both indexes, the performance remains identical.
When using the same index for reading and writing, the value stored in l[0] (a DOM element after the first inner iteration) is used in the next call to getElementsByTagName, which has to call toString on it.
In the code above you use l[0] to search for elements, then you overwrite l[0] and search again, and so on. If you change only one of the two uses (not both!) to l[1], the slot you search with always still holds the original string, so no element-to-string conversion is needed, which boosts the performance.
That is why it is slow again when you change both.
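To see the conversion the profiler is picking up, here is a small sketch (not from the question, just an illustration of the implicit toString):
// Passing a non-string to getElementsByTagName forces a string conversion
const div = document.createElement("div");
console.log(String(div)); // "[object HTMLDivElement]"
// so document.getElementsByTagName(div) effectively searches for that string,
// and that conversion is the extra toString shown in the profile
// (in the question's snippet the stored elements are anchors, whose toString
// returns their href, but the effect is the same: a conversion on every call)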
