Speed up JavaScript run time - javascript

Although I have reviewed several previous posts, my rookie eyes can't see a way to speed up the execution of the following code. There are hundreds of k, for each k there are tens of thousands of i, and nearSum() has a loop that evaluates testStr.
This code is slow and is timing out Chrome. How do I improve its execution?
Before you ask: the only reason the code looks the way it does is 'because it is working'. The values for nn are global variables.
Function()…
    resArrLen = resultArray[k].length;
    for (i = 0; i < resArrLen; i++)
    {
        testStr = resultArray[k][i].toString();
        resultArray[k][i] = testStr + "'" + nearSum(testStr);
    } //end for i
…
function nearSum(seqFrag)
{
    var i = 0;
    var ninj = 0;
    var seqFragLen = 0;
    var calcVal = 0;
    var nn = "";
    //sum values
    seqFragLen = seqFrag.length;
    for (i = 0; i + 1 < seqFragLen; i++)
    {
        nn = seqFrag.substr(i, 2); //gets value
        ninj = eval(nn);
        calcVal = calcVal.valueOf() + ninj.valueOf();
    } //end for i
    return calcVal.toFixed(2);
} //end nearSum

For one thing, it appears that you are using eval to convert a string to a number. That's not how eval is intended to be used, and it is expensive: it invokes the parser on every call, inside your innermost loop. Use Number(nn) or parseInt(nn, 10) instead.
Otherwise, the code is incomplete, and with no example data it's difficult to optimize further.
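To make that concrete, here is a minimal sketch of nearSum without eval. The question says the nn values are globals but doesn't show them, so the lookup table below uses hypothetical keys and values as stand-ins:

```javascript
// Hypothetical stand-ins for the asker's global pair values; with a lookup
// object there is no eval (and no parser invocation) inside the hot loop.
var pairValues = { "AT": 1.23, "TA": 0.98, "GC": 2.10, "CG": 2.05 };

function nearSum(seqFrag) {
    var calcVal = 0;
    for (var i = 0; i + 1 < seqFrag.length; i++) {
        // each two-character window indexes straight into the table
        calcVal += pairValues[seqFrag.substr(i, 2)] || 0;
    }
    return calcVal.toFixed(2);
}

console.log(nearSum("ATAT")); // AT + TA + AT = "3.44"
```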


V8 doesn't optimize function after 'manually' doing typed array .set()s

I have the following function (I'm posting it in its entirety because all of its parts might be relevant):
function buildUploadBuffers(cmds, len, vertexUploadBuffer, matrixUploadBuffer)
{
    var vertexOffset = 0;
    var matrixOffset = 0;
    var quadLen = 24; //96 bytes for each quad, /4 since we're stepping 4 bytes at a time
    var matLen = 16;  //64/4, 4 rows of 4 floats with 4 bytes each, again stepping 4 bytes at a time
    for (var i = 0; i < len; ++i)
    {
        var cmd = cmds[i];
        var cmdQuads = cmd._numQuads;
        var source = cmd._quadU32View;
        var slen = cmdQuads * quadLen;
        vertexUploadBuffer.set(source, vertexOffset);
        vertexOffset += slen;
        var mat = cmd._stackMatrixMat;
        for (var j = 0; j < cmdQuads * 4; ++j)
        {
            matrixUploadBuffer.set(mat, matrixOffset);
            matrixOffset += matLen;
        }
    }
}
It retrieves some typed arrays from each cmd in the cmds array and uses them to set values in two typed-array buffers.
This function gets optimized fine. However, len here is quite large and the data copied from each source typed array is quite small, and I have tested and profiled in the past that manually writing out the set()s can be significantly faster than relying on the compiler to optimize correctly. Furthermore, you can sometimes merge computations (as in the second loop here, where I copy the same thing 4 times to different places; this is omitted in the following code for simplicity, since it doesn't change the results).
Doing this with the function above turns it into this:
function buildUploadBuffers(cmds, len, vertexUploadBuffer, matrixUploadBuffer)
{
    var vertexOffset = 0;
    var matrixOffset = 0;
    var quadLen = 24; //96/4 since we're stepping 4 bytes at a time
    var matLen = 16;  //64/4
    for (var i = 0; i < len; ++i)
    {
        var cmd = cmds[i];
        var cmdQuads = cmd._numQuads;
        var source = cmd._quadU32View;
        var slen = cmdQuads * quadLen;
        for (var j = 0; j < slen; ++j)
        {
            vertexUploadBuffer[vertexOffset + j] = source[j];
        }
        vertexOffset += slen;
        var mat = cmd._stackMatrixMat;
        for (var j = 0; j < cmdQuads * 4; ++j)
        {
            for (var k = 0; k < matLen; ++k)
            {
                matrixUploadBuffer[matrixOffset + k] = mat[k];
            }
            matrixOffset += matLen;
        }
    }
}
However, this second function does not get optimized ("optimized too many times"), despite doing essentially the same thing.
Running V8 with deopt traces produces the following suspicious statements (these are repeated several times in the output, until the compiler finally says no thanks and stops optimizing):
[compiling method 0000015F320E2B59 JS Function buildUploadBuffers
(SharedFunctionInfo 0000002ACC62A661) using Crankshaft OSR]
[optimizing 0000015F320E2B59 JS Function buildUploadBuffers
(SharedFunctionInfo 0000002ACC62A661) - took 0.070, 0.385, 0.093 ms]
[deoptimizing (DEOPT eager): begin 0000015F320E2B59 JS Function
buildUploadBuffers (SharedFunctionInfo 0000002ACC62A661) (opt #724)
#28, FP to SP delta: 280, caller sp: 0x2ea1efcb50]
;;; deoptimize at 4437: Unknown map in polymorphic access
So it seems the deoptimization fails because of polymorphic access somewhere. Needless to say, the types contained in cmds are not always the same. They can be one of two concrete types that share the same prototype one step up the chain (a 'base class'), which is where all the queried attributes (_numQuads, _quadU32View, etc.) come from.
Further:
Why would optimization not just fail with the first function, the one that uses .set()? I'm accessing the same properties on the same objects; I'd think polymorphic access would break it in either case.
The type info of the function seems to be fine? When optimizing it, the debug output says:
ICs with typeinfo: 23/23 (100%), generic ICs: 0/23 (0%)]
Assuming there's nothing weird going on and the fact that the cmds can be one of two different types is indeed the culprit, how can I help the optimizer out here? The data I need from those cmds is always the same, so there should be some way to package it up better for the optimizer, right? Maybe put a "quadData" object inside each cmd that contains numQuads, quadU32View, etc.? (Just stabbing in the dark here.)
Something very weird: commenting out either of the two inner loops (or both at the same time, of course) leads to the function getting optimized again. Is the function getting too long for the optimizer or something?
Because of the above point, I figured something might be weird with the (j) loop variable, so I tried using different ones for the different loops, which didn't change anything.
edit: Sure enough, the function optimizes again after I take out, e.g., the second inner loop (uploading the matrix) and put it into a separate function. Interestingly enough, this separate function then seems to be inlined perfectly, and I got the performance improvement I hoped for. It still makes me wonder what's going on here that prevents optimization. Just for completeness, here's the version that now optimizes well (and performs better, by about 25%):
function uploadMatrix(matrixUploadBuffer, mat, matLen, numVertices, matrixOffset)
{
    for (var j = 0; j < numVertices; ++j)
    {
        for (var k = 0; k < matLen; ++k)
        {
            matrixUploadBuffer[matrixOffset + k] = mat[k];
        }
        matrixOffset += matLen;
    }
    return matrixOffset;
}
function buildUploadBuffers(cmds, len, vertexUploadBuffer, matrixUploadBuffer)
{
    var vertexOffset = 0;
    var matrixOffset = 0;
    var quadLen = 24; //96/4 since we're stepping 4 bytes at a time
    var matLen = 16;  //64/4
    for (var i = 0; i < len; ++i)
    {
        var cmd = cmds[i];
        var cmdQuads = cmd._numQuads;
        var source = cmd._quadU32View;
        var slen = cmdQuads * quadLen;
        for (var j = 0; j < slen; ++j)
        {
            vertexUploadBuffer[vertexOffset + j] = source[j];
        }
        vertexOffset += slen;
        var mat = cmd._stackMatrixMat;
        matrixOffset = uploadMatrix(matrixUploadBuffer, mat, matLen, cmdQuads * 4, matrixOffset);
    }
}
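For what it's worth, the "quadData" idea floated in the question could be sketched as below. This is a guess at helping the optimizer, not a confirmed fix: copying the needed fields into plain object literals gives every element of the hot loop's array the same hidden class, regardless of which concrete cmd type it came from. The property names follow the question; everything else is hypothetical:

```javascript
// Sketch: normalize both cmd types into one uniform shape before the hot loop.
function toQuadData(cmd) {
    return {
        numQuads: cmd._numQuads,
        quadU32View: cmd._quadU32View,
        stackMatrixMat: cmd._stackMatrixMat
    };
}

// Rebuild whenever cmds changes, then iterate quadData in buildUploadBuffers
// so every property access in the inner loops sees exactly one object shape:
// var quadData = cmds.map(toQuadData);
```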

Making a delay in Javascript [duplicate]

This question already has answers here:
JavaScript closure inside loops – simple practical example
(44 answers)
Closed 7 years ago.
I am working on a WordPress plugin. One of its features involves hiding and revealing segments of text by class using <span>.
This functionality works, but I have been hoping to enhance it so that each segment of text is revealed one letter at a time (quickly, of course), as though it were being typed out very fast, rather than appearing all at once in large chunks.
I know there are animation libraries out there for this kind of thing, and perhaps one of those would be a better solution, but I've been trying to keep it simple. The feature is not really graphics- or "animation"-oriented; my intent is just to make a text-based feature look prettier.
I've gotten the portion of the code that builds each segment of text character by character working, but I'm trying to insert a very short (5-10 ms) delay between each character so that the effect is actually visible. I simply cannot get the setTimeout function to work; can anyone please give me some suggestions?
For simplicity I'm including just the segment of code that does this; let me know if more context is needed. The following is the for loop that goes through every element of an array called cols[] and reveals each element character by character. This code works, but the delay is never observed.
numberofSnippets = the size of the array cols[]
for (c = 0; c < numberofSnippets; c++)
{
    h = 0;
    currentshown = '';
    snippet = cols[c].textContent;
    sniplength = snippet.length;
    (function addNextCharacter()
    {
        onecharacter = snippet.charAt(h);
        currentshown = currentshown.concat(onecharacter);
        cols[c].textContent = currentshown;
        h = h + 1;
        if (h < sniplength) { window.setTimeout(addNextCharacter, 200); }
    })();
}
There were a few oddities in your code preventing the setTimeout from performing as expected, mostly the closure reusing variables within the loop, since the loop isn't going to wait for the IIFE to finish its recursive setTimeout chain before moving on. I solved that by moving those variables into parameters passed to addNextCharacter.
var cols = document.getElementsByClassName('foo');
var numberofSnippets = cols.length;
for (var c = 0; c < numberofSnippets; c++) {
    (function addNextCharacter(h, c, snippet, sniplength, currentshown) {
        var onecharacter = snippet.charAt(h);
        currentshown = currentshown.concat(onecharacter);
        cols[c].textContent = currentshown;
        h = h + 1;
        if (h < sniplength) {
            setTimeout(function () {
                addNextCharacter(h, c, snippet, sniplength, currentshown);
            }, 10);
        }
    })(0, c, cols[c].textContent, cols[c].textContent.length, '');
}
<div class="foo">Apple</div>
<div class="foo">Banana</div>
<div class="foo">Orange</div>
<p class="foo">There were a few oddities in your code that was preventing the setTimeout from performing as expected, mostly due to the closure reusing variables within the loop due to the fact that the loop isn't going to wait for the IIFE to finish recursively executing with a setTimeout. I solved that by moving those variables to parameters passed to addNextCharacter.</p>
And here's the obligatory .forEach version which avoids needing to pass the variables around as parameters.
var cols = document.getElementsByClassName('foo');
var numberofSnippets = cols.length;
[].forEach.call(cols, function (el) {
    var snippet = el.textContent;
    var sniplength = snippet.length;
    var currentshown = '';
    (function addNextCharacter(h) {
        var onecharacter = snippet.charAt(h);
        currentshown = currentshown.concat(onecharacter);
        el.textContent = currentshown;
        h = h + 1;
        if (h < sniplength) {
            setTimeout(function () {
                addNextCharacter(h);
            }, 1000);
        }
    })(0);
});
Well, one issue is that you're setting your timeout to 0, which means, effectively 'next tick'. If you want a 5 second delay, for example, you need to put 5000 in there as the second param.
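As a side note not raised in the answers above: if you can use ES2015, declaring the loop counter with let sidesteps the shared-closure problem entirely, because each iteration gets its own binding. A minimal illustration (unrelated to the WordPress code itself):

```javascript
// With var, every callback closes over the same variable...
var withVar = [];
for (var i = 0; i < 3; i++) {
    withVar.push(function () { return i; });
}

// ...with let, each iteration gets a fresh binding.
var withLet = [];
for (let j = 0; j < 3; j++) {
    withLet.push(function () { return j; });
}

console.log(withVar.map(function (f) { return f(); })); // [3, 3, 3]
console.log(withLet.map(function (f) { return f(); })); // [0, 1, 2]
```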

Javascript performance array of objects preassignment vs direct use

I have a doubt about how speed is affected by using object data arrays directly versus preassigning their properties to simple vars.
I have an array of, for example, 1000 elements.
Every array item is an object with 10 properties (for example).
And finally I use some of these properties to do 10 calculations.
So I have APPROACH1
var nn = my_array.length;
var a1, a2, a3, a4 ... a10;
var cal1, cal2, .. cal10;
for (var x = 0; x < nn; x++)
{
    // assignment
    a1 = my_array[x].data1;
    ..
    a10 = my_array[x].data10;
    // calculations
    cal1 = a1 * a10 + a2 * Math.abs(a3);
    ...
    cal10 = (a8 - a7) * 4 + Math.sqrt(a9);
}
And APPROACH2
var nn = my_array.length;
for (var x = 0; x < nn; x++)
{
    // calculations
    cal1 = my_array[x].data1 * my_array[x].data10 + my_array[x].data2 * Math.abs(my_array[x].data3);
    ...
    cal10 = (my_array[x].data8 - my_array[x].data7) * 4 + Math.sqrt(my_array[x].data9);
}
Is assigning a1 ... a10 from my_array and then doing the calculations faster than doing the calculations with my_array[x].propertyN directly, or is it the opposite?
I don't know how the 'JS compiler' works ....
The short answer is: it depends on your JavaScript engine. There is no right and wrong here, only "this has worked in the past" and "this doesn't seem to speed things up any more".
tl;dr: if I were not going to run a jsperf test, I would go with the "Cached example" below.
A general rule of thumb is (read: was) that if you are going to use an element of an array more than once, it can be faster to cache it in a local variable, and if you are going to use a property of an object more than once, it should also be cached.
Example:
You have this code:
// Data generation (not discussed here)
function GetLotsOfItems() {
    var ret = [];
    for (var i = 0; i < 1000; i++) {
        ret[i] = { calc1: i * 4, calc2: i * 10, calc3: i / 5 };
    }
    return ret;
}
// Your calculation loop
var myArray = GetLotsOfItems();
for (var i = 0; i < myArray.length; i++) {
    var someResult = myArray[i].calc1 + myArray[i].calc2 + myArray[i].calc3;
}
Depending on your browser (read:this REALLY depends on your browser/its javascript engine) you could make this faster in a number of different ways.
You could for example cache the element being used in the calculation loop
Cached example:
// Your cached calculation loop
var myArray = GetLotsOfItems();
var element;
var arrayLen = myArray.length;
for (var i = 0; i < arrayLen; i++) {
    element = myArray[i];
    var someResult = element.calc1 + element.calc2 + element.calc3;
}
You could also take this a step further and run it like this:
var myArray = GetLotsOfItems();
var element;
for (var i = myArray.length; i--;) { // Start at last element, travel backwards to the start
    element = myArray[i];
    var someResult = element.calc1 + element.calc2 + element.calc3;
}
What you do here is start at the last element, then use the condition block to check whether i is still truthy, and only AFTER that decrement it by one, allowing the loop body to run with i == 0 (while --i would run from 1000 down to 1). However, in modern code this is usually slower, because you read the array backwards, and reading an array in the correct order usually allows either run-time or compile-time optimization (which is automatic, mind you, so you don't need to do anything for it to work). Depending on your JavaScript engine this might not be applicable, and the backwards-going loop could be faster.
However, in my experience this will run slower in Chrome than the second "kinda-optimized" version. (I have not tested this in jsperf, but in a CSP solver I wrote two years ago I ended up caching array elements, but not properties, and I ran my loops from 0 to length.)
You should (in most cases) write your code in a way that makes it easy to read and maintain. Caching array elements is, in my opinion, at least as easy to read as non-cached access, it might be faster (it is, at least, not slower), and it is quicker to write if you use an IDE with autocomplete for JavaScript. :P
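If you want to see what your own engine does today, a rough micro-benchmark along these lines will tell you. This is only a sketch; the item shape and iteration counts are made up, and results vary by engine and data:

```javascript
// Build some sample data resembling the question's setup.
function makeItems(n) {
    var ret = [];
    for (var i = 0; i < n; i++) {
        ret[i] = { calc1: i * 4, calc2: i * 10, calc3: i / 5 };
    }
    return ret;
}
var items = makeItems(1000);

// APPROACH2-style: read properties through the array on every access.
function direct() {
    var sum = 0;
    for (var i = 0; i < items.length; i++) {
        sum += items[i].calc1 + items[i].calc2 + items[i].calc3;
    }
    return sum;
}

// Cached-example-style: cache the element (and the length) first.
function cached() {
    var sum = 0, len = items.length, el;
    for (var i = 0; i < len; i++) {
        el = items[i];
        sum += el.calc1 + el.calc2 + el.calc3;
    }
    return sum;
}

console.time("direct");
for (var r = 0; r < 1000; r++) direct();
console.timeEnd("direct");

console.time("cached");
for (var s = 0; s < 1000; s++) cached();
console.timeEnd("cached");
```

Both functions compute the same sums, so any timing difference comes down to the property-access pattern alone.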

Node.js loops and JSON building

Respected ppl ....
This is my Node.js code:
https://gist.github.com/SkyKOG/99d47dbe5a2cec97426b
I'm trying to parse the data of our exam results, for example:
http://www.vtualerts.com/results/get_res.php?usn=1MV09IS002&sem=7
I'm getting the results, and I am traversing back through the previous semesters too.
It all works, but the traversal back happens in random order; probably something is wrong with the loops.
json.results = [];
var output = '';
var k = response.query.results.body.div.div[0].table[1].tr.length;
for (var j = 1; j < k; j++) {
    for (var i = 0; i <= 5; i++) {
        var result_obj = {};
        result_obj.subjects = [];
        for (key in response.query.results.body.div.div[0].table[1].tr[j].td[i]) {
            if (typeof response.query.results.body.div.div[0].table[1].tr[j].td[i].em === "undefined") {
                continue;
            }
            var subject_obj = {};
            output += "Subject : " + response.query.results.body.div.div[0].table[1].tr[j].td[i].em + " " + "\n";
            var subtext = response.query.results.body.div.div[0].table[1].tr[j].td[i].em + " " + "\n";
            subject_obj.subjectname = subtext.replace(/[(].*[)]/, "").trim();
            result_obj.subjects.push(subject_obj);
            console.log(subject_obj);
            break;
        }
        console.log(result_obj.subjects);
I presume something like async concepts needs to be implemented correctly to get the semesters back in the correct order.
I'd also like to get the JSON in this format:
https://gist.github.com/SkyKOG/3845d6a94cea3b744296
I don't think I'm pushing the created objects at the right scope.
Kindly help in this regard. Thank you.
(I'll answer the ordering part. Suggest making the JSON issue a separate question to fit in with the Q&A format.)
When you make the HTTP request in your code (see the line below), you introduce a varying delay into the order in which the responses are handled:
new YQL.exec(queryname, function (response) {
You need to track the order of the requests yourself, or use a library to do it for you.
Code it yourself
To get around this, you need something that keeps track of the original order of the requests. Because of the way closures work, you can't just increment a simple counter: it will already have changed in the enclosing scope by the time each callback runs. The idiomatic way to solve this is to pass the counter into an immediately executed function (as a value).
e.g.
var responseData = [];
for (var i = 0; i < 100; i++) {
    (function (i) {
        ...
        // http call goes in here somewhere
        responseData[i] = data_from_this_response;
        ...
    })(i);
}
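Here's a runnable version of that idea. fakeRequest is a stand-in for YQL.exec, and the callbacks are deliberately fired out of order to mimic network responses arriving whenever they please, yet each result still lands in its original slot:

```javascript
// Stand-in for the real HTTP call: just collects callbacks to fire later.
var callbacks = [];
function fakeRequest(cb) { callbacks.push(cb); }

var responseData = [];
for (var i = 0; i < 5; i++) {
    (function (idx) {           // idx is frozen per iteration by the IIFE
        fakeRequest(function (data) {
            responseData[idx] = data + idx;
        });
    })(i);
}

// Fire the callbacks in scrambled order, like real responses arriving:
[3, 0, 4, 2, 1].forEach(function (n) { callbacks[n]("sem-"); });

console.log(responseData); // [ 'sem-0', 'sem-1', 'sem-2', 'sem-3', 'sem-4' ]
```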
Use a library
Check out the async.parallel() call in Caolan's excellent library. You pass it an array of functions, and it returns to your callback with an array of the results.
https://github.com/caolan/async/#parallel
You'll need to create a loop that populates that array with curried versions of your function, each capturing the appropriate variables.

How to optimize jquery grep method on 30k records array

Is it possible to optimize this code? I'm getting very low performance on this keyup event.
$('#opis').keyup(function () {
    if ($('#opis').val() != "") {
        var search = $.grep(svgs, function (value) {
            reg = new RegExp('^' + $('#opis').val(), 'i');
            return value.match(reg) == null;
        }, true);
        $('#file_list').html("");
        var tohtml = "";
        $cnt = 0;
        for (var i = 0; i < search.length; i++) {
            if ($cnt <= 30) {
                tohtml += "<li class='file_item'><a href='' class='preview'>" + search[i] + "</a> <a href='" + search[i] + "' class='print_file'><img src='img/add16px.png' alt='dodaj'/></a></li>";
                $cnt++;
            } else {
                break;
            }
        }
        $('#file_list').html(tohtml);
        $(".preview").click(function () {
            $('#file_preview').html('<embed src="opisy/' + $(this).html() + '" type="image/svg+xml" pluginspage="http://www.adobe.com/svg/viewer/install/" /> ');
            $(".preview").parent().removeClass("selected");
            $(this).parent().addClass("selected");
            return false;
        });
        $(".print_file").click(function () {
            if (jQuery.inArray($(this).attr('href'), prints) == -1) {
                $('#print_list').append('<li>' + $(this).attr('href') + '</li>');
                prints.push($(this).attr('href'));
            } else {
                alert("Plik znajduje się już na liście do wydruku!");
            }
            return false;
        });
    } else {
        $('#file_list').html(" ");
    }
});
var opis = $('#opis')[0]; // this line can go outside of keyup
var search = [];
var re = new RegExp('^' + opis.value, 'i');
for (var i = 0, len = svgs.length; i < len; i++) {
    if (re.test(svgs[i])) {
        search.push(svgs[i]);
    }
}
It's up to 100x faster in Google Chrome, 60x in IE 6.
First thing you have to learn:
$('#opis').keyup(function () {
    var $this = $(this);
    if ($this.val() != "") {
        // use *$this* instead of *$('#opis')*, because re-querying performs
        // another getElementById("opis") even though you already have the
        // element from the keyup handler;
        // likewise use $this instead of $(this), for much the same reason
As for the grep function: maybe caching the results would help for subsequent searches, I guess, but I don't know if I can help you with that.
Well, the thing with JavaScript is that it executes in the user's environment, not the server's, so optimization always varies. With large arrays that need extensive work done on them, I would prefer to handle this server side.
Have you thought about serializing the data and passing it over to your server side, which would handle all the data calculations/modifications and return the prepared result as the response?
You may also want to take a look at SE: Code Review for more optimization advice.
Some optimization tips:
if ($('#opis').val() != "") { should use !== instead of !=.
return value.match(reg) == null; should use === instead of ==.
In for (var i = 0; i < search.length; i++) {, cache search.length in a variable instead of reading it on every iteration.
reg = new RegExp(...); should be var reg ..., as otherwise reg is created as a global.
Move all your variable declarations to the top of the function, e.g. var i, cnt, search, tohtml; etc.
I would advise you to start using Google Chrome; it has a built-in system for tracking memory usage per tab. You can go to the URL about:memory in Chrome, which produces a per-process breakdown of memory usage.
(Screenshot omitted; see http://malektips.com/google-chrome-memory-usage.html)
Each time you perform the grep, you call the 'matching' function once per array entry.
The matching function creates a RegExp object and then uses it to perform the match.
There are two ways you could improve this:
Create the RegExp once, outside of the function, and use a closure to capture it inside the function, so that you don't keep recreating the object over and over.
It looks like all you're trying to do is perform a case-insensitive test of whether the sought string is the start of a member of your array. It may be faster to do this more explicitly, using .toUpperCase() and .substring(). However, that's a guess, and you should test to find out.
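Both suggestions might look like the sketch below. svgs here is a tiny hypothetical stand-in for the 30k-entry array, and note that if the search box can contain regex metacharacters, the RegExp version would also need escaping:

```javascript
var svgs = ["part_a.svg", "Part_B.svg", "misc.svg"]; // hypothetical sample data

// Suggestion 1: build the RegExp once per keystroke, not once per entry.
function searchRegex(query) {
    var re = new RegExp("^" + query, "i"); // captured once by the closure below
    return svgs.filter(function (name) { return re.test(name); });
}

// Suggestion 2: skip regex entirely with a case-insensitive prefix test.
function searchPrefix(query) {
    var q = query.toUpperCase();
    return svgs.filter(function (name) {
        return name.substring(0, q.length).toUpperCase() === q;
    });
}

console.log(searchRegex("part"));  // ["part_a.svg", "Part_B.svg"]
console.log(searchPrefix("part")); // same result, no RegExp objects at all
```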
