I found a nice solution for getting HTML comments from the HTML node tree:
var findComments = function(el) {
    var arr = [];
    for (var i = 0; i < el.childNodes.length; i++) {
        var node = el.childNodes[i];
        if (node.nodeType === 8) {
            arr.push(node);
        } else {
            arr.push.apply(arr, findComments(node));
        }
    }
    return arr;
};

var commentNodes = findComments(document);

// whatever you were going to do with the comment...
console.log(commentNodes[0].nodeValue);
from this thread.
All I did was add this small loop to print out all the nodes:
var arr = [];
var findComments = function(el) {
    for (var i = 0; i < el.childNodes.length; i++) {
        var node = el.childNodes[i];
        if (node.nodeType === 8) {
            arr.push(node);
        } else {
            arr.push.apply(arr, findComments(node));
        }
    }
    return arr;
};

var commentNodes = findComments(document);

// I added this
for (var counter = arr.length; counter > 0; counter--) {
    console.log(commentNodes[counter].nodeValue);
}
I keep getting this error message:
RangeError: too many arguments provided for a function call debugger
eval code:9:13
EDIT: I had a typo while pasting; I've changed the code from i-- to counter--.
See this comment in the MDN docs about using apply to merge arrays:
Do not use this method if the second array (moreVegs in the example) is very large, because the maximum number of parameters that one function can take is limited in practice. See apply() for more details.
And the other note, from the apply() page:
But beware: in using apply this way, you run the risk of exceeding the JavaScript engine's argument length limit. The consequences of applying a function with too many arguments (think more than tens of thousands of arguments) vary across engines (JavaScriptCore has hard-coded argument limit of 65536), because the limit (indeed even the nature of any excessively-large-stack behavior) is unspecified. Some engines will throw an exception. More perniciously, others will arbitrarily limit the number of arguments actually passed to the applied function. To illustrate this latter case: if such an engine had a limit of four arguments (actual limits are of course significantly higher), it would be as if the arguments 5, 6, 2, 3 had been passed to apply in the examples above, rather than the full array.
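For reference, one way to sidestep that argument-length limit (a sketch, not part of the quoted answer; variable names are illustrative) is to avoid apply altogether when merging the recursive results:

// Instead of: arr.push.apply(arr, findComments(node));
// either concatenate (creates a new array each time):
arr = arr.concat(findComments(node));

// or push items one by one, which never builds a huge argument list:
var childComments = findComments(node);
for (var j = 0; j < childComments.length; j++) {
    arr.push(childComments[j]);
}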
Since array indices start at 0, the last item in the array is actually at arr.length - 1.
You can fix it with:
for (var counter = arr.length - 1; counter >= 0; counter--)
Notice I've used arr.length - 1 and counter >= 0, since zero is the first index of the array.
Adding the for loop is not the only thing you changed (and see the other answer about fixing that loop too). You also moved the declaration of arr from inside the function to outside, making arr effectively global.
Because of that, each recursive call to findComments() works on the same array, and the .apply() call pushes the entire contents back onto the end of the array every time. After a while, its length exceeds the limit of the runtime.
The original function posted at the top of your question has arr declared inside the function. Each recursive call therefore has its own local array to work with. In a document with a lot of comment nodes, it could still get that Range Error however.
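Putting the two answers together, a sketch of the corrected version keeps arr local to the function and iterates from the last valid index down to 0:

var findComments = function(el) {
    var arr = []; // local: each recursive call gets its own array
    for (var i = 0; i < el.childNodes.length; i++) {
        var node = el.childNodes[i];
        if (node.nodeType === 8) { // 8 === Node.COMMENT_NODE
            arr.push(node);
        } else {
            arr.push.apply(arr, findComments(node));
        }
    }
    return arr;
};

var commentNodes = findComments(document);
for (var counter = commentNodes.length - 1; counter >= 0; counter--) {
    console.log(commentNodes[counter].nodeValue);
}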
Related
Good afternoon. I have the following problem: I am trying to loop through a list of subdirectories and add the paths of those subdirectories to an array (repositories). This is my code:
for (n = 0; n <= pendingRepos.length; n++) {
    subruta = pendingRepos[pendingRepos.length - 1]
    pendingRepos.pop()
    c.list(subruta, function(err, sublist) {
        if (sublist.length != 0) {
            for (g = 0; g < sublist.length; g++) {
                if (sublist[g].type === 'd') {
                    repositories.push(subruta + '/' + sublist[g].name)
                    pendingRepos.push(subruta + '/' + sublist[g].name)
                } else {
                    files.push(subruta + '/' + sublist[g].name)
                }
            }
        }
    });
}
For example, when the loop starts, my array pendingRepos has the following structure:
pendingRepos = ['/dir1/dir2', '/dir3/dir4']
The loop runs correctly twice and the last element is removed each time, but when another 'last' element is later pushed onto the array, the outer for loop does not take it into account.
I understand that the condition was already evaluated before I added more elements; is this correct? How can I avoid it?
It looks like you're treating the array of pending repos in two contradictory ways. The outer for loop:
for (n = 0; n <= pendingRepos.length; n++) { ... }
is treating pendingRepos as an immutable list, going through from beginning to end and processing each element. (And not doing this correctly, either - we should be iterating to n < pendingRepos.length if this is the option we're using).
The logic immediately after the loop, however,
subruta = pendingRepos[pendingRepos.length -1]
pendingRepos.pop()
treats pendingRepos as a mutable stack, from which you would keep processing the last element until the stack was empty.
In order to correctly process the array, you need to choose one or the other. Since it seems that the rest of your code is correctly using the stack approach, the loop at the top should be changed to match, which in this case would simply be
while (pendingRepos.length > 0) { ... }
The end result will look as follows:
while (pendingRepos.length > 0) {
    const subruta = pendingRepos[pendingRepos.length - 1]
    pendingRepos.pop()
    c.list(subruta, function(err, sublist) {
        if (sublist.length != 0) {
            for (let g = 0; g < sublist.length; g++) {
                if (sublist[g].type === 'd') {
                    repositories.push(subruta + '/' + sublist[g].name)
                    pendingRepos.push(subruta + '/' + sublist[g].name)
                } else {
                    files.push(subruta + '/' + sublist[g].name)
                }
            }
        }
    });
}
EDIT: The above answer only works if c.list() is a synchronous function that runs your callback before returning. However, since it is contacting an FTP server, it is not: the entire while loop will finish before any of those callbacks run, and anything they add to pendingRepos will never be processed. To work with asynchronous functions you have to structure your code completely differently, pushing the asynchrony upward as far as it needs to go.
Fortunately, doing that is pretty easy in this case. What you are doing with pendingRepos is conceptually known as depth-first search (or "DFS"), where you search through a tree structure by repeating the search at each subnode. Using the stack of pending directories is one way to do DFS, and another way to do it is to use a recursive function (basically repeating the search function each time you reach a directory).
Here's a possible implementation of that, with the use of callbacks extending all the way out.
// an outer function for the whole operation. You would provide
// a callback that takes the lists of repositories and files.
function getTheRepos(startList, callbackForWholeThing) {
    // build up our lists of repositories and files
    const repositories = [];
    const files = [];
    // keep track of how many listings are still running
    let repoGetCount = 0;

    // an inner function to process exactly one directory
    function getOneRepo(subruta) {
        // at the start, say we're running
        repoGetCount++;
        c.list(subruta, function(err, sublist) {
            if (sublist.length != 0) {
                for (let g = 0; g < sublist.length; g++) {
                    if (sublist[g].type === 'd') {
                        repositories.push(subruta + '/' + sublist[g].name)
                        // for each directory we find, call this inner function again.
                        // This is the critical part that makes this all work.
                        getOneRepo(subruta + '/' + sublist[g].name)
                    } else {
                        files.push(subruta + '/' + sublist[g].name)
                    }
                }
            }
            // at the end, say we're not running,
            // and call the whole callback if we're the last one.
            // (This must happen even when sublist is empty, or the
            // final callback would never fire.)
            repoGetCount--;
            if (repoGetCount === 0) {
                callbackForWholeThing(repositories, files);
            }
        });
    }

    // now that we have the function, run it on each of our
    // start directories to start things off
    for (let n = 0; n < startList.length; n++) {
        getOneRepo(startList[n]);
    }
    // the cogs are in motion, so now return.
    // The callback will be called when the tree has been searched.
}
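A hypothetical usage sketch, assuming c is an already connected FTP client and the starting directories are the ones from the original pendingRepos:

getTheRepos(['/dir1/dir2', '/dir3/dir4'], function(repositories, files) {
    console.log('directories found:', repositories);
    console.log('files found:', files);
});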
I am creating a simple program that should utilize the bubble sort algorithm to sort a list of numbers in ascending order.
Just for testing purposes I have added the line alert(unsortedNumbers); and, as you can see if you run it, the numbers do not change order no matter how many passes the algorithm does.
The program seems to be stuck in an infinite loop, as 'Another pass' is printed to the console repeatedly by the line console.log("Another pass");.
Since, with bubble sort, the list is sorted once no swaps are made during a pass, I have created the variable swapped. However, it looks like this is always 1. I think this may be caused by the swapArrayElements() function not swapping the terms.
Why is the function not swapping the index of the terms within the array?
(The code doesn't seem to run properly in SO's code snippet tool; you may have to copy it into a notepad document.)
function main() {
    var unsortedNumbers = [7, 8, 13, 1, 6, 9, 43, 80]; // Declares unsorted numbers array
    alert(unsortedNumbers);
    var swapped = 0;
    var len = unsortedNumbers.length;

    function swapArrayElements(index_a, index_b) { // swaps unsortedNumbers[index_a] with unsortedNumbers[index_b]
        var temp = unsortedNumbers[index_a];
        unsortedNumbers[index_a] = unsortedNumbers[index_b];
        unsortedNumbers[index_b] = temp;
    }

    function finish() {
        alert(unsortedNumbers);
    }

    function mainBody() {
        for (var i = 0; i < len; i++) {
            var ii = (i + 1);
            if (unsortedNumbers[i] > unsortedNumbers[ii]) {
                console.log("Swap elements");
                swapArrayElements(i, ii);
                swapped = 1; // Variable 'swapped' used to check whether or not a swap has been made in each pass
            }
            if (ii = len) {
                if (swapped = 1) { // if a swap has been made, runs the main body again
                    console.log("Another pass");
                    alert(unsortedNumbers); // Added for debugging
                    swapped = 0;
                    mainBody();
                } else {
                    console.log("Finish");
                    finish();
                }
            }
        }
    }

    mainBody();
}
<head>
</head>
<body onload="main()">
</body>
You have an error in your code:
if (ii = len) {
and also
if (swapped = 1) {
It should be a double (or triple) equals, which compares, rather than a single equals, which assigns.
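For clarity, a sketch of those two conditions corrected with strict equality, in the context of the original mainBody loop:

if (ii === len) {            // compare, don't assign
    if (swapped === 1) {     // if a swap has been made, run the main body again
        console.log("Another pass");
        swapped = 0;
        mainBody();
    } else {
        console.log("Finish");
        finish();
    }
}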
Invalid condition check causing the infinite loop:
if (ii = len) and if (swapped = 1) should use the == or === operator. This is what causes the infinite loop.
NOTE: Your code does not follow the best practice of avoiding global variables. You should not use globals; instead, pass variables into your functions and return them after processing.
Refer to this for avoiding globals.
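As a rough illustration of that advice (a hypothetical refactor, not the answer's code), the sort can be written so that the array and the swapped flag are passed in and returned rather than living in the enclosing scope:

// One bubble-sort pass; returns true if any elements were swapped
function bubbleSortPass(numbers) {
    var swapped = false;
    for (var i = 0; i < numbers.length - 1; i++) {
        if (numbers[i] > numbers[i + 1]) {
            var temp = numbers[i];
            numbers[i] = numbers[i + 1];
            numbers[i + 1] = temp;
            swapped = true;
        }
    }
    return swapped;
}

// Keep making passes until a pass makes no swaps
function bubbleSort(numbers) {
    while (bubbleSortPass(numbers)) { /* keep going */ }
    return numbers;
}

console.log(bubbleSort([7, 8, 13, 1, 6, 9, 43, 80])); // [1, 6, 7, 8, 9, 13, 43, 80]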
I have a question about how speed is affected when working with arrays of data objects: is it better to use the object properties directly, or to pre-assign them to simple variables?
I have an array of elements, for example 1000 elements.
Every array item is an object with 10 properties (for example).
And finally I use some of these properties to do 10 calculations.
So I have APPROACH 1:
var nn = my_array.length;
var a1, a2, a3, a4 ... a10;
var cal1, cal2, .. cal10;

for (var x = 0; x < nn; x++) {
    // assignment
    a1 = my_array[x].data1;
    ..
    a10 = my_array[x].data10;

    // calculations
    cal1 = a1 * a10 + a2 * Math.abs(a3);
    ...
    cal10 = (a8 - a7) * 4 + Math.sqrt(a9);
}
And APPROACH 2:
var nn = my_array.length;

for (var x = 0; x < nn; x++) {
    // calculations
    cal1 = my_array[x].data1 * my_array[x].data10 + my_array[x].data2 * Math.abs(my_array[x].data3);
    ...
    cal10 = (my_array[x].data8 - my_array[x].data7) * 4 + Math.sqrt(my_array[x].data9);
}
Is assigning a1 ... a10 from my_array and then doing the calculations faster than doing the calculations directly with my_array[x] properties, or is it the other way around?
I don't know how the 'JS compiler' works ....
The short answer is: it depends on your JavaScript engine. There is no right and wrong here, only "this has worked in the past" and "this doesn't seem to speed things up any more".
tl;dr: if I were not going to run a jsperf test, I would go with the "Cached example" one example down.
A general rule of thumb is (read: was) that if you are going to use an element in an array more than once, it could be faster to cache it in a local variable, and if you are going to use a property on an object more than once, it should also be cached.
Example:
You have this code:
// Data generation (not discussed here)
function GetLotsOfItems() {
    var ret = [];
    for (var i = 0; i < 1000; i++) {
        ret[i] = { calc1: i * 4, calc2: i * 10, calc3: i / 5 };
    }
    return ret;
}

// Your calculation loop
var myArray = GetLotsOfItems();
for (var i = 0; i < myArray.length; i++) {
    var someResult = myArray[i].calc1 + myArray[i].calc2 + myArray[i].calc3;
}
Depending on your browser (read: this REALLY depends on your browser / its JavaScript engine) you could make this faster in a number of different ways.
You could, for example, cache the element being used in the calculation loop.
Cached example:
// Your cached calculation loop
var myArray = GetLotsOfItems();
var element;
var arrayLen = myArray.length;
for (var i = 0; i < arrayLen; i++) {
    element = myArray[i];
    var someResult = element.calc1 + element.calc2 + element.calc3;
}
You could also take this a step further and run it like this:
var myArray = GetLotsOfItems();
var element;
for (var i = myArray.length; i--;) { // Start at the last element, travel backwards to the start
    element = myArray[i];
    var someResult = element.calc1 + element.calc2 + element.calc3;
}
What you do here is start at the last element, then use the condition block to check whether i > 0, and only AFTER that lower it by one. This allows the loop to run with i == 0 (while --i would run from 1000 down to 1). However, in modern code this is usually slower, because you read the array backwards, and reading an array in the correct order usually allows either run-time or compile-time optimization (which is automatic, mind you, so you don't need to do anything for it to work). Depending on your JavaScript engine this might not be applicable, and the backwards-going loop could be faster.
However this will, in my experience, run slower in Chrome than the second "kinda-optimized" version (I have not tested this in jsperf, but in a CSP solver I wrote two years ago I ended up caching array elements, but not properties, and I ran my loops from 0 to length).
You should (in most cases) write your code in a way that makes it easy to read and maintain. Caching array elements is, in my opinion, as easy to read (if not easier) than non-cached elements, they might be faster (they are, at least, not slower), and they are quicker to write if you use an IDE with autocomplete for JavaScript :P
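If you want to measure this on your own engine, here is a rough micro-benchmark sketch using console.time (the sink variable is only there so the work isn't optimized away; exact numbers will vary a lot between engines and runs):

var myArray = GetLotsOfItems(); // from the example above
var sink = 0;

console.time('direct property access');
for (var i = 0; i < myArray.length; i++) {
    sink += myArray[i].calc1 + myArray[i].calc2 + myArray[i].calc3;
}
console.timeEnd('direct property access');

console.time('cached element');
for (var j = 0, len = myArray.length; j < len; j++) {
    var element = myArray[j];
    sink += element.calc1 + element.calc2 + element.calc3;
}
console.timeEnd('cached element');

console.log(sink); // keep the result observable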
Is there a way to achieve indexOf functionality, to find out whether a string is in an array, that is relatively fast for very big arrays? When my array grows beyond 40,000 values, my app freezes for a few seconds.
Consider the following code:
var arr = [];

function makeWord() {
    var text = "";
    var possible = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
    for (var i = 0; i < 5; i++)
        text += possible.charAt(Math.floor(Math.random() * possible.length));
    return text;
}

function populateArr() {
    for (var i = 0; i < 40000; i++) {
        arr[i] = makeWord();
    }
    console.log("finished populateArr");
}

function checkAgainst() {
    for (var i = 0; i < 40000; i++) {
        var wordToSearch = makeWord();
        if (isFound(wordToSearch)) {
            console.log("found " + wordToSearch);
        }
    }
    console.log("finished checkAgainst");
}

function isFound(wordToSearch) {
    //return $.inArray(wordToSearch, arr) > -1;
    return arr.indexOf(wordToSearch) > -1;
}

populateArr();
checkAgainst();
FIDDLE here
In this code I'm populating an array arr with 40k random strings. Then, in checkAgainst, I'm creating 40,000 other random strings, and each one is checked to see whether it is found in arr. This makes Chrome freeze for about 2 seconds. Opening the profiler in Chrome DevTools, I see that isFound is obviously expensive in terms of CPU. Even if I lower the for loop iteration count to just 4000 in checkAgainst, it still freezes for about a second or so.
In reality, I have a Chrome extension and an array of keywords that grows to about 10k strings. Then I have to use Array.indexOf to see whether chunks of 200 other keywords are in that array. This makes my page freeze every once in a while, and from this example I suspect this is the cause. Ideas?
Try using keys in an object instead:
var arr = {};

function makeWord() // unchanged

function populateArr() {
    for (var i = 0; i < 40000; i++) {
        arr[makeWord()] = true;
    }
    console.log("finished populateArr");
}

function checkAgainst() // unchanged

function isFound(wordToSearch) {
    return arr[wordToSearch];
}

populateArr();
checkAgainst();
If you then need the array of words, you can use Object.keys(arr)
Alternatively, combine the two: have an array and an object. Use the object to look up if a word is in the array, or the array to get the words themselves. This would be a classic compromise, trading memory usage for time.
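A more modern alternative (not from the original answer) is a Set, which also gives average constant-time lookups. A minimal sketch, reusing makeWord from above; populateWords is a hypothetical name mirroring populateArr:

var words = new Set();

function populateWords() {
    for (var i = 0; i < 40000; i++) {
        words.add(makeWord());
    }
    console.log("finished populateWords");
}

function isFound(wordToSearch) {
    return words.has(wordToSearch); // O(1) on average, like the object lookup
}

If you then need the words back as an array, Array.from(words) (or [...words]) does the same job as Object.keys(arr) above.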
I have a Node.js application where I have to very often do following things:
- check whether a particular array already contains a certain element
- if the element does exist, update it
- if the element does not exist, push it to the array and then sort it using underscore's _.sortBy
For checking if the element already exists in the array, I use this binary search function:
http://oli.me.uk/2013/06/08/searching-javascript-arrays-with-a-binary-search/
In this way, when the size of the array grows, the sorting becomes slower and slower.
I assume that the array size might grow to max 20 000 items per user. And eventually there will be thousands of users. The array is sorted by a key, which is quite a short string. It can be converted into integer if needed.
So, I would like a better way to keep the array sorted, instead of sorting it every time a new element is pushed onto it.
So, my question is: how should/could I edit the binary search algorithm I use, so that it gives me the array index where the new element should be placed if it doesn't already exist in the array?
Or what other possibilities there would be to achieve this. Of course, I could use some kind of loop that would start from the beginning and go through the array until it would find the place for the new element.
All the data is stored in MongoDB.
In other words, I would like to keep the array sorted without sorting it every time a new element is pushed.
It's easy to modify this binaryIndexOf function to return an index of the next element when no matches found:
function binaryFind(searchElement) {
    'use strict';
    var minIndex = 0;
    var maxIndex = this.length - 1;
    var currentIndex;
    var currentElement;

    while (minIndex <= maxIndex) {
        currentIndex = (minIndex + maxIndex) / 2 | 0; // Binary hack. Faster than Math.floor
        currentElement = this[currentIndex];

        if (currentElement < searchElement) {
            minIndex = currentIndex + 1;
        } else if (currentElement > searchElement) {
            maxIndex = currentIndex - 1;
        } else {
            return { // Modification
                found: true,
                index: currentIndex
            };
        }
    }

    return { // Modification
        found: false,
        index: minIndex // the insertion point; also correct when the array is empty
    };
}
So, now it returns objects like:
{found: false, index: 4}
where index is the index of the found element, or of the next one (the insertion point).
So, now insertion of a new element will look like:
var res = binaryFind.call(arr, element);
if (!res.found) arr.splice(res.index, 0, element);
Now you may add binaryFind to Array.prototype along with some helper for adding new elements:
Array.prototype.binaryFind = binaryFind;

Array.prototype.addSorted = function(element) {
    var res = this.binaryFind(element);
    if (!res.found) this.splice(res.index, 0, element);
};
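A quick usage sketch of that helper, assuming the elements compare correctly with < and >:

var sorted = [1, 3, 5, 9];
sorted.addSorted(4);   // inserted between 3 and 5
sorted.addSorted(5);   // already present (found is true), so nothing is inserted
console.log(sorted);   // [1, 3, 4, 5, 9]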
If your array is already sorted and you want to insert an element, to keep it sorted you need to insert it at a specific place in the array. Luckily arrays have a method that can do that:
Array.prototype.splice
So, once you get the index you need to insert at (which you should get with a simple modification to your binary search), you can do:
myArr.splice(myIndex,0,myObj);
// myArr your sorted array
// myIndex the index of the first item larger than the one you want to insert
// myObj the item you want to insert
EDIT: The author of your binary search code has the same idea:
So if you wanted to insert a value and wanted to know where you should
put it, you could run the function and use the returned number to
splice the value into the array.
Source
I know this is an answer to an old question, but the following is very simple using JavaScript's Array.prototype.splice().
function inOrder(arr, item) {
    /* Insert item into arr, keeping low-to-high order */
    let ix = 0;
    while (ix < arr.length) {
        //console.log('ix', ix);
        if (item < arr[ix]) { break; }
        ix++;
    }
    //console.log(' insert:', item, 'at:', ix);
    arr.splice(ix, 0, item);
    return arr;
}
The order can be changed to high-to-low by inverting the comparison test.
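For example:

var nums = [1, 3, 5, 9];
inOrder(nums, 4);
console.log(nums); // [1, 3, 4, 5, 9]

Note that this is a linear scan, so for very large arrays the binary-search insertion above will find the insertion point faster.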