Array.prototype.move = function(oldIndex, newIndex) {
var val = this.splice(oldIndex, 1);
this.splice(newIndex, 0, val[0]);
}
//Testing - Change array position
var testarray = [1, 2, 3, 4];
testarray.move(3, 0);
console.log(testarray);
This produces an error "this.splice is not a function" yet it returns the desired results. Why?
Array.prototype.move = function(oldIndex, newIndex) {
if(Object.prototype.toString.call(this) === '[object Array]') {
if(typeof oldIndex === 'number' && typeof newIndex === 'number') {
if(newIndex > this.length) newIndex = this.length;
this.splice(newIndex, 0, this.splice(oldIndex, 1)[0]);
}
}
};
For some reason, the function was being called by the document on load (I still haven't quite figured that one out; most likely a for...in loop elsewhere enumerates the new prototype property). I added a few checks to verify that this is an array, and I also reset the new index to the array length if the supplied index was greater than the total length. This solved the error I was having, and to me this is the simplest way to move items around in an array. As for why the function is called on load, it must be something to do with my code.
You don't need the placeholder variable:
Array.prototype.move = function(oldIndex, newIndex) {
this.splice(newIndex, 0, this.splice(oldIndex, 1)[0]);
}
var a=[1,2,3,4,9,5,6,7,8];
a.move(4,8);
a[8]
/* returned value: (Number)
9
*/
Adding properties to built-in objects is not a good idea if your code must work in arbitrary environments. If you do extend such objects, you shouldn't use property names that are likely to be used by someone else doing the same or similar thing.
There seems to be more than one way to "move" a member; what you seem to be doing would be better named "swap", so:
if (!Array.prototype.swap) {
Array.prototype.swap = function(a, b) {
var t = this[a];
this[a] = this[b];
this[b] = t;
}
}
I expect that simple re-assignment of values is more efficient than calling methods that need to create new arrays and modify the old one a number of times. But that might be moot anyway. The above is certainly simpler to read and is fewer characters to type.
Note also that the above is stable: array.swap(4,8) gives the same result as array.swap(8,4).
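For example, a quick check of that claim:
var a = [1, 2, 3, 4, 9, 5, 6, 7, 8];
var b = [1, 2, 3, 4, 9, 5, 6, 7, 8];
a.swap(4, 8);
b.swap(8, 4);
console.log(a); // [1, 2, 3, 4, 8, 5, 6, 7, 9]
console.log(b); // [1, 2, 3, 4, 8, 5, 6, 7, 9] - same result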
If you want to make a robust function, you first need to work out what to do in cases where either index is greater than array.length, or if one doesn't exist, and so on. e.g.
var a = [,,2]; // a has length 3
a.swap(0,2);
In the above, there are no members at 0 or 1, only at 2. So should the result be:
a = [2]; // a has length 1
or should it be (which will be the result of the above):
a = [2,,undefined]; // a has length 3
or
a = [2,,,]; // a has length 3 (IE may think it's 4, but that's wrong)
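For instance, here is a sketch of one possible choice (robustSwap is just an illustrative name, and the "ignore out-of-range indices, keep holes as holes" semantics are only one option, since the answer deliberately leaves this open); it produces the third result above:
if (!Array.prototype.robustSwap) {
  Array.prototype.robustSwap = function(a, b) {
    // ignore the call if either index is out of range
    if (a < 0 || b < 0 || a >= this.length || b >= this.length) return this;
    var hasA = a in this, hasB = b in this; // detect holes
    var t = this[a];
    if (hasB) this[a] = this[b]; else delete this[a]; // keep holes as holes
    if (hasA) this[b] = t; else delete this[b];
    return this;
  };
}
var c = [,,2];      // like the earlier a: holes at 0 and 1, length 3
c.robustSwap(0, 2);
// c is now [2,,,] - length 3, with holes at 1 and 2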
Edit
Note that in the OP, the result of:
var b = [,,2];
b.move(0,2);
is
alert(b); // [,2,];
which may not be what is expected, and
b.move(2,0);
alert(b); // [2,,];
so it is not stable either.
Related
I am a newbie to JavaScript algorithms and cannot understand this optimal solution to the 2-sum problem.
function twoNumberSum(array, target) {
const nums = {};
for (const num of array) {
const potentialMatch = target - num;
console.log('potential', potentialMatch);
if (potentialMatch in nums) {
return [potentialMatch, num]
} else {
nums[num] = true;
}
}
}
So the 2-sum problem basically says "find two numbers in an array that sum to the given target, and return them". Let's walk through this code and talk about what's happening.
First, we start the function; I'm going to assume this makes sense (a function that's called twoNumberSum that takes in two arguments; namely, array and target) - note that in JS, we don't annotate types, so there is no return type
Now, the first thing we do is create a new object called nums. In JS, objects are effectively hash maps (with some very important differences - see my note below); they store keys and corresponding values. In JS, object keys are strings (or Symbols); numbers used as keys are coerced to strings.
Next, we start our iteration. If we do for (const a of b), and b is an array, this iterates over all the values of the array, with each iteration having that value stored in a.
Next, we subtract our current value from the target. Then comes the key line: if (potentialMatch in nums). The in keyword checks for the existence of a key: 'hello' in obj returns true if obj has the key 'hello'.
In this case, if we find this potential match, then that means we have found another number that is equal to target - num, which of course means we've found the other partner for our sum! So in this case, we simply return the two numbers. If, on the other hand, we do not find this potentialMatch, that means we need to keep looking. But we do want to remember we've seen this number - thus, we add it as a key by doing nums[num] = true (this creates a new key-value pair; namely the key is num and the value is true).
As one of the comments explained, this is just trying to keep track of a list of numbers; however, the author is trying to be clever by using a Hash Table instead of a normal array. This way, lookups are O(1) instead of O(n). For eyes not used to JS semantics, another way of explaining this code is that we build up a Map of the numbers, and then we check that map for our target value.
I mentioned earlier that using objects as hash tables isn't the best idea; this is because, if you aren't careful with user-provided keys, you can accidentally mess with what's called the Prototype Chain. This is beyond this discussion, but a better way forward would be to use a Set:
function twoNumberSum(array, target) {
// Create a new Hash Set. Sets take in an iterable, so we could
// Do it this way. But to remain as close to your original solution
// as possible, we won't for now, and instead populate it as we go
// const nums = new Set(array);
const nums = new Set();
for (const num of array) {
const potentialMatch = target - num;
if (nums.has(potentialMatch)) {
return [potentialMatch, num];
} else {
nums.add(num);
}
}
}
Sometimes, the problem instead asks for you to return the indices; using a Map instead makes this relatively trivial. Just store the index as the value and you're good to go!
function twoNumberSum(array, target) {
// Create the new map instead
const nums = new Map();
for (let n = 0; n < array.length; ++n) {
const potentialMatch = target - array[n];
if (nums.has(potentialMatch)) {
return [nums.get(potentialMatch), n];
} else {
nums.set(array[n], n);
}
}
}
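For example, with some sample input (the numbers are arbitrary):
console.log(twoNumberSum([1, 2, 4, 5, 6, 7, 8], 9)); // [2, 3], because array[2] + array[3] = 4 + 5 = 9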
Let me explain how this all works.
function twoNumberSum(array, target) {
// This is an object in JavaScript
const nums = {};
for (const num of array) { // This is a for...of loop, which iterates over the array.
//For of Doc - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...of
// Here it calculates the potential match.
const potentialMatch = target - num;
console.log('potential - ' + potentialMatch);
/**
* Now the `in` operator is used, which checks whether a property exists in an object.
* in usage - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/in
*
* It checks whether the potential match exists in the `nums` object. If it exists, the function
* returns an array with potentialMatch and the num it matched with.
*
* If the number is not in the nums object, it is added in the else block
* so it can be matched in a later iteration.
*/
if (potentialMatch in nums) {
return [potentialMatch, num]
} else {
nums[num] = true;
/**
* When the potential match doesn't exist in nums, the current number is recorded so it can be checked in later iterations, e.g.
* {
* 1: true,
* 2: true
* }
*/
}
console.log(nums)
}
}
console.log(twoNumberSum([1, 2, 4, 5, 6, 7, 8], 3))
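// This logs [1, 2], because 1 + 2 === 3.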
You can also run it in JSBin.
I would like to filter an array of items by using the map() function. Here is a code snippet:
var filteredItems = items.map(function(item)
{
if( ...some condition... )
{
return item;
}
});
The problem is that the filtered-out items still take up space in the array, and I would like to completely remove them.
Any idea?
EDIT: Thanks, I forgot about filter(); what I wanted is actually a filter() followed by a map().
EDIT2: Thanks for pointing out that map() and filter() are not implemented in all browsers, although my specific code was not intended to run in a browser.
You should use the filter method rather than map unless you want to mutate the items in the array, in addition to filtering.
eg.
var filteredItems = items.filter(function(item)
{
return ...some condition...;
});
[Edit: Of course you could always do sourceArray.filter(...).map(...) to both filter and mutate]
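For example (items, isActive, and name here are just placeholder names, not from the question):
var results = items
  .filter(function(item) { return item.isActive; }) // keep only the items you want
  .map(function(item) { return item.name; });       // then transform the survivors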
Inspired by writing this answer, I ended up later expanding and writing a blog post going over this in careful detail. I recommend checking that out if you want to develop a deeper understanding of how to think about this problem--I try to explain it piece by piece, and also give a JSperf comparison at the end, going over speed considerations.
That said, the tl;dr is this:
**To accomplish what you're asking for (filtering and mapping within one function call), you would use Array.reduce().**
However, the more readable and (less importantly) usually significantly faster approach is to just use filter and map chained together:
[1,2,3].filter(num => num > 2).map(num => num * 2)
What follows is a description of how Array.reduce() works, and how it can be used to accomplish filter and map in one iteration. Again, if this is too condensed, I highly recommend seeing the blog post linked above, which is a much more friendly intro with clear examples and progression.
You give reduce an argument that is a (usually anonymous) function.
That anonymous function takes two parameters--one (like the anonymous functions passed in to map/filter/forEach) is the iteratee to be operated on. There is another argument for the anonymous function passed to reduce, however, that those functions do not accept, and that is the value that will be passed along between function calls, often referred to as the memo.
Note that while Array.filter() takes only one argument (a function), Array.reduce() also takes an important (though optional) second argument: an initial value for 'memo' that will be passed into that anonymous function as its first argument, and subsequently can be mutated and passed along between function calls. (If it is not supplied, then 'memo' in the first anonymous function call will by default be the first iteratee, and the 'iteratee' argument will actually be the second value in the array)
In our case, we'll pass in an empty array to start, and then choose whether to inject our iteratee into our array or not based on our function--this is the filtering process.
Finally, we'll return our 'array in progress' on each anonymous function call, and reduce will take that return value and pass it as an argument (called memo) to its next function call.
This allows filter and map to happen in one iteration, cutting our number of required iterations in half--we just do twice as much work each iteration, so nothing is really saved other than function calls, which are not so expensive in JavaScript.
For a more complete explanation, refer to MDN docs (or to my post referenced at the beginning of this answer).
Basic example of a Reduce call:
let array = [1,2,3];
const initialMemo = [];
array = array.reduce((memo, iteratee) => {
// if condition is our filter
if (iteratee > 1) {
// what happens inside the filter is the map
memo.push(iteratee * 2);
}
// this return value will be passed in as the 'memo' argument
// to the next call of this function, and this function will have
// every element passed into it at some point.
return memo;
}, initialMemo)
console.log(array) // [4,6], equivalent to [(2 * 2), (3 * 2)]
more succinct version:
[1,2,3].reduce((memo, value) => value > 1 ? memo.concat(value * 2) : memo, [])
Notice that the first iteratee was not greater than one, and so was filtered. Also note the initialMemo, named just to make its existence clear and draw attention to it. Once again, it is passed in as 'memo' to the first anonymous function call, and then the returned value of the anonymous function is passed in as the 'memo' argument to the next function.
Another example of the classic use case for memo would be returning the smallest or largest number in an array. Example:
[7,4,1,99,57,2,1,100].reduce((memo, val) => memo > val ? memo : val)
// ^this would return the largest number in the list.
An example of how to write your own reduce function (this often helps understanding functions like these, I find):
var test_arr = [];
// we accept an anonymous function, and an optional 'initial memo' value.
test_arr.my_reducer = function(reduceFunc, initialMemo) {
// if we did not pass in a second argument, then our first memo value
// will be whatever is in index zero. (Otherwise, it will
// be that second argument.)
const initialMemoIsIndexZero = arguments.length < 2;
// here we use that logic to set the memo value accordingly.
let memo = initialMemoIsIndexZero ? this[0] : initialMemo;
// here we use that same boolean to decide whether the first
// value we pass in as iteratee is either the first or second
// element
const initialIteratee = initialMemoIsIndexZero ? 1 : 0;
for (var i = initialIteratee; i < this.length; i++) {
// memo is either the argument passed in above, or the
// first item in the list. initialIteratee is either the
// first item in the list, or the second item in the list.
memo = reduceFunc(memo, this[i]);
// or, more technically complete, give access to base array
// and index to the reducer as well:
// memo = reduceFunc(memo, this[i], i, this);
}
// after we've compressed the array into a single value,
// we return it.
return memo;
}
The real implementation allows access to things like the index, for example, but I hope this helps you get an uncomplicated feel for the gist of it.
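A quick usage check of the hand-rolled version above (summing is just an arbitrary example):
test_arr.push(1, 2, 3);
// with an explicit initial memo of 0, this sums the array
console.log(test_arr.my_reducer(function(memo, val) { return memo + val; }, 0)); // 6
// without an initial memo, the first element is used as the starting memo
console.log(test_arr.my_reducer(function(memo, val) { return memo + val; })); // 6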
That's not what map does. You really want Array.filter. Or if you really want to remove the elements from the original list, you're going to need to do it imperatively with a for loop.
Array Filter method
var arr = [1, 2, 3]
// ES5 syntax
arr = arr.filter(function(item){ return item != 3 })
// ES2015 syntax
arr = arr.filter(item => item != 3)
console.log( arr )
Note, however, that Array.filter is not supported in all (older) browsers, so you may need to add a polyfill:
//This prototype is provided by the Mozilla foundation and
//is distributed under the MIT license.
//http://www.ibiblio.org/pub/Linux/LICENSES/mit.license
if (!Array.prototype.filter)
{
Array.prototype.filter = function(fun /*, thisp*/)
{
var len = this.length;
if (typeof fun != "function")
throw new TypeError();
var res = new Array();
var thisp = arguments[1];
for (var i = 0; i < len; i++)
{
if (i in this)
{
var val = this[i]; // in case fun mutates this
if (fun.call(thisp, val, i, this))
res.push(val);
}
}
return res;
};
}
Using this approach, you can polyfill any method you may need.
TLDR: Use map (returning undefined when needed) and then filter.
First, I believe that a map + filter function is useful since you don't want to repeat a computation in both. Swift originally called this function flatMap but then renamed it to compactMap.
For example, if we don't have a compactMap function, we might end up with computation defined twice:
let array = [1, 2, 3, 4, 5, 6, 7, 8];
let mapped = array
.filter(x => {
let computation = x / 2 + 1;
let isIncluded = computation % 2 === 0;
return isIncluded;
})
.map(x => {
let computation = x / 2 + 1;
return `${x} is included because ${computation} is even`
})
// Output: [2 is included because 2 is even, 6 is included because 4 is even]
Thus compactMap would be useful to reduce duplicate code.
A really simple way to do something similar to compactMap is to:
Map on real values or undefined.
Filter out all the undefined values.
This of course relies on you never needing to return undefined values as part of your original map function.
Example:
let array = [1, 2, 3, 4, 5, 6, 7, 8];
let mapped = array
.map(x => {
let computation = x / 2 + 1;
let isIncluded = computation % 2 === 0;
if (isIncluded) {
return `${x} is included because ${computation} is even`
} else {
return undefined
}
})
.filter(x => typeof x !== "undefined")
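If you use this pattern a lot, you could wrap it in a small helper. Note that compactMap is not a built-in Array method; this is just a sketch:
function compactMap(array, fn) {
  // map, then drop the undefined results
  return array.map(fn).filter(x => typeof x !== "undefined");
}

// same result as the example above
let result = compactMap([1, 2, 3, 4, 5, 6, 7, 8], x => {
  let computation = x / 2 + 1;
  return computation % 2 === 0
    ? `${x} is included because ${computation} is even`
    : undefined;
});
// ["2 is included because 2 is even", "6 is included because 4 is even"]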
I just wrote an array intersection that also correctly handles duplicates:
https://gist.github.com/gkucmierz/8ee04544fa842411f7553ef66ac2fcf0
// array intersection that also correctly handles duplicates
const intersection = (a1, a2) => {
  const cnt = new Map();
  a2.forEach(el => cnt.set(el, (cnt.get(el) || 0) + 1));
  return a1.filter(el => {
    const left = cnt.get(el);
    if (!left) return false;
    cnt.set(el, left - 1);
    return true;
  });
};
const l = console.log;
l(intersection('1234'.split``, '3456'.split``)); // [ '3', '4' ]
l(intersection('12344'.split``, '3456'.split``)); // [ '3', '4' ]
l(intersection('1234'.split``, '33456'.split``)); // [ '3', '4' ]
l(intersection('12334'.split``, '33456'.split``)); // [ '3', '3', '4' ]
First you can use map, and then, by chaining, you can use filter:
state.map(item => {
if(item.id === action.item.id){
return {
id : action.item.id,
name : item.name,
price: item.price,
quantity : item.quantity-1
}
}else{
return item;
}
}).filter(item => {
if(item.quantity <= 0){
return false;
}else{
return true;
}
});
The following statement cleans up the objects using the map function:
var arraytoclean = [{v:65, toberemoved:"gronf"}, {v:12, toberemoved:null}, {v:4}];
arraytoclean.map((x,i)=>x.toberemoved=undefined);
console.dir(arraytoclean);
I have a general question which is about whether it is possible to make zero-allocation iterators in Javascript. Note that by "iterator" I am not married to the current definition of iterator in ECMAScript, but just a general pattern for iterating over user-defined ranges.
To make the problem concrete, say I have a list like [5, 5, 5, 2, 2, 1, 1, 1, 1] and I want to group adjacent repetitions together, and process it into a form which is more like [5, 3], [2, 2], [1, 4]. I then want to access each of these pairs inside a loop, something like "for each pair in grouped(array), do something with pair". Furthermore, I want to reuse this grouping algorithm in many places, and crucially, in some really hot inner loops (think millions of loops per second).
Question: Is there an iteration pattern to accomplish this which has zero overhead, as if I hand-wrote the loop myself?
Here are the things I've tried so far. Let's suppose for concreteness that I am trying to compute the sum of all pairs. (To be clear I am not looking for alternative ways of writing this code, I am looking for an abstraction pattern: the code is just here to provide a concrete example.)
Inlining the grouping code by hand. This method performs the best, but obscures the intent of the computation. Furthermore, inlining by hand is error-prone and annoying.
function sumPairs(array) {
let sum = 0
for (let i = 0; i != array.length; ) {
let elem = array[i++], count = 1
while (i < array.length && array[i] == elem) { i++; count++; }
// Here we can actually use the pair (elem, count)
sum += elem + count
}
return sum
}
Using a visitor pattern. We can write a reduceGroups function which will call a given visitor(acc, elem, count) for each pair (elem, count), similar to the usual Array.reduce method. With that our computation becomes somewhat clearer to read.
function sumPairsVisitor(array) {
return reduceGroups(array, (sofar, elem, count) => sofar + elem + count, 0)
}
Unfortunately, Firefox in particular still allocates when running this function, unless the closure definition is manually moved outside the function. Furthermore, we lose the ability to use control structures like break unless we complicate the interface a lot.
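For reference, here is a minimal sketch of what such a reduceGroups helper might look like (it is not shown above; this is just an assumed shape based on the visitor(acc, elem, count) signature described):
function reduceGroups(array, visitor, initial) {
  let acc = initial
  for (let i = 0; i != array.length; ) {
    let elem = array[i++], count = 1
    while (i < array.length && array[i] == elem) { i++; count++; }
    acc = visitor(acc, elem, count) // one call per (elem, count) group
  }
  return acc
}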
Writing a custom iterator. We can make a custom "iterator" (not an ES6 iterator) which exposes elem and count properties, an empty property indicating that there are no more pairs remaining, and a next() method which updates elem and count to the next pair. The consuming code looks like this:
function sumPairsIterator(array) {
let sum = 0
for (let iter = new GroupIter(array); !iter.empty; iter.next())
sum += iter.elem + iter.count
return sum
}
I find this code the easiest to read, and it seems to me that it should be the fastest method of abstraction. (In the best possible case, scalar replacement could completely collapse the iterator definition into the function. In the second best case, it should be clear that the iterator does not escape the for loop, so it can be stack-allocated). Unfortunately, both Chrome and Firefox seem to allocate here.
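For completeness, a sketch of the kind of GroupIter described here (the real implementation is not shown; this is just one possible shape exposing elem, count, empty and next()):
class GroupIter {
  constructor(array) {
    this._array = array
    this._pos = 0
    this.elem = undefined
    this.count = 0
    this.empty = array.length === 0
    if (!this.empty) this._advance()
  }
  _advance() {
    // load the group starting at _pos into elem/count, then move _pos past it
    const a = this._array
    this.elem = a[this._pos]
    let i = this._pos + 1
    while (i < a.length && a[i] === this.elem) i++
    this.count = i - this._pos
    this._pos = i
  }
  next() {
    if (this._pos >= this._array.length) this.empty = true
    else this._advance()
  }
}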
Of the approaches above, the custom-defined iterator performs quite well in most cases, except when you really need to put the pedal to the metal in a hot inner loop, at which point the GC pressure becomes apparent.
I would also be ok with a Javascript post-processor (the Google Closure Compiler perhaps?) which is able to accomplish this.
Check this out. I've not tested its performance but it should be good.
(+) (mostly) compatible with ES6 iterators.
(-) sacrificed ...GroupingIterator.from(arr) (spreading) in order not to create an (IMO garbage) value object for every step. That accounts for the "mostly" in the point above.
AFAIK, the primary use case for this is a for...of loop anyway.
(+) no objects created (GC)
(+) object pooling for the iterators (again, GC)
(+) compatible with control structures like break
class GroupingIterator {
/* object pooling */
static from(array) {
const instance = GroupingIterator._pool || new GroupingIterator();
GroupingIterator._pool = instance._pool;
instance._pool = null;
instance.array = array;
instance.done = false;
return instance;
}
static _pool = null;
_pool = null;
/* state and value / payload */
array = null;
element = null;
index = 0;
count = 0;
/* IteratorResult interface */
value = this;
done = true;
/* Iterator interface */
next() {
const array = this.array;
let index = this.index += this.count;
if (!array || index >= array.length) {
return this.return();
}
const element = this.element = array[index];
while (++index < array.length) {
if (array[index] !== element) break;
}
this.count = index - this.index;
return this;
}
return() {
this.done = true;
// cleanup
this.element = this.array = null;
this.count = this.index = 0;
// return iterator to pool
this._pool = GroupingIterator._pool;
return GroupingIterator._pool = this;
}
/* Iterable interface */
[Symbol.iterator]() {
return this;
}
}
var arr = [5, 5, 5, 2, 2, 1, 1, 1, 1];
for (const item of GroupingIterator.from(arr)) {
console.log("element", item.element, "index", item.index, "count", item.count);
}
Given following array:
var arr = [undefined, undefined, 2, 5, undefined, undefined];
I'd like to get the count of elements which are defined (i.e.: those which are not undefined). Other than looping through the array, is there a good way to do this?
In recent browsers, you can use filter:
var size = arr.filter(function(value) { return value !== undefined }).length;
console.log(size);
Another method, if the browser supports indexOf for arrays:
var size = arr.slice(0).sort().indexOf(undefined);
If, absurdly, you have only one-digit elements in the array, you could use this dirty trick (join() renders undefined entries as empty strings, so only the defined digits contribute to the length):
console.log(arr.join("").length);
There are several methods you can use, but in the end we have to ask whether it's really worth doing these instead of a simple loop.
An array's length is not the number of elements in the array; it is the highest index + 1. The length property will report the correct element count only if there are valid elements at consecutive indices.
var a = [];
a[23] = 'foo';
a.length; // 24
That said, there is no way to exclude undefined elements from the count without using some form of loop.
No, the only way to know how many elements are not undefined is to loop through and count them. That doesn't mean you have to write the loop, though, just that something, somewhere has to do it. (See #3 below for why I added that caveat.)
How you loop through and count them is up to you. There are lots of ways:
A standard for loop from 0 to arr.length - 1 (inclusive).
A for..in loop provided you take correct safeguards.
Any of several of the new array features from ECMAScript5 (provided you're using a JavaScript engine that supports them, or you've included an ES5 shim, as they're all shim-able), like some, filter, or reduce, passing in an appropriate function. This is handy not only because you don't have to explicitly write the loop, but because using these features gives the JavaScript engine the opportunity to optimize the loop it does internally in various ways. (Whether it actually does will vary on the engine.)
...but it all amounts to looping, either explicitly or (in the case of the new array features) implicitly.
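For example, a reduce-based count (a sketch using one of the ES5 methods mentioned above):
var definedCount = arr.reduce(function(count, value) {
  return value !== undefined ? count + 1 : count;
}, 0);
// with arr = [undefined, undefined, 2, 5, undefined, undefined], definedCount is 2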
Loop and count in all browsers:
var cnt = 0;
for (var i = 0; i < arr.length; i++) {
if (arr[i] !== undefined) {
++cnt;
}
}
In modern browsers:
var cnt = 0;
arr.forEach(function(val) {
if (val !== undefined) { ++cnt; }
})
Unfortunately, no. You will have to go through a loop and count them.
EDIT :
var arrLength = arr.filter(Number).length;
alert(arrLength);
If the undefineds are implicit, then you can do:
var len = 0;
for (var i in arr) { len++ };
undefineds are implicit if you don't set them explicitly:
// both arr[0] and arr[3] are explicit undefined
var arr = [undefined, 1, 2, undefined];
arr[6] = 3;
//now arr[4] and arr[5] are implicit undefined
delete arr[1]
//now arr[1] is implicit undefined
arr[2] = undefined
//now arr[2] is explicit undefined
Remove the values, then check (remove the null check here if you want):
const x = A.filter(item => item !== undefined && item !== null).length;
With Lodash
const x = _.size(_.filter(A, item => !_.isNil(item)))
To add the size() method to the Array object I do:
if (!Array.size) {
Array.prototype.size = function() {
return this.length;
};
}
Is there a simple way to define the size property that will work like length?
(I don't really need it, I just want to understand if this is something easily achievable in Javascript.)
With ES5 it's possible. Add a size property on the Array prototype,
Object.defineProperty(Array.prototype, "size", {
get: function() {
return this.length;
},
set: function(newLength) {
this.length = newLength;
}
});
var x = [1, 2, 3];
x.length // 3
x.size // 3
x.push(4);
x // [1, 2, 3, 4]
x.length // 4
x.size // 4
x.length = 2;
x // [1, 2]
x.size = 1;
x // [1]
It basically works as a wrapper around length. Appears like a property to the naked eye, but is backed by an underlying function.
Thanks to @Matthew's comment, this size property works like a complete wrapper around length.
I'll assume you already know this is not a real life issue.
The length property is automatically updated to match the size of the array, and it is kept in sync by methods such as shift() and pop().
So you would need a getter that simply returns the length property.
Anurag shows you how it can be done with ES5.
if(!Array.size)
{
Array.prototype.__defineGetter__('size', function(){
return this.length;
});
}
var a = new Array();
a.push(1);
a.push(4);
console.log(a.size);
Although I'm not entirely sure how cross-browser friendly that is (should work on Chrome and FF at least).