I have an array of arrays of different sizes. The goal is to generate "rows" where each row can contain a max of 12 elements.
For example:
Input data can be something like this:
const groups = [[1,2,3,4],[1,2,3,4,5,6], [1,2,3,4,5,6,7,8,9,10,11,12], [1,2,3,4,5,6,7], [1,2,3],[1,2,3]]
groups[0].length + groups[1].length = 10 -> row0
groups[2].length = 12 -> row1
groups[3].length + groups[4].length = 10 -> row2
groups[5].length = 3 -> row3
Output for such array should be:
[[[1,2,3,4], [1,2,3,4,5,6]], [[1,2,3,4,5,6,7,8,9,10,11,12]], [[1,2,3,4,5,6,7], [1,2,3]], [[1,2,3]]]
I was thinking of a recursive function for this but couldn't figure out how to solve it.
You can use Array#reduce() to do this.
The code first checks whether the current last element (the last "row") would stay at 12 numbers or fewer if you add the next group:
(acc[acc.length - 1].flat().length + cv.length) <= 12
If the total is 12 or fewer, the group gets pushed into that "row":
acc[acc.length - 1].push(cv)
and if not, a new "row" will be added to the outer array:
acc.push([cv])
const groups = [[1, 2, 3, 4],[1, 2, 3, 4, 5, 6],[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],[1, 2, 3, 4, 5, 6, 7],[1, 2, 3],[1, 2, 3]];
const rows = groups.reduce((acc, cv) => {
  (acc[acc.length - 1].flat().length + cv.length) <= 12 ?
    acc[acc.length - 1].push(cv) :
    acc.push([cv])
  return acc
}, [[]]);
console.log(JSON.stringify(rows))
Here's one way to solve it recursively:
const regroup = (max, [g, ...gs], filled = [], curr = [], cc = 0) =>
  g == undefined
    ? filled .concat ([curr])
    : g .length + cc <= max
      ? regroup (max, gs, filled, curr .concat ([g]), cc + g.length)
      : regroup (max, gs, filled .concat ([curr]), [g], g .length)
const groups = [[1, 2, 3, 4], [1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
[1, 2, 3, 4, 5, 6, 7], [1, 2, 3], [1, 2, 3]]
console .log (regroup (12, groups))
We pass the maximum size and the list of items, and then we default three parameters:
filled will track the output rows we've filled; it starts with an empty array
curr stores the row we're working on; it also starts with an empty array
cc stores the count of all elements in the current row; it starts with zero
On each recursive call, we have one of three possibilities:
There are no more arrays to process, and we return all the filled rows with the current row appended.
The next array is small enough to fit in the current row, so we append it to the current row and add its length to the count.
The next array is too large, so we append the existing current row to the filled ones and start a new current row containing just this array, setting the count to its length. (A short trace for the sample input follows.)
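For concreteness, here is a trace of the first few calls of regroup(12, groups) for the sample input:
// call 1: g = [1,2,3,4],     cc = 0  -> 4 <= 12,  curr = [[1,2,3,4]],               cc = 4
// call 2: g = [1,2,3,4,5,6], cc = 4  -> 10 <= 12, curr = [[1,2,3,4],[1,2,3,4,5,6]], cc = 10
// call 3: g = [1..12],       cc = 10 -> 22 > 12,  filled = [curr], curr = [[1..12]], cc = 12
// ...and so on, until the group list is empty and the final curr is appended to filled.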
With an array of: [1, 2, 3, 4, 5, 6]
I would like to delete the elements between two indices, such as 2 and 4, to produce [1, 2, null, null, 5, 6]. What's the easiest way to do this?
Hopefully better than this:
const array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
let i = 2;
const rangeEnd = 9;
while (i < rangeEnd) {
  delete array[i];
  i++;
}
console.log(array)
If you want to use a native API, you can do this with splice(). Otherwise, you can loop over the relevant indices and overwrite each value.
Here is an example of how it would be done:
const array = [1, 2, 3, 4, 5, 6]
array.splice(2, 2, null, null) // the first argument is the start index, the second is how many elements to remove from that index, and the remaining arguments are the replacement elements
console.log(array) // [1, 2, null, null, 5, 6]
Note: for more details, see the MDN documentation for Array.prototype.splice().
For replacing a larger range, there is a workaround worth mentioning:
var arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var anotherArr = Array(2).fill(null); // or you can simply define [null, null, ...]
Array.prototype.splice.apply(arr, [3, anotherArr.length].concat(anotherArr));
console.log(arr);
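If ES6 spread syntax is available, the same replacement works without apply(); a minimal sketch (arr2 and nulls are just illustrative names):
var arr2 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var nulls = Array(2).fill(null);
// spread the replacement values directly into splice()
arr2.splice(3, nulls.length, ...nulls);
console.log(arr2); // [1, 2, 3, null, null, 6, 7, 8, 9, 10]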
If you mean the range (2, 4], i.e. lower limit exclusive and upper limit inclusive, you can do this:
const arr = [1, 2, 3, 4, 5, 6];
const deleteRange = (arr, f, t) => {
  return arr.map((item, i) => {
    if (i + 1 > f && i + 1 <= t) {
      return null;
    }
    return item;
  })
}
console.log(deleteRange(arr, 2, 4));
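If mutating the array in place is fine, Array.prototype.fill() covers this case too; a small sketch, assuming the same (2, 4] interpretation (nums is just an illustrative name):
const nums = [1, 2, 3, 4, 5, 6];
// fill(value, start, end) writes value from start (inclusive) to end (exclusive)
nums.fill(null, 2, 4);
console.log(nums); // [1, 2, null, null, 5, 6]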
I have a D3 programme that builds a line chart with two lines, and I want to find the coordinates where the lines cross. Does D3 have a function for this? The arrays are often of differing lengths and are dynamically generated, but will always have one value at which both will be equal at the same index.
e.g.
var line1 = [0,1,2,3,4];
var line2 = [4,3,2,1,0];
Answer = index 2, in this case. If there is no D3 function for this, what would be the best approach using ES6 and above?
D3 has a little-known method named d3.zip, which we can use to merge the arrays and look for any inner array in which all the elements are equal:
var line1 = [0, 1, 2, 3, 4];
var line2 = [4, 3, 2, 1, 0];
var zip = d3.zip(line1, line2).reduce(function(a, c, i) {
  if (c[0] === c[1]) a.push(i);
  return a;
}, []);
console.log(zip)
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/5.7.0/d3.min.js"></script>
The nice thing about d3.zip is that it can be used with several arrays, and it truncates to the length of the shortest one. So, in a more complex case (equal values at indices 6 and 9):
var line1 = [0, 1, 2, 3, 9, 9, 1, 4, 7, 6, 5, 4];
var line2 = [4, 3, 2, 1, 0, 8, 1, 2, 3, 6, 1];
var line3 = [9, 9, 9, 9, 4, 1, 1, 1, 1, 6, 1, 1, 1, 1, 1, 1];
var zip = d3.zip(line1, line2, line3).reduce(function(a, c, i) {
  const every = c.every(function(e) {
    return e === c[0]
  })
  if (every) a.push(i);
  return a;
}, []);
console.log(zip)
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/5.7.0/d3.min.js"></script>
const [longer, shorter] = line1.length > line2.length ? [line1, line2] : [line2, line1];
const crossIndex = shorter.reduce((crossIndex, value, index) => {
  if (crossIndex !== null) {
    // already found it, nothing left to do
    return crossIndex;
  }
  // found it, return this index
  if (value === longer[index]) return index;
  // not found yet, keep searching
  return crossIndex;
}, null)
If you don't mind the search running a few extra useless iterations, the first step, where we find out which line is longer, is not actually necessary: you can call .reduce() on either line1 or line2 and compare against the other, and you'll still get the same result.
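Since only the first matching index is needed, Array.prototype.findIndex() gives an even shorter sketch of the same idea, reusing the shorter/longer split from above (crossIdx is just an illustrative name):
// index of the first position where both lines hold the same value; -1 if there is none
const crossIdx = shorter.findIndex((value, index) => value === longer[index]);
console.log(crossIdx);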
I'm trying to predictably shuffle javascript arrays the same way each time the webpage is loaded.
I can shuffle the arrays randomly, but every time I reload the page it's a different sequence.
I'd like it to shuffle the arrays the same way every time the page loads. There are many arrays and they are part of a procedurally generated world.
Chance.js worked perfectly. Thank you Billy Moon.
My Example:
<script type="text/javascript" src="assets/js/chance.js"></script>
var chance1 = new Chance(124); // you can choose a seed here, I chose 124
console.log(chance1.shuffle(['alpha', 'bravo', 'charlie', 'delta', 'echo']));
// Array [ "alpha", "delta", "echo", "charlie", "bravo" ]
As long as you set the seed with new Chance(xxx) you get the same result every time.
Take a look at chancejs.com's seed function.
In order to shuffle an array in a seemingly random and predetermined way, you can break the problem into two parts.
1. Generate pseudo random numbers
You could use a different PRNG, but the Xorshift is very simple, fast to both initialise and step through, and evenly distributed.
This function takes an integer as a seed value and returns a random function that, for a given seed, always produces the same sequence of floating-point values in the range 0 to 1.
const xor = seed => {
  const baseSeeds = [123456789, 362436069, 521288629, 88675123]
  let [x, y, z, w] = baseSeeds
  const random = () => {
    const t = x ^ (x << 11)
    ;[x, y, z] = [y, z, w]
    w = w ^ (w >> 19) ^ (t ^ (t >> 8))
    return w / 0x7fffffff
  }
  ;[x, y, z, w] = baseSeeds.map(i => i + seed)
  ;[x, y, z, w] = [0, 0, 0, 0].map(() => Math.round(random() * 1e16))
  return random
}
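A quick check that two generators built from the same seed produce the same stream of values:
const r1 = xor(42);
const r2 = xor(42);
// both generators were seeded identically, so their outputs match step for step
console.log(r1() === r2()); // true
console.log(r1() === r2()); // true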
2. Shuffle using configurable random function
The Fisher–Yates shuffle is an efficient shuffle algorithm with even distribution.
const shuffle = (array, random = Math.random) => {
  let m = array.length
  let t
  let i
  while (m) {
    i = Math.floor(random() * m--)
    t = array[m]
    array[m] = array[i]
    array[i] = t
  }
  return array
}
Putting it together
// passing an xor with the same seed produces same order of output array
console.log(shuffle([1, 2, 3, 4, 5, 6, 7, 8, 9], xor(1))) // [ 3, 4, 2, 6, 7, 1, 8, 9, 5 ]
console.log(shuffle([1, 2, 3, 4, 5, 6, 7, 8, 9], xor(1))) // [ 3, 4, 2, 6, 7, 1, 8, 9, 5 ]
// changing the seed passed to the xor function changes the output
console.log(shuffle([1, 2, 3, 4, 5, 6, 7, 8, 9], xor(2))) // [ 4, 2, 6, 9, 7, 3, 8, 1, 5 ]
We have:
var range = [9,18,3,14,2,6,12,7,11,2,1,4]
var total = 89;
var group_size = total / 3;
As a result I need 3 groups of similar size, but group1 and group2 can never be bigger than group_size.
The result for this example would be
var group1 = 27; // 9,18
var group2 = 25; // 3,14,2,6
var group3 = 37; // 12,7,11,2,1,4
How can I achieve this in Javascript/jQuery?
I think you need to pop values off the array with JavaScript for this. Something like:
var range = [9,18,3,14,2,6,12,7,11,2,1,4]
var total = 89;
var group_size = total / 3;
var values = [0];
var groupnr = 0;
// Reverse the array, because pop will get the last element
range = range.reverse();
// While elements
while( range.length ) {
  // get the last element and remove it from the array
  var curvalue = range.pop();
  // Too large: start a new group
  if( values[groupnr] + curvalue > group_size && groupnr < 2 ) {
    groupnr++;
    values[groupnr] = 0;
  }
  // Add to the current group's total
  values[groupnr] += curvalue;
}
console.log(values);
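If you also want the members of each group rather than just the sums, a small variation of the same loop can track both; a sketch (groups, sums and nr are just illustrative names):
var range = [9, 18, 3, 14, 2, 6, 12, 7, 11, 2, 1, 4];
var group_size = range.reduce(function(a, b) { return a + b; }, 0) / 3; // 89 / 3
var groups = [[]]; // the members of each group
var sums = [0];    // the running total of each group
var nr = 0;
range.forEach(function(v) {
  // start a new group when the current one would exceed the target,
  // but never open more than three groups
  if (sums[nr] + v > group_size && nr < 2) {
    nr++;
    groups[nr] = [];
    sums[nr] = 0;
  }
  groups[nr].push(v);
  sums[nr] += v;
});
console.log(groups); // [[9,18],[3,14,2,6],[12,7,11,2,1,4]]
console.log(sums);   // [27, 25, 37]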
Update: This really answers a similar question (since removed by the author) which didn't include the restriction that the first two groups could not exceed the mean. With that restriction, it's a much simpler problem, and one that probably wouldn't have caught my attention. I'm leaving my answer here as the question it answers seems to be algorithmically interesting.
I have an answer using the Ramda functional programming library. You can see it in action on JSFiddle. (See below for an updated version of this that doesn't depend upon Ramda.) Ramda offers a number of convenient functions that make the code simpler. None of them should be surprising if you're at all used to functional programming, although if you're used to tools like Underscore or LoDash the parameter orders might seem backwards. (Believe me, there is a good reason.)
var equalSplit = (function() {
  // map, sum, pipe, subtract and minWith are Ramda functions, assumed to be in scope
  var square = function(x) {return x * x;};
  var variance = function(groups) {
    var sizes = map(sum, groups),
        mean = sum(sizes) / sizes.length;
    return sum(map(pipe(subtract(mean), square), sizes));
  };
  var firstGroupChoices = function(group, count) {
    if (group.length < 2 || count < 2) {return group;}
    var mean = sum(group) / count;
    var current = 0, next = group[0], idx = 0;
    do {
      current = next;
      next = next + group[++idx];
    } while (next < mean);
    if (next === mean) {
      return [group.slice(0, idx + 1)]
    } else {
      return [
        group.slice(0, idx),
        group.slice(0, idx + 1)
      ];
    }
  };
  var val = function(group, count, soFar) {
    if (count <= 0 || group.length == 0) {
      return {groups: soFar, variance: variance(soFar)};
    }
    if (count == 1) {
      return val([], 0, soFar.concat([group]));
    }
    var choices = firstGroupChoices(group, count);
    var values = map(function(choice){
      return val(group.slice(choice.length), count - 1,
                 soFar.concat([choice]));
    }, choices);
    return minWith(function(a, b) {
      return a.variance - b.variance;
    }, values);
  };
  return function(group, count) {
    return val(group, count, []).groups;
  }
}());
Here is some sample output from the Fiddle:
==================================================
Input: [9,18,3,14,2,6,12,7,11,2,1,4]
Split into 3 groups
--------------------------------------------------
Groups: [[9,18,3],[14,2,6,12],[7,11,2,1,4]]
Totals: [30,34,25]
Variance: 40.66666666666667
==================================================
==================================================
Input: [9,18,3,2,6,12,11,2,4]
Split into 3 groups
--------------------------------------------------
Groups: [[9,18],[3,2,6,12],[11,2,4]]
Totals: [27,23,17]
Variance: 50.66666666666667
==================================================
==================================================
Input: [23,10,6,22,22,21,22,14,16,21,13,14,22,16,22,6,16,14,8,20,10,19,12,14,12]
Split into 5 groups
--------------------------------------------------
Groups: [[23,10,6,22,22],[21,22,14,16],[21,13,14,22],[16,22,6,16,14,8],
[20,10,19,12,14,12]]
Totals: [83,73,70,82,87]
Variance: 206
==================================================
I am not at all convinced that this algorithm will give you an actual optimal solution. I think there is some chance that a search might fall into a local minimum and not notice the global minimum in a nearby valley of the search space. But I haven't tried very hard to either prove it or come up with a counterexample. There is also a reasonable chance that it actually is correct. I have no idea if there is some more efficient algorithm than this. The problem feels vaguely like partitioning problems and knapsack problems, but I know I've never run across it before. Many of those problems are NP Hard/NP Complete, so I wouldn't expect this to have a really efficient algorithm available.
This one works in a fairly simple recursive manner. The internal val function accepts an array of numbers, the number of groups to create, and an accumulator (soFar) containing all the groups that have been created so far. If count is zero, it returns a simple result based upon the accumulator. If count is one, it recurs with an empty group, a count of zero, and an accumulator that now includes the original group as its last element.
For any other value of count, it calculates the mean size of the remaining group, then chooses the last initial partial sum of the group smaller than the mean and the first one larger than it (with a special case if one is exactly equal to it), recurs to find the value when those partial sequences are used as groups in the answer, and returns the one with the smaller value.
Values are determined by calculating the variance in the total values of each subgroup formed. The variance here is the sum of the squares of the distances from the mean.
For instance, if you started with these values:
[8, 6, 7, 5, 3, 1, 9]
And wanted to break them into three groups, you would have a mean of
(8 + 6 + 7 + 5 + 3 + 1 + 9 = 39) / 3 => 13
If you broke them up like this:
[[8, 6], [7, 5, 3], [1, 9]]
you would have totals
[14, 15, 10]
and a variance of
(14 - 13)^2 + (15 - 13)^2 + (10 - 13)^2 => 14
But if you broke them like this:
[[8, 6], [7, 5], [3, 1, 9]]
your totals would be
[14, 12, 13]
and your variance would be
(14 - 13)^2 + (12 - 13)^2 + (13 - 13)^2 => 2
And since the latter split has the lower variance it is considered better.
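As a quick sanity check, the same two numbers can be reproduced with a few lines of plain JavaScript (sumOfSquares is just an illustrative helper name):
const sum = xs => xs.reduce((a, b) => a + b, 0);
// "variance" as used in this answer: sum of squared distances from the mean
const sumOfSquares = totals => {
  const mean = sum(totals) / totals.length;
  return sum(totals.map(t => (t - mean) ** 2));
};
console.log(sumOfSquares([14, 15, 10])); // 14
console.log(sumOfSquares([14, 12, 13])); // 2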
Example
equalSplit([9, 18, 3, 2, 6, 12, 11, 2, 4], 3, []) = minVal(
equalSplit([18, 3, 2, 6, 12, 11, 2, 4], 2, [[9]]),
equalSplit([3, 2, 6, 12, 11, 2, 4], 2, [[9, 18]])
);
equalSplit([18, 3, 2, 6, 12, 11, 2, 4], 2, [[9]]) =
equalSplit([12, 11, 2, 4], 1, [[9], [18, 3, 2, 6]]);
equalSplit([3, 2, 6, 12, 11, 2, 4], 2, [[9, 18]]) = minVal(
equalSplit([12, 11, 2, 4], 1, [[9, 18], [3, 2, 6]]),
equalSplit([11, 2, 4], 1, [[9, 18], [3, 2, 6, 12]])
);
equalSplit([12, 11, 2, 4], 1, [[9], [18, 3, 2, 6]]) =
equalSplit([], 0, [[9], [18, 3, 2, 6], [12, 11, 2, 4]])
equalSplit([12, 11, 2, 4], 1, [[9, 18], [3, 2, 6]]) =
equalSplit([], 0, [[9, 18], [3, 2, 6], [12, 11, 2, 4]])
equalSplit([11, 2, 4], 1, [[9, 18], [3, 2, 6, 12]]) =
equalSplit([], 0, [[9, 18], [3, 2, 6, 12], [11, 2, 4]])
equalSplit([], 0, [[9], [18, 3, 2, 6], [12, 11, 2, 4]]) =
variance((9), (18 + 3 + 2 + 6), (12 + 11 + 2 + 4)) =
variance(9, 29, 29) = 266.67
equalSplit([], 0, [[9, 18], [3, 2, 6], [12, 11, 2, 4]]) =
variance((9 + 18), (3 + 2 + 6), (12 + 11 + 2 + 4)) =
variance(27, 11, 29) = 194.67
equalSplit([], 0, [[9, 18], [3, 2, 6, 12], [11, 2, 4]] =
variance((9 + 18), (3 + 2 + 6 + 12), (11 + 2 + 4)) =
variance(27, 23, 17) = 50.67
There is almost certainly plenty that could be done to clean up this code. But perhaps it's at least a start on your problem. It's been a very interesting challenge.
Update
I did create a version which does not depend upon Ramda. The code is very similar. (I guess I really didn't need Ramda, at least not for the final version.):
var equalSplit = (function() {
  var sum = function(list) {return list.reduce(function(a, b) {
    return a + b;
  }, 0);};
  var square = function(x) {return x * x;};
  var variance = function(groups) {
    var sizes = groups.map(sum),
        mean = sum(sizes) / sizes.length;
    return sum(sizes.map(function(size) {
      return square(size - mean);
    }, sizes));
  };
  var firstGroupChoices = function(group, count) {
    if (group.length < 2 || count < 2) {return group;}
    var mean = sum(group) / count;
    var current = 0, next = group[0], idx = 0;
    do {
      current = next;
      next = next + group[++idx];
    } while (next < mean);
    if (next === mean) {
      return [group.slice(0, idx + 1)]
    } else {
      return [
        group.slice(0, idx),
        group.slice(0, idx + 1)
      ];
    }
  };
  var val = function(group, count, soFar) {
    if (count <= 0 || group.length == 0) {
      return {groups: soFar, variance: variance(soFar)};
    }
    if (count == 1) {
      return val([], 0, soFar.concat([group]));
    }
    var choices = firstGroupChoices(group, count);
    var values = choices.map(function(choice){
      return val(group.slice(choice.length), count - 1,
                 soFar.concat([choice]));
    });
    return values.sort(function(a, b) {
      return a.variance - b.variance;
    })[0];
  };
  return function(group, count) {
    return val(group, count, []).groups;
  }
}());
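A quick usage check against the first sample in the Fiddle output above:
console.log(equalSplit([9, 18, 3, 14, 2, 6, 12, 7, 11, 2, 1, 4], 3));
// [[9,18,3],[14,2,6,12],[7,11,2,1,4]]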
Of course, as now noted at the top, this answers a somewhat different question than is now being asked, but I think it's a more interesting question! :-)