How to implement array support with Chevrotain parser? - javascript

I am building a calculator with Chevrotain parser and I have played with their calculator example
Chevrotain Playground: https://chevrotain.io/playground/
Parser Grammar: Calculator embedded semantics
Input Sample: Parenthesis precedence
The example above uses integers as input/output. I would like to support integers AND arrays.
Basic example:
2 * ( 3 + 7 )
Example with arrays:
2 * ( {{array1}} + {{array2}} )
const map = {
  array1: [1, 2, 3, 4, 5],
  array2: [10, 20, 30, 40, 50]
}
I know how to lex and parse {{array}} but I don't know how to visit them or loop over the arrays to achieve the following result
2 * ( 1 + 10 ) = 22
2 * ( 2 + 20 ) = 44
2 * ( 3 + 30 ) = 66
2 * ( 4 + 40 ) = 88
2 * ( 5 + 50 ) = 110
End result of the parsing of 2 * ( {{array1}} + {{array2}} )
should be [22, 44, 66, 88, 110]
Once the AST has been created (including the arrays on some leaf nodes), how can I reduce it to the end result, which is an array?
One approach is to have some sort of state (with the current index) and run the visitor as many times as array.length. But it is unclear to me how this can be implemented with Chevrotain.

This solution will work
const map = {
  array1: [1, 2, 3, 4, 5],
  array2: [10, 20, 30, 40, 50]
};

const { array1, array2 } = map;
const res = [];
for (let i = 0; i < (array1.length > array2.length ? array1.length : array2.length); i++) {
  if (array1[i] && array2[i]) {
    res.push((array1[i] + array2[i]) * 2);
  } else if (array1[i] && !array2[i]) {
    res.push(array1[i] * 2);
  } else {
    res.push(array2[i] * 2);
  }
}
// res = [ 22, 44, 66, 88, 110 ]
First, we extract the arrays from the object, then we create a loop that runs as many times as the length of the longer array (ternary operator).
Inside the loop, we insert the item from array1 plus the item from array2, multiplied by 2, into the new array; if one of the spots is missing (meaning that one array is shorter than the other), we multiply the single remaining item by 2 instead.
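To push the same idea back into the parse step, one option is to make every visitor method return an array and combine operands elementwise. The sketch below is not the Chevrotain API itself: the combine helper and the broadcasting of scalars to the longest operand's length are assumptions about how you might wire this into your own CST visitor.
// Hypothetical sketch: an elementwise evaluator that a Chevrotain CST visitor
// could delegate to. Every value is normalized to an array ("broadcast"),
// so plain numbers and {{array}} references can be mixed freely.
const map = {
  array1: [1, 2, 3, 4, 5],
  array2: [10, 20, 30, 40, 50]
};

const toArray = (value, length) =>
  Array.isArray(value) ? value : Array(length).fill(value);

// Combine two operands elementwise. Scalars are broadcast; if two arrays have
// different lengths, the shorter one yields undefined for the missing slots.
function combine(lhs, rhs, op) {
  const length = Math.max(
    Array.isArray(lhs) ? lhs.length : 1,
    Array.isArray(rhs) ? rhs.length : 1
  );
  const a = toArray(lhs, length);
  const b = toArray(rhs, length);
  return Array.from({ length }, (_, i) => op(a[i], b[i]));
}

// In a visitor, each visit method would return either a number or an array,
// and binary-expression nodes would call combine() on their children's results.
const sum = combine(map.array1, map.array2, (x, y) => (x ?? 0) + (y ?? 0));
const result = combine(2, sum, (x, y) => x * y);
console.log(result); // [22, 44, 66, 88, 110]
Compared with keeping an index in state and re-running the visitor array.length times, broadcasting walks the expression only once and keeps the visitor stateless.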

Related

Why am I getting wrong results when dividing numbers into arrays with weight percentage?

I have a number of users to allocate across a number of compute instances (like Docker or AWS). The user can change both the number of instances and the number of users. Each instance has a weight percentage.
Users: 10
Locations: 2 [{loc1.weight: 70%}, {loc2.weight: 30%}], so it is simple to give them 7 and 3 users respectively.
The total percentage might not equal 100, so the scale factor can be anything.
My other requirement is that each instance must have at least 1 user. I already enforce that the number of users cannot be less than the number of locations, so that part is covered.
Another requirement is that all user counts must be integers.
Example
Case #1
Users: 5
Locations: 4 where 1.weight = 15, 2.weight = 30, 3.weight = 15, 4.weight = 50 (total weight 110%)
Expected Results
Locations:
1.users = 1,
2.users = 1,
3.users = 1,
4.users = 2
Case #2
Users: 10
Locations: 4 where 1.weight = 10, 2.weight = 10, 3.weight = 90, 4.weight = 10 (total weight 120%)
Expected Results
Locations:
1.users = 1,
2.users = 1,
3.users = 7,
4.users = 1
Case #3
Users: 5
Locations: 2 where 1.weight = 50, 2.weight = 50
Expected Results
Locations:
1.users = 3,
2.users = 2
That was all of the explanation of the problem. Below is what I had tried.
function updateUsers(clients, weights) {
  let remainingClients = clients;
  const maxWeight = weights.reduce((total, weight) => total + parseInt(weight), 0);
  let r = [];
  weights.forEach(weight => {
    let expectedClient = Math.round(clients * (weight / maxWeight));
    let val = remainingClients <= expectedClient ? remainingClients : expectedClient;
    remainingClients -= expectedClient;
    r.push(val > 0 ? val : 1);
  });
  if (remainingClients > 0) {
    r = r.sort((a, b) => a > b ? 1 : -1);
    for (let i = 0; i < remainingClients; i++) {
      r[i] = r[i] + 1;
    }
  }
  return r;
}
I get good results for some numbers like
updateUsers(12, [5, 5, 5, 90]);
gives
[1, 1, 1, 9]; //total 12 users
but using very odd figures like below
updateUsers(12, [5, 5, 5, 200]);
returns
[2, 1, 1, 11]; //total 15 users which is wrong
First, get the percentage for each weight. Since every quota should have at least 1 user, we use Math.floor(); if the floor is 0, we return 1 and reduce userCount by the amount we over-allocated (1 - percentage).
const sumProcedure = (sum, n) => sum + n;

function updateUsers(userCount, weights) {
  let n = userCount,
    totalWeight = weights.reduce(sumProcedure),
    results = weights.map(weight => {
      let percentage = (weight * userCount) / totalWeight,
        floor = Math.floor(percentage);
      if (floor == 0) {
        userCount -= 1 - percentage;
        return 1;
      }
      return floor;
    }),
    remain = n % results.reduce(sumProcedure);
  while (remain--) {
    let i = weights.indexOf(Math.max(...weights));
    weights.splice(i, 1);
    results[i]++;
  }
  return results;
}
console.log(updateUsers(5, [50, 50])); // [3 , 2]
console.log(updateUsers(12, [5, 5, 5, 90])); // [1, 1, 1, 9]
console.log(updateUsers(12, [5, 5, 5, 200])); // [1, 1, 1, 9]
console.log(updateUsers(5, [15, 30, 15, 50])); // [ 1, 1, 1, 2 ]
console.log(updateUsers(10, [10, 10, 90, 10])); // [ 1, 1, 7, 1 ]
console.log(updateUsers(55, [5, 5, 5, 90])); // [ 3, 2, 2, 48 ]; It has 2 remainders
This approach works if speed is not super important. I don't know javascript, so a bit of it is going to be in pseudocode. I'll keep your notations though.
Let wSum = sum(weights) be the sum of all weights and unitWeight = wSum / clients be the amount of weight each assigned user "uses up" (the weight one user would carry if all users were weighted equally). Now, let
r[i] = 1;
weights[i] -= unitWeight;
for i = 0, 1 ... weights.length-1. This ensures that all locations receive at least 1 user and the weights are updated to reflect their 'remaining' weight. Now
remainingClients = clients - weights.length;
Assign the rest of the clients via a while(remainingClients > 0) loop or similar:
while(remainingClients > 0)
{
var indexMax = argMax(weights);
weights[indexMax] -= unitWeight;
r[indexMax] += 1;
remainingClients -= 1;
}
This gives the expected result for all your examples. The argMax should of course just return the index of the array corresponding to the maximum value. Due to argMax, the runtime becomes O(n^2), but it doesn't sound like you have tens of thousands of users, so I hope that's okay.
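Since part of the above is pseudocode, here is one possible JavaScript rendering of it. It is only a sketch: it assumes unitWeight is the total weight divided by the number of users, and argMax is a hypothetical helper written inline.
// A possible JavaScript version of the pseudocode above (a sketch, not the
// answerer's own code). unitWeight = total weight / number of users.
function updateUsers(clients, weights) {
  const w = weights.slice(); // work on a copy, don't mutate the caller's array
  const wSum = w.reduce((a, b) => a + b, 0);
  const unitWeight = wSum / clients;

  // Every location starts with 1 user and "pays" one unit of weight for it.
  const r = w.map((_, i) => {
    w[i] -= unitWeight;
    return 1;
  });

  // Hypothetical argMax helper: index of the maximum remaining weight.
  const argMax = arr => arr.reduce((best, v, i) => (v > arr[best] ? i : best), 0);

  let remainingClients = clients - w.length;
  while (remainingClients > 0) {
    const i = argMax(w);
    w[i] -= unitWeight;
    r[i] += 1;
    remainingClients -= 1;
  }
  return r;
}

console.log(updateUsers(5, [50, 50]));          // [3, 2]       (case #3)
console.log(updateUsers(5, [15, 30, 15, 50]));  // [1, 1, 1, 2] (case #1)
console.log(updateUsers(10, [10, 10, 90, 10])); // [1, 1, 7, 1] (case #2)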

Partition N where the count of parts and each part are a power of 2, and part size and count are restricted

How do you take a number representing a list of items, and divide it into chunks, where the number of chunks is a power-of-two, AND where each chunk also has a power-of-two number of items (where size goes up to a max power of two, so 1, 2, 4, 8, 16, 32, 32 being the max)? Is this even possible?
So for example, 8 items could be divided into 1 bucket (power of two bucket) with 8 items (power of two items):
[8]
9 items could be:
[8, 1]
That works because both numbers are powers of two, and the size of the array is 2 (also a power of two).
Let's try 11:
[8, 2, 1]
Nope that doesn't work. Because the size of the array is 3 which is not a power of two, even though it adds to 11. How about this?
[4, 4, 2, 1]
That works! It's 4 elements which is a power of two.
[2, 2, 2, 1, 1, 1, 1, 1]
That also works, since there are 8 buckets (8 is a power of two), with 1 or 2 items each (each a power of two). But [4, 4, 2, 1] is better because it's shorter.
I guess an even better one (after getting comments) would be this, though I didn't see it the first time around:
[8, 1, 1, 1]
That one is short, and also starts with the largest number.
So following this pattern, here are some other numbers:
13:
[8, 1, 1, 1, 1, 1] // no, 6 not power of 2
[8, 2, 1, 1, 1] // no, 5 not power of 2
[8, 2, 2, 1] // yes, 4 is power of 2
[8, 4, 1] // no, 3 not power of 2
14:
[8, 2, 2, 2]
15:
[8, 4, 2, 1]
16:
[16]
18:
[16, 2]
200:
[32, 32, 32, 32, 32, 32, 4, 4]
When the first layer of buckets in the bucket tree grows longer than 32, it nests. So take the number 1236 for example. This can be represented by 38 32's followed by 16 followed by 4.
[32, 32, 32, 32, 32, 32, 32, 32,
32, 32, 32, 32, 32, 32, 32, 32,
32, 32, 32, 32, 32, 32, 32, 32,
32, 32, 32, 32, 32, 32, 32, 32,
32, 32, 32, 32, 32, 32, 16, 4]
But now the bucket size is 40 items long, which isn't a power of two AND it's greater than 32. So it should be nested! I can't quite visualize this in my head, so sorry if this isn't a perfect example, I think you can get the gist of what I mean though.
// the parent "x" is 2 in length
x = [a, b]
// child "a" is 32 in length
a = [32, 32, 32, 32, 32, 32, 32, 32,
32, 32, 32, 32, 32, 32, 32, 32,
32, 32, 32, 32, 32, 32, 32, 32,
32, 32, 32, 32, 32, 32, 32, 32]
// child "b" is 8 in length
b = [32, 32, 32, 32, 32, 32, 16, 4]
Taking it another layer up, say we have a very large number (I don't know what the minimum such number is) that requires another layer of nesting. What we can say about this layer is that x will be 32 in length, but we will also have a y that is at least 1.
_x_ = [a, b, c, d, e, f, g, h,
i, j, k, l, m, n, o, p,
q, r, s, t, u, v, w, x,
y, z, a2, b2, c2, d2, e2, f2]
_y_ = [a3]
a = [32, 32, 32, ..., ?]
...
f2 = [32, ..., ?]
Then once we have _x_, _y_, _z_, ... 32 total of these, we build another layer on top.
What is an algorithm/equation that will take a number and divide it into this tree of buckets / item sizes that are all powers of two, up to a max (in this case, 32)?
A subgoal is to minimize the number of levels. There isn't a specific limit, but I am imagining no more than 1 million or very max 1 billion nodes in the entire runtime, so it seems like you'll only have 3 or 4 levels probably, I don't know exactly how to calculate it.
This is going to be used for array lookup. Essentially I am breaking apart a single, large, arbitrarily sized "contiguous" array into small contiguous chunks with size power-of-2 up to 32 in length. This balances lookup performance (i.e. fits within cpu cache), with memory fragmentation.
Update:
I think trying to incorporate this somehow to build up that nested tree I'm trying to describe will help. The last thing now missing is getting the nested arrays to be properly sized to power-of-two values...
The best I have been able to do so far is this:
console.log(spread(74))
console.log(spread(85))
console.log(spread(87))
console.log(spread(127))
console.log(spread(1279))
console.log(spread(12790))
console.log(spread(127901))
function spread(n) {
  if (n == 0) {
    return [0, 0, 0, 0, 0, 0]
  }
  let array = []
  let r = split(n)
  while (r[0] > 31) {
    array.push([32, 0, 0, 0, 0, 0])
    r[0] -= 32
  }
  array.push(r)
  let s = sum(r)
  if (!isPowerOf2(s)) {
    let x = pow2ceil(s)
    let i = 1
    while (i < 5) {
      if (r[i] > 1) {
        i++
        break
      }
      i++
    }
    if (i == 5) {
      i = 0
    }
    main:
    while (true) {
      while (r[i]) {
        r[i + 1] += 2
        r[i]--
        s += 1
        if (s == x) {
          break main
        }
      }
      i++
    }
  }
  if (array.length == 1) {
    return array[0]
  } else if (array.length < 33) {
    return array
  } else {
    return divide(array, 32)
  }
}

function sum(arr) {
  let i = 0
  arr.forEach(x => {
    i += x
  })
  return i
}

function split(n) {
  const r = [0, 0, 0, 0, 0, 0]
  let u = 32
  let i = 0
  while (u > 0) {
    while (n >= u) {
      r[i]++
      n -= u
    }
    i += 1
    u >>= 1
  }
  return r
}

function isPowerOf2(v) {
  return v && !(v & (v - 1))
}

function pow2floor(v) {
  var p = 1;
  while (v >>= 1) {
    p <<= 1;
  }
  return p;
}

function pow2ceil(v) {
  var p = 2
  while (v >>= 1) {
    p <<= 1
  }
  return p
}

function divide(data, size) {
  const result = []
  const upSize = data.length / size;
  for (let i = 0; i < data.length; i += size) {
    const chunk = data.slice(i, i + size);
    result.push(chunk)
  }
  if (result.length > size) {
    return divide(result, size)
  }
  return result;
}
It's always possible.
Start with the normal binary representation.
You get a number of summands that are all powers of 2.
So the problem is that the number of summands is, most of the time, not a power of two.
You can always get an extra summand by splitting a power of 2 into 2 summands (also powers of 2). The only exception is 1.
So the question is: is there a case where not enough summands > 1 exist?
Answer: No
The worst case is that we have n summands where n is a (power of 2) - 1.
E.g. 3, 7, 15, ...
If we have 3 summands, the smallest possible case is 1+2+4. We need 4 summands, so we must create 1 extra summand by splitting one of the summands > 1 into two, e.g. 1+1+1+4.
For bigger values the highest summand is always >= ceiling(value/2), while the binary representation has at most floor(log2(value)) + 1 summands.
ceiling(value/2) grows much faster than log2(value).
So with increasing values we always have plenty of summands to split in order to reach a power-of-2 summand count.
Example: value = 63
Binary representation: 32+16+8+4+2+1 (6 summands)
The highest summand is 32 (at least value/2), which can be split into any number of summands (all powers of 2), up to 32 summands.
We need 8 summands (the next power of 2 above 6) to reach a power-of-2 summand count.
So we need 2 extra summands for our 32+16+8+4+2+1.
Take any summand > 1 and split it into two summands (powers of 2),
e.g. 32 = 16+16
=> 16+16+16+8+4+2+1 (7 summands)
Do it again (because we need 2 extra summands).
Take any summand > 1, e.g. 4, and split it: 4 = 2+2
=> 16+16+16+8+2+2+2+1 (8 summands)
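As a rough sketch of this argument in code (flat partition only, ignoring the question's nesting of buckets; partitionPowersOf2 and its maxPart parameter are illustrative names, not anything from the question):
// Sketch of the splitting argument above: take whole buckets of maxPart plus
// the binary representation of the rest, then split the largest part in half
// until the number of parts is a power of 2.
function partitionPowersOf2(n, maxPart = 32) {
  const parts = [];
  if (n <= 0) return parts; // nothing to partition
  for (let k = Math.floor(n / maxPart); k > 0; k--) parts.push(maxPart);
  let rest = n % maxPart;
  for (let bit = 1; bit <= rest; bit <<= 1) {
    if (rest & bit) parts.push(bit);
  }
  const isPow2 = x => x > 0 && (x & (x - 1)) === 0;
  while (!isPow2(parts.length)) {
    const i = parts.indexOf(Math.max(...parts)); // largest (splittable) part
    if (parts[i] === 1) break; // defensive guard; the argument above says this won't happen
    parts.splice(i, 1, parts[i] / 2, parts[i] / 2);
  }
  return parts.sort((a, b) => b - a);
}

console.log(partitionPowersOf2(13)); // [4, 4, 4, 1]
console.log(partitionPowersOf2(77)); // [16, 16, 16, 8, 8, 8, 4, 1]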
Here is a possible algorithm:
Check the lowest 5 bits of the input number n and generate the corresponding powers of 2 in an array. So for instance for n = 13 we get [1, 4, 8]
Divide n by 32 ignoring the above-mentioned bits (floor).
Add to the above array as many values of 32 as n modulo 32. So for example for input = 77 we get [1, 4, 8, 32, 32]
Most of the time this array will have a length that does not exceed 32, but it could go up to 36: [1, 2, 4, 8, 16, 32, ..., 32]. If that happens, extract 16 values from the end of the array and store them in a "carry": this carry will be processed later. So, not counting this potential carry, we ensure we end up with an array that is no longer than 32.
Then perform a split in halves of the greatest value in the array (thereby growing the array with one unit) until the array's length is a power of 2. For instance, for the case of 77 we'll have a few of such iterations in order to get [1, 4, 8, 8, 8, 16, 16, 16]
Divide n again by 32 ignoring the remainder (floor).
Consider again n modulo 32. If this is non-zero we have found summands of 32*32, so we create a new array [32, ..., 32] for each of those, and combine that with the previously established array into a new tree. So for instance for 1037, we could get
[
[1,4,4,4],
[32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32]
]
If there is room to add a potential carry (i.e. the top level array does not have a length of 32), then do so.
If the length of the array is not yet a power of 2, apply a similar algorithm as previously mentioned, although now a split in half concerns arrays instead of primitive values.
Repeat this division by 32 to identify even higher nested summands, so these are complete 32*32*32 trees, then in the next iteration complete 32^4 trees, etc., until all of n is accounted for.
At the end, check if the carry is still there and could not yet be added somewhere: if this is the case add an extra level to the tree (at the top) to combine the achieved result with this carry, so they are siblings in an array of 2.
Implementation
Here is an interactive snippet: type a number and the resulting tree will be displayed in real time. Note that the nested tree is really created in this implementation and will consume quite some memory, so to keep the response times acceptable in JavaScript, I have limited the allowed input to numbers with 7 digits, but in theory the only limit is memory and floating point precision (which in this script is guaranteed up to 2^53).
// Utility functions
const sum = arr => arr.reduce((a, b) => a+b, 0);
const powersOf2andZero = [0,1,2,4,8,16,32];
const clone = tree => JSON.parse(JSON.stringify(tree));
function createTree(n) {
  let tree = [];
  let carry = null;
  // Isolate 5 least significant bits
  for (let pow of [1, 2, 4, 8, 16]) if (n & pow) tree.push(pow);
  n = Math.floor(n / 32);
  for (let i = n % 32; i > 0; i--) tree.push(32);
  // This array could have more than 32 values, up to 36.
  // If that happens: remove 16 occurrences of 32, and consider this as carry-over for later treatment.
  if (tree.length > 32) carry = tree.splice(-16); // pop off 16 x 32s.
  // Make the array length a power of 2 by splitting the greatest value (repeatedly)
  let j = tree.length;
  while (!powersOf2andZero.includes(tree.length)) {
    if (j >= tree.length) j = tree.indexOf(tree[tree.length - 1]); // first occurrence of greatest
    // Split greatest value into 2; keep list sorted
    tree.splice(j, 1, tree[j] / 2, tree[j] / 2); // delete, and insert twice the half at same spot
    j += 2;
  }
  // Isolate and count factors of 32, 32², 32³, ...etc.
  // Add a superior level in the tree for each power of 32 that is found:
  n = Math.floor(n / 32);
  let template = 32;
  while (n) {
    if (tree.length > 1) tree = [tree]; // nest
    if (n % 32 < 31 && carry !== null) { // we have room to dump the carry here
      tree.push(carry);
      carry = null;
    }
    template = Array(32).fill(template); // nest the template tree, "multiplying" by 32.
    for (let i = n % 32; i > 0; i--) tree.push(clone(template));
    if (tree.length === 1 && typeof tree[0] !== "number") tree = tree[0]; // Eliminate useless array wrapper
    // Turn this top level into a length that is a power of 2 by splitting the longest array (repeatedly)
    let j = tree.length;
    while (!powersOf2andZero.includes(tree.length)) {
      if (j >= tree.length) j = tree.findIndex(elem => elem.length === tree[tree.length - 1].length);
      // Split longest array into 2; keep list sorted
      let size = tree[j].length / 2;
      tree.splice(j, 1, tree[j].slice(0, size), tree[j].slice(size)); // delete, and insert twice the half
      j += 2;
    }
    n = Math.floor(n / 32);
  }
  // Is the carry still there? Then we cannot avoid adding a level for it
  if (carry) return [tree, carry];
  return tree;
}

// I/O handling
let input = document.querySelector("input");
let output = document.querySelector("pre");
(input.oninput = function () {
  let n = +input.value;
  if (isNaN(n) || n % 1 !== 0 || n < 1 || n > 9999999) {
    output.textContent = "Please enter an integer between 1 and 9999999";
  } else {
    let tree = createTree(n);
    output.textContent = pretty(tree);
  }
})();

function pretty(tree) {
  return JSON.stringify(tree, null, 2)
    .replace(/\[\s*\d+\s*(,\s*\d+\s*)*\]/g, m => m.replace(/\s+/g, ""))
    .replace(/\b\d\b/g, " $&");
}
pre { font-size: 8px }
n: <input type="number" value="927">
<pre></pre>
(Note: the following addresses the restriction on part size and the restriction that the number of parts be a power of 2. I missed the requirement that the number of parts is itself capped, which is what forces the nesting; I'll try to get to that next.)
A simple proof that's also a method: let M = floor(N / max_power_of_2). Our minimal number of parts, MIN, is M plus the number of set bits in the binary representation of N - M*max_power_of_2; and the maximal number of parts, MAX, is N, where each part is 1.
Each time we split one of the powers of 2, P, in this representation (which starts as M copies of max_power_of_2 plus the binary summands of N - M*max_power_of_2), we have one count less of P and two more of P/2, another power of 2. This action adds exactly one part to the partition, meaning we can reach any number of parts between MIN and MAX.
Greedy JavaScript code:
function f(n, maxP) {
  const maxPowerOf2 = 1 << maxP;
  const m = ~~(n / maxPowerOf2);
  const A = new Array(maxP + 1).fill(0);
  A[maxP] = m;
  let nm = n - m * maxPowerOf2;
  let p = 0;
  let bitCount = 0;
  while (nm) {
    if (nm & 1) {
      bitCount += 1;
      A[p] = 1;
    }
    nm >>= 1;
    p += 1;
  }
  const min = m + bitCount;
  let target = 1;
  while (target < min)
    target *= 2;
  if (target > n)
    return -1;
  if (target == min)
    return A.map((c, p) => [1 << Number(p), c]);
  if (target == n)
    return [n];
  // Target is between MIN and MAX
  target = target - min;
  let i = m ? maxP : p;
  while (target && i > 0) {
    if (!A[i]) {
      i -= 1;
      continue;
    }
    const max = Math.min(target, A[i]);
    A[i] -= max;
    A[i - 1] += 2 * max;
    target -= max;
    i -= 1;
  }
  return target ? -1 : A.map((c, p) => [1 << Number(p), c]);
}

var ns = [74, 85, 87, 127, 1279, 12790, 127901, 63];
var maxP = 5;

for (let n of ns) {
  let result = f(n, maxP);
  let [_n, numParts] = result.reduce(([_n, numParts], [p, c]) => [_n + p * c, numParts + c], [0, 0]);
  console.log(n, maxP);
  console.log(JSON.stringify(result));
  console.log(JSON.stringify([_n, numParts]));
  console.log('');
}

Function that finds largest pair sum in unordered array of integers

I want to write a function that, given a sequence of unordered numbers, finds the largest pair sum.
largestPairSum([10, 14, 2, 23, 19]) --> 42 (i.e. sum of 23 and 19)
largestPairSum([99, 2, 2, 23, 19]) --> 122 (i.e. sum of 99 and 23)
largestPairSum([-10,-20,-30,-40]) --> -30 (i.e sum of -10 and -20)
My try
function largestPairSum(numbers)
{
  let counter = 0;
  let numbersord = numbers.sort();
  if (numbersord[0] === -numbersord[0]) {
    numbersord.reverse()
    counter = numbersord[0] + numbersord[1]
  }
  else {
    counter = numbersord[-1] + numbersord[-2]
  }
  return counter
}
When invoked, the function returns NaN; however, when I
console.log(typeof(numbersord[0]))
it says number. Not sure where I have gone wrong, thanks for reading!
Your approach does not work, because it is
sorting by string ascending (standard without callback), and
using negative indices.
You could sort descending and take the first two elements.
function largestPairSum(numbers) {
  numbers.sort((a, b) => b - a);
  return numbers[0] + numbers[1];
}

console.log(largestPairSum([10, 14, 2, 23, 19])); // 42 (23 + 19)
console.log(largestPairSum([99, 2, 2, 23, 19])); // 122 (99 + 23)
console.log(largestPairSum([-10, -20, -30, -40])) // -30 (-10 + -20)
A solution without sorting.
function largestPairSum(numbers) {
  let largest = numbers.slice(0, 2),
    smallest = largest[0] < largest[1] ? 0 : 1;
  for (let i = 2; i < numbers.length; i++) {
    if (largest[smallest] > numbers[i]) continue;
    largest[smallest] = numbers[i];
    smallest = largest[0] < largest[1] ? 0 : 1;
  }
  return largest[0] + largest[1];
}

console.log(largestPairSum([10, 14, 2, 23, 19])); // 42 (23 + 19)
console.log(largestPairSum([99, 2, 2, 23, 19])); // 122 (99 + 23)
console.log(largestPairSum([-10, -20, -30, -40])) // -30 (-10 + -20)
In O(n), as advised by @VLAZ, tracking the two largest values in a single pass:
const largestPairSum = (arr) => {
  let a = -Infinity, // largest value seen so far
    b = -Infinity;   // second largest value seen so far
  for (let item of arr)
    if (item > a) {
      b = a;
      a = item;
    } else if (item > b)
      b = item;
  return a + b;
}
let tests = [
  largestPairSum([10, 14, 2, 23, 19]),
  largestPairSum([99, 2, 2, 23, 19]),
  largestPairSum([-10, -20, -30, -40]),
];
console.log(tests);
console.log("Nina's test:", largestPairSum([20, 50, 10, 1, 2]));
There are three problems in your code.
First, counter = numbersord[-1] + numbersord[-2] is incorrect. Trying to get a negative index from an array just returns undefined, since there is nothing there. It does not wrap around and read from the end; to do that, you need to explicitly use arrayLength - negativeIndex to get things from the end:
const arr = ["a", "b", "c", "d", "e"];
console.log(arr[1]);
console.log(arr[2]);
console.log(arr[-1]);
console.log(arr[-2]);
console.log(arr[arr.length - 1]);
console.log(arr[arr.length - 2]);
Second, numbers.sort() is not correct. It sorts using lexicographical order, not numeric, where 1 < 2 but also 10 < 2. You need to sort numbers properly (check the link for more information):
const arr = [1, 2, 10, 20]
arr.sort();
console.log("default (lexicographical) sort", arr);
arr.sort((a, b) => a - b);
console.log("ascending sort", arr);
arr.sort((a, b) => b - a);
console.log("descending sort", arr);
Finally, if(numbersord[0] === -numbersord[0]) this condition is useless. The only number that is equal to a negative of itself is zero:
console.log(0 === -0);
console.log(1 === -1);
console.log(42 === -42);
console.log(Infinity === -Infinity);
console.log(NaN === -NaN);
However, that's not useful to check for. The logic there (essentially) checks if the array starts with zero and if it does, it reverses it and takes the first two results. If it doesn't start with zero, it tries to take the last two elements of the array.
However, if you simply sort in descending order you get all of that for free and you can take the first two items every time.
So, with these changes, your code looks like this
function largestPairSum(numbers)
{
  let counter = 0;
  // perform a descending sort
  let numbersord = numbers.sort((a, b) => b - a);
  // always take the first two items
  counter = numbersord[0] + numbersord[1]
  return counter
}
console.log(largestPairSum([10, 14, 2, 23, 19]))// --> 42 (i.e. sum of 23 and 19)
console.log(largestPairSum([99, 2, 2, 23, 19])) // --> 122 (i.e. sum of 99 and 23)
console.log(largestPairSum([-10,-20,-30,-40])) // --> -30 (i.e sum of -10 and -20)
The type of NaN is "number", so that output is correct. You get NaN because there are no elements with indexes "-1" or "-2" in any array. If you want to get an element from the end you have to use something like
arr[arr.length - 1] // returns last element
Then,
let numbersord = numbers.sort(); // sort() method mutates the original array, be careful
numbersord[0] === -numbersord[0] // - returns true only when number is 0
// to check whether it is negative, compare it with 0; Math.abs() gives the magnitude
The question is: can the array contain both negative and positive elements? In that case you cannot just reverse an array like [-10, -5, -2, 10] - the max sum is 8, not -15.
I would use the reduce method like this:
function largestPairSum(arr) {
  const initialAcc = [-Infinity, -Infinity]
  const highestNumbers = arr.reduce((acc, rec) => {
    if (rec <= acc[0]) return acc
    if (rec >= acc[1]) return [acc[1], rec]
    return [rec, acc[1]]
  }, initialAcc)
  return highestNumbers[0] + highestNumbers[1]
}
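For reference, calling this reduce-based version with the question's sample inputs gives the expected values stated in the question:
console.log(largestPairSum([10, 14, 2, 23, 19]));  // 42  (23 + 19)
console.log(largestPairSum([99, 2, 2, 23, 19]));   // 122 (99 + 23)
console.log(largestPairSum([-10, -20, -30, -40])); // -30 (-10 + -20)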

How to pick n number of elements from n number of arrays with certain ratio?

I have m arrays. Let's say m is equal to 4.
let arr1 = [1, 2, 3, 4, ...., 53];
let arr2 = [54, 55, 56, ...., 76];
let arr3 = [77, 78, ...., 84];
let arr = [85, 86, 87, 88];
Here I want to pick n elements from all the arrays. Let's say n is equal to 18.
If I want to pick 18 numbers from the 4 arrays with a certain percentage based on each array's length, what should I do?
The result should be like [1, 2, 3, ......, 10, 54, ....., 58, 77, 78, 85];
I have to divide 18 into 4 different counts based on the array lengths.
I have a total of 88 elements across the 4 arrays and need to pick 18 of them proportionally.
I have tried the following:
let prct = Math.ceil((88 / 100) * arr.length);
let count = Math.ceil((18 / 100) * prct);
I have done the same for all four arrays. But it gives me a total of only 16.
What should I do? Any suggestions?
Thanks.
Assuming the arrays are ordered by length, descending:
You could calculate a floored count for each array and carry the unused share over to the next iteration. At the end, give the leftover count to the last array so the wanted amount of items is reached.
var arrays = [
    Array.from({ length: 53 }, (_, i) => i + 1),
    Array.from({ length: 23 }, (_, i) => i + 54),
    Array.from({ length: 8 }, (_, i) => i + 77),
    Array.from({ length: 4 }, (_, i) => i + 85)
  ],
  total = arrays.reduce((r, a) => r + a.length, 0),
  wanted = 18,
  parts = arrays.map((a, i, aa) => {
    var count = i + 1 < aa.length
      ? Math.floor(a.length * wanted / total)
      : wanted;
    wanted -= count;
    total -= a.length;
    return count;
  }),
  result = arrays.map((a, i) => a.slice(0, parts[i]));

console.log(parts);
console.log(result);

JavaScript - Print a Histogram from a given array

// Given an array of integers [2, 1, 2, 101, 4, 95, 3, 250, 4, 1, 2, 2, 7, 98, 123, 99, ...]
I'm trying to write a function (with linear run-time complexity) to print the following tabular output with 'xxx' marks that resembles a histogram (the output should closely match the sample output below, including a "99+" row to capture the count of all numbers > 99):
Num | count
1 | xx
2 | xxxx
3 | x
4 | xx
98 | x
99 | x
99+| xxx
const dict = {}; // Empty dictionary
var min = Number.MAX_VALUE;
const maxRange = 5; // elements above maxRange will be clubbed in the same range.
//var arr = [2, 1, 2, 101, 4, 95, 3, 250, 4, 1, 2, 2, 7, 98, 123, 99];
const arr = [1, 2, 5, 3, 2, 2, 1, 5, 5, 6, 7, 1, 8, 10, 11, 12];
// iterate the array and set and update the counter in map
arr.forEach(function(num) {
  min = Math.min(min, num); // find min
  if (num > maxRange) {
    num = maxRange + 1;
  }
  dict[num] = dict[num] ? dict[num] + 1 : 1;
});

console.log("Num | Count");
// Print the occurrences per item in array starting from min to max
while (min <= maxRange + 1) {
  if (!dict[min]) { // print only those numbers which are defined in dictionary
    min++;
    continue;
  }
  var xArr = []
  var range = dict[min];
  for (i = 0; i < range; i++) {
    xArr.push('x');
  }
  var disp = (min <= maxRange) ? (min + " | " + xArr.join("")) : (maxRange + "+ | " + xArr.join(""));
  console.log(disp);
  min = min + 1;
}
You might want to try iterating over the array using the forEach() method, then create an array of "x"'s as long as the current item, and use the join() method to join that array into a string of x's.
I haven't included how to handle numbers over 99, but I think this should be enough to get you started. The solution would involve a conditional statement that checks whether the number is above 99 and prints accordingly.
I got the following output from my example:
Num | Count
2 ' |' 'xx'
4 ' |' 'xxxx'
6 ' |' 'xxxxxx'
8 ' |' 'xxxxxxxx'
have fun!
var arr = [2, 4, 6, 8]

printHistogram = (array) => {
  console.log("Num", '|', "Count")
  array.forEach((x) => { // iterate over the array (x = current item)
    var arrToJoin = [] // create an empty array
    for (let i = 0; i < x; i++) {
      arrToJoin.push('x') // add an "x" to the array
    }
    console.log(x, ' |', arrToJoin.join(''))
  })
}

printHistogram(arr)
How about this?
Create an array of all the unique numbers, called arrToCompare.
Then iterate over that array and compare each unique number to each number in the original array. If they are equal, push an 'x' to the array to join. Finally join it and log it with the appropriate symbols.
var arr = [7, 7, 8, 9, 2, 4, 6, 8, 2]

printHistogram = (array) => {
  var arrToCompare = []
  console.log("Num", '|', "Count")
  array.forEach((x, i) => {
    arrToCompare.includes(x) ? "" : arrToCompare.push(x)
  })
  arrToCompare.forEach(function(x) {
    var arrToJoin = []
    array.forEach(function(i) {
      if (i === x) {
        arrToJoin.push('x')
      }
    })
    console.log(x, '|', arrToJoin.join(''))
  })
}

printHistogram(arr)
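Neither answer above handles the linear run-time requirement or the dedicated "99+" row from the question, so here is a possible sketch of that part. The 99 cutoff and the row labels come from the question; assuming the inputs are positive integers, the rest (including the printHistogram99 name) is an illustrative choice.
// Possible O(n) sketch: one counting pass over the input, then one pass over
// the fixed 1..99 range plus a single "99+" bucket for everything above 99.
function printHistogram99(arr) {
  const counts = {};
  let overflow = 0; // count of all numbers > 99
  for (const num of arr) {
    if (num > 99) overflow++;
    else counts[num] = (counts[num] || 0) + 1;
  }
  console.log("Num | count");
  for (let num = 1; num <= 99; num++) {
    if (counts[num]) console.log(num + " | " + "x".repeat(counts[num]));
  }
  if (overflow) console.log("99+| " + "x".repeat(overflow));
}

printHistogram99([2, 1, 2, 101, 4, 95, 3, 250, 4, 1, 2, 2, 7, 98, 123, 99]);
Counting is one pass over the input, and the printing loop runs over a fixed 1-99 range, so the whole thing stays linear in the input length.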
