Algorithm to find the power of 2 - javascript

I have found a small algorithm to determine if a number is a power of 2, but no explanation of how it works. What really happens?
var potence = n => n && !(n & (n - 1));
for (var i = 2; i <= 16; ++i) {
  if (potence(i)) console.log(i + " is potence of 2");
}

I'll explain how it works for non-negative n. The first condition in n && !(n & (n - 1)) simply checks that n is not zero. If n is not zero, then it has a least significant 1-bit at some position p. Now, if you subtract 1 from n, all the bits below position p (the less significant ones) change to 1, and the bit at p flips to 0.
Something like this:
n: 1010100010100111110010101000000
n-1: 1010100010100111110010100111111
^ position p
Now, if you & these two bit patterns, everything above position p (the more significant bits) remains unchanged, and everything at and below p is zeroed out:
after &: 1010100010100111110010100000000
^ position p
If the result after taking & happens to be zero, then it means that there was nothing above position p, thus the number must have been 2^p, which looked like this:
n: 0000000000000000000000001000000
n - 1: 0000000000000000000000000111111
n&(n-1): 0000000000000000000000000000000
^ position p
thus n is a power of 2. If the result of & is not zero (as in the first example), then it means that there was some junk in the more significant bits above position p, and therefore n is not a power of 2.
I'm too lazy to play this through for the two's-complement representation of negative numbers.

If a number is a power of 2, then its binary representation must be 10...0. Subtracting 1 turns that leading 1 into 0 (and the trailing 0s into 1s), so n & (n-1) is 0. Otherwise, it is not a power of 2.

Kinka's answer is essentially correct, but perhaps needs a bit more detail on the "otherwise" case. If the number isn't a power of two, then it must have the form n = 2^a + 2^b + y, where a > b and 0 <= y < 2^b. Subtracting 1 cannot disturb the 2^a bit (since 2^b + y - 1 is non-negative and less than 2^a), so n & (n-1) is at least 2^a, and therefore non-zero.
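For concreteness, here is a small boolean wrapper around the same trick (my own sketch, not part of the answers above), printing the bit patterns for one power of two and one non-power:

// Hedged sketch: a strict boolean version of the check, plus a look at the bit patterns.
const isPowerOfTwo = n => n > 0 && (n & (n - 1)) === 0;

for (const n of [64, 96]) {
  console.log(n.toString(2), (n - 1).toString(2), (n & (n - 1)).toString(2));
}
// 1000000 111111 0         -> 64 is a power of two
// 1100000 1011111 1000000  -> 96 is not: the higher 1-bit survives the &
console.log(isPowerOfTwo(64), isPowerOfTwo(96)); // true false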

Related

What's the most efficient way of getting position of least significant bit of a number in javascript?

I've got some numbers and I need to know how much they should be shifted so that their lowest set bit ends up at position 0.
ex:
0x40000000 => 30 because 0x40000000 >> 30 = 1
768 = 512+256 => 8
This works:
if (Math.log2(x) == 31)
  return 31;
if (Math.log2(x) > 31)
  x = x & 0x7FFFFFFF;
return Math.log2(x & -x);
Is there any more efficient or elegant way (a builtin?) to do this in JavaScript?
You cannot get that result immediately with a builtin function, but you can avoid using Math.log2. There is a little-known function Math.clz32, which counts the number of leading zeroes of a number in its 32-bit binary representation. Use it like this:
function countTrailingZeroes(n) {
  n |= 0; // Turn to 32 bit range
  return n ? 31 - Math.clz32(n & -n) : 0;
}
console.log(countTrailingZeroes(0b11100)); // 2
The ternary expression is there to catch the value n=0, which is like a degenerate case: it has no 1-bit.
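As a quick sanity check against the examples in the question (these calls are my own addition, not part of the original answer):

console.log(countTrailingZeroes(0x40000000)); // 30
console.log(countTrailingZeroes(768));        // 8  (768 = 512 + 256)
console.log(countTrailingZeroes(0x80000000)); // 31 (highest bit; n |= 0 wraps it into 32-bit range)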

Algorithm to determine if a number is made of a sum of multiples of two other numbers

Let's say we are given 2k+2+3p=n as the test; how can we find out whether the test holds for a given number, with k>=0, p>=0, n>=0:
Example 1: n=24 should give true, since k=5 & p=4 => 2(5)+2+3(4)=24
Example 2: n=11 should give true, since k=0 & p=3 => 2(0)+2+3(3)=11
Example 3: n=15 should give true, since k=5 & p=1 => 2(5)+2+3(1)=15
I wonder if there is a mathematical solution to this. I solved it like below:
// let's say 2k+2+3p=n
var accepted = false;
var betterNumber = n - 2;
// assume p=0
var kReminder = betterNumber % 2 == 0;
// assume k=0
var pReminder = betterNumber % 3 == 0;
if (kReminder || pReminder) {
  accepted = true;
} else {
  var biggerChunk = Math.max(2, 3);  // max of 2k or 3p; here I try to find the bigger chunk of the two
  var smallerChunk = Math.min(2, 3);
  if ((betterNumber % biggerChunk) % smallerChunk == 0) {
    accepted = true;
  } else {
    accepted = false;
  }
}
Still, there are edge cases that I didn't see, so I wonder if there is a better solution or not.
Update
The test above is just an example. The solution should be efficient enough for big numbers or any combination of numbers, like 1000000k+37383993+37326328393p=747437446239902.
By inspection, 2 is the smallest valid even number and 5 is the smallest valid odd number:
2 is valid (k=0, p=0)
5 is valid (k=0, p=1)
All even numbers >= 2 and all odd numbers >= 5 are valid.
Even numbers: k=n/2-1, p=0
Odd numbers: k=(n-3)/2-1, p=1
What we're doing here is incrementing k to add 2s to the smallest valid even and odd numbers to get all larger even and odd numbers.
All values of n >= 2 are valid except for 3.
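A minimal sketch of this constructive rule (the function name and the [k, p] return shape are mine, not from the answer): it returns a witness pair when one exists, using p=0 for even n and p=1 for odd n >= 5, and null otherwise.

// Hedged sketch of the rule above for n = 2k + 2 + 3p.
function findKP(n) {
  if (n < 2 || n === 3) return null;       // only 0, 1 and 3 have no solution
  if (n % 2 === 0) return [n / 2 - 1, 0];  // even: k = n/2 - 1, p = 0
  return [(n - 5) / 2, 1];                 // odd >= 5: k = (n-5)/2, p = 1
}
console.log(findKP(24)); // [11, 0] -> 2*11 + 2 = 24
console.log(findKP(11)); // [3, 1]  -> 2*3 + 2 + 3 = 11
console.log(findKP(3));  // null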
Dave already gave a constructive and efficient answer but I'd like to share some math behind it.
For now I'll ignore the + 2 part, as it is of less significance, and concentrate on a generic form of this question: given two positive integers a and b, check whether a number X can be represented as k*a + m*b where k and m are non-negative integers. The Extended Euclidean algorithm essentially guarantees that:
If number X is not divisible by GCD(a,b), it can't be represented as k*a + m*b with integer k and m
If number X is divisible by GCD(a,b) and is greater than or equal to a*b, it can be represented as k*a + m*b with non-negative integers k and m. This follows from the fact that d = GCD(a,b) can be represented in such a form (let's call it d = k0*a + m0*b). If X = Y*d then X = (Y*k0)*a + (Y*m0)*b. If one of those two coefficients is negative you can trade one for the other, adding and subtracting a*b as many times as required, as in X = (Y*k0 + b)*a + (Y*m0 - a)*b. And since X >= a*b you can always make both coefficients non-negative in this way. (Note: this is obviously not the most efficient way to find a suitable pair of coefficients, but since you only ask whether such coefficients exist, it should be sufficient.)
So the only gray area is numbers X divisible by GCD(a,b) that lie in the (0, a*b) range. I'm not aware of any general rule for this area, but you can check it explicitly.
So you can just do the pre-calculation described in #3, and then you can answer this question pretty much immediately with a simple comparison, plus possibly a check against a pre-calculated array of booleans for the (0, a*b) range.
If your actual question is about the k*a + m*b + c form where a, b and c are fixed, it is easily converted to the k*a + m*b question by just subtracting c from X.
Update (Big values of a and b)
If your a and b are so big that you can't cache the (0, a*b) range beforehand, the only idea I have is to do the check for values in that range on demand, with a reasonably efficient algorithm. The code goes like this:
function egcd(a0, b0) {
  let a = a0;
  let b = b0;
  let ca = [1, 0];
  let cb = [0, 1];
  while ((a !== b) && (b !== 0)) {
    let r = a % b;
    let q = (a - r) / b;
    let cr = [ca[0] - q * cb[0], ca[1] - q * cb[1]];
    a = b;
    ca = cb;
    b = r;
    cb = cr;
  }
  return {
    gcd: a,
    coef: ca
  };
}

function check(a, b, x) {
  let eg = egcd(a, b);
  let gcd = eg.gcd;
  let c0 = eg.coef;
  if (x % gcd !== 0)
    return false;
  if (x >= a * b)
    return true;
  let c1a = c0[0] * x / gcd;
  let c1b = c0[1] * x / gcd;
  if (c1a < 0) {
    let fixMul = -Math.floor(c1a / (b / gcd));
    let c1bFixed = c1b - fixMul * (a / gcd);
    return c1bFixed >= 0;
  } else { // c1b < 0
    let fixMul = -Math.floor(c1b / (a / gcd));
    let c1aFixed = c1a - fixMul * (b / gcd);
    return c1aFixed >= 0;
  }
}
The idea behind this code is based on the logic described in step #2 above:
Calculate GCD and Bézout coefficients using the Extended Euclidean algorithm (if a and b are fixed, this can be cached, but even if not this is fairly fast anyway).
Check for conditions #1 (definitely no) and #2 (definitely yes) from the above
For a value in the (0, a*b) range, get initial coefficients by multiplying the Bézout coefficients by X/gcd.
Find which of the two is negative and find the minimum multiplier to fix it by trading one coefficient for another.
Apply this multiplier to the other (initially positive) coefficient and check if it remains positive.
This algorithm works because all the possible solutions for X = k*a + m*b can be obtained from some base solution (k0, m0) as (k0 + n*b/gcd, m0 - n*a/gcd) for some integer n. So to find out if there is a solution with both k >= 0 and m >= 0, all you need is to find the solution with the minimum non-negative k and check m for it.
Complexity of this algorithm is dominated by the Extended Euclidean algorithm which is logarithmic. If it can be cached, everything else is just constant time.
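To connect this back to the original 2k+2+3p = n question (this usage is my own illustration, not from the answer): take a = 2, b = 3 and test x = n - 2.

// Hedged usage sketch: does n = 2k + 2 + 3p have a non-negative solution?
function acceptable(n) {
  return check(2, 3, n - 2);
}
console.log(acceptable(24)); // true  (e.g. k=11, p=0)
console.log(acceptable(11)); // true  (e.g. k=0, p=3)
console.log(acceptable(3));  // false (only n=2 and n>=4 are representable)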
Theorem: it is possible to represent the number 2 and any number >= 4 using this formula.
Answer: the easiest test is to check whether the number equals 2 or is greater than or equal to 4.
Proof: n=2k+2+3p where k>=0, p>=0, n>=0 is the same as n=2m+3p where m>0, p>=0 and m=k+1. Using p=0 one can represent any even number >= 2, e.g. with m=10 one can represent n=20. The odd number to the left of this even number can be represented using m'=m-2, p=1, e.g. 19=2*8+3. The odd number to the right can be represented with m'=m-1, p=1, e.g. 21=2*9+3. This rule holds for m greater than or equal to 3, that is, starting from n=5. It is easy to see that for p=0 two additional values are also possible, n=2 and n=4.

Partitioning set such that cartesian product obeys constraint

I was reading this question, which describes the following problem statement:
You are given two ints: N and K. Lun the dog is interested in strings that satisfy the following conditions:
The string has exactly N characters, each of which is either 'A' or 'B'.
The string s has exactly K pairs (i, j) (0 <= i < j <= N-1) such that s[i] = 'A' and s[j] = 'B'.
If there exists a string that satisfies the conditions, find and return any such string. Otherwise, return an empty string
It occurs to me that this problem is equivalent to:
Determine whether there are any 2-partitions of 0...N-1 for which the cartesian product contains exactly K tuples (i, j) with i < j
Where the tuple elements represent assignments of the string index to the characters A and B.
This yields the very naive (but correct) implementation:
Determine all 2-partitions of the set 0...N-1
For each such partitioning, produce the cartesian product of the subsets
For each cartesian product, count the number of tuples (i, j) for which i < j
Choose any 2-partition for which this count is K
Here is an implementation in JS:
const test = ([l, r]) =>
  cart(l, r).reduce((p, [li, ri]) => p + (li < ri ? 1 : 0), 0) === k

const indices = _.range(0, n)
const results = partitions(indices).filter(test)
You can test out the results in the context of the original problem here. Some example outputs for n = 13, k = 29:
"aababbbbbbbbb", "babaaabbbbbbb", "baabababbbbbb", "abbaababbbbbb", ...
The complexity for just the first step here is the number of ways to partition a set into two subsets: this is the rather daunting Stirling number of the second kind S(n, k) for k = 2, i.e. S(n, 2) = 2^(n-1) - 1.
For example, for n=13 this works out to 4095, which is not great.
Obviously if we only need a single partitioning that satisfies the requirement (which is what the original question asks for), and compute everything lazily, we will generally not go into the worst case. However in general, the approach here still seems quite wasteful, in that most of the partitions we compute never satisfy the property of having k tuples in the cartesian product for which i < j.
My question is whether there is some further abstraction or isomorphism that can be recognized to make this more efficient. E.g. is it possible to construct a subset of 2-partitions in such a way that the condition on the cartesian product is satisfied by construction?
(This is a method to algorithmically construct all solutions; you're probably looking for a more mathematical approach.)
In this answer to the linked question I give a method for finding the lexicographically smallest solution. This tells you what the smallest number of B's is with which you can construct a solution. If you turn the method on its head and start with a string of all B's and add A's from the left, you can find the highest number of B's with which you can construct a solution.
To construct all solutions for a specific number of B's in this range, you can again use a recursive method, but instead of only adding a B to the end and recursing once with N-1, you'd add B, then BA, then BAA... and recurse with all cases that will yield valid solutions. Consider again the example of N=13 and K=29, for which the minimum number of B's is 3 and the maximum is 10; you can construct all solutions for e.g. 4 B's like this:
N=13 (number of digits)
K=29 (number of pairs)
B= 4 (number of B's)
(13,29,4) =
(12,20,3) + "B"
(11,21,3) + "BA"
(10,22,3) + "BAA"
At this point you know that you've reached the end of the cases that will yield solutions, because (9/2)² < 23: a string of length 9 can contain at most 4·5 = 20 pairs, whatever the number of B's. So at each level you recurse with:
N = N - length of added string
K = K - number of A's still to be added
B = B - 1
When you reach the recursion level where B is either 1 or N - 1, you can construct the string without further recursions.
Practically, what you're doing is that you start with the B's as far to the right as possible, and then move them one by one to the left while compensating by moving other B's to the right, until you've reached the position where the B's are as far to the left as possible. See the output of this code snippet:
function ABstring(N, K, B, str) {
  if ((N - B) * B < K) return;
  str = str || "";
  if (B <= 1 || B >= N - 1) {
    for (var i = N - 1; i >= 0; i--)
      str = (B == 1 && i == K || B == N - 1 && N - 1 - i != K || B == N ? "B" : "A") + str;
    document.write(str + "<br>");
  } else {
    var prefix = "B";
    --B;
    while (--N) {
      if (K - (N - B) >= 0 && B <= N)
        ABstring(N, K - (N - B), B, prefix + str);
      prefix += "A";
    }
  }
}
ABstring(13, 29, 4);
If you run this code for all values of B from 3 to 10, you get all 194 solutions for (N,K) = (13,29). Instead of calculating the minimum and maximum number of B's first, you can just run this algorithm for all values of B from 0 to N (and stop as soon as you no longer get solutions).
This is the pattern for (N,K,B) = (16,24,4) (output listing not reproduced here).
Let P be the function that, for a given AB string, returns the number of good pairs (i, j) with s[i] = 'A', s[j] = 'B'.
First consider strings of length N where the number of B's is fixed, say b; these strings contain (N-b) A's. Call this set of strings S_b. The minimum of P on S_b is 0, with all B's on the left side (call this string O). The maximum of P on S_b is b*(N-b), with all B's on the right side. This gives a simple check for the non-existence of an s in S_b with the required property.
Consider the operation of swapping a neighbouring BA -> AB. That operation changes P by +1. Using only that operation, starting from string O, it is possible to construct every string with b B's. This gives us: if b*(N-b) >= K, then there is an s in S_b with the required property.
The rightmost B in O can move to the end of the string, N-b places. Since it is not possible to swap two B's, the B to the left of the rightmost B can move at most as much as the rightmost B, and so on. The numbers of moves the B's can make (m_i) satisfy 0 <= m_1 <= m_2 <= ... <= m_b <= N-b.
With that, finding all AB strings s of length N with b B's where P(s)=K is equivalent to finding all partitions of the integer K into at most b parts where each part is <= N-b. To find all strings, one needs to check all b with b*(N-b) >= K.
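Following that partition view, here is a minimal sketch (the function name and construction are mine, not from the answer) that produces one valid string for a given b by splitting K into at most b parts, each at most N-b:

// Hedged sketch: build one string of length N with b B's and exactly K pairs.
// 'full' B's get the maximum of a = N-b preceding A's, one B gets the remainder r.
function buildABString(N, K, b) {
  var a = N - b;                                 // number of A's
  if (a === 0) return K === 0 ? "B".repeat(b) : "";
  if (K < 0 || K > a * b) return "";             // impossible for this choice of b
  var full = Math.floor(K / a);                  // B's placed after all a A's
  var r = K % a;                                 // one B placed after r A's (if r > 0)
  var frontBs = b - full - (r > 0 ? 1 : 0);      // remaining B's contribute 0 pairs
  return "B".repeat(frontBs) +
         (r > 0 ? "A".repeat(r) + "B" : "") +
         "A".repeat(a - r) +
         "B".repeat(full);
}
console.log(buildABString(13, 29, 4)); // "AABAAAAAAABBB", one of the 194 solutions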

Can someone explain this base conversion code

var ShortURL = new function() {
  var _alphabet = '23456789bcdfghjkmnpqrstvwxyzBCDFGHJKLMNPQRSTVWXYZ-_',
      _base = _alphabet.length;

  this.encode = function(num) {
    var str = '';
    while (num > 0) {
      str = _alphabet.charAt(num % _base) + str;
      num = Math.floor(num / _base);
    }
    return str;
  };

  this.decode = function(str) {
    var num = 0;
    for (var i = 0; i < str.length; i++) {
      num = num * _base + _alphabet.indexOf(str.charAt(i));
    }
    return num;
  };
};
I understand that encode works by converting from decimal to a custom base (a custom alphabet/digit set in this case).
I am not quite sure how decode works.
Why do we multiply the current number by the base and then add the position number of the alphabet? I know that to convert 010 base 2 to decimal, we would do
(0 * 2^2) + (1 * 2^1) + (0 * 2^0) = 2
Not sure how it is represented in that decode algorithm
EDIT:
My own decode version
this.decode2 = function (str) {
  var result = 0;
  var position = str.length - 1;
  var value;
  for (var i = 0; i < str.length; i++) {
    value = _alphabet.indexOf(str[i]);
    result += value * Math.pow(_base, position--);
  }
  return result;
}
This is how I wrote my own decode version (just the way I would convert it on paper). I would like someone to explain in more detail how the first version of decode works. I still don't get why we multiply num * base and start num at 0.
OK, so what does the string "376" mean as an output of your encode() function? It means:
1 * 51^2 +
5 * 51 +
4 * 1
Why? Because in encode(), you divide by the base on every iteration. That means that, implicitly, the characters pushed onto the string on the earlier iterations gain in significance by a factor of the base each time through the loop.
The decode() function, therefore, multiplies by the base each time it sees a new character. That way, the first digit is multiplied by the base once for every digit position past the first that it represents, and so on for the rest of the digits.
Note that in the explanation above, the 1, 5, and 4 come from the positions of the characters 3, 7, and 6 in the "alphabet" list. That's how your encoding/decoding mechanism works. If you feed your decode() function a numeric string encoded by something trying to produce normal base-10 numbers, then of course you'll get a weird result; that's probably obvious.
edit To further elaborate on the decode() function: forget (for now) about the special base and encoding alphabet. The process is basically the same regardless of the base involved. So, let's look at a function that interprets a base-10 string of numeric digits as a number:
function decode10(str) {
  var num = 0, zero = '0'.charCodeAt(0);
  for (var i = 0; i < str.length; ++i) {
    num = (num * 10) + (str.charCodeAt(i) - zero); // digit value of the current character
  }
  return num;
}
The accumulator variable num is initialized to 0 first, because before examining any characters of the input numeric string the only value that makes sense to start with is 0.
The function then iterates through each character of the input string from left to right. On each iteration, the accumulator is multiplied by the base, and the digit value at the current string position is added.
If the input string is "214", then, the iteration will proceed as follows:
num is set to 0
First iteration: str[i] is 2, so (num * 10) + 2 is 2
Second iteration: str[i] is 1, so (num * 10) + 1 is 21
Third iteration: str[i] is 4, so (num * 10) + 4 is 214
The successive multiplications by 10 achieve what the call to Math.pow() does in your code. Note that 2 is multiplied by 10 twice, which effectively multiplies it by 100.
The decode() routine in your original code does the same thing, only instead of a simple character code computation to get the numeric value of a digit, it performs a lookup in the alphabet string.
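As a quick check of the weighted sum at the top of this answer (these calls are my own addition, not part of the original answer), decoding "376" with the 51-character alphabet gives exactly 1 * 51^2 + 5 * 51 + 4:

console.log(ShortURL.decode('376'));       // 2860
console.log(1 * 51 * 51 + 5 * 51 + 4 * 1); // 2860
console.log(ShortURL.encode(2860));        // "376"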
Both the original and your own version of the decode function achieve the same thing, but the original version does it more efficiently.
In the following assignment:
num = num * _base + _alphabet.indexOf(str.charAt(i));
... there are two parts:
_alphabet.indexOf(str.charAt(i))
The indexOf returns the value of a digit in base _base. You have this part in your own algorithm, so that should be clear.
num * _base
This multiplies the so-far accumulated result. The rest of my answer is about that part:
In the first iteration this has no effect, as num is still 0 at that point. But at the end of the first iteration, num contains the value as if str only had its leftmost character: the base-51 digit value of that leftmost digit.
From the next iteration onwards, the result is multiplied by the base, which makes room for the next value to be added to it. It functions like a digit shift.
Take this example input to decode:
bd35
The individual characters represent the values 8, 10, 1 and 3. As there are 51 characters in the alphabet, we're in base 51. So bd35 represents the value:
8*51³ + 10*51² + 1*51 + 3
Here is a table with the value of num after each iteration:
8
8*51 + 10
8*51² + 10*51 + 1
8*51³ + 10*51² + 1*51 + 3
Just to make the visualisation cleaner, let's put the power of 51 in a column header, and remove that from the rows:
     3     2     1     0
------------------------
                       8
                 8    10
           8    10     1
     8    10     1     3
Note how the 8 shifts to the left at each iteration and gets multiplied with the base (51). The same happens with 10, as soon as it is shifted in from the right, and the same with the 1, and 3, although that is the last one and doesn't shift any more.
The multiplication num * _base thus represents a shift of base-51 digits to the left, making room for a new digit to shift in from the right (through simple addition).
At the last iteration all digits have shifted in their correct position, i.e. they have been multiplied by the base just enough times.
Putting your own algorithm in the same scheme, you'd have this table:
     3     2     1     0
------------------------
     8
     8    10
     8    10     1
     8    10     1     3
Here, there is no shifting: the digits are immediately put in the right position, i.e. they are multiplied with the correct power of 51 immediately.
You ask
I would like to understand how the decode function works from a logical perspective. Why are we using num * base and starting with num = 0.
and write that
I am not quite sure how decode works. Why do we multiply the current number by the base and then add the position number of the alphabet? I know that to convert 010 base 2 to decimal, we would do
(0 * 2^2) + (1 * 2^1) + (0 * 2^0) = 2
The decode function uses an approach to base conversion known as Horner's rule, used because it is computationally efficient:
start with a variable set to 0, num = 0
multiply the variable num by the base
take the value of the most significant digit (the leftmost digit) and add it to num,
repeat step 2 and 3 for as long as there are digits left to convert,
the variable num now contains the converted value (in base 10)
Using an example of a hexadecimal number A5D:
start with a variable set to 0, num = 0
multiply by the base (16), num is now still 0
take the value of the most significant digit (the A has a digit value of 10) and add it to num, num is now 10
repeat step 2, multiply the variable num by the base (16), num is now 160
repeat step 3, add the hexadecimal digit 5 to num, num is now 165
repeat step 2, multiply the variable num by the base (16), num is now 2640
repeat step 3, add the hexadecimal digit D to num (add 13)
there are no digits left to convert, the variable num now contains the converted value (in base 10), which is 2653
Compare the expression of the standard approach:
(10 × 16²) + (5 × 16¹) + (13 × 16⁰) = 2653
to the use of Horner's rule:
(((10 × 16) + 5) × 16) + 13 = 2653
which is exactly the same computation, but rearranged in a form making it easier to compute. This is how the decode function works.
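A minimal JavaScript sketch of that rearrangement (the decodeHex helper is mine, for illustration; it has the same loop shape as the decode function above):

function decodeHex(str) {
  let num = 0;
  for (const ch of str) {
    num = num * 16 + parseInt(ch, 16); // multiply by the base, then add the digit value
  }
  return num;
}
console.log(decodeHex("A5D")); // 2653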
Why are we using num * base and starting with num = 0.
The conversion algorithm needs a start value, therefore num is set to 0. For each repetition (each loop iteration), num is multiplied by the base. This has no effect on the first iteration (num is still 0), but it is written like this to make it easier to express the conversion as a for loop.

Max length of collatz sequence - optimisation

I'm trying to solve this MaxCollatzLength kata but I'm struggling to optimise it to run fast enough for really large numbers.
In this kata we will take a look at the length of Collatz sequences, and how they evolve. Write a function that takes a positive integer n and returns the number between 1 and n that has the maximum Collatz sequence length, together with that maximum length. The output has to take the form of an array [number, maxLength]. For example, the Collatz sequence of 4 is [4,2,1], of 3 is [3,10,5,16,8,4,2,1], of 2 is [2,1], of 1 is [1], so MaxCollatzLength(4) should return [3,8]. If n is not a positive integer, the function has to return [].
As you can see, numbers in Collatz sequences may exceed n. The last tests use random big numbers, so you may want to consider some optimisation in your code:
You may get very unlucky and get only hard numbers: try submitting 2-3 times if it times out; if it still does, you probably need to optimize your code more.
Optimisation 1: when calculating the length of a sequence, if n is odd, what will 3n+1 be?
Optimisation 2: when looping through 1 to n, take i such that i < n/2; what will the length of the sequence for 2i be?
A recursive solution quickly blows the stack, so I'm using a while loop. I think I've understood and applied the first optimisation. I also spotted that for an n that is a power of 2, its sequence length will be (log2 of n) + 1 (though that only shaves off a very small amount of time for an arbitrarily large number). Finally, I have memoised the Collatz lengths computed so far to avoid recalculations.
I don't understand what is meant by the second optimisation, however. I've tried to notice a pattern with a few random samples and loops and I've plotted the max collatz lengths for n < 50000. I noticed it seems to roughly follow a curve but I don't know how to proceed - is this a red herring?
I'm ideally looking for a hints in the right direction so I can work towards the solution myself.
function collatz(n) {
  let result = [];
  while (n !== 1) {
    result.push(n);
    if (n % 2 === 0) n /= 2;
    else {
      n = n * 3 + 1;
      result.push(n);
      n = n / 2;
    }
  }
  result.push(1);
  return result;
}
function collatzLength(n) {
  if (n <= 1) return 1;
  if (!collatzLength.precomputed.hasOwnProperty(n)) {
    // powers of 2 are logarithm2 + 1 long
    if ((n & (n - 1)) === 0) {
      collatzLength.precomputed[n] = Math.log2(n) + 1;
    } else {
      collatzLength.precomputed[n] = collatz(n).length;
    }
  }
  return collatzLength.precomputed[n];
}
collatzLength.precomputed = {};
function MaxCollatzLength(n) {
  if (typeof n !== 'number' || n === 0) return [];
  let maxLen = 0;
  let numeralWithMaxLen = Infinity;
  while (n !== 0) {
    let lengthOfN = collatzLength(n);
    if (lengthOfN > maxLen) {
      maxLen = lengthOfN;
      numeralWithMaxLen = n;
    }
    n--;
  }
  return [numeralWithMaxLen, maxLen];
}
Memoization is the key to good performance here. You memoize the end results of the function that calculates the Collatz sequence. This will help you on repeated calls to maxCollatzLength, but not when you determine the length of the sequence for the first time.
Also, as #j_random_hacker mentioned, there is no need to actually create the sequence as list; it is enough to store its length. An integer result is light-weight enough to be memoized easily.
You can make use of precalculated results already when you determine the length of a Collatz sequence. Instead of following the sequence all the way down, follow it until you hit a number for which the length is known.
The other optimizations you make are micro-optimizations. I'm not sure that calculating the log for powers of two really buys you anything. It rather burdens you with an extra test.
The memoized implementation below even forgoes the check for 1 by putting 1 in the dictionary of precalculated values initially.
var precomp = {1: 1};

function collatz(n) {
  var orig = n;
  var len = 0;
  while (!(n in precomp)) {
    n = (n % 2) ? 3 * n + 1 : n / 2;
    len++;
  }
  return (precomp[orig] = len + precomp[n]);
}
function maxCollatz(n) {
  var res = [1, 1];
  for (var k = 2; k <= n; k++) {
    var c = collatz(k);
    if (c > res[1]) {
      res[0] = k;
      res[1] = c;
    }
  }
  return res;
}
I haven't used node.js, but ran the JavaScript in my Firefox. It gives reasonable performance. I first had collatz as a recursive function, which made the implementation only slightly faster than yours.
The second optimization mentioned in the question means that if you know C(n), you also know that C(2*n) == C(n) + 1. You could use that knowledge to precalculate the values for all even n in a bottom-up approach.
It would be nice if the lengths of the Collatz sequences could be calculated from the bottom up, a bit like the sieve of Eratosthenes. You have to know where you come from instead of where you go to, but it is hard to know when to stop, because to find the longest sequence for n < N, you would have to calculate many sequences out of bounds, with n > N. As it is, the memoization is a good way to avoid repetition in an otherwise straightforward iterative approach.
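One way to exploit the second optimisation inside the memoized approach above (this variant is my own sketch, not from the answer): whenever collatz(k) is known and 2*k <= n, the value for 2*k can be filled in directly, since C(2k) = C(k) + 1.

// Hedged sketch: seed the memo with the doubling rule while scanning 1..n.
function maxCollatzDoubling(n) {
  var res = [1, 1];
  for (var k = 2; k <= n; k++) {
    var c = collatz(k);                    // memoized collatz() from above
    if (2 * k <= n && !(2 * k in precomp)) {
      precomp[2 * k] = c + 1;              // C(2k) = C(k) + 1
    }
    if (c > res[1]) {
      res[0] = k;
      res[1] = c;
    }
  }
  return res;
}
console.log(maxCollatzDoubling(4)); // [3, 8]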
In this task you are required to write a Python function, maxLength, that returns two integers:
• First returned value: for each integer k, 1 ≤ k ≤ m, the length of the Collatz sequence for k is computed and the largest of these numbers is returned.
• Second returned value: the integer k, 1 ≤ k ≤ m, whose Collatz sequence has the largest length. In case there are several such numbers, return the first one (the smallest).
For example, maxLength(10) returns the numbers 20 and 9, which means that among the numbers 1, 2, 3, …, 10, nine has the longest Collatz sequence, and its length is equal to 20.
In your program you may define other (auxiliary) functions with arbitrary names; however, the solution function of this task should be named maxLength(m).
