After a long, difficult coding challenge, one problem kept bugging me. I thought about it for quite a while but couldn't find a way to solve it. I'm providing the problem and an example below.
Input
v : an array of numbers.
q : a 2-dimensional array of commands; each nested array has 3 elements.
Description
v is an array and q is a list of commands, each of which does a different thing depending on its elements:
if the first element of a nested array is 1 => the second and third elements become indices, and the command returns the sum of v[second..third] (as you can see, the range is inclusive)
if the first element of a nested array is 2 => the element at the second index is set to the third, same as v[second] = third
Input example
v : [1,2,3,4,5]
q : [[1,2,4], [2,3,8], [1,2,4]]
Example
With the provided example, it goes like this:
the command is [1,2,4] => the first element is 1, so it should return the sum from v[2] to v[4] (inclusive) => 12.
the command is [2,3,8] => the first element is 2, so it sets v[3] to 8 (now v is [1,2,3,8,5]).
the command is [1,2,4] => the first element is 1, so it should return the sum from v[2] to v[4] (inclusive) => 16, as index 3 was changed by the previous command.
So the final answer is [12, 16]
Question.
The code below is how I solved it; however, it has O(n**2) complexity. I wonder how I can reduce the time complexity in this case.
I tried making a hash object, but it didn't work. I can't think of a good way to build a cache in this case.
function solution(v, q) {
  let answer = [];
  for (let i = 0; i < q.length; i++) {
    let [a, b, c] = q[i];
    if (a === 1) {
      let sum = 0;
      for (let j = b; j <= c; j++) {
        sum += v[j];
      }
      answer.push(sum);
    } else if (a === 2) {
      v[b] = c;
    }
  }
  return answer;
}
This type of problem can typically be solved more efficiently with a Fenwick tree, also known as a binary indexed tree.
Here is an implementation:
class BinaryIndexedTree extends Array {
  constructor(length) {
    super(length + 1);
    this.fill(0);
  }
  add(i, delta) {
    i++; // make index 1-based
    while (i < this.length) {
      this[i] += delta;
      i += i & -i; // add least significant bit
    }
  }
  sumUntil(i) {
    i++; // make index 1-based
    let sum = 0;
    while (i) {
      sum += this[i];
      i -= i & -i;
    }
    return sum;
  }
}
function solution(values, queries) {
  const tree = new BinaryIndexedTree(values.length);
  values.forEach((value, i) => tree.add(i, value));
  const answer = [];
  for (const [a, b, c] of queries) {
    if (a === 1) {
      answer.push(tree.sumUntil(c) - tree.sumUntil(b - 1));
    } else {
      tree.add(b, c - values[b]);
      values[b] = c;
    }
  }
  return answer;
}
let answer = solution([1,2,3,4,5], [[1,2,4], [2,3,8], [1,2,4]]);
console.log(answer);
Time Complexity
The time complexity of running tree.add or tree.sumUntil once is O(log n), where n is the size of the input values (values.length). So this is also the time complexity of running one query.
The creation of the tree costs O(n), as this is the size of the tree.
The initialisation of the tree with values costs O(n log n), as each value in the input effectively acts as a query that updates a value from 0 to the actual value.
Executing the queries costs O(q log n), where q is the number of queries (queries.length).
So in total, we have a time complexity of O(n + n log n + q log n) = O((n + q) log n).
Further reading
For more information on Fenwick trees, see BIT: What is the intuition behind a binary indexed tree and how was it thought about?
Related
I have a function that gets a number and has to calculate the multiplication of each number with all the other numbers in the sequence.
If the input is 10, it should calculate the multiplications 1x1, 1x2, 1x3, .... 10x1, 10x2, 10x3, .... 10x10 (passing through all the numbers sequentially).
So I thought at first sight that I need a double loop to do all the possible multiplications, but for big numbers it executes in O(n*n), which is too slow.
I heard there is a way to use only one loop. Do you know any post related to this subject? The only ones I found don't take into account that I need to perform the calculation for each number with the rest of the numbers of the array.
Here is the code:
for (let i = 1; i <= n; i++) {
  for (let j = 1; j <= n; j++) {
    // do i*j
  }
}
Here's a way to do it with one loop (map), with the help of recursion.
const order = 10;

let sequence = [];
for (let i = 0; i < order; i++) {
  sequence.push(i + 1);
}

const getMulTable = (order, table = []) => {
  if (order === 1) {
    table.push(sequence);
    return table;
  }
  table = getMulTable(order - 1, table);
  table.push(sequence.map(el => el * order));
  return table;
}

console.log(getMulTable(order));
I'm not sure if this reduces the time complexity, but this is the shortest way I know of doing it.
Use collection methods like map.
let n = 10
let nCopy = n
let arr = []
while (nCopy > 0) {
  arr.push(nCopy)
  nCopy--
}
arr.sort((a, b) => a - b)

let res = []
for (let i = 1; i <= n; i++) {
  res.push(arr.map(a => a * i))
}
console.log(res.flat())
I'm building an app, and in one of my functions I need to generate random, unique 4-digit codes. Obviously there is a finite range from 0000 to 9999, but the entire list will be wiped each day, and each day I will not need more than the available number of codes, which means it's possible to have unique codes for each day. Realistically I will probably only need a few hundred codes a day.
The way I've coded it for now is the simple brute-force way: generate a random 4-digit number, check if the number exists in an array, and if it does, generate another number, while if it doesn't, return the generated number.
Since it's 4 digits, the runtime isn't anything too crazy, and I'm mostly generating a few hundred codes a day, so there won't be some scenario where I've generated 9999 codes and keep randomly generating numbers to find the last remaining one.
It would also be fine to have letters in there as well instead of just numbers if it would make the problem easier.
Other than my brute force method, what would be a more efficient way of doing this?
Thank you!
Since you have a constrained number of values that will easily fit in memory, the simplest way I know of is to create a list of the possible values and select one randomly, then remove it from the list so it can't be selected again. This will never have a collision with a previously used number:
function initValues(numValues) {
  const values = new Array(numValues);
  // fill the array with each value
  for (let i = 0; i < values.length; i++) {
    values[i] = i;
  }
  return values;
}

function getValue(array) {
  if (!array.length) {
    throw new Error("array is empty, no more random values");
  }
  const i = Math.floor(Math.random() * array.length);
  const returnVal = array[i];
  array.splice(i, 1);
  return returnVal;
}

// sample code to use it
const rands = initValues(10000);

console.log(getValue(rands));
console.log(getValue(rands));
console.log(getValue(rands));
console.log(getValue(rands));
This works by doing the following:
Generate an array of all possible values.
When you need a value, select one from the array with a random index.
After selecting the value, remove it from the array.
Return the selected value.
Items are never repeated because they are removed from the array when used.
There are no collisions with used values because you're always just selecting a random value from the remaining unused values.
This relies on the fact that an array of integers is pretty well optimized in Javascript so doing a .splice() on a 10,000 element array is still pretty fast (as it can probably just be memmove instructions).
FYI, this could be made more memory efficient by using a typed array since your numbers can be represented in 16-bit values (instead of the default 64 bits for doubles). But, you'd have to implement your own version of .splice() and keep track of the length yourself since typed arrays don't have these capabilities built in.
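For illustration, here is a minimal sketch of that typed-array idea (the helper names are mine, not part of the answer above). Rather than literally re-implementing .splice(), it swaps the picked slot with the last active one and tracks the logical length manually, which avoids the memmove entirely:
function initTypedPool(numValues) {
  // Uint16Array is enough, since the values fit in 16 bits (0..9999)
  const values = new Uint16Array(numValues);
  for (let i = 0; i < numValues; i++) values[i] = i;
  return { values, length: numValues };
}

function getTypedValue(pool) {
  if (!pool.length) {
    throw new Error("pool is empty, no more random values");
  }
  const i = Math.floor(Math.random() * pool.length);
  const returnVal = pool.values[i];
  // Overwrite the used slot with the last active value, then shrink the pool
  pool.values[i] = pool.values[pool.length - 1];
  pool.length--;
  return returnVal;
}

const pool = initTypedPool(10000);
console.log(getTypedValue(pool));
console.log(getTypedValue(pool));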
For even larger problems like this where memory usage becomes a problem, I've used a BitArray to keep track of previous usage of values.
Here's a class implementation of the same functionality:
class Randoms {
  constructor(numValues) {
    this.values = new Array(numValues);
    for (let i = 0; i < this.values.length; i++) {
      this.values[i] = i;
    }
  }
  getRandomValue() {
    if (!this.values.length) {
      throw new Error("no more random values");
    }
    const i = Math.floor(Math.random() * this.values.length);
    const returnVal = this.values[i];
    this.values.splice(i, 1);
    return returnVal;
  }
}

const rands = new Randoms(10000);

console.log(rands.getRandomValue());
console.log(rands.getRandomValue());
console.log(rands.getRandomValue());
console.log(rands.getRandomValue());
Knuth's multiplicative method looks to work pretty well: it'll map numbers 0 to 9999 to a random-looking other number 0 to 9999, with no overlap:
const hash = i => i * 2654435761 % 10000;

const s = new Set();
for (let i = 0; i < 10000; i++) {
  const n = hash(i);
  if (s.has(n)) { console.log(i, n); break; }
  s.add(n);
}
To implement it, simply keep track of an index that gets incremented each time a new one is generated:
const hash = i => i * 2654435761 % 10000;

let i = 1;
console.log(
  hash(i++),
  hash(i++),
  hash(i++),
  hash(i++),
  hash(i++),
);
These results aren't actually random, but they probably do the job well enough for most purposes.
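If the codes must always be 4 characters long (an assumption about the desired format, mirroring the padStart call used in the next answer), you would zero-pad the result:
const hash = i => i * 2654435761 % 10000;
let i = 1;
// e.g. 561 becomes "0561" -- assuming codes are always 4 characters
console.log(hash(i++).toString().padStart(4, "0"));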
Disclaimer:
This is copy-paste from my answer to another question here. The code was in turn ported from yet another question here.
Utilities:
function isPrime(n) {
  if (n <= 1) return false;
  if (n <= 3) return true;
  if (n % 2 == 0 || n % 3 == 0) return false;
  for (let i = 5; i * i <= n; i = i + 6) {
    if (n % i == 0 || n % (i + 2) == 0) return false;
  }
  return true;
}

function findNextPrime(n) {
  if (n <= 1) return 2;
  let prime = n;
  while (true) {
    prime++;
    if (isPrime(prime)) return prime;
  }
}

function getIndexGeneratorParams(spaceSize) {
  const N = spaceSize;
  const Q = findNextPrime(Math.floor(2 * N / (1 + Math.sqrt(5))))
  const firstIndex = Math.floor(Math.random() * spaceSize);
  return [firstIndex, N, Q]
}

function getNextIndex(prevIndex, N, Q) {
  return (prevIndex + Q) % N
}
Usage
// Each day you bootstrap to get a tuple of these parameters and persist them throughout the day.
const [firstIndex, N, Q] = getIndexGeneratorParams(10000)

// Need to keep track of the previous index generated.
// It's the seed to generate the next one.
let prevIndex = firstIndex

// Calling this function gives you the unique code
function getHashCode() {
  prevIndex = getNextIndex(prevIndex, N, Q)
  return prevIndex.toString().padStart(4, "0")
}

console.log(getHashCode());
Explanation
For simplicity, let's say you want to generate non-repeating numbers from 0 to 35 in random order. We get pseudo-randomness by polling a "full cycle iterator"†. The idea is simple:
lay out the indexes 0..35 in a circle, and denote the upper bound as N=36
decide a step size, denoted as Q (Q=23 in this case), given by this formula‡
Q = findNextPrime(Math.floor(2 * N / (1 + Math.sqrt(5))))
randomly decide a starting point, e.g. number 5
start generating seemingly random nextIndex from prevIndex, by
nextIndex = (prevIndex + Q) % N
So if we put 5 in we get (5 + 23) % 36 == 28. Put 28 in we get (28 + 23) % 36 == 15.
This process will go through every number on the circle (jumping back and forth among its points), picking each number only once, without repeating. When we get back to our starting point 5, we know we've reached the end.
†: I'm not sure about this term, just quoting from this answer
‡: This formula only gives a nice step size that makes things look more "random"; the only requirement for Q is that it must be coprime to N
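As a quick sanity check of that coprimality claim (my snippet, in the spirit of the Set-based test in the earlier answer), you can verify that the iterator visits each of the N indexes exactly once before returning to the start:
// With N=36 and Q=23 (coprime), every index 0..35 is visited exactly once
// before the cycle returns to the starting point.
const N = 36, Q = 23;
const seen = new Set();
let idx = 5; // arbitrary starting point
do {
  seen.add(idx);
  idx = (idx + Q) % N;
} while (idx !== 5);
console.log(seen.size === N); // true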
This problem is so small I think a simple solution is best. Build an ordered array of the 10k possible values & permute it at the start of each day. Give the k'th value to the k'th request that day.
It avoids the possible problem with your solution of having multiple collisions.
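A minimal sketch of that idea (my code, not the answerer's), using a Fisher-Yates shuffle for the daily permutation:
// Shuffle all 10,000 codes once per day (Fisher-Yates),
// then hand them out sequentially.
function makeDailyCodes() {
  const codes = Array.from({ length: 10000 }, (_, i) => i);
  for (let i = codes.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [codes[i], codes[j]] = [codes[j], codes[i]];
  }
  return codes;
}

const codes = makeDailyCodes();
let k = 0; // reset each day
console.log(codes[k++]); // the k'th request gets the k'th value
console.log(codes[k++]);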
I'm doing the freeCodeCamp certifications, and there's one problem I cannot find a solution for: the task is to calculate the least common multiple (LCM) for an array of integers (that also means a RANGE of integers, between a min and a max value).
The snippet below gives the correct answer here on SO, on Codepen.io, and in my local environment, but not on freecodecamp.org.
function smallestCommons(arr) {
  // sorting and cloning the array
  const fullArr = createArray(arr.sort((a, b) => a - b))
  // calculating the theoretical limit of the size of the LCM
  const limit = fullArr.reduce((a, c) => a * c, 1)
  // setting the number to start the iteration with
  let i = fullArr[0]
  // iteration to get the LCM
  for (i; i <= limit; i++) {
    // if every number in the fullArr divides
    // the number being tested (i), then it's the LCM
    if (fullArr.every(e => !(i % e))) {
      // stop the for loop
      break
    }
  }
  // return LCM
  return i;
}

// utility function to create the fullArr const in the
// main function
function createArray([a, b]) {
  const r = []
  for (let i = b; i >= a; i--) {
    r.push(i)
  }
  return r
}
// displaying the results
console.log(smallestCommons([23, 18]));
The errors I see:
the code works correctly with the 4 other test arrays on freecodecamp.org
the code gives false results, or no results at all, for the array [23, 18]. If I get a result, it's not consistent (like 1,000,000 once, then 3,654,236; I made these numbers up, but the behavior is like that). The result for the [23, 18] input should be 6,056,820 (and it is that here on SO, but not on freecodecamp.org)
As this code is far from optimal, I have a feeling that the code execution just runs out of resources at some point, but I get no error for that.
I read the hints on the page (yes, I tried the solutions, and they do work), but I'd like to submit my own code: I know my algorithm is (theoretically) good, although not optimal, but I'd like to make it work in practice too.
I also see that this question has caused problems for others (it's been asked on SO before), but I don't feel it's a duplicate.
Does anyone have any ideas?
As it turned out, it was a resource problem: the algorithm in my question was theoretically correct but wasn't optimal or efficient.
Here's one that solves this problem more efficiently:
// main function
function smallestCommons(arr) {
  const fullArr = createArray(arr.sort((a, b) => a - b))
  return findLcm(fullArr, fullArr.length);
}

// creating the range of numbers based on a min and a max value
function createArray([a, b]) {
  const r = []
  for (let i = b; i >= a; i--) {
    r.push(i)
  }
  return r
}

// smallest common multiple of n numbers
function findLcm(arr, n) {
  let ans = arr[0];
  for (let i = 1; i < n; i++) {
    ans = (arr[i] * ans) / gcd(arr[i], ans);
  }
  return ans;
}

// greatest common divisor
function gcd(a, b) {
  if (b == 0) return a;
  return gcd(b, a % b);
}
console.log(smallestCommons([1, 5]));
console.log(smallestCommons([5, 1]));
console.log(smallestCommons([2, 10]));
console.log(smallestCommons([23, 18]));
This method was OK in the testing sandbox environment.
Given an array of numbers, for example [14,6,10], how can I find the possible combinations/pairs that add up to a given target value?
For example, with [14,6,10], I'm looking for a target value of 40.
my expected output will be
10 + 10 + 6 + 14
14 + 14 + 6 + 6
10 + 10 + 10 + 10
*Order is not important
With that being said, this is what I tried so far:
function Sum(numbers, target, partial) {
  var s, n, remaining;
  partial = partial || [];
  s = partial.reduce(function (a, b) {
    return a + b;
  }, 0);
  if (s === target) {
    console.log("%s", partial.join("+"))
  }
  for (var i = 0; i < numbers.length; i++) {
    n = numbers[i];
    remaining = numbers.slice(i + 1);
    Sum(remaining, target, partial.concat([n]));
  }
}
>>> Sum([14,6,10],40);
// returns nothing
>>> Sum([14,6,10],24);
// return 14+10
It is actually of little use, since it only finds combinations in which each number is used at most once.
So how do I do it?
You could keep adding the value at the current index for as long as the sum is smaller than the wanted sum, or proceed with the next index.
function getSum(array, sum) {
  function iter(index, temp) {
    var s = temp.reduce((a, b) => a + b, 0);
    if (s === sum) result.push(temp);
    if (s >= sum || index >= array.length) return;
    iter(index, temp.concat(array[index]));
    iter(index + 1, temp);
  }

  var result = [];
  iter(0, []);
  return result;
}
console.log(getSum([14, 6, 10], 40));
For getting a limited result set, you could specify the length and check it in the exit condition.
function getSum(array, sum, limit) {
  function iter(index, temp) {
    var s = temp.reduce((a, b) => a + b, 0);
    if (s === sum) result.push(temp);
    if (s >= sum || index >= array.length || temp.length >= limit) return;
    iter(index, temp.concat(array[index]));
    iter(index + 1, temp);
  }

  var result = [];
  iter(0, []);
  return result;
}
console.log(getSum([14, 6, 10], 40, 5));
TL;DR: Skip to Part II for the real thing.
Part I
Nina Scholz's answer to this fundamental problem shows us a beautiful manifestation of an algorithm. Honestly, it confused me a lot, for two reasons:
When I try [14,6,10,7,3] with a target of 500, it makes 36,783,575 calls to the iter function without blowing the call stack, yet memory shows no significant usage at all.
My dynamic programming solution goes a little faster (or maybe not), but there is no way it can handle the above case without exhausting the 16GB of memory.
So I shelved my solution and instead started investigating her code a little further in the dev tools, and discovered both its beauty and a little bit of its shortcomings.
First, I believe this algorithmic approach, which includes a very clever use of recursion, might possibly deserve a name of its own. It's very memory efficient and only uses up memory for the constructed result set. The stack dynamically grows and shrinks continuously, never getting anywhere close to its limit.
The problem is, while being very efficient, it still makes huge numbers of redundant calls. Looking into that, with a slight modification the 36,783,575 calls to iter can be cut down to 20,254,744, around 45% fewer, which yields much faster code. The catch is that the input array must be sorted ascending.
So here comes a modified version of Nina's algorithm. (Be patient... it will take around 25 seconds to finish.)
function getSum(array, sum) {
  function iter(index, temp) {
    cnt++ // counting iter calls -- remove in production code
    var s = temp.reduce((a, b) => a + b, 0);
    sum - s >= array[index] && iter(index, temp.concat(array[index]));
    sum - s >= array[index + 1] && iter(index + 1, temp);
    s === sum && result.push(temp);
    return;
  }

  var result = [];
  array.sort((x, y) => x - y); // a very cheap operation, considering the input array should be small for reasonable output
  iter(0, []);
  return result;
}

var cnt = 0,
    arr = [14, 6, 10, 7, 3],
    tgt = 500,
    res;

console.time("combos");
res = getSum(arr, tgt);
console.timeEnd("combos");
console.log(`source numbers are ${arr}
found ${res.length} unique ways to sum up to ${tgt}
iter function has been called ${cnt} times`);
Part II
Even though I was impressed with the performance, I wasn't comfortable with the above solution, for no solid reason that I can name. The way it works on side effects, and its very hard to understand double recursion, disturbed me.
So here comes my approach to this question. It is many times more efficient than the accepted solution, despite going functional in JS. We still have room to make it a little faster with ugly imperative ways.
The difference:
Given numbers: [14,6,10,7,3]
Target sum: 500
Accepted answer:
Number of possible answers: 172686
Resolves in: 26357ms
Recursive call count: 36783575
Answer below:
Number of possible answers: 172686
Resolves in: 1000ms
Recursive call count: 542657
function items2T([n, ...ns], t) {
  cnt++ // remove cnt in production code
  var c = ~~(t / n); // the max count of n that fits into t
  return ns.length ? Array(c + 1).fill()
                                 .reduce((r, _, i) => r.concat(items2T(ns, t - n * i).map(s => Array(i).fill(n).concat(s))), [])
                   : t % n ? []
                           : [Array(c).fill(n)];
};

var cnt = 0, result;

console.time("combos");
result = items2T([14, 6, 10, 7, 3], 500)
console.timeEnd("combos");
console.log(`${result.length} many unique ways to sum up to 500
and ${cnt} recursive calls are performed`);
Another important point: if the given array is sorted descending, the number of recursive iterations is reduced (sometimes greatly), allowing us to squeeze more juice out of this lemon. Compare the above with the version below, where the input array is sorted descending.
function items2T([n, ...ns], t) {
  cnt++ // remove cnt in production code
  var c = ~~(t / n); // the max count of n that fits into t
  return ns.length ? Array(c + 1).fill()
                                 .reduce((r, _, i) => r.concat(items2T(ns, t - n * i).map(s => Array(i).fill(n).concat(s))), [])
                   : t % n ? []
                           : [Array(c).fill(n)];
};

var cnt = 0, result;

console.time("combos");
result = items2T([14, 10, 7, 6, 3], 500)
console.timeEnd("combos");
console.log(`${result.length} many unique ways to sum up to 500
and ${cnt} recursive calls are performed`);
I am trying to solve the Hackerrank problem Jesse and Cookies:
Jesse loves cookies and wants the sweetness of some cookies to be greater than value k. To do this, two cookies with the least sweetness are repeatedly mixed. This creates a special combined cookie with:
sweetness = (1 × least sweet cookie + 2 × 2nd least sweet cookie).
This occurs until all the cookies have a sweetness ≥ k.
Given the sweetness of a number of cookies, determine the minimum number of operations required. If it is not possible, return -1.
Example
k = 9
A = [2,7,3,6,4,6]
The smallest values are 2, 3.
Remove them, then return 2 + 2 × 3 = 8 to the array. Now A = [8,7,6,4,6].
Remove 4, 6 and return 4 + 2 × 6 = 16 to the array. Now A = [16,8,7,6].
Remove 6, 7, return 6 + 2 × 7 = 20 and A = [20,16,8,7].
Finally, remove 7, 8 and return 7 + 2 × 8 = 23 to A. Now A = [23,20,16].
All values are ≥ k = 9, so the process stops after 4 iterations. Return 4.
I couldn't find a JavaScript solution or a hint for this problem. My code seems to be working, except that it times out for a large array (input size > 1 million).
Is there a way to make my code more efficient? I think the time complexity is between linear and O(n log n).
My Code:
function cookies(k, A) {
  A.sort((a, b) => a - b)
  let ops = 0;
  while (A[0] < k && A.length > 1) {
    ops++;
    let calc = (A[0] * 1) + (A[1] * 2);
    A.splice(0, 2);
    let inserted = false
    if (A.length === 0) { // when the array is empty after splice
      A.push(calc);
    } else {
      for (var i = 0; i < A.length && !inserted; i++) {
        if (A[A.length - 1] < calc) {
          A.push(calc)
          inserted = true
        } else if (A[i] >= calc) {
          A.splice(i, 0, calc);
          inserted = true
        }
      }
    }
  }
  if (A[0] < k) {
    ops = -1;
  }
  return ops;
}
It is indeed a problem that can be solved efficiently with a heap. As JavaScript has no native heap, just implement your own.
You should also cope with inputs that are huge, but where most values are greater than k. Those should not have to be part of the heap -- it would just make heap operations unnecessarily slower. Also, when cookies are augmented, they only need to enter back into the heap when they are not yet good enough.
Special care needs to be taken when the heap ends up with just one value (less than k). In that case it needs to be checked whether any good cookies were created (and thus did not end up in the heap). If so, then with one more operation the solution has been found. But if not, it means there is no solution and -1 should be returned.
Here is an implementation in JavaScript:
/* MinHeap implementation without payload. */
const MinHeap = {
  /* siftDown:
   * The node at the given index of the given heap is sifted down in its subtree
   * until it does not have a child with a lesser value.
   */
  siftDown(arr, i = 0, value = arr[i]) {
    if (i >= arr.length) return;
    while (true) {
      // Choose the child with the least value
      let j = i * 2 + 1;
      if (j + 1 < arr.length && arr[j] > arr[j + 1]) j++;
      // If no child has a lesser value, then we've found the spot!
      if (j >= arr.length || value <= arr[j]) break;
      // Move the selected child value one level up...
      arr[i] = arr[j];
      // ...and consider the child slot for putting our sifted value
      i = j;
    }
    arr[i] = value; // Place the sifted value at the found spot
  },
  /* heapify:
   * The given array is reordered in-place so that it becomes a valid heap.
   * It also returns the heap.
   */
  heapify(arr) {
    // Establish the heap with an incremental, bottom-up process
    for (let i = arr.length >> 1; i--; ) this.siftDown(arr, i);
    return arr;
  },
  /* pop:
   * Extracts the root of the given heap, and returns it.
   * Returns undefined if the heap is empty.
   */
  pop(arr) {
    // Pop the last leaf from the given heap, and exchange it with its root
    return this.exchange(arr, arr.pop());
  },
  /* exchange:
   * Replaces the root node of the given heap with the given node, and returns the previous root.
   * Returns the given node if the heap is empty.
   * This is similar to a call of pop and push, but is more efficient.
   */
  exchange(arr, value) {
    if (!arr.length) return value;
    // Get the root node, so to return it later
    let oldValue = arr[0];
    // Inject the replacing node using the sift-down process
    this.siftDown(arr, 0, value);
    return oldValue;
  },
  /* push:
   * Inserts the given node into the given heap. It returns the heap.
   */
  push(arr, value) {
    // First assume the insertion spot is at the very end (as a leaf)
    let i = arr.length;
    let j;
    // Then follow the path to the root, moving values down for as long as they
    // are greater than the value to be inserted
    while ((j = (i - 1) >> 1) >= 0 && value < arr[j]) {
      arr[i] = arr[j];
      i = j;
    }
    // Found the insertion spot
    arr[i] = value;
    return arr;
  }
};
function cookies(k, arr) {
  // Remove values that are already OK, so as to keep the heap size minimal
  const heap = arr.filter(val => val < k);
  let greaterPresent = heap.length < arr.length; // Mark whether there is a good cookie
  MinHeap.heapify(heap);
  let result = 0;
  while (heap.length > 1) {
    const newValue = MinHeap.pop(heap) + MinHeap.pop(heap) * 2;
    // Only push the result back onto the heap if it still is not great enough
    if (newValue < k) MinHeap.push(heap, newValue);
    else greaterPresent = true; // Otherwise just mark that we have a good cookie
    result++;
  }
  // If no good cookies were created, return -1.
  // Otherwise, if there is still 1 element in the heap, add 1 more operation.
  return greaterPresent ? result + heap.length : -1;
}
// Example run
console.log(cookies(9, [2,7,3,6,4,6])); // 4
I solved it using Java; you may adapt it to JavaScript.
This code does not require a heap. It just works on the same array that was passed in. It passed all tests for me.
static int cookies(int k, int[] arr) {
    Arrays.sort(arr);
    int i = 0,          // read index into the original sorted values: arr[i..c)
        c = arr.length,
        i0 = 0,         // read index into the newly created values: arr[i0..c0)
        c0 = 0,         // write index for newly created values
        op = 0;
    while ((arr[i] < k || arr[i0] < k) && (c0 - i0 + c - i) > 1) {
        int s1 = i0 == c0 || arr[i] <= arr[i0] ? arr[i++] : arr[i0++],
            s2 = i0 == c0 || (i != c && arr[i] <= arr[i0]) ? arr[i++] : arr[i0++];
        arr[c0++] = s1 + 2 * s2;
        op++;
        if (i == c) { // original values exhausted: switch to the new region
            i = i0;
            c = c0;
            c0 = i0;
        }
    }
    return c - i > 1 || arr[i] >= k ? op : -1;
}
First of all, sort the array.
Newly calculated values are stored in the arr[i0..c0) range; this new range does not require sorting, because the values are produced in increasing order.
When arr[i..c) is exhausted (i == c), forget it and work on arr[i0..c0) instead, as shown in the port below.
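For completeness, here is a rough JavaScript port of the above (my translation, not part of the original answer; it follows the same two-region idea):
function cookies(k, arr) {
  arr.sort((a, b) => a - b);
  let i = 0,        // reads the original sorted values: arr[i..c)
      c = arr.length,
      i0 = 0,       // reads the newly created values: arr[i0..c0)
      c0 = 0,       // writes newly created values over already-consumed slots
      op = 0;
  while ((arr[i] < k || arr[i0] < k) && (c0 - i0 + c - i) > 1) {
    const s1 = i0 === c0 || arr[i] <= arr[i0] ? arr[i++] : arr[i0++];
    const s2 = i0 === c0 || (i !== c && arr[i] <= arr[i0]) ? arr[i++] : arr[i0++];
    arr[c0++] = s1 + 2 * s2;
    op++;
    if (i === c) { // original values exhausted: switch to the new region
      i = i0;
      c = c0;
      c0 = i0;
    }
  }
  return c - i > 1 || arr[i] >= k ? op : -1;
}

console.log(cookies(9, [2, 7, 3, 6, 4, 6])); // 4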