I was working on a dynamic programming problem and was able to code up a JavaScript solution:
function howSum(targetSum,numbers,memo = {}){
// if the targetSum key is already in the hashmap, return its value
if(targetSum in memo) return memo[targetSum];
if(targetSum == 0) return [];
if(targetSum < 0) return null;
for(let num of numbers){
let aWay = howSum(targetSum-num,numbers,memo);
if(aWay !== null){
memo[targetSum] = [...aWay,num];
return memo[targetSum];
}
}
//no way to generate the targetSum using any elements of input array
memo[targetSum] = null;
return null;
}
Now I am thinking about how I could translate this into C++ code.
I would have to use a reference to an unordered_map for the memo object.
But how should I go about returning the empty array and null values as in the base conditions? Should I return an array pointer and realloc it when inserting an element? Wouldn't that be a C way of programming it?
Also, how should I go about passing the default parameter for the memo unordered_map in C++? Currently I have overloaded the function, which creates the memo unordered_map and passes its reference.
Any guidance will be appreciated, so that I can solve future questions like this on my own.
I was stuck on this problem too. This is how I made it work.
#include <algorithm>
#include <iostream>
#include <unordered_map>
#include <vector>
using namespace std;

// howSum function
vector<int> howSum(int target, const vector<int> &numbers, unordered_map<int, vector<int>> &dp) {
    // base case 1 - answer already memoized in dp
    if (dp.find(target) != dp.end()) return dp[target];
    // making a vector to return in the following base cases
    vector<int> res;
    // base case 2 - the target was reached exactly
    if (target == 0) {
        return res;
    }
    // base case 3 - overshot the target
    if (target < 0) {
        res.push_back(-1); // using -1 as a sentinel instead of NULL
        return res;
    }
    // the actual logic for the question
    for (int i = 0; i < numbers.size(); i++) {
        int remainder = target - numbers[i];
        vector<int> result = howSum(remainder, numbers, dp); // recursion
        // if the result vector doesn't contain the -1 sentinel, push the current number onto it
        if (find(result.begin(), result.end(), -1) == result.end()) {
            result.push_back(numbers[i]);
            dp.emplace(target, result);
            return result;
        }
    }
    // no way to generate the target using any elements of the input array
    res.push_back(-1);
    dp.emplace(target, res);
    return res;
}

// main function
int main() {
    vector<int> numbers = {20, 50};
    unordered_map<int, vector<int>> dp;
    vector<int> res = howSum(300, numbers, dp);
    for (int i = 0; i < res.size(); i++) {
        cout << res[i] << " ";
    }
    cout << endl;
}
Here is my take on it:
#include <optional>
#include <vector>
#include <unordered_map>
using Nums = std::vector<int>;
using OptNums = std::optional<Nums>;
namespace detail {
using Memo = std::unordered_map<int, OptNums>;

OptNums const & howSum(int targetSum, Nums const & numbers, Memo & memo) {
    if (auto iter = memo.find(targetSum); iter != memo.end()) {
        return iter->second; // elements are std::pair<int, OptNums>
    }
    auto & cached = memo[targetSum]; // create an empty optional in the map
    if (targetSum == 0) {
        cached.emplace(); // create an empty Nums in the optional
    }
    else if (targetSum > 0) {
        for (int num : numbers) {
            if (auto const & aWay = howSum(targetSum - num, numbers, memo)) {
                cached = aWay; // copy vector into optional
                cached->push_back(num);
                break; // stop at the first way found, like the early return in the JS version
            }
        }
    }
    return cached;
}
} // detail
std::optional<Nums> howSum(int targetSum, Nums const & numbers) {
detail::Memo memo;
return detail::howSum(targetSum, numbers, memo);
}
Some comments:
Using two functions, where one creates the memo and passes it into the real implementation function, is a good pattern. It keeps the user-facing interface clean.
The "detail" namespace is just a name with no magic meaning, but it is often used to indicate implementation details.
In the implementation, I return references to an optional. This is an optimization to avoid copying the return vectors in every call where the algorithm unwinds from the recursion. This does require some care, however, because you must be careful to return references to objects that will outlive the local scope (so no returning std::nullopt, or the reference binds to a temporary optional, for example.) That is also why I always create the element in the memo object--even in the negative case--so I can return a reference to it safely. Note, operator[] applied to an unordered_map will create the element if it does not exist, while find will not.
Since the reference returned by the detail function has a lifetime only as long as the memo declared in the caller, the caller itself must return a copy of the optional it gets back, to ensure that the data is not destroyed during the cleanup of the function call. Note, it does not return a reference.
Also, the "if" inside the for loop has a little bit going on. It declares a local reference, initializes it to the result of the recursive call. That whole expression is a reference to optional, which has an implicit conversion to bool that is true if the optional holds a value. This is a useful idiom worth pointing out, though to be more explicit this is equivalent:
if (auto const & aWay = howSum(targetSum-num, numbers, memo); aWay.has_value())
Here's a fleshed-out example, with a few test cases to show it working.
https://godbolt.org/z/cWrdhvM1n
I am a newbie to JavaScript algorithms and cannot understand this optimal solution to the 2-sum problem:
function twoNumberSum(array, target) {
const nums = {};
for (const num of array) {
const potentialMatch = target - num;
console.log('potential', potentialMatch);
if (potentialMatch in nums) {
return [potentialMatch, num]
} else {
nums[num] = true;
}
}
}
So the 2-sum problem basically says "find two numbers in an array that sum to the given target, and return their index". Let's walk through this code and talk about what's happening.
First, we start the function; I'm going to assume this makes sense (a function that's called twoNumberSum that takes in two arguments; namely, array and target) - note that in JS, we don't annotate types, so there is no return type
Now, first thing we do is create a new object called nums. In JS, objects are effectively hash maps (with some very important differences - see my note below); they store a key and a corresponding value. In JS, a key can be any string or number
Next, we start our iteration. If we do for (const a of b), and b is an array, this iterates over all the values of the array, with each iteration having that value stored in a.
Next, we subtract our current value from the target. Then comes the key line: if (potentialMatch in nums). The in keyword checks for the existence of a key: 'hello' in obj returns true if obj has the key 'hello'.
In this case, if we find this potential match, then that means we have found another number that is equal to target - num, which of course means we've found the other partner for our sum! So in this case, we simply return the two numbers. If, on the other hand, we do not find this potentialMatch, that means we need to keep looking. But we do want to remember we've seen this number - thus, we add it as a key by doing nums[num] = true (this creates a new key-value pair; namely the key is num and the value is true).
As one of the comments explained, this is just trying to keep track of a list of numbers; however, the author is trying to be clever by using a Hash Table instead of a normal array. This way, lookups are O(1) instead of O(n). For eyes not used to JS semantics, another way of explaining this code is that we build up a Map of the numbers, and then we check that map for our target value.
I mentioned earlier that using objects as hash tables isn't the best idea; this is because if you aren't careful, if you use user-provided keys, you can accidentally mess with what's called the Prototype Chain. This is beyond this discussion, but a better way forward would be to use a Set:
function twoNumberSum(array, target) {
// Create a new Hash Set. Sets take in an iterable, so we could
// Do it this way. But to remain as close to your original solution
// as possible, we won't for now, and instead populate it as we go
// const nums = new Set(array);
const nums = new Set();
for (const num of array) {
const potentialMatch = target - num;
if (nums.has(potentialMatch)) {
return [potentialMatch, num];
} else {
nums.add(num);
}
  }
}
Sometimes, the problem instead asks for you to return the indices; using a Map instead makes this relatively trivial. Just store the index as the value and you're good to go!
function twoNumberSum(array, target) {
// Create the new map instead
const nums = new Map();
for (let n = 0; n < array.length; ++n) {
const potentialMatch = target - array[n];
if (nums.has(potentialMatch)) {
return [nums.get(potentialMatch), n];
} else {
nums.set(array[n], n);
}
  }
}
Let me explain how it all works.
function twoNumberSum(array, target) {
// This is an object in JavaScript
const nums = {};
for (const num of array) { // This is a for...of loop, which iterates over the array.
//For of Doc - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...of
// Here it calculates the potential match.
const potentialMatch = target - num;
console.log('potential - ' + potentialMatch);
/**
* Here the `in` operator is used, which checks whether a property exists in an object.
* in usage - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/in
*
* It checks whether the potential match exists in the `nums` object. If it exists, the function
* returns the array with potentialMatch and the num it was matched to.
*
* If the number is not there in the nums object, it is set there in the else block
* to be matched in a later iteration.
*/
if (potentialMatch in nums) {
return [potentialMatch, num]
} else {
nums[num] = true;
/**
* When the potential match doesn't exist in nums, the number is recorded here
* for checking in later iterations, building up an object like:
* {
*   1: true,
*   2: true
* }
*/
}
console.log(nums)
}
}
console.log(twoNumberSum([1, 2, 4, 5, 6, 7, 8], 3))
You can also run it on JSBin.
I have a general question about whether it is possible to make zero-allocation iterators in JavaScript. Note that by "iterator" I am not married to the current definition of an iterator in ECMAScript; I just mean a general pattern for iterating over user-defined ranges.
To make the problem concrete, say I have a list like [5, 5, 5, 2, 2, 1, 1, 1, 1] and I want to group adjacent repetitions together, and process it into a form which is more like [5, 3], [2, 2], [1, 4]. I then want to access each of these pairs inside a loop, something like "for each pair in grouped(array), do something with pair". Furthermore, I want to reuse this grouping algorithm in many places, and crucially, in some really hot inner loops (think millions of loops per second).
Question: Is there an iteration pattern to accomplish this which has zero overhead, as if I hand-wrote the loop myself?
Here are the things I've tried so far. Let's suppose for concreteness that I am trying to compute the sum of all pairs. (To be clear I am not looking for alternative ways of writing this code, I am looking for an abstraction pattern: the code is just here to provide a concrete example.)
Inlining the grouping code by hand. This method performs the best, but obscures the intent of the computation. Furthermore, inlining by hand is error-prone and annoying.
function sumPairs(array) {
let sum = 0
for (let i = 0; i != array.length; ) {
let elem = array[i++], count = 1
while (i < array.length && array[i] == elem) { i++; count++; }
// Here we can actually use the pair (elem, count)
sum += elem + count
}
return sum
}
Using a visitor pattern. We can write a reduceGroups function which will call a given visitor(acc, elem, count) for each pair (elem, count), similar to the usual Array.reduce method. With that our computation becomes somewhat clearer to read.
function sumPairsVisitor(array) {
return reduceGroups(array, (sofar, elem, count) => sofar + elem + count, 0)
}
Unfortunately, Firefox in particular still allocates when running this function, unless the closure definition is manually moved outside the function. Furthermore, we lose the ability to use control structures like break unless we complicate the interface a lot.
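For concreteness, a minimal reduceGroups along those lines would be something like this (a sketch, not necessarily the exact helper being benchmarked):
function reduceGroups(array, visitor, initial) {
  let acc = initial
  for (let i = 0; i != array.length; ) {
    const elem = array[i++]
    let count = 1
    while (i < array.length && array[i] == elem) { i++; count++ }
    acc = visitor(acc, elem, count)
  }
  return acc
}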
Writing a custom iterator. We can make a custom "iterator" (not an ES6 iterator) which exposes elem and count properties, an empty property indicating that there are no more pairs remaining, and a next() method which updates elem and count to the next pair. The consuming code looks like this:
function sumPairsIterator(array) {
let sum = 0
for (let iter = new GroupIter(array); !iter.empty; iter.next())
sum += iter.elem + iter.count
return sum
}
I find this code the easiest to read, and it seems to me that it should be the fastest method of abstraction. (In the best possible case, scalar replacement could completely collapse the iterator definition into the function. In the second best case, it should be clear that the iterator does not escape the for loop, so it can be stack-allocated). Unfortunately, both Chrome and Firefox seem to allocate here.
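For concreteness, a GroupIter exposing that interface might look something like this (a sketch of the shape, not necessarily the exact class being measured):
class GroupIter {
  constructor(array) {
    this.array = array
    this.index = 0
    this.next() // load the first pair (or mark the iterator empty)
  }
  next() {
    this.empty = this.index >= this.array.length
    if (this.empty) return
    this.elem = this.array[this.index]
    this.count = 0
    while (this.index < this.array.length && this.array[this.index] == this.elem) {
      this.index++
      this.count++
    }
  }
}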
Of the approaches above, the custom-defined iterator performs quite well in most cases, except when you really need to put the pedal to the metal in a hot inner loop, at which point the GC pressure becomes apparent.
I would also be ok with a Javascript post-processor (the Google Closure Compiler perhaps?) which is able to accomplish this.
Check this out. I've not tested its performance but it should be good.
(+) (mostly) compatible with ES6 iterators.
(-) sacrificed ...GroupingIterator.from(arr) (spreading into an array) in order not to create a (imo garbage) value object per item. That's the "mostly" in the point above.
afaik, the primary use case for this is a for..of loop anyway.
(+) no objects created (GC)
(+) object pooling for the iterators (again GC)
(+) compatible with control structures like break
class GroupingIterator {
/* object pooling */
static from(array) {
const instance = GroupingIterator._pool || new GroupingIterator();
GroupingIterator._pool = instance._pool;
instance._pool = null;
instance.array = array;
instance.done = false;
return instance;
}
static _pool = null;
_pool = null;
/* state and value / payload */
array = null;
element = null;
index = 0;
count = 0;
/* IteratorResult interface */
value = this;
done = true;
/* Iterator interface */
next() {
const array = this.array;
let index = this.index += this.count;
if (!array || index >= array.length) {
return this.return();
}
const element = this.element = array[index];
while (++index < array.length) {
if (array[index] !== element) break;
}
this.count = index - this.index;
return this;
}
return() {
this.done = true;
// cleanup
this.element = this.array = null;
this.count = this.index = 0;
// return iterator to pool
this._pool = GroupingIterator._pool;
return GroupingIterator._pool = this;
}
/* Iterable interface */
[Symbol.iterator]() {
return this;
}
}
var arr = [5, 5, 5, 2, 2, 1, 1, 1, 1];
for (const item of GroupingIterator.from(arr)) {
console.log("element", item.element, "index", item.index, "count", item.count);
}
As part of the precourse for a coding bootcamp, we have to create a simpler version of the underscore JS library. I am struggling with creating the _.first function, which:
Returns an array with the first n elements of an array.
If n is not provided it returns an array with just the first element.
This is what I've got so far:
_.first = function(array, n) {
if (!Array.isArray(array)) return [];
if (typeof n != "number" || n <= 0) return [].slice.call(array, 0, 1);
return n >= array.length ? array : [].slice.call(array, 0, n);
};
It passes all tests except one: "It must work on an arguments object".
I know the arguments object is an array-like object containing all the arguments passed to a function, and that it has a length property, but I'm struggling to work with it.
Any help would be much appreciated.
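For context, I presume the failing test does something along these lines (my guess at what it checks, not its actual code):
function makeArgs() { return arguments; }
var args = makeArgs(10, 20, 30);
_.first(args, 2); // should give [10, 20], but Array.isArray(args) is false, so my code returns []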
The arguments object is just that, a variable defined implicitly in each function scope that acts like an array. It has a length property, and you can access the elements using numeric indices like a normal array:
var _ = {};
_.first = function() {
if (arguments.length == 0) { // If there's no arguments
return [];
} else { // When there's 1 or more arguments
var array = arguments[0];
var n = arguments.length > 1 ? arguments[1] : 1; // If there's only the "array" argument ("n" is not provided), set "n" to 1
// And now your code, which has nice checks just in case the values are invalid
if (!Array.isArray(array)) {
return [];
}
if (typeof n != "number" || n <= 0) {
n = 1;
}
return [].slice.call(array, 0, n); // Don't worry if n is bigger than the array length: slice just stops at the end, and it always returns a copy of the array instead of the array itself.
}
};
console.log( _.first() );
console.log( _.first([0,1,2]) );
console.log( _.first([0,1,2], 2) );
console.log( _.first([0,1,2], 10) );
I would add something to the first answer. The arguments object is something that is normally created implicitly by JavaScript and made available inside the function body. In order to write a unit test for "It must work on an arguments object", they must explicitly define an arguments object and pass it in. This is a bad unit test because it is testing the internal working of your function. You should be free to write the function any way you like, and a unit test should test the external behaviour of the function (return value and/or side effects, based on the arguments passed).
So imo your original solution is good and the test is designed to force you to use a certain syntax for the sake of learning, but this is misleading.
Indexing (maintaining indices) in an array makes Array.prototype.shift and Array.prototype.unshift O(N) instead of O(1).
However, if we just want to use pop() / push() / shift() and unshift() and never use indices for lookup, is there a way to implement a JavaScript array that omits indexing?
I can't think of a way to do it.
The only way I can think of doing it would be with arrays, and only using pop() / push() (since those are O(1)) ... but even with multiple arrays, not sure if it's possible.
Looking to do this w/o a linked-list if possible. I implemented a solution to this with a doubly linked list, but wondering if it's possible to do this w/o a linked-list.
End goal: trying to create a FIFO queue where all operations are in constant time, without using a linked-list.
How about an ES2015 Map that you index with integers?
Let's call the map myFIFOMap.
You keep a first and last integer member as part of your FIFO class. Start them both at zero.
Every time you want to push() into your FIFO queue, you call myFIFOMap.set(last++, item) (post-increment, so the first item lands on the key that first starts at). And pop() looks something like:
const item = myFIFOMap.get(first);
myFIFOMap.delete(first++);
return item;
Should be O(1) to push or pop.
Don't forget to check for boundary conditions (e.g., don't let them pop() when first===last).
Given that JavaScript actually uses double precision floating point, you should be able to run ~2^53 objects through your FIFO before you have problems with the integer precision. So if you run 10,000 items through your FIFO per second, that should be good for around 28,000 years of run time.
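Putting that together, a minimal sketch of the idea could look like this (names are just illustrative):
class FIFOMap {
  constructor() {
    this.map = new Map();
    this.first = 0; // key of the oldest item
    this.last = 0;  // key where the next pushed item will go
  }
  push(item) {
    this.map.set(this.last++, item);
  }
  pop() { // FIFO pop: removes the oldest item
    if (this.first === this.last) return undefined; // boundary check
    const item = this.map.get(this.first);
    this.map.delete(this.first++);
    return item;
  }
}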
If the data you are storing is primitive (string, integers, floats, or combinations of primitives), you can use a JavaScript TypedArray, cast it into an appropriate typed array view, load it with data, and then keep track of the offset(s) yourself.
In your example, pop, shift, and unshift can all be implemented by incrementing/decrementing an integer index. push is more difficult, because a TypedArray is a fixed size: if the ArrayBuffer is full, the only two options are to truncate the data or allocate a new typed array, since JS cannot store pointers.
If you are storing homogeneous objects (they have the same properties), you can save each value into a TypedArray using different views and offsets to mimic a C struct (see the MDN example), and then use a JS function to serialize/unserialize them from the TypedArray, basically converting the data from a binary representation, into a full-fledged JS object.
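As a rough illustration of the first idea (a sketch assuming the queue only holds numbers; the names are illustrative):
class Float64Queue {
  constructor(capacity) {
    this.buf = new Float64Array(capacity);
    this.head = 0; // index of the next element to shift()
    this.tail = 0; // index one past the last pushed element
  }
  push(x) {
    if (this.tail === this.buf.length) {
      // buffer full: allocate a larger typed array and copy (the one case that allocates)
      const bigger = new Float64Array(Math.max(1, this.buf.length * 2));
      bigger.set(this.buf.subarray(this.head, this.tail));
      this.tail -= this.head;
      this.head = 0;
      this.buf = bigger;
    }
    this.buf[this.tail++] = x;
  }
  shift() {
    return this.head < this.tail ? this.buf[this.head++] : undefined;
  }
}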
Going with @SomeCallMeTim's answer, which I think is on the right track, I have this:
export class Queue {
lookup = new Map<number, any>();
first = 0;
last = 0;
length = 0;
elementExists = false; // when first === last, and item exists there
peek() {
return this.lookup.get(this.first);
}
getByIndex(v: number) {
return this.lookup.get(v);
}
getLength() {
return this.length;
}
pop() {
const last = this.last;
if (this.elementExists && this.first === this.last) {
this.length--;
this.elementExists = false;
}
else if (this.last > this.first) {
this.length--;
this.last--;
}
const v = this.lookup.get(last);
this.lookup.delete(last);
return v;
}
shift() {
const first = this.first;
if (this.elementExists && this.first === this.last) {
this.length--;
this.elementExists = false;
}
else if (this.first < this.last) {
this.length--;
this.first++;
}
const v = this.lookup.get(first);
this.lookup.delete(first);
return v;
}
push(v: any) {
this.length++;
if (this.elementExists && this.first === this.last) {
this.last++;
}
else if (this.first === this.last) {
this.elementExists = true;
}
else {
this.last++;
}
return this.lookup.set(this.last, v);
}
enq(v: any) {
return this.push.apply(this, arguments);
}
enqueue(v: any) {
return this.push.apply(this, arguments);
}
deq() {
return this.shift.apply(this, arguments);
}
dequeue() {
return this.shift.apply(this, arguments);
}
unshift(v: any) {
this.length++;
if (this.elementExists && this.first === this.last) {
this.first--;
}
else if (this.first === this.last) {
this.elementExists = true;
}
else {
this.first--;
}
return this.lookup.set(this.first, v);
}
addToFront(v: any){
return this.unshift.apply(this,arguments);
}
removeAll() {
return this.clear.apply(this, arguments);
}
clear(): void {
this.length = 0;
this.elementExists = false;
this.first = 0;
this.last = 0;
this.lookup.clear();
}
}
takeaways:
it turns out you can still look items up by index, via getByIndex(), as Tim's suggestion points out.
Using a Map is surprisingly ~10% faster than a plain object (POJSO), possibly only because with a POJSO the integer keys need to get converted to strings for lookup.
The Map implementation is about 20% faster than a doubly-linked list, so a doubly-linked list is not that much slower. It's probably slower mostly because we must create a container object with next/prev pointers for each item in the queue, whereas with the non-linked-list implementation we can insert primitives directly into the queue.
The doubly-linked list allows us to remove/insert items from the middle of the queue in constant time; we cannot do the same with the non-linked-list implementation as is.
All of the above are orders of magnitude more performant than a plain array used as a queue once it holds more than 10,000 elements or so.
I have some constant time queue implementations here:
https://github.com/ORESoftware/linked-queue
Tim had a good suggestion, to make getByIndex() easier to use - we can do this:
getByIndex(v: number) {
if(!Number.isInteger(v)){
throw new Error('Argument must be an integer.');
}
return this.lookup.get(v + this.first);
}
that way to get the 5th element in the queue, all we need to do is:
getByIndex(4);
I have a function that computes the product of numbers in an array. The function should work like this:
function prod (array){
//compute and return product
}
var arr = [1,2,3,0,4,5,0,6,7,8,0,9];
the function call:
prod(arr); //should return 6
prod(arr); //should return 20
prod(arr); //should return 336 (6*7*8)
prod(arr); //should return 9
prod(arr); //should return 0
prod(arr); //should return 0
prod(arr); //should return 0
In Scheme, this is done with continuations, by storing the previous state of the function (the state is captured just before its exit point); see this.
So, in short, I want the JavaScript function to return different values at different times with the same parameter passed every time.
JavaScript is a well-designed language, so I hope there is something that can emulate this. If there happens to be nothing in JS to do it, I don't mind concluding with failure and moving on. So, feel free to say it's impossible.
Thanks.
JavaScript is not capable of supporting continuations: it lacks tail-calls.
Generally I would write this to use a "queue" of sorts, although CPS is also do-able (just have a finite stack :-) Note that other state can also be captured in the closure, making it an "explicit continuation" of sorts ... in a very gross sense.
Example using a closure and a queue:
function prodFactory (array){
// dupe array first if needed, is mutated below.
// function parameters are always locally scoped.
array.unshift(undefined) // so array.shift can be at start
// also, perhaps more closured state
var otherState
// just return the real function, yippee!
return function prod () {
array.shift()
// do stuff ... e.g. loop array.shift() and multiply
// set otherState ... eat an apple or a cookie
return stuff
}
}
var prod = prodFactory([1,2,3,0,4,5,0,6,7,8,0,9])
// array at "do stuff", at least until "do stuff" does more stuff
prod() // [1,2,3,0,4,5,0,6,7,8,0,9]
prod() // [2,3,0,4,5,0,6,7,8,0,9]
prod() // [3,0,4,5,0,6,7,8,0,9]
Happy coding.
"Finished implementation". Although this particular problem can avoid array mutation and just use an index: the same concepts apply. (Well, slightly different. With just an index the closed over variable would be altered, whereas with this approach an object is mutated.)
function prodFactory (array) {
array = array.slice(0)
return function prod () {
var p = 1
for (var n = array.shift(); n; n = array.shift()) {
p *= n
}
return p
}
}
var prod = prodFactory([1,2,3,0,4,5,0,6,7,8,0,9])
prod() // 6
prod() // 20
prod() // 336
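For comparison, the index-based variant mentioned above might look like this (a sketch; it behaves the same for the example input):
function prodFactory (array) {
  var i = 0 // closed-over index instead of mutating a copy of the array
  return function prod () {
    var p = 1
    for (; i < array.length && array[i] !== 0; i++) {
      p *= array[i]
    }
    i++ // step past the zero (or past the end)
    return p
  }
}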
You can give the function a property that will be remembered between calls:
function prod (array){
if (typeof prod.index === "undefined" || prod.currentArray != array) {
prod.currentArray = array;
prod.index = 0;
}
if (prod.index >= array.length)
return 0;
//compute and return product
var p = 1,
c;
while (prod.index < array.length) {
c = array[prod.index++];
if (c === 0)
return p;
p *= c;
}
return p;
}
I'm just guessing from your description of what should be returned that an individual call to the function should take the product of all of the numbers up to, but not including, the next zero or the end of the array, and that calls after the end of the array should return 0. I may have the algorithm wrong, but you get the idea of what I'm suggesting for remembering the function state between calls.
I've added a property to remember the current array being processed. As long as you keep passing the same array into the function it will continue with the next elements, but if you pass a different array it will reset...
you can try something like
var index = 0;
function prod(array) {
    if (index < array.length) {
        var p = 1;
        for (var i = index; i < array.length; i++) {
            if (array[i] != 0) {
                p = p * array[i];
            } else {
                index = i + 1;
                return p;
            }
        }
        // reached the end without hitting a zero: return the last segment's product
        index = array.length;
        return p;
    }
    return 0;
}
This will update the global variable index every time the function is called.
What you're looking for here are generators. JavaScript has supported them since Mozilla's JavaScript 1.7, and they are now standard as of ES6 (ES2015).
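For example, a generator-based prod could look roughly like this (ES6 function* syntax rather than the old 1.7 style; a sketch of the idea):
function* prodGen(array) {
  let p = 1
  for (const n of array) {
    if (n === 0) {
      yield p // hand back the product of the current segment and pause here
      p = 1
    } else {
      p *= n
    }
  }
  yield p // product of the trailing segment, if any
}

const it = prodGen([1, 2, 3, 0, 4, 5, 0, 6, 7, 8, 0, 9])
it.next().value // 6
it.next().value // 20
it.next().value // 336
it.next().value // 9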