JavaScript Map with composite keys

In JavaScript I want to store values under compound keys, similar to a C# dictionary with a tuple as its key. This is where I came across the Map class. However, it does not seem to work quite the way I would like. Here's my current approach:
var test = new Map();
test.set({a: 1, b: 1}, 'Bla');
test.set({a: 5, b: 7}, 'Blub');
test.get({a: 1, b: 1}); // ==> Returns undefined; would expect 'Bla'
I guess that this has something to do with the fact that the two {a: 1, b: 1} objects have different memory addresses and are therefore equal in content but not identical. The Dictionary class in C# uses a hash function in the background. Is there something similar in JS? Or a much easier approach?
My real key object consists of three strings.

Your analysis is correct. It works like this because in JavaScript you usually operate on plain objects, which don't have any of this hashing behavior attached to them out of the box. Nothing stops you from implementing your own Dictionary with a hash function in the background, though:
class Dictionary {
  map = {}

  constructor(hashFunction) {
    this.hashFunction = hashFunction
  }

  set(key, item) {
    this.map[this.hashFunction(key)] = item
  }

  get(key) {
    return this.map[this.hashFunction(key)]
  }

  delete(key) {
    delete this.map[this.hashFunction(key)]
  }
}
const dict = new Dictionary((keyObject) => JSON.stringify(keyObject))
dict.set({ a: 1, b: 2 }, 'hello')
console.log(dict.get({ a: 1, b: 2 })) // hello
As to whether to use a Map or a plain object: the difference is that an object only supports string keys (also Symbols, but that's irrelevant right now), while a Map supports keys of any type. This comes at the cost of using more resources, less compatibility with old browsers, and being generally less handy to use than an object (it also stops the GC from collecting the objects you use as keys). That said, a plain object is your choice here.
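Since the real key here is three strings, a simpler sketch is to join them into a single string key and use a regular Map. The makeKey helper is hypothetical, and the '\u0000' separator is an assumption; pick any separator that cannot occur inside the key strings.

```javascript
// Sketch: build a composite string key from the three strings.
// makeKey is a hypothetical helper; '\u0000' is an assumed separator
// that must not appear inside any of the key strings.
const makeKey = (x, y, z) => [x, y, z].join('\u0000');

const test = new Map();
test.set(makeKey('foo', 'bar', 'baz'), 'Bla');
console.log(test.get(makeKey('foo', 'bar', 'baz'))); // 'Bla'
```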

The {} literal creates a new object every time, and each new object has a different reference. If you save the object reference and use it for multiple operations, that's fine; but since you are creating a new object reference each time, it won't work. You can either use a primitive value as the key, or reuse the same object reference, as in the snippets below:
// approach 1: using the same object reference
var test = new Map();
var obj = {a: 1, b: 1};
test.set(obj, 'Bla');
test.set({a: 5, b: 7}, 'Blub');
let result = test.get(obj);
console.log(result);
// approach 2: using JSON.stringify
test = new Map();
test.set(JSON.stringify({a: 1, b: 1}), 'Bla');
test.set({a: 5, b: 7}, 'Blub');
result = test.get(JSON.stringify({a: 1, b: 1}));
console.log(result)
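One caveat with the JSON.stringify approach: property order affects the generated string, so two objects with the same contents can produce different keys. A key function that sorts the keys first avoids this; stableKey below is a hypothetical helper, not part of any library.

```javascript
// Property order matters for JSON.stringify:
console.log(JSON.stringify({a: 1, b: 1}) === JSON.stringify({b: 1, a: 1})); // false

// Sketch of a stable key function that sorts the keys first:
const stableKey = (obj) =>
  JSON.stringify(Object.keys(obj).sort().map(k => [k, obj[k]]));

console.log(stableKey({a: 1, b: 1}) === stableKey({b: 1, a: 1})); // true
```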

Related

Difference between fill and fill + map

I came across some code which was filling an array of objects like so:
const getObj = () => {
  return {a: 1, b: 2, c: 3};
}
const arr = Array(3).fill(null).map(getObj);
console.log(arr);
However, I'm wondering what the main purpose of fill(null).map(getObj) is? It seems redundant as I can simply write the following and get the same resulting array:
const getObj = () => {
  return {a: 1, b: 2, c: 3};
}
const arr = Array(3).fill(getObj());
console.log(arr);
So, I'm wondering if these two lines of code do exactly the same thing or if there is something I'm missing?
The resulting arrays (top: the fill + map method; bottom: fill only):
Array(3).fill(getObj()) will fill your array with references to the same object, while Array(3).fill(null).map(getObj) will create a new object per element. See the example below:
const getObj = () => {
  return {a: 1, b: 2, c: 3};
}
const arr = Array(3).fill(null).map(getObj);
arr[0].b = 4;
console.log(JSON.stringify(arr));

const arr1 = Array(3).fill(getObj());
arr1[0].b = 4;
console.log(JSON.stringify(arr1))
When it comes to Array.fill it is stated in the documentation that:
When fill gets passed an object, it will copy the reference and fill
the array with references to that object.
So using Array.fill with objects has somewhat limited application, unless you really want multiple elements pointing to the same reference. In more than a few use cases, that leads to bugs if not understood.
For the second case, Array(3).fill(null).map(getObj) is one of the ways to create a new array of a given arbitrary size and, at the same time, fill it with new objects.
The real need for fill(null) comes from the fact that calling Array(3) does only one thing: it creates a new array and sets its length property to 3. That is it!
let arr = Array(3) // returns new array with its "length" property set to 3
console.log(arr) // [empty × 3] <-- browser console
So that array now has only a length and a bunch of empty slots. You can't do much with it until it actually has values; hence the need for fill, so that you give it some value and can then map through it to set the values you actually want. Using Array.map and calling your function on each iteration guarantees you never get the same reference twice. You could also have skipped the fill step and done something like this:
const getObj = () => ({a: 1, b: 2, c: 3})
// using the spread operator
let arr = [...Array(3)].map(getObj)
arr[0].a = 3
console.log(arr)
// using Array.from
let arr2 = Array.from(Array(3)).map(getObj)
arr2[0].a = 3
console.log(arr2)
These are somewhat shorter and get you exactly the same result: an array of the specified length filled with distinct objects, not references to the same object.
The trick here is that both "fill" the array with undefined values after it is created, after which map fills it with the values we want.
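As a further shortcut, Array.from accepts a mapping function as its second argument, so the intermediate fill/map pass can be skipped entirely (this is standard Array.from behavior):

```javascript
const getObj = () => ({a: 1, b: 2, c: 3});

// Array.from calls getObj once per element, so each slot gets a fresh object.
const arr = Array.from({length: 3}, getObj);
arr[0].b = 4;
console.log(JSON.stringify(arr));
// [{"a":1,"b":4,"c":3},{"a":1,"b":2,"c":3},{"a":1,"b":2,"c":3}]
```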

Replace an element of an object by one of its own sub-elements

Let's say I have:
let list = [{a: {b: 'foo'}}, {a: {b: 'bar'}}]
I want to end up with:
list = [{a: 'foo'}, {a: 'bar'}]
This works:
list = list.map(d => {d.a = d.a.b; return d})
But I have a bad feeling that changing the value in place is a bad idea.
Is there a cleaner way? is my solution actually valid?
You could use Array#forEach and change the objects in situ, because you don't need to return a new array when you are already mutating the original objects in the array.
let list = [{ a: { b: 'foo' } }, { a: { b: 'bar' } }];
list.forEach(d => d.a = d.a.b);
console.log(list);
map itself is not changing the array in place.
The map method only creates a new array by applying the provided callback function to every item in the array.
The map() method creates a new array with the results of calling a
provided function on every element in the calling array.
For changing the values in place, you can use the forEach method.
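If mutation is the concern in the first place, a non-mutating sketch builds fresh objects with map instead of editing the existing ones:

```javascript
// Build new objects rather than mutating the ones in the original list.
const list = [{a: {b: 'foo'}}, {a: {b: 'bar'}}];
const flattened = list.map(d => ({a: d.a.b}));

console.log(flattened);   // [{a: 'foo'}, {a: 'bar'}]
console.log(list[0].a);   // still {b: 'foo'} - the original is untouched
```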

How does destructuring an array get the length property?

I came across this destructuring expression in an article.
const words = ['oops', 'gasp', 'shout', 'sun'];
let { length } = words;
console.log(length); // 4
How does length get the value 4? I know .length is a property of the array, but how does this syntax work? It seems to be doing let length = words.length;, and in fact Babel outputs it as such. But my question is: what is the logic behind it?
What is confusing me is the mix of an array of values and the use of {length}.
I have read MDN's description but can't see this example explained.
Intro
I had the same question, so I read the docs, and it finally clicked for me: the variable (length) is simply assigned the object's value at the key with the same name as the variable (words['length']).
That may not make sense, so I’m going to start by explaining this type of destructuring in 2 steps and then show how it applies in this situation.
I’ll then provide one last (cool) example which confused me initially and led me to research this topic. It’s also the exact problem described in a duplicate question.
Destructuring
This syntax is called Object Destructuring (MDN):
let a, b;
({a, b} = {a: 1, b: 2});
a; // 1
b; // 2
({b, a} = {c: 3, b: 2, d: 4, a: 1});
a; // 1
b; // 2
Same result – order doesn't matter!
The variables on the left (a & b) are assigned to the value of their corresponding key's value on the Object (right).
const obj = {a: 1, b: 2};
let {a, b} = obj;
a; // 1
b; // 2
We can store the object on the right into a variable (obj in this case) and then use the same syntax (without parens).
Applied to your Example (Array)
Finally, let's show the words array as an Object (arrays are just Objects under the hood).
Here's what you'll see if you type ['oops', 'gasp', 'shout', 'sun'] into Chrome's console:
const words = {0: 'oops', 1: 'gasp', 2: 'shout', 3: 'sun', length: 4};
let { length } = words;
console.log(length); // 4
Just like above, it's going to set the length variable (left) to the value of the corresponding key in the words object/array (right). words['length'] has a value of 4, so the length variable (left) now has a value of 4 as well.
Example Where Destructuring is Useful
From Wes Bos's Blog:
Given a person Object, how do you create global variables referring to its properties?
const person = {
  first: 'Wes',
  last: 'Bos',
  country: 'Canada',
  city: 'Hamilton',
  twitter: '#wesbos'
};
Old School:
const first = person.first;
const last = person.last;
The power of destructuring!
const { first, last } = person;
BONUS: Cool Usage w/ Arrow Functions (MDN)
Challenge: return new array with the lengths of the respective elements in the input array.
This example is shown as a way to use arrow functions. All three solutions solve the problem, they’re just showing the evolution to finally arrive at a simple one-liner.
var materials = [
  'Hydrogen',
  'Helium',
  'Lithium',
  'Beryllium'
];

materials.map(function(material) {
  return material.length;
}); // [8, 6, 7, 9]

materials.map((material) => {
  return material.length;
}); // [8, 6, 7, 9]

materials.map(({length}) => length); // [8, 6, 7, 9]
On each iteration of the input array passed to map, we are setting the {length} parameter to the current element of materials that is passed in as an argument:
{length} = 'Hydrogen';
This sets the length variable to the length property of the current string element (more on that below) and simply returns it to map, which eventually returns a new array containing the lengths of the original array's elements.
Supplement: String (primitive) vs. Array (Object)
"strings" are "primitives", not objects, so they don't have properties BUT when you try to call a property such as .length on a string, the primitive is coerced (changed) into a String Object.
If you inspect a String object in the Chrome console, you'll notice it's practically the same shape as the Array object. String (the function) is a constructor, so calling new creates a new object constructed from that function, with String.prototype as its prototype (which is what __proto__ refers to).
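The same coercion can be seen by destructuring a string directly:

```javascript
// Destructuring a string primitive: it is coerced to a String object,
// whose length property is then read into the variable.
const {length} = 'Hydrogen';
console.log(length); // 8
```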
Think of the code as being
const words = {0:'oops', 1:'gasp', 2:'shout', 3:'sun', length:4};
let { length } = words;
console.log(length);
Which it essentially is (never mind all the other stuff arrays come with).
Does it make sense now?
If you put a property inside the { and } that belongs to the array, its value is copied.
Below we check for the constructor property, which will log the constructor function to the console.
If you use a property that does not belong to the array, you get undefined.
Another Example
const words = ['oops', 'gasp', 'shout', 'sun'];
let { constructor } = words;
console.log(constructor);
Here we are testing for something, which will return undefined:
const words = ['oops', 'gasp', 'shout', 'sun'];
let { something } = words;
console.log(something);

Functional Programming: Sum of properties

I am trying to implement a function in JS using Ramda that takes a list of objects and returns the sum of specific properties. E.g.
var l = [
  {a: 1, b: 2, c: 0},
  {a: 1, b: 3, c: -1},
  {a: 1, b: 4, c: 0},
]
func(['a', 'b'], l)
-> {a: 3, b: 9}
In principle, I would need a function like this:
R.map(R.props($1, _), $2)
What is the most elegant way to implement something like this in functional programming? R.map(R.props) does not work, for obvious reasons. I tried some combinations with R.compose and R.pipe, but I had no luck.
I would break this into two parts:
const fnOverProp = R.curry((fn, prop, list) => fn(R.pluck(prop, list)));
const fnOverProps = R.curry((fn, props, list) =>
  R.fromPairs(R.zip(props, R.map(fnOverProp(fn, R.__, list), props))));
(I'm sorry, I've got a creative block on naming here. These names are pretty awful.)
You could use it like this:
fnOverProp(R.sum, 'b', list); //=> 9
fnOverProps(R.sum, ['a', 'b'], list); //=> {a: 3, b: 9}
const sumOverProps = fnOverProps(R.sum);
sumOverProps(['a', 'c'], list); //=> {a: 3, c: -1}
Note first that I generalize your idea to make sum a parameter. It just made sense to me that this was not the only thing one might want to do with such a function.
Then I break it into a function that operates on a single property name. This strikes me as quite useful on its own. You might not need to do this for a whole list of them, and this function is worth using on its own.
Then I wrap this in a function that maps the first function over a list of properties. Note that this is really a fairly simple function:
(fn, props, list) => R.map(fnOverProp(fn, R.__, list), props)
wrapped inside two wrappers that convert the flat list of results into the object output you're looking for: R.zip(props) turns the results [3, -1] into [['a', 3], ['c', -1]], and then R.fromPairs turns that into {a: 3, c: -1}.
This does not give you your single-line implementation you say you want. Obviously you could fold the definition of the first function into the second, but I don't think that gains you much. And even if it could be made points-free, I would expect that would simply reduce readability in this case.
You can see this in action in the Ramda REPL.
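That zip-then-objectify step has a direct plain-JS analogue using Object.fromEntries (a standard built-in); props and results here are just example values:

```javascript
const props = ['a', 'c'];
const results = [3, -1];

// Pair each property name with its result (like R.zip)...
const pairs = props.map((p, i) => [p, results[i]]);

// ...then build an object from the pairs (like R.fromPairs).
console.log(Object.fromEntries(pairs)); // {a: 3, c: -1}
```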
This can also be defined as a reducer over the list of objects. Given some initial state of the result, we want a function that can sum the results of the two object's properties, where props is the list of properties we are interested in:
reduce(useWith(mergeWith(add), [identity, pick(props)]))
You then have two options, depending on whether the list is potentially empty or guaranteed to contain at least one object. If the list is guaranteed to be non-empty, the initial value of the reducer can simply be the head of the list, leaving the tail as the list to iterate over.
const func = (props, objs) =>
  reduce(useWith(mergeWith(add), [identity, pick(props)]), pick(props, head(objs)), tail(objs))
If, however, the list could potentially be empty, the reducer must be initialised with the empty values (zero in this case for each property).
const func = (props, objs) =>
  reduce(useWith(mergeWith(add), [identity, pick(props)]), mergeAll(map(flip(objOf)(0), props)), objs)
You may try this:
function add(a, b){
  return Object.keys(a).reduce(function(p, c){
    p[c] += a[c];
    return p;
  }, b);
}
console.log(R.reduce(add, {a:0, b:0, c:0}, l))
This is another approach:
R.pipe(R.map(R.props(['a', 'b', 'c'])), R.transpose, R.map(R.sum))(l);
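For comparison, here is a plain-JavaScript sketch of the same operation without Ramda (sumProps is a hypothetical name, not a library function):

```javascript
// Sum the listed properties across a list of objects.
function sumProps(props, objs) {
  return objs.reduce((acc, obj) => {
    for (const p of props) acc[p] = (acc[p] || 0) + obj[p];
    return acc;
  }, {});
}

const l = [
  {a: 1, b: 2, c: 0},
  {a: 1, b: 3, c: -1},
  {a: 1, b: 4, c: 0},
];
console.log(sumProps(['a', 'b'], l)); // {a: 3, b: 9}
```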

Point-free where arguments are in the wrong order

I want to write a function using Ramda's standard function set that, given a key and a dictionary, increments the value for that key. Example:
fn('foo', {}) // => {foo: 1}
fn('foo', {foo: 1}) // => {foo: 2}
I've gotten pretty close but am missing a way to curry properly.
I have a method that takes a key and an object and returns the value plus one:
// count :: Any -> Number
var count = R.compose(R.inc, R.defaultTo(0))
// countProp :: String -> Object -> Number
var countProp = R.curry(R.compose(count, (R.prop(R.__))))
countProp('foo', {foo:1}) // 2
countProp('foo', {}) // 1
Now I want to return a new data structure
// accum :: String -> Object -> Object
var accum = R.curry(function(key, obj){
  return R.assoc(key, countProp(key, obj), obj)
})
accum('foo', {foo: 1}) // => {foo: 2}
But the issue is that in order to make this point free, I have to figure out how to get the values in the functions setup to get curried in the proper order. What am I doing wrong? Should I set up this function differently? I tried to set it up so both dependent functions would both take the key first, then the object, but I'm missing something. Should I be considering a specific Functor for this?
Thanks!
Several points:
First, if #davidchambers' solution does what you need, that's great. It will be even better when the next version of Ramda is released and lensProp is added, which will make this just
var fooLens = R.lensProp('foo');
fooLens.map(R.inc, {foo: 1, bar: 2}); // => {foo: 2, bar: 2}
Second, there is a difference between your original function and either lens version:
accum('foo', {bar: 1}); //=> {"bar":1,"foo":1}
fooLens.map(R.inc, {bar: 1}); //=> {"bar":1,"foo":null}
Third, regardless of all this, if you are interested in how to wrap your function up in a points-free manner, Ramda has several functions that will help. There is one helper function, nthArg, which does nothing but return a function that returns the nth argument of the outer function in which it's called. Then there are several functions that act as extended versions of compose, including useWith and converge.
You can use them like this:
var accum = R.converge(R.assoc, R.nthArg(0), countProp, R.nthArg(1));
accum('foo', {foo: 1, bar: 2}); // => {foo: 2, bar: 2}
accum('foo', {bar: 2}); // => {foo: 1, bar: 2}
In this code, converge passes the arguments (key and obj) to each of the functions passed as parameters except the first one, then passes the results of each of those to that first function.
Finally, although this shows a way to write this code points-free, and it's not in the end too horrible, it's arguably less clear than your earlier version that isn't points-free. I love points-free code. But sometimes we make a fetish of it, making code points-free for no good reason. If you can't in the end use a lens version, you might want to think carefully whether a points-free solution is actually clearer than the alternative.
You could use R.lens:
const fooLens = R.lens(R.prop('foo'), R.assoc('foo'));
fooLens.map(R.inc, {foo: 1, bar: 2}); // => {foo: 2, bar: 2}
fooLens.map(R.inc, {foo: 2, bar: 2}); // => {foo: 3, bar: 2}
fooLens.map(R.inc, {foo: 3, bar: 2}); // => {foo: 4, bar: 2}
Lenses make it possible to create a succession of values without undermining the integrity of any earlier value by mutating it.
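For comparison, in plain modern JavaScript the original accum function is a one-liner using spread and a computed property name (a sketch, not a Ramda solution):

```javascript
// Increment obj[key], defaulting a missing key to 0, without mutating obj.
const accum = (key, obj) => ({...obj, [key]: (obj[key] || 0) + 1});

console.log(accum('foo', {}));        // {foo: 1}
console.log(accum('foo', {foo: 1}));  // {foo: 2}
```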
