I know that WeakMap and WeakSet are not iterable for security reasons, that is, “to prevent attackers from seeing the garbage collector’s internal behavior.” But this means you cannot clone a WeakMap or WeakSet the way you clone a Map or Set: cloned_map = new Map(existing_map), cloned_set = new Set(existing_set).
How do I clone a WeakMap or WeakSet in JavaScript? By cloning, I mean creating another WeakMap or WeakSet holding the same weak references.
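To make the asymmetry concrete, here is a minimal demonstration: the WeakMap constructor accepts any iterable of entries, but a WeakMap itself is not iterable, so passing one to the constructor throws.

```javascript
// Cloning a Map works because a Map is iterable:
const key = {};
const clonedMap = new Map(new Map([[key, "value"]]));
console.log(clonedMap.get(key)); // "value"

// Cloning a WeakMap the same way throws, because a WeakMap is not iterable:
let cloneError = null;
try {
  new WeakMap(new WeakMap()); // the constructor tries to iterate its argument
} catch (e) {
  cloneError = e;
}
console.log(cloneError instanceof TypeError); // true
```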
Why are WeakMap/WeakSet not "cloneable"?
WeakMaps and WeakSets are not "cloneable" for the same reason you can't iterate them: namely, to avoid exposing the latency between the time a key becomes inaccessible and the time it is removed from the WeakMap / WeakSet. (The reasoning for this is already covered in your linked question.)
ECMAScript 2023 Language Specification, 24.3 WeakMap Objects
An implementation may impose an arbitrarily determined latency between the time a key/value pair of a WeakMap becomes inaccessible and the time when the key/value pair is removed from the WeakMap. If this latency was observable to ECMAScript program, it would be a source of indeterminacy that could impact program execution. For that reason, an ECMAScript implementation must not provide any means to observe a key of a WeakMap that does not require the observer to present the observed key.
How iterability and clonability are linked
Think about how new WeakMap(existingWeakMap) would need to be implemented.
Creating a new WeakMap from an existing one would require iterating over its elements and copying them into the new one.
And depending on how many elements there are in the WeakMap, this operation would take a varying amount of time (it would take a lot longer to copy a WeakMap with 100'000 entries than to copy one with none).
And that gives you an attack vector: You can guesstimate the number of key-value pairs within the WeakMap by measuring how long it takes to clone it.
Here's a runnable snippet that uses this technique to guess the number of entries within a Map (it could easily be used against WeakMap, if it were cloneable):
Note that due to Spectre mitigations performance.now() in browsers is typically rounded, so a larger margin of error in the guesses should be expected.
function measureCloneTime(map) {
  const begin = performance.now();
  const cloneMap = new Map(map);
  const end = performance.now();
  return end - begin;
}

function measureAvgCloneTime(map, numSamples = 50) {
  let timeSum = 0;
  for (let i = 0; i < numSamples; i++) {
    timeSum += measureCloneTime(map);
  }
  return timeSum / numSamples;
}

function makeMapOfSize(n) {
  return new Map(Array(n).fill(null).map(() => [{}, {}]));
}

// prime the JIT
for (let i = 0; i < 10000; i++) {
  measureAvgCloneTime(makeMapOfSize(50));
}

const avgCloneTimes = [
  {size: 2**6,  time: measureAvgCloneTime(makeMapOfSize(2**6))},
  {size: 2**7,  time: measureAvgCloneTime(makeMapOfSize(2**7))},
  {size: 2**8,  time: measureAvgCloneTime(makeMapOfSize(2**8))},
  {size: 2**9,  time: measureAvgCloneTime(makeMapOfSize(2**9))},
  {size: 2**10, time: measureAvgCloneTime(makeMapOfSize(2**10))},
  {size: 2**11, time: measureAvgCloneTime(makeMapOfSize(2**11))},
  {size: 2**12, time: measureAvgCloneTime(makeMapOfSize(2**12))},
  {size: 2**13, time: measureAvgCloneTime(makeMapOfSize(2**13))},
  {size: 2**14, time: measureAvgCloneTime(makeMapOfSize(2**14))},
];

function guessMapSizeBasedOnCloneSpeed(map) {
  const cloneTime = measureAvgCloneTime(map);
  let closestMatch = avgCloneTimes.find(e => e.time > cloneTime);
  if (!closestMatch) {
    closestMatch = avgCloneTimes[avgCloneTimes.length - 1];
  }
  const sizeGuess = Math.round(
    (cloneTime / closestMatch.time) * closestMatch.size
  );
  console.log("Real Size: " + map.size + " - Guessed Size: " + sizeGuess);
}

guessMapSizeBasedOnCloneSpeed(makeMapOfSize(1000));
guessMapSizeBasedOnCloneSpeed(makeMapOfSize(4000));
guessMapSizeBasedOnCloneSpeed(makeMapOfSize(6000));
guessMapSizeBasedOnCloneSpeed(makeMapOfSize(10000));
On my machine (Ubuntu 20, Chrome 107) I got the following output (YMMV):
Real Size: 1000 - Guessed Size: 1037
Real Size: 4000 - Guessed Size: 3462
Real Size: 6000 - Guessed Size: 6329
Real Size: 10000 - Guessed Size: 9889
As you can see, it is incredibly easy to guess the size of a Map just by cloning it (and by refining the algorithm, taking more samples, or using a more accurate time source, the guess could be made even more accurate).
And that's why you can't clone WeakMap / WeakSet.
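This design decision is visible directly in the API surface: WeakMap exposes only the methods that require you to present the key, and nothing that enumerates or counts entries. A quick check (assuming a standard, spec-compliant engine):

```javascript
const wm = new WeakMap([[{}, 1]]);

// Only get / set / has / delete exist; nothing that enumerates or counts:
console.log(typeof wm.size);    // "undefined"
console.log(typeof wm.keys);    // "undefined"
console.log(typeof wm.entries); // "undefined"
console.log(typeof wm.forEach); // "undefined"
console.log(Symbol.iterator in WeakMap.prototype); // false
```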
A possible alternative
If you need a clonable / iterable WeakMap / WeakSet you could build your own by using WeakRef and FinalizationRegistry.
Here's an example how you could build an iterable WeakMap:
class IterableWeakMap {
  #weakMap = new WeakMap();
  #refSet = new Set();
  #registry = new FinalizationRegistry(this.#cleanup.bind(this));

  // Called by the registry once a key has been garbage collected;
  // the held value we registered for each key is that key's WeakRef.
  #cleanup(ref) {
    this.#refSet.delete(ref);
  }

  constructor(iterable) {
    if (iterable) {
      for (const [key, value] of iterable) {
        this.set(key, value);
      }
    }
  }

  get(key) {
    return this.#weakMap.get(key)?.value;
  }

  has(key) {
    return this.#weakMap.has(key);
  }

  set(key, value) {
    let entry = this.#weakMap.get(key);
    if (!entry) {
      const ref = new WeakRef(key);
      this.#registry.register(key, ref, key);
      entry = {ref, value: null};
      this.#weakMap.set(key, entry);
      this.#refSet.add(ref);
    }
    entry.value = value;
    return this;
  }

  delete(key) {
    const entry = this.#weakMap.get(key);
    if (!entry) {
      return false;
    }
    this.#weakMap.delete(key);
    this.#refSet.delete(entry.ref);
    this.#registry.unregister(key);
    return true;
  }

  clear() {
    for (const ref of this.#refSet) {
      const el = ref.deref();
      if (el !== undefined) {
        this.#registry.unregister(el);
      }
    }
    this.#weakMap = new WeakMap();
    this.#refSet.clear();
  }

  *entries() {
    for (const ref of this.#refSet) {
      const el = ref.deref();
      if (el !== undefined) {
        yield [el, this.#weakMap.get(el).value];
      }
    }
  }

  *keys() {
    for (const ref of this.#refSet) {
      const el = ref.deref();
      if (el !== undefined) {
        yield el;
      }
    }
  }

  *values() {
    for (const ref of this.#refSet) {
      const el = ref.deref();
      if (el !== undefined) {
        yield this.#weakMap.get(el).value;
      }
    }
  }

  forEach(callbackFn, thisArg) {
    for (const [key, value] of this.entries()) {
      callbackFn.call(thisArg, value, key, this);
    }
  }

  [Symbol.iterator]() {
    return this.entries();
  }

  get size() {
    let size = 0;
    for (const key of this.keys()) {
      size++;
    }
    return size;
  }

  static get [Symbol.species]() {
    return IterableWeakMap;
  }
}
// Usage Example:
let foo = {foo: 42};
let bar = {bar: 42};

const map = new IterableWeakMap([
  [foo, "foo"],
  [bar, "bar"]
]);

const clonedMap = new IterableWeakMap(map);
console.log([...clonedMap.entries()]);
It can be done, provided your code runs before whatever creates the WeakMaps, by patching WeakMap.prototype.set and WeakMap.prototype.delete to record every entry.
However, creating a clone requires keeping my own strongly-referenced view of every entry, so this approach may prevent any WeakMap contents from ever being garbage collected.
//the code you run first
(() => {
  let MAPS = new Map()
  let DELETE = WeakMap.prototype.delete, SET = WeakMap.prototype.set
  let BIND = Function.prototype.call.bind(Function.prototype.bind)
  let APPLY = (FN, THIS, ARGS) => BIND(Function.prototype.apply, FN)(THIS, ARGS)

  WeakMap.prototype.set = function () {
    let theMap = MAPS.get(this)
    if (!theMap) {
      theMap = new Map()
      MAPS.set(this, theMap)
    }
    APPLY(theMap.set, theMap, arguments)
    return APPLY(SET, this, arguments)
  }

  WeakMap.prototype.delete = function () {
    let theMap = MAPS.get(this)
    if (!theMap) {
      theMap = new Map()
      MAPS.set(this, theMap)
    }
    APPLY(theMap.delete, theMap, arguments)
    return APPLY(DELETE, this, arguments)
  }

  function cloneWM(target) {
    let theClone = new WeakMap()
    MAPS.get(target).forEach((value, key) => {
      APPLY(SET, theClone, [key, value])
    })
    return theClone
  }

  window.cloneWM = cloneWM
})()
//the example (open the devtools console to see it properly)
let w = new WeakMap()
w.set({a: 1}, 'f')
w.set({b: 2}, 'g')
w.set(window, 'a')
w.delete(window)
console.log([w, cloneWM(w)])
console.log("go on devtools console to see it properly")
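One partial mitigation for the garbage-collection caveat mentioned above (my suggestion, not part of the original snippet): keying the shadow store by the WeakMap instance itself, in a WeakMap, lets the entire shadow Map be collected once the patched WeakMap becomes unreachable. The keys inside a still-live WeakMap are nevertheless held strongly, so the leak is only reduced, not eliminated.

```javascript
// Shadow store keyed weakly by the WeakMap instance itself:
const SHADOWS = new WeakMap(); // WeakMap instance -> Map of its entries

function recordSet(weakMapInstance, key, value) {
  let shadow = SHADOWS.get(weakMapInstance);
  if (!shadow) {
    shadow = new Map();
    SHADOWS.set(weakMapInstance, shadow);
  }
  shadow.set(key, value);
}

// When the WeakMap instance is garbage collected, SHADOWS drops its
// shadow Map too -- but keys of a *live* WeakMap are still held strongly.
const wm = new WeakMap();
const k = {};
wm.set(k, 42);
recordSet(wm, k, 42);
console.log(SHADOWS.get(wm).get(k)); // 42
```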
Related
I'm writing a game in typescript and I have a few recursive generators that essentially run map operations on game objects.
I want to avoid creating arrays for the game objects, since there can be upwards of a few thousand objects inside the view.
Here's a few samples of the code so you can understand the basic structure.
*getBoxOverlappingValues(box: BoundingBox): Generator<V, void, void> {
  const nodesGenerator = this.tree.getOverlappingNodes(box);
  for (const node of nodesGenerator) {
    if (node.value === undefined) {
      throw new Error('Overlapping node does not contain a value');
    }
    yield node.value;
  }
}
getOverlappingObjects(box: BoundingBox): Generator<number, void, void> {
  return this.boundingBoxRepository.getBoxOverlappingValues(box);
}
const viewableObjectIds = this.collisionService.getOverlappingObjects(box);
const viewableObjects = this.gameObjectService.getMultipleObjects(viewableObjectIds);
The issue is that, in my renderer service, I need to pass over the objects multiple times, and render what is supposed to be rendered on each pass.
This means that I need to filter the objects and render them at the same time.
*renderObjectsPrepare(objects: Iterable<GameObject>): Generator<GameObject, void, void> {
  for (const object of objects) {
    const renderer = this.getObjectRenderer(object);
    renderer.update(this.scale);
    if (renderer.isRenderable()) {
      yield object;
    }
  }
}

*renderObjectsPass(objects: Iterable<GameObject>, pass: number): Generator<GameObject, void, void> {
  for (const object of objects) {
    const renderer = this.getObjectRenderer(object);
    const objectRelativeX = Math.floor(object.position.x) - this.canvasX;
    const objectRelativeY = Math.floor(object.position.y) - this.canvasY;
    const objectDrawX = objectRelativeX * this.scale;
    const objectDrawY = objectRelativeY * this.scale;
    const moreToRender = renderer.renderPass(this.context, pass,
      objectDrawX, objectDrawY, this.showInvisible);
    if (moreToRender) {
      yield object;
    }
  }
}

renderObjectsOver(objects: Iterable<GameObject>): void {
  let renderObjects = this.renderObjectsPrepare(objects);
  for (let pass = 0; pass < RenderPass.MAX; pass++) {
    renderObjects = this.renderObjectsPass(renderObjects, pass);
  }
  let renderObject = renderObjects.next();
  while (!renderObject.done) {
    renderObject = renderObjects.next();
  }
}
This obviously renders my objects depth-first, so the passes don't actually have any effect.
If I consume the generator after each iteration of the for loop, the iterable is completed so I don't have any other objects left for the next iteration.
Is there any way I could make this work without allocating any arrays?
In ES6, is there any way to clone an iterator's state?
var ma=[1,2,3,4];
var it=ma[Symbol.iterator]();
it.next();
If I want to remember the state of `it` at this point, how should I do that in JavaScript? What exactly is remembered in `it`? After all,
JSON.stringify(it) // just returns {}
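There is no built-in way to snapshot an arbitrary iterator's state, but for a plain array you control, one workaround is to track the index yourself, so "remembering the state" is just remembering a number. A sketch (my own helper, not a standard API, and not a general solution):

```javascript
const ma = [1, 2, 3, 4];

// A factory that creates an array iterator starting at any index:
function iterFrom(arr, start = 0) {
  let i = start;
  return {
    [Symbol.iterator]() { return this; },
    next: () => (i < arr.length
      ? { value: arr[i++], done: false }
      : { value: undefined, done: true }),
    position: () => i, // the only "state" worth remembering
  };
}

const it = iterFrom(ma);
it.next();                           // consume 1
const saved = it.position();         // remember the state: index 1
const resumed = iterFrom(ma, saved); // "clone" from the saved state
console.log(resumed.next().value);   // 2
console.log(it.next().value);        // 2 (independent of the copy)
```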
You can’t clone an arbitrary iterator, but you can create many distinct iterators from one by holding onto some state:
function tee(iterable) {
  const source = iterable[Symbol.iterator]();
  const buffers = [[], []]; // substitute in queue type for efficiency
  const DONE = Object.create(null);

  const next = i => {
    if (buffers[i].length !== 0) {
      return buffers[i].shift();
    }
    const x = source.next();
    if (x.done) {
      return DONE;
    }
    buffers[1 - i].push(x.value);
    return x.value;
  };

  return buffers.map(function* (_, i) {
    for (;;) {
      const x = next(i);
      if (x === DONE) {
        break;
      }
      yield x;
    }
  });
}
Usage:
const [a, b] = tee(iterator);
assert(a.next().value === b.next().value);
It's not possible to clone an iterator. Iterator state is basically completely arbitrary and any given iterator may require or produce side effects (e.g. reading from or writing to a network stream) which are not repeatable on demand.
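To see why, consider an iterator whose next() performs a side effect: a hypothetical "clone" would either have to share the underlying state (so the copies interfere with each other) or replay the effects (which may not be repeatable). A small illustration, with a counter standing in for a non-repeatable effect such as a network read:

```javascript
let fetchCount = 0; // stands in for a non-repeatable side effect (e.g. a network read)

function* readChunks() {
  for (let i = 0; i < 3; i++) {
    fetchCount++; // each value can only be produced by performing the effect
    yield `chunk-${i}`;
  }
}

const stream = readChunks();
console.log(stream.next().value); // "chunk-0"
console.log(fetchCount);          // 1 -- a clone could not rewind this
```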
I built a library that allows you to fork an iterator here: https://github.com/tjenkinson/forkable-iterator
Means you can do something like:
import { buildForkableIterator, fork } from 'forkable-iterator';
function* Source() {
  yield 1;
  yield 2;
  return 'return';
}
const forkableIterator = buildForkableIterator(Source());
console.log(forkableIterator.next()); // { value: 1, done: false }
const child1 = fork(forkableIterator);
// { value: 2, done: false }
console.log(child1.next());
// { value: 2, done: false }
console.log(forkableIterator.next());
// { value: 'return', done: true }
console.log(child1.next());
// { value: 'return', done: true }
console.log(forkableIterator.next());
If you no longer need to keep consuming from a fork, then provided you lose all references to it, there also shouldn't be a memory leak.
It's not official yet, but I think there might be a solution in a stage 2 proposal for Iterator Helpers. If these methods don't affect the original iterator, then doing something like iter.take(Infinity) or iter.drop(0) would have the same effect as cloning.
In the notes it states:
The iterable protocol allows JavaScript objects to define or customize
their iteration behavior, such as what values are looped over in a
for..of construct.
I don’t see what benefit this has when I can already use Object.defineProperty to make something enumerable.
function withValue(value) {
  var d = withValue.d || (
    withValue.d = {
      enumerable: false,
      writable: false,
      configurable: false,
      value: null
    }
  )
  // other code;
}
What benefit do these protocols have? If this is just some new syntax to appease the new for…of loop, what benefit does it have other than simply checking the length and seeing if it's run out of items in the “list”?
Think of Iterable as an interface. You can be assured implementations contain a Symbol.iterator property, which returns an iterator with a next() method. If you implement it yourself, you can produce the values you want to iterate over at runtime. As a simple example, produce a list and decide later how many (or which, or whatever criteria) you would like to iterate over:
function List (...args) {
  this.getOnly = function (limit) {
    const effectiveLimit = Math.min(args.length, limit);
    const iterable = {
      [Symbol.iterator]() {
        let count = 0;
        const iterator = {
          next() {
            if (count < effectiveLimit) {
              return { value: args[count++], done: false };
            } else {
              return { done: true };
            }
          }
        };
        return iterator;
      }
    };
    return iterable;
  };
}
const list = new List(0, 1, 2, 3, 4);
for (const x of list.getOnly(3)) {
  console.log(x);
}
// logs 0, 1, 2
If you use a Generator function, which implements the Iterable interface, the same gets really simple:
function List (...args) {
  this.getOnly = function* (limit) {
    const effectiveLimit = Math.min(args.length, limit);
    for (let count = 0; count < effectiveLimit; count++) {
      yield args[count];
    }
  };
}
More examples of what you can do with Iterables are listed here.
Say we have a Map: let m = new Map();. Using m.values() returns a map iterator.
But I can't use forEach() or map() on that iterator, and implementing a while loop on it seems like an anti-pattern since ES6 offers functions like map().
So is there a way to use map() on an iterator?
The simplest and least performant way to do this is:
Array.from(m).map(([key,value]) => /* whatever */)
Better yet
Array.from(m, ([key, value]) => /* whatever */)
Array.from takes any iterable or array-like thing and converts it into an array! As Daniel points out in the comments, we can add a mapping function to the conversion to remove an iteration and subsequently an intermediate array.
Using Array.from will move your memory usage from O(1) to O(n), as @hraban points out in the comments. Since m is a Map, and Maps can't be infinite, we don't have to worry about an infinite sequence. For most instances, this will suffice.
There are a couple of other ways to loop through a map.
Using forEach
m.forEach((value,key) => /* stuff */ )
Using for..of
var myMap = new Map();
myMap.set(0, 'zero');
myMap.set(1, 'one');

for (var [key, value] of myMap) {
  console.log(key + ' = ' + value);
}
// 0 = zero
// 1 = one
You could define another iterator function to loop over this:
function* generator() {
  for (let i = 0; i < 10; i++) {
    console.log(i);
    yield i;
  }
}

function* mapIterator(iterator, mapping) {
  for (let i of iterator) {
    yield mapping(i);
  }
}

let values = generator();
let mapped = mapIterator(values, (i) => {
  let result = i * 2;
  console.log(`x2 = ${result}`);
  return result;
});

console.log('The values will be generated right now.');
console.log(Array.from(mapped).join(','));
Now you might ask: why not just use Array.from instead? Because this will run through the entire iterator, save it to a (temporary) array, iterate it again and then do the mapping. If the list is huge (or even potentially infinite) this will lead to unnecessary memory usage.
Of course, if the list of items is fairly small, using Array.from should be more than sufficient.
Other answers here are... Weird. They seem to be re-implementing parts of the iteration protocol. You can just do this:
function* mapIter(iterable, callback) {
  for (let x of iterable) {
    yield callback(x);
  }
}
and if you want a concrete result, just use the spread syntax ...:
[...mapIter([1, 2, 3], x => x**2)]
The simplest and most performant way is to use the second argument to Array.from to achieve this:
const map = new Map()
map.set('a', 1)
map.set('b', 2)
Array.from(map, ([key, value]) => `${key}:${value}`)
// ['a:1', 'b:2']
This approach works for any non-infinite iterable. And it avoids having to use a separate call to Array.from(map).map(...) which would iterate through the iterable twice and be worse for performance.
There is a proposal, that brings multiple helper functions to Iterator: https://github.com/tc39/proposal-iterator-helpers (rendered)
You can use it today by utilizing core-js-pure:
import { from as iterFrom } from "core-js-pure/features/iterator";
// or if it's working for you (it should work according to the docs,
// but hasn't for me for some reason):
// import iterFrom from "core-js-pure/features/iterator/from";
let m = new Map();
m.set("13", 37);
m.set("42", 42);
const arr = iterFrom(m.values())
.map((val) => val * 2)
.toArray();
// prints "[74, 84]"
console.log(arr);
You could retrieve an iterator over the iterable, then return another iterator that calls the mapping callback function on each iterated element.
const map = (iterable, callback) => {
  return {
    [Symbol.iterator]() {
      const iterator = iterable[Symbol.iterator]();
      return {
        next() {
          const r = iterator.next();
          if (r.done)
            return r;
          else {
            return {
              value: callback(r.value),
              done: false,
            };
          }
        }
      };
    }
  };
};
// Arrays are iterable
console.log(...map([0, 1, 2, 3, 4], (num) => 2 * num)); // 0 2 4 6 8
Take a look at https://www.npmjs.com/package/fluent-iterable
Works with all of iterables (Map, generator function, array) and async iterables.
const map = new Map();
...
console.log(fluent(map).filter(..).map(..));
You could use itiriri that implements array-like methods for iterables:
import { query } from 'itiriri';

let m = new Map();
// set map ...

query(m).filter(([k, v]) => k < 10).forEach(([k, v]) => console.log(v));
let arr = query(m.values()).map(v => v * 10).toArray();
In case someone needs the typescript version:
function* mapIter<T1, T2>(iterable: IterableIterator<T1>, callback: (value: T1) => T2) {
  for (let x of iterable) {
    yield callback(x);
  }
}
Based on the answer from MartyO256 (https://stackoverflow.com/a/53159921/7895659), a refactored typescript approach could be the following one:
function mapIterator<TIn, TOut>(
  iterator: Iterator<TIn>,
  callback: (input: TIn) => TOut,
): Iterator<TOut> {
  return {
    next() {
      const result: IteratorResult<TIn> = iterator.next();
      if (result.done === true) {
        return result;
      } else {
        return {
          done: false,
          value: callback(result.value),
        };
      }
    },
  };
}

export function mapIterable<TIn, TOut>(
  iterable: Iterable<TIn>,
  callback: (input: TIn) => TOut,
): Iterable<TOut> {
  const iterator: Iterator<TIn> = iterable[Symbol.iterator]();
  const mappedIterator: Iterator<TOut> = mapIterator(iterator, callback);
  return {
    [Symbol.iterator]: () => mappedIterator,
  };
}
I am stumped by this memoize problem. I need to create a function that will check whether a value has already been calculated for a given argument, and either return the previous result or run the calculation and return that value.
I have spent hours on this and, while I am new to JS, I cannot get my head around how to do it. I cannot use any built-in functions and would really like to understand what I need to do.
Here is what I have so far, which is so wrong it feels like pseudo-code at this point. I have searched existing memoize questions out here but I cannot seem to make any solution work yet. Any help is much appreciated.
myMemoizeFunc = function(passedFunc) {
  var firstRun = passedFunc;
  function check(passedFunc) {
    if (firstRun === undefined) {
      return passedFunc;
    } else {
      return firstRun;
    }
  }
};
Sorry, I should have been more clear. Here are my specific requirements:
myMemoizeFunc must return a function that will check if the calculation has already been calculated for the given arg and return that val if possible. The passedFunc is a function that holds the result of a calculation.
I understand this may seem like a duplicate, but I am marking as not so, as I am having some serious difficulty understanding what I should do here, and need further help than is given in other posts.
This is what my thought process is bringing me towards but again, I am way off.
myMemoizeFunc = function(passedFunc) {
  var allValues = [];
  return function() {
    for (var i = 0; i < myValues.length; i++) {
      if (myValues[i] === passedFunc) {
        return i;
      } else {
        myValues.push(passedFunc);
        return passedFunc;
      }
    }
  }
};
I should not be returning i or passedFunc here, but what else could I do within the if/else while checking for a value? I have been looking at this problem for so long, I am starting to implement code that is ridiculous and need some fresh advice.
I think the main trick for this is to make an object that stores arguments that have been passed in before as keys with the result of the function as the value.
For memoizing functions of a single argument, I would implement it like so:
var myMemoizeFunc = function (passedFunc) {
  var cache = {};
  return function (x) {
    if (x in cache) return cache[x];
    return cache[x] = passedFunc(x);
  };
};
Then you could use this to memoize any function that takes a single argument, say for example, a recursive function for calculating factorials:
var factorial = myMemoizeFunc(function(n) {
  if (n < 2) return 1;
  return n * factorial(n - 1);
});
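A quick usage check, with the memoizer repeated so the snippet is self-contained, plus one caveat worth knowing: plain-object property keys are coerced to strings, so distinct object arguments can collide in the cache (which is one motivation for the Map/WeakMap-based approach in the next answer):

```javascript
var myMemoizeFunc = function (passedFunc) {
  var cache = {};
  return function (x) {
    if (x in cache) return cache[x];
    return cache[x] = passedFunc(x);
  };
};

var factorial = myMemoizeFunc(function (n) {
  return n < 2 ? 1 : n * factorial(n - 1);
});
console.log(factorial(5)); // 120

// Caveat: all object arguments stringify to "[object Object]",
// so the second call wrongly hits the first call's cache entry:
var first = myMemoizeFunc(function (obj) { return obj; });
console.log(first({ a: 1 }) === first({ b: 2 })); // true -- a cache collision
```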
Consider this an extension on the answer of Peter Olson.
For a variable number of arguments you could use something like this.
Note: This example is not optimal if you intend to pass complex arguments (arrays, objects, functions). Be sure to read further and not copy/paste blindly.
function memo(fn) {
  const cache = {};

  function get(args) {
    let node = cache;
    for (const arg of args) {
      if (!("next" in node)) node.next = new Map();
      if (!node.next.has(arg)) node.next.set(arg, {});
      node = node.next.get(arg);
    }
    return node;
  }

  return function (...args) {
    const cache = get(args);
    if ("item" in cache) return cache.item;
    cache.item = fn(...args);
    return cache.item;
  }
}
This builds the following cache tree structure:
const memoizedFn = memo(fn);
memoizedFn();
memoizedFn(1);
memoizedFn(1, 2);
memoizedFn(2, 1);

cache = {
  item: fn(),
  next: Map{ // <- Map contents depicted as object
    1: {
      item: fn(1),
      next: Map{
        2: { item: fn(1, 2) }
      }
    },
    2: {
      next: Map{
        1: { item: fn(2, 1) }
      }
    }
  }
}
This solution leaks memory when passing complex arguments (arrays, objects, functions) that are no longer referenced afterwards.
memoizedFn({ a: 1 })
Because { a: 1 } is not referenced after the memoizedFn call, it would normally be garbage collected. However, it now can't be, because the cache still holds a reference. It can only be garbage collected once memoizedFn itself is garbage collected.
I showed the above first because it demonstrates the base concept and the cache structure in a somewhat simple form. To clean up cache entries that would normally be garbage collected, we should use a WeakMap instead of a Map for complex objects.
For those unfamiliar with WeakMap, the keys are a "weak" reference. This means that the keys do not count towards active references towards an object. Once an object is no longer referenced (not counting weak references) it will be garbage collected. This will in turn remove the key/value pair from the WeakMap instance.
const memo = (function () {
  const primitives = new Set([
    "undefined",
    "boolean",
    "number",
    "bigint",
    "string",
    "symbol"
  ]);

  function typeOf(item) {
    const type = typeof item;
    if (primitives.has(type)) return "primitive";
    return item === null ? "primitive" : "complex";
  }

  const map = {
    "primitive": Map,
    "complex": WeakMap
  };

  return function (fn) {
    const cache = {};

    function get(args) {
      let node = cache;
      for (const arg of args) {
        const type = typeOf(arg);
        if (!(type in node)) node[type] = new map[type];
        if (!node[type].has(arg)) node[type].set(arg, {});
        node = node[type].get(arg);
      }
      return node;
    }

    return function (...args) {
      const cache = get(args);
      if ("item" in cache) return cache.item;
      cache.item = fn(...args);
      return cache.item;
    }
  }
})();
const fib = memo((n) => {
  console.log("fib called with", n);
  if (n == 0) return 0;
  if (n == 1) return 1;
  return fib(n - 1) + fib(n - 2);
});

// heavy operation with a complex object
const heavyFn = memo((obj) => {
  console.log("heavyFn called with", obj);
  // heavy operation
  return obj.value * 2;
});

// multiple complex arguments
const map = memo((iterable, mapFn) => {
  console.log("map called with", iterable, mapFn);
  const result = [];
  for (const item of iterable) result.push(mapFn(item));
  return result;
});
console.log("### simple argument demonstration ###");
console.log("fib(3)", "//=>", fib(3));
console.log("fib(6)", "//=>", fib(6));
console.log("fib(5)", "//=>", fib(5));

console.log("### explanation of when cache is garbage collected ###");
(function () {
  const item = { value: 7 };
  // item stays in the memo cache until it is garbage collected
  console.log("heavyFn(item)", "//=>", heavyFn(item));
  console.log("heavyFn(item)", "//=>", heavyFn(item));
  // Does not use the cached item. Although the object has the same contents,
  // it is a different instance, so it is not considered the same.
  console.log("heavyFn({ value: 7 })", "//=>", heavyFn({ value: 7 }));
  // { value: 7 } is garbage collected (and removed from the memo cache)
})();
// item is garbage collected (and removed from the memo cache); it is no longer in scope

console.log("### multiple complex arguments demonstration ###");
console.log("map([1], n => n * 2)", "//=>", map([1], n => n * 2));
// Does not use the cache. Although the array and function have the same contents,
// they are new instances, so they are not considered the same.
console.log("map([1], n => n * 2)", "//=>", map([1], n => n * 2));

const ns = [1, 2];
const double = n => n * 2;
console.log("map(ns, double)", "//=>", map(ns, double));
// Does use the cache; the same instances are passed.
console.log("map(ns, double)", "//=>", map(ns, double));

// Still uses the cache; the same instances are passed,
// even though ns has been mutated.
ns.push(3);
console.log("mutated ns", ns);
console.log("map(ns, double)", "//=>", map(ns, double));
The structure stays essentially the same, but depending on the type of the argument it will look in either the primitive: Map{} or complex: WeakMap{} object.
const memoizedFn = memo(fn);
memoizedFn();
memoizedFn(1);
memoizedFn(1, 2);
memoizedFn({ value: 2 }, 1);

cache = {
  item: fn(),
  primitive: Map{
    1: {
      item: fn(1),
      primitive: Map{
        2: { item: fn(1, 2) }
      }
    }
  },
  complex: WeakMap{
    { value: 2 }: { // <- cleared if { value: 2 } is garbage collected
      primitive: Map{
        1: { item: fn({ value: 2 }, 1) }
      }
    }
  }
}
This solution does not memoize any errors thrown. Arguments are considered equal based on Map key equality. If you also need to memoize any errors thrown I hope that this answer gave you the building blocks to do so.
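As a sketch of that last point, the cache entry can record whether the call threw, and replay the outcome. This is a minimal, single-Map version for one primitive argument (my illustration, not the full tree cache from above):

```javascript
// Minimal sketch: memoize results AND thrown errors for a single argument.
function memoWithErrors(fn) {
  const cache = new Map(); // arg -> { threw, value or error }
  return function (arg) {
    if (cache.has(arg)) {
      const entry = cache.get(arg);
      if (entry.threw) throw entry.error; // replay the cached failure
      return entry.value;
    }
    try {
      const value = fn(arg);
      cache.set(arg, { threw: false, value });
      return value;
    } catch (error) {
      cache.set(arg, { threw: true, error });
      throw error;
    }
  };
}

let calls = 0;
const parse = memoWithErrors((s) => {
  calls++;
  if (s === "bad") throw new Error("invalid input");
  return s.toUpperCase();
});

console.log(parse("ok")); // "OK"
try { parse("bad"); } catch (e) {}
try { parse("bad"); } catch (e) {} // cached: fn is not called again
console.log(calls); // 2
```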
There are a number of memoization libraries available. Doing memoization efficiently is not as straightforward as it seems, so I suggest using a library. Two of the fastest are:
https://github.com/anywhichway/iMemoized
https://github.com/planttheidea/moize
See here for a comprehensive(-ish) list of memoization libraries: https://stackoverflow.com/a/61402805/2441655