I have a memoized function whose cache key is derived from two input references:
```javascript
let NewRefCursor = memoized(
  (deref, swap) => refToHash(deref) + refToHash(swap), // memoizer identity fn
  (deref, swap) => new RefCursor(deref, swap));        // the function we are memoizing
```
The behavior I need is `NewRefCursor(a, b) === NewRefCursor(a, b)`. When `a` or `b` gets garbage collected, the cursor should also be garbage collected.
`refToHash` is another memoized function that uses an ES6 `WeakMap`, so references it has seen can still be GC'ed. `NewRefCursor` is much trickier to memoize because it uses two parameters to determine a cache hit, so it isn't compatible with `WeakMap` and will therefore prevent any reference it has seen from ever being GC'ed. I am open to any manner of trickery: attaching private fields to input objects, probabilistic data structures, whatever. This leak needs to be solved. The only solution I have so far is to add a parameter to the memoizer that bounds the cache size, and tune that parameter on a per-app basis. Gross.
If you create a two-level weak map (WeakMaps stored inside a WeakMap), then whenever an object on the first level is GC'ed, you lose the whole second level (when `a` is GC'ed, you lose `b`'s entry). If `b` is GC'ed, you will still have a WeakMap for `a`, which only sticks around while there is another pair (`a`, something). Not the best implementation, but I think it suffices:
```javascript
function BiWeakMap() {
  this._map = new WeakMap(); // outer map: key1 -> WeakMap(key2 -> value)
}

BiWeakMap.prototype.set = function (key1, key2, value) {
  if (!this._map.has(key1)) {
    this._map.set(key1, new WeakMap());
  }
  this._map.get(key1).set(key2, value);
  return this;
};

BiWeakMap.prototype.has = function (key1, key2) {
  return this._map.has(key1) && this._map.get(key1).has(key2);
};

BiWeakMap.prototype.get = function (key1, key2) {
  const inner = this._map.get(key1);
  return inner && inner.get(key2); // undefined when either key is absent
};
```
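As a quick sanity check, here is how a two-argument memoizer could sit on top of this (`memoize2` and the cursor factory are illustrative names, not from the original post; `BiWeakMap` is inlined so the snippet runs standalone):

```javascript
// BiWeakMap as defined above, repeated for a self-contained example.
function BiWeakMap() {
  this._map = new WeakMap();
}
BiWeakMap.prototype.set = function (key1, key2, value) {
  if (!this._map.has(key1)) this._map.set(key1, new WeakMap());
  this._map.get(key1).set(key2, value);
  return this;
};
BiWeakMap.prototype.has = function (key1, key2) {
  return this._map.has(key1) && this._map.get(key1).has(key2);
};
BiWeakMap.prototype.get = function (key1, key2) {
  const inner = this._map.get(key1);
  return inner && inner.get(key2);
};

// Hypothetical two-argument memoizer built on BiWeakMap.
function memoize2(fn) {
  const cache = new BiWeakMap();
  return (a, b) => {
    if (!cache.has(a, b)) cache.set(a, b, fn(a, b));
    return cache.get(a, b);
  };
}

// Illustrative stand-in for the asker's RefCursor.
const NewRefCursor = memoize2((deref, swap) => ({ deref, swap }));
```

With this wiring, `NewRefCursor(a, b) === NewRefCursor(a, b)` holds, and neither `a` nor `b` is strongly retained by the cache.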
This concept can be extended to an n-level solution. Does this solve your problem, or am I missing something?