Unexpected Set trap behavior in ES6 Proxy - javascript

let ar = [];
let p = new Proxy(new Map(), {
  get: (o, k) => {
    ar.push(1)
    return Reflect.get(o, k).bind(o)
  },
  set: (o, k, v) => {
    ar.push(2)
    return Reflect.set(o, k, v)
  }
});
p.set(1, 2)
p.get(1)
console.log(ar) // Outputs [1, 1]
I am trying to intercept both set and get operations on a Map object. I am in no way trying to extend/subclass a Map.
While proxying said Map object, I came across this weird, unexpected behavior: the set trap isn't fired in the above code; instead, the get trap fires twice!
I further proceeded to log the k (key) values from the get trap as follows:
//same code above
get: (o, k) => {
  console.log(k) // Logs "set" and then "get"!
  return Reflect.get(o, k).bind(o)
}
//same code below
The behavior I expect is for the array to be [2, 1] and for console.log(k) in the get trap to actually output the key value.
I wish to know why this happens. I've gone through a couple of similar questions here about proxying Maps, but none of them leads to any sensible reasoning as to why this is happening.
My end goal is to fire an event at the set trap. Am I using Proxy for something it isn't meant to be used for? If not, what approach should I take? Should I abandon the Map for an object literal, even though that brings all the cons of using one, e.g. no length property, string-only keys, no forced-unique keys, etc.?
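For reference, one way to fire an event when .set is called is to intercept the set *method* inside the get trap, since p.set(1, 2) is a property read followed by a function call. This is only a sketch; the observedMap wrapper and the onSet callback name are hypothetical:

```javascript
// Sketch (the onSet callback is hypothetical): intercept the `set` method
// inside the get trap rather than relying on the set trap.
function observedMap(map, onSet) {
  return new Proxy(map, {
    get(target, key) {
      if (key === 'set') {
        return (k, v) => {
          onSet(k, v);              // fire the event
          return target.set(k, v);  // delegate to the real Map
        };
      }
      const value = Reflect.get(target, key);
      // Map methods must stay bound to the real Map, not the proxy
      return typeof value === 'function' ? value.bind(target) : value;
    }
  });
}

const events = [];
const m = observedMap(new Map(), (k, v) => events.push([k, v]));
m.set('a', 1);
console.log(m.get('a')); // 1
console.log(events);     // [ [ 'a', 1 ] ]
```

Note the bind(target): accessors like Map.prototype.size and methods like get have internal slot checks that fail if invoked with the proxy as `this`.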
UPDATE: The more I dig into this proxified Map, the more issues I come across. It now seems to me that the ES6 Proxy API treats a Map the same way it treats an ordinary object. The answer by Vill and my digging back this up. Why do I say this? Read the following:
//same code above
get: (o, k) => {
  if (k === 'tray') return ']'
  return Reflect.get(o, k).bind(o)
}
//same code below
p.tray // returns ']'
The above code shouldn't theoretically succeed, right? It is as if Proxy uses the same logic to intercept Map operations as it uses for ordinary objects. Whereas:
//--while the following--//
let m = new Map();
m.set('tray', ']')
m.tray // undefined
Vill's answer says that the Proxy treats Map.prototype.set as first a read of the set property, and then an invocation of it as a function.
Doesn't that mean the set trap I've written in the original code (at the very top) doesn't intercept setting an entry on the Map at all, and that the native Map.prototype.set runs instead of the Reflect.set we provide through the Proxy?
Doesn't all this further reinforce the fact that Proxy and Maps don't mix well? Am I heading down the wrong path? If so, what am I misunderstanding? Are Proxies supposed to treat Maps like just any other object?

It's not a bug, it's a feature (joke).
You should understand what exactly a proxy's get and set traps do. The get trap intercepts any attempt to read a property of the object. Let's take your example:
p.set(1,2)
p.get(1)
On the first line we read the object property named set and then try to invoke it as a function.
On the second line we read the object property named get and then try to invoke it as a function.
If you try this code:
p.val = 5;
then you try to set 5 on the target under the name val, and the setter will fire.
This is how a proxy's setter and getter work.
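To see the difference directly, here is a small sketch: a method call fires the get trap (the method name is the key), while only a plain assignment fires the set trap.

```javascript
// A method call is a property *read* followed by a function call,
// so p.set(1, 2) fires the get trap, not the set trap.
const log = [];
const p = new Proxy(new Map(), {
  get(target, key) {
    log.push(`get:${String(key)}`);
    const value = Reflect.get(target, key);
    return typeof value === 'function' ? value.bind(target) : value;
  },
  set(target, key, value) {
    log.push(`set:${String(key)}`);
    return Reflect.set(target, key, value);
  }
});

p.set(1, 2);   // get trap with key "set", then the returned function runs
p.get(1);      // get trap with key "get"
p.val = 5;     // only a plain assignment fires the set trap
console.log(log); // ['get:set', 'get:get', 'set:val']
```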
So, to achieve the desired behavior, check the property name and return a function with some additional implementation. And do not forget to call the original function.
Something like this:
get: (o, k) => {
  if (k === 'get') return Reflect.get(o, k).bind(o);
  if (k === 'set') return (key, value) => {
    // your additional implementation (e.g. fire an event) here ...
    return o.set(key, value); // ... and do call the original Map.prototype.set
  };
  return Reflect.get(o, k);
}
Hope this helps.
Update:
let obj = {
  take: (v) => v
};
let handler = {
  get: (target, key) => {
    if (key === 'take') {
      return function (v) {
        console.log(`logger says: function ${key} was called with argument: ${v}`);
        return target[key](v); // still call the original
      };
    }
    return target[key];
  }
};
let proxy = new Proxy(obj, handler);
proxy.take(5);
proxy.take(3);

Related

Keeping nested prototype methods from object transfering from WebWorker

I need to re-serialize an object from a WebWorker sharing the same definitions.
Upon reception of the message, I'm losing all the prototype functions.
Doing
worker.onmessage = ({ data }) => {
  this.programParser = Object.assign(new ProgramModel(), data);
}
works only for first-level prototype functions, but I need a solution for all the nested classes (this object has a large inheritance and dependency tree).
How could I do that?
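To illustrate the problem, here is a minimal sketch (class name illustrative): a JSON round-trip stands in for the structured clone that postMessage performs, since both copy data but drop the prototype chain.

```javascript
// Sketch: a JSON round-trip stands in for the structured clone that
// postMessage performs; both copy data but drop the prototype chain.
class ProgramModel {
  constructor() { this.name = 'demo'; }
  describe() { return `model ${this.name}`; }
}

const original = new ProgramModel();
const clone = JSON.parse(JSON.stringify(original)); // plain object now

console.log(typeof original.describe); // 'function'
console.log(typeof clone.describe);    // 'undefined'

// Object.assign onto a fresh instance restores first-level methods only
const revived = Object.assign(new ProgramModel(), clone);
console.log(revived.describe());       // 'model demo'
```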
The current data model looks like the following (note the Object prototypes):
Using flatted and a custom revive function (I added the class name to each object in the serializing function), I can achieve a model much closer to what I need, but some nested references aren't treated as original class objects.
The flatted revive function is the following :
const model = parse(data, (key, val) => {
  if (val !== null && val.className && val.className in models) {
    if (val.className === "DataProvider") {
      console.log(val)
      return new DataProvider(val.providerData)
    }
    return Object.assign(new (<any>models)[val.className](), val)
  }
  return val;
})
The flatted library is used to steer clear of the circular-reference issues of JSON serialization: https://github.com/WebReflection/flatted
Minimal example of the issue: https://codesandbox.io/s/zealous-https-n314w?file=/src/index.ts (the object is lost after second-level referencing)
I don't want to take any credit from the previous answer; while its explanation and thought process are nice and sound, I believe the proposed solution is not the best one. Incidentally, I am the author of the flatted library, and I actually explained and helped out in this issue filed against the library, without even knowing there was a discussion here ... sorry I am late ...
The missing piece of the previous answer is that the same instance gets updated over and over while reviving the whole structure, but nothing like that is actually needed: either a Set or a WeakSet can speed up the process by avoiding upgrading, over and over, what has already been upgraded.
const {setPrototypeOf} = Reflect;
const upgraded = new Set;
const ret = parse(str, (_, v) => {
  if (v && v.className && models[v.className] && !upgraded.has(v)) {
    upgraded.add(v);
    setPrototypeOf(v, models[v.className].prototype);
  }
  return v;
});
Compared to substitution, this change doesn't strictly alter why it works, but it takes performance and redundant upgrades into account, because unnecessary setPrototypeOf calls might not be desired at all 😉
This is a classical problem with tying the knot on a circular structure. When deserialising, the flatted library has to start somewhere, passing in the original not-yet-revived object as the argument. It could in theory pass a proxy (or an object with getters) that parses the involved objects on demand in the right order, but if all objects in the circle need to be revived this would lead to a stack overflow, as JS doesn't use lazy evaluation, where you could reference the result of a call before having evaluated it.
A pure approach would actually need a reviver function that does support such a lazy approach, not accessing the object passed to it until after the deserialisation has finished:
const cache = new WeakMap();
const ret = parse(str, (k, v) => {
  if (cache.has(v)) return cache.get(v);
  if (v && v.className && (<any>models)[v.className]) {
    const instance = new (<any>models)[v.className]();
    cache.set(v, instance);
    for (const p in instance) {
      Object.defineProperty(instance, p, {
        set(x) { v[p] = x; }, // not necessary if your models were immutable
        get() { return v[p]; },
        enumerable: true,
      });
    }
    return instance;
  }
  return v;
});
This essentially makes the instance a proxy over v, getting all its values from there. When flatted does tie the knot by assigning revived values to properties of v, they'll also become available on instance. Apart from the .className, no properties of v are accessed during the reviver call. Only when you access ret.something, the object properties will be read, and will contain the revived objects by then.
The downside of this approach is that a) all models will need to declare and initialise their properties upfront (which your B for example doesn't do) so that the for (const p in instance) loop works, and b) all your model properties will be replaced with accessors, which is inefficient and may conflict with the internal implementation.
An alternative is to forward the property assignments that flatted does on the original object to the newly constructed instance, even if they happen after the reviver call:
const cache = new WeakMap();
const ret = parse(str, (k, v) => {
  if (cache.has(v)) return cache.get(v);
  if (v && v.className && (<any>models)[v.className]) {
    const instance = new (<any>models)[v.className]();
    cache.set(v, instance);
    Object.assign(instance, v);
    for (const p in v) {
      Object.defineProperty(v, p, {
        set(x) { instance[p] = x; },
        get() { return instance[p]; }, // not truly necessary but helps when `v` is logged
      });
    }
    return instance;
  }
  return v;
});
This basically reflects how you originally constructed the circular reference using a.setB(new B(a)), except by directly assigning the property instead of using the model's setB method, which the reviver doesn't know about. When using this approach, make sure to explicitly document that all your models must support constructor calls without arguments, and direct property assignment (be it via Object.assign or instance[p] = x). If setters are necessary, use accessor properties (set syntax) instead of set…() methods.
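Under those constraints, a model written to cooperate with this reviver might look like the following sketch (the class and field names are purely illustrative):

```javascript
// Illustrative model: no-arg constructor, plus an accessor property
// (set syntax) instead of a setB() method, so direct assignment works.
class A {
  #b = null;                  // private backing field
  get b() { return this.#b; }
  set b(value) { this.#b = value; }
}

const a = new A();            // constructable without arguments
Object.assign(a, { b: 42 });  // direct assignment goes through the setter
console.log(a.b); // 42
```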
The simplest and probably fastest approach would be to not return a new object from the reviver at all, but keep its identity:
const ret = parse(str, (k, v) => {
  if (v && v.className) {
    const model = (<any>models)[v.className];
    if (model && Object.getPrototypeOf(v) != model.prototype) {
      Object.setPrototypeOf(v, model.prototype);
    }
  }
  return v;
});
This restores the prototype methods by simply swapping out the prototype of the revived object for the expected one. However, this means that your models are instantiated without a constructor call, they may not use enumerable getters, and they cannot maintain private state even if they expose it through .toJSON correctly.
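The prototype swap can be seen in isolation with a small sketch (class name illustrative): a plain deserialised object regains its methods without any constructor call.

```javascript
// Swapping the prototype restores methods without calling the constructor.
class Greeter {
  hello() { return `hi ${this.name}`; }
}

const plain = JSON.parse('{"name":"Ann"}'); // what a reviver typically sees
console.log(typeof plain.hello);            // 'undefined'

Object.setPrototypeOf(plain, Greeter.prototype);
console.log(plain.hello());                 // 'hi Ann'
console.log(plain instanceof Greeter);      // true
```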

Default parameter in function to a reference in c++ unordered_map

I am a JS dev, and to implement this function header I do:
function gridTraveler(m, n, memo = {})
Here, I know that the object will be passed by reference in JS.
So in C++, I decided to do:
int gridTraveler(int m, int n, std::unordered_map<int,int>& memo = {})
But I get an error: "initial value of reference to non-const must be an lvalue".
How should I give the function a default parameter while still passing the hashmap by reference?
NOTE: I will be editing the memo object by adding keys and values in the function body.
What you want can probably be achieved with a two-argument overload that takes care of the unordered map's lifetime.
int gridTraveler(int m, int n, std::unordered_map<int,int>& memo) {
  //your implementation here
}
int gridTraveler(int m, int n) {
  std::unordered_map<int,int> memo = {};
  return gridTraveler(m, n, memo);
}
Consider explicitly telling it what you really want, or creating multiple functions: one that takes a map and one that doesn't.
Possibility 1: You want to sometimes provide a map, but the map serves no operational value; it's just optional output. C++ does its best to prevent NULL references, so use a pointer. Adding values to a throwaway map would also be silly if you didn't really need to.
int gridTraveler(int m, int n, std::unordered_map<int,int>* const memo = nullptr)
{
  //code stuff for the operation
  if (memo) // optional output-related operations occur only if it was provided
  {
    //populate memo if it is available
  }
}
Possibility 2: You want to sometimes output a map that is internally used in the operation. When it is not provided, you want to use an empty map and throw it away afterwards. You need a map for your operation, but it's a byproduct and you only sometimes care about its values.
int gridTraveler(int m, int n, std::unordered_map<int,int>& memo)
{
  //do your thing...
}
int gridTraveler(int m, int n)
{
  std::unordered_map<int,int> memo;
  const int returnval = gridTraveler(m, n, memo);
  //last chance to do something with memo here before it gets thrown out
  return returnval;
}
There are a lot of other possibilities. I can think of scenarios where the map would best be implemented as a class member, with gridTraveler being a member function of that class, and scenarios where the map would live in global scope, initialized once per process to an empty map.
You cannot bind a non-const reference to a temporary value.
Here: std::unordered_map<int, int>& memo = {} // the "{}" is a temporary
Use const std::unordered_map<int, int>& memo = {}, or take the parameter by value, without the "&".

Is it possible to alter behavior of any object's method in advance?

I need to achieve the following: if one accesses method foo() of object obj (i.e. performs obj.foo()), then what happens instead is obj.bar("foo").
In this particular scenario I could do it by simply defining a getter for obj.foo, like:
Object.defineProperty(obj, "foo", {
  get: () => obj.bar("foo")
});
But is it possible to do that for every method in advance (even ones that are not defined yet)? I mean without providing getters for every single method.
The best solution that I can see for now is iterating all the methods of obj after its full definition:
for (let methodName in obj)
  if (typeof obj[methodName] === "function" && methodName !== "bar")
    Object.defineProperty(obj, methodName, {
      get: () => obj.bar(methodName)
    });
Another one is starting every method with the same repeated line of code, which is unpleasant.
Are there solutions without iterating or self-repeating?
The way to do this in modern JavaScript is with a Proxy object.
It is able (among other things) to forward all property accesses on an object to a handler.
So we can do something like this:
let origin = {
  bar(word) {
    console.log('bar: ', word);
  }
};
// wrap the origin object in a Proxy handler
let p = new Proxy(origin, {
  get(target, prop) {
    // wrap the value in a function
    return () => target.bar(prop);
  }
});
p.foo(); // outputs "bar: foo"
p.foo(); // outputs "bar: foo"
Note that this presumes you want to do p.foo(), rather than just p.foo. It wraps the value you want in a function. It will also forward the property access even if the property actually exists.
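If forwarding even existing properties is undesirable, a variation of this sketch only synthesises a method when the property is missing (the origin/bar names follow the example above):

```javascript
// Variation: only synthesise a method when the property is missing,
// so real properties and methods of the target keep working.
const origin = {
  greeting: 'hello',
  bar(word) { return `bar: ${word}`; }
};

const p = new Proxy(origin, {
  get(target, prop, receiver) {
    if (prop in target) return Reflect.get(target, prop, receiver);
    return () => target.bar(prop); // unknown names become bar(name) calls
  }
});

console.log(p.greeting); // 'hello'  (real property passes through)
console.log(p.foo());    // 'bar: foo'
```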

Using both `get` and `apply` Proxy traps on plain JS object

The following code is a simple proxy that logs out the "gets" that were trapped:
var p = new Proxy({}, {
  get: function(target, property, receiver) {
    console.log("getting: ", property);
    return target[property];
  }
});
When I coerce this into a String with "hello " + p, I get the following output in the console:
getting: Symbol(Symbol.toPrimitive)
getting: valueOf
getting: toString
getting: Symbol(Symbol.toStringTag)
"hello [object Object]"
Everything is fine so far, but let's do something a little sneaky and proxy a function, while actually still using it as a proxy to the plain object from the last example. The reason I want this is that I'd like to be able to capture both gets and applys on this object.
Notice the return target.obj[property] part: we're really still proxying obj; we're just doing it via fn:
var fn = function(){};
fn.obj = {};
var p = new Proxy(fn, {
  get: function(target, property, receiver) {
    console.log("getting: ", property);
    return target.obj[property];
  }
});
Now, I'd have thought this would produce exactly the same output as the last example for "hello " + p, but I was wrong:
getting: Symbol(Symbol.toPrimitive)
getting: valueOf
getting: toString
getting: Symbol(Symbol.toStringTag)
"hello [object Function]"
Notice that it has resulted in a Function string tag rather than an Object one. What's going on here? It's as if toString is being called on fn rather than obj. (Edit: But we can add fn.toString = function(){ return "fn"; } and it doesn't change the output, so maybe it's not fn that is being stringified here?)
If you put a debugger statement in there, you'll see it's actually returning fn.obj.toString as you'd expect, but for some reason the final output carries a Function tag rather than an Object one (though I'm not entirely sure which function is being stringified). Thanks for your help!
P.S. I haven't explained the full context of my use case (short version: it's for a DSL, so bending "good practice" is fine), and so suggesting alternative patterns to achieve both get and apply traps on an object (in effect) may not be relevant to my particular case. I'd really just like to understand why the above approach isn't working like I expect it to, but would also like to ensure the question is broad enough to help future readers in a similar situation.
I think I've found the bug. When we return a function, we need to bind it to target.obj; otherwise it's later invoked with the proxy itself as this, and since the proxy's target is a function, Object.prototype.toString reports "[object Function]". So here's the updated, working code:
var fn = function(){};
fn.obj = {};
fn.toString = function(){ return "fn"; };
var p = new Proxy(fn, {
  get: function(target, property, receiver) {
    console.log("getting: ", property);
    let result = target.obj[property];
    if (typeof result === 'function') {
      result = result.bind(target.obj);
    }
    return result;
  }
});
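Stripped of the logging, the fix can be verified directly: with the methods bound to target.obj, the `this` inside valueOf/toString is the plain object, so string coercion now reports an Object tag.

```javascript
// Binding forwarded methods to target.obj makes coercion use the plain
// object, not the function target, for its [[Class]] tag.
const fn = function () {};
fn.obj = {};
const p = new Proxy(fn, {
  get(target, property) {
    let result = target.obj[property];
    if (typeof result === 'function') {
      result = result.bind(target.obj);
    }
    return result;
  }
});

console.log('hello ' + p); // 'hello [object Object]'
```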

What is the appropriate / recommended way to use hasOwnProperty?

Provided that the object MAY contain an own property called "hasOwnProperty":
> a={abc: 123};
{ abc: 123 }
> a.hasOwnProperty("abc");
true
> a['hasOwnProperty'] = 1;
1
> a.hasOwnProperty("abc");
TypeError: a.hasOwnProperty is not a function
...
The following works, but it's kind of an ugly interface if you think about Object.keys(), Object.assign(), etc. So, is there a better way?
> Object.hasOwnProperty.call(a, "abc");
true
> Object.hasOwnProperty.call(a, "hasOwnProperty");
true
And why shouldn't this solution be the only recommended way? Calling methods directly on an object seems like a recipe for failure, especially if the object contains external data (not under one's control).
The appropriate/recommended way to use hasOwnProperty is as a filter, a means to determine whether an object... well, has that property; just the way you are using it in your second command, a.hasOwnProperty('abc').
Overwriting the hasOwnProperty property with a['hasOwnProperty'] = 1, while safe and valid, simply removes the ability to use the hasOwnProperty function on that object.
Am I missing your true question here? It seems like you already knew this from your example.
By
'using methods directly from an object seems like a recipe for failure'
are you referring to something like this:
> dog = {speak: function() {console.log('ruff! ruff!')}};
> dog.speak(); // ruff! ruff!
Because that is extremely useful in many ways as you can imagine.
If you can use ECMAScript 2015, you can try Reflect.getOwnPropertyDescriptor.
It returns a property descriptor of the given property if it exists on the object, undefined otherwise.
To simplify you can create this function:
var hasOwnProp = (obj, prop) => Reflect.getOwnPropertyDescriptor(obj, prop) !== undefined;

var obj = new Object();
obj.prop = 'exists';

console.log('Using hasOwnProperty')
console.log('prop: ' + obj.hasOwnProperty('prop'));
console.log('toString: ' + obj.hasOwnProperty('toString'));
console.log('hasOwnProperty: ' + obj.hasOwnProperty('hasOwnProperty'));

console.log('Using getOwnPropertyDescriptor')
console.log('prop: ' + hasOwnProp(obj, 'prop'));
console.log('toString: ' + hasOwnProp(obj, 'toString'));
console.log('hasOwnProperty: ' + hasOwnProp(obj, 'hasOwnProperty'));

obj['hasOwnProperty'] = 1;
console.log('hasOwnProperty: ' + hasOwnProp(obj, 'hasOwnProperty'));
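As an aside, if your runtime supports ES2022, Object.hasOwn is a built-in static alternative that sidesteps the shadowing problem in the same way:

```javascript
// Object.hasOwn (ES2022) is unaffected by an own "hasOwnProperty" key
// shadowing the inherited method.
const obj = { prop: 'exists' };
obj.hasOwnProperty = 1; // shadow the inherited method

console.log(Object.hasOwn(obj, 'prop'));           // true
console.log(Object.hasOwn(obj, 'hasOwnProperty')); // true
console.log(Object.hasOwn(obj, 'toString'));       // false (inherited, not own)
```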
Any built-in can be overridden in JS; it's generally considered best practice to avoid overriding native methods where possible. If the original functionality is preserved it's OK, as it will still behave as expected, and it could even be extended further if overridden correctly again.
Given that, I recommend remapping the keys to avoid overriding them. If remapping the keys is not an option, you can make it feel a little less messy by locally referencing/wrapping Object.hasOwnProperty or Object.prototype.hasOwnProperty. In the case of hasOwnProperty you could also implement an iterator method (since iterating over enumerable non-inherited properties is a very common use of hasOwnProperty) to reduce the likelihood of its use. There's still the risk of someone less familiar with your object attempting to iterate directly, so I really feel that key mapping is the safer bet, even if it causes a slight difference between server-side keys and local ones.
A key mapping could be as simple as a suffix: using hasOwnProperty_data instead of hasOwnProperty means objects behave as expected, and your IDE's autocomplete will likely still be close enough to convey what the property represents.
A mapping function might look like the following:
function remapKeys(myObj){
  for(var key in myObj){
    if(Object.prototype.hasOwnProperty.call(myObj, key)){
      if((key in Object) && Object[key] !== myObj[key]){ // Check key is present on Object and that it's different, i.e. an overridden property
        myObj[key + "_data"] = myObj[key];
        delete myObj[key]; // Remove the key
      }
    }
  }
  return myObj; // Alters the object directly, so no need to return, but safer
}
// Test
var a = {};
a.hasOwnProperty = function(){ return 'overridden'; };
a.otherProp = 'test';
remapKeys(a);
console.log(a); // { hasOwnProperty_data: function(){ return 'overridden'; }, otherProp: 'test' }
console.log(a.hasOwnProperty('otherProp')); // true
