Objects are equal even though properties are not - javascript

I've recently run into quite an interesting problem with my TypeScript + React code. The problem is as follows:
I trigger a method on my Trip object called updateStop when a new stop is received from a websocket connection. This method has the following implementation:
public updateStop(stop: PassTimeUpdate): Trip {
    const stops = this.getStops();
    const stopToUpdate = stops.find(s => s.userStopCode === stop.UserStopCode);
    if (!stopToUpdate)
        return this;
    stopToUpdate.updateStop(stop);
    return this;
}
The getStops method has the following implementation:
public getStops(): IteneraryStop[] {
    return this.itenaries.flatMap(i => i.stops);
}
And the updateStop method has the following implementation:
public updateStop(updatedStopData: PassTimeUpdate) {
    console.log(`Updating stop: ${this.name} from ${this.tripStopStatus} to ${updatedStopData.TripStopStatus}`);
    this.tripStopStatus = updatedStopData.TripStopStatus;
}
The hierarchy is as follows:
Trip has Iteneraries has Stops.
Now my problem comes when I want to compare the previous version of the trip to the new version of the trip.
if(oldTrip.getStops() === updatedTrip.getStops())
This results in false, obviously, as the stops have been updated and thus are not the same. However, the following:
if(oldTrip === updatedTrip)
Results in true. I've looked on the internet for some way to "clone" an object, as I think the problem is that the identity of the objects matches, even though their content isn't the same?
Because oldTrip and updatedTrip match, React doesn't trigger the useEffect hook when calling setTrip(updatedTrip), as it thinks oldTrip and updatedTrip are the same.
What would be a good way to force the trips to be "different"? I have tried having some kind of internalID string which I update every time the updateStop method is executed, however this still results in the same behaviour.

I'm assuming that oldTrip and updatedTrip are instances of Trip. It sounds like you are manipulating a single instance that is stored in both oldTrip and updatedTrip, and then expecting the comparison of those variables to come out differently than it did before.
What you are actually doing is comparing the same instance of Trip with itself. When you put the instance into a variable, it didn't copy it; it just made another pointer to the same one.
The comparison is true, since even oldTrip is just a pointer to that same instance. Whenever you manipulated the instance through its methods, oldTrip would have changed as well, as it's just a pointer to the instance. And when you compare them, the JS evaluator asks "are these the same instance?", which is true.
In addition, when you use a comparison operator on any object (that includes arrays, instances of classes, etc.) like this, it does not ask "are the contents the same". In the first example, it will return false every time, because getStops() builds a brand-new array with flatMap on each call, so the two results can never be the same instance. In JS,
[1,2,3] == [1,2,3] will return false since, again, it is comparing pointers, and here there are two new arrays initialized on both sides. It does not implicitly compare the contents.
One reason people fall into this trap is that comparing primitives like strings and numbers works fine. But those are not objects, so you aren't comparing a reference but the real underlying value.
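To make React see a change, the usual fix is to hand setTrip a brand-new object instead of the mutated original. A minimal sketch, assuming Trip's own properties can be shallow-copied (note that Object.assign does not deep-copy the itineraries):
// Hedged sketch, not the original author's code: clone the Trip so the
// reference changes and React's comparison sees a different object.
const newTrip = Object.assign(
    Object.create(Object.getPrototypeOf(oldTrip)), // keep Trip's prototype/methods
    oldTrip                                        // shallow-copy own properties
);
newTrip.updateStop(stop); // mutate the clone, not the object React already holds
setTrip(newTrip);         // oldTrip !== newTrip, so the useEffect hook fires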

Related

WeakSet: garbage collection doesn't work? [duplicate]

The WeakSet is supposed to store elements by weak reference. That is, if an object is not referenced by anything else, it should be cleaned from the WeakSet.
I have written the following test:
var weakset = new WeakSet(),
    numbers = [1, 2, 3];
weakset.add(numbers);
weakset.add({name: "Charlie"});
console.log(weakset);
numbers = undefined;
console.log(weakset);
Even though my [1, 2, 3] array is not referenced by anything, it's not being removed from the WeakSet. The console prints:
WeakSet {[1, 2, 3], Object {name: "Charlie"}}
WeakSet {[1, 2, 3], Object {name: "Charlie"}}
Why is that?
Plus, I have one more question. What is the point of adding objects to WeakSets directly, like this:
weakset.add({name: "Charlie"});
Are those Traceur's glitches or am I missing something?
And finally, what is the practical use of WeakSet if we cannot even iterate through it nor get the current size?
it's not being removed from the WeakSet. Why is that?
Most likely because the garbage collector has not yet run. However, you say you are using Traceur, so it just might be that they're not properly supported. I wonder how the console can show the contents of a WeakSet anyway.
What is the point of adding objects to WeakSets directly?
There is absolutely no point in adding object literals to WeakSets: nothing else references a fresh literal, so it can be garbage-collected right away, and since you hold no reference to it, you can never even ask whether it is in the set.
What is the practical use of WeakSet if we cannot even iterate through it nor get the current size?
All you can get is one bit of information: Is the object (or generically, value) contained in the set?
This can be useful in situations where you want to "tag" objects without actually mutating them (setting a property on them). Lots of algorithms contain some sort of "if x was already seen" condition (a JSON.stringify cycle detection might be a good example), and when you work with user-provided values the use of a Set/WeakSet would be advisable. The advantage of a WeakSet here is that its contents can be garbage-collected while your algorithm is still running, so it helps to reduce memory consumption (or even prevents leaks) when you are dealing with lots of data that is lazily (possibly even asynchronously) produced.
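As a hedged illustration of that "if x was already seen" pattern, here is a small cycle detector that tags visited objects in a WeakSet instead of mutating them (hasCycle is a made-up name, not a standard API):
function hasCycle(value, ancestors = new WeakSet()) {
    if (typeof value !== "object" || value === null) return false;
    if (ancestors.has(value)) return true; // back-reference to an ancestor: a cycle
    ancestors.add(value);                  // tag the object without touching it
    const found = Object.values(value).some(v => hasCycle(v, ancestors));
    ancestors.delete(value);               // un-tag when leaving this branch
    return found;
}

const a = { name: "a" };
a.self = a;
console.log(hasCycle(a));               // true
console.log(hasCycle({ b: { c: 1 } })); // false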
This is a really hard question. To be completely honest I had no idea in the context of JavaScript so I asked in esdiscuss and got a convincing answer from Domenic.
WeakSets are useful for security and validation reasons, when you want to be able to isolate a piece of JavaScript. They allow you to tag an object to indicate it belongs to a special set of objects.
Let's say I have a class ApiRequest:
class ApiRequest {
    constructor() {
        // bring object to a consistent state, use platform code you have no direct access to
    }
    makeRequest() {
        // do work
    }
}
Now, I'm writing a JavaScript platform - my platform allows you to run JavaScript to make calls - and to make those calls you need an ApiRequest. I only want you to make ApiRequests with the objects I give you, so you can't bypass any constraints I have in place.
However, at the moment nothing is stopping you from doing:
ApiRequest.prototype.makeRequest.call(null, args); // make request as function
Object.create(ApiRequest.prototype).makeRequest(); // no initialization
function Foo(){}; Foo.prototype = ApiRequest.prototype; new Foo().makeRequest(); // no super
And so on. Note that you can't keep a normal list or array of ApiRequest objects, since that would prevent them from being garbage-collected. Other than through a closure, anything can be reached with public methods like Object.getOwnPropertyNames or Object.getOwnPropertySymbols. So you one-up me and do:
const requests = new WeakSet();
class ApiRequest {
    constructor() {
        requests.add(this);
    }
    makeRequest() {
        if (!requests.has(this)) throw new Error("Invalid access");
        // do work
    }
}
Now, no matter what I do - I must hold a valid ApiRequest object to call the makeRequest method on it. This is impossible without a WeakMap/WeakSet.
So in short - WeakMaps are useful for writing platforms in JavaScript. Normally this sort of validation is done on the C++ side, but adding these features enables moving more of it into JavaScript.
(Of course, everything a WeakSet does a WeakMap that maps values to true can also do, but that's true for any map/set construct)
(Like Bergi's answer suggests, there is never a reason to add an object literal directly to a WeakMap or a WeakSet)
By definition, a WeakSet has only three key functionalities:
Weakly link an object into the set
Remove a link to an object from the set
Check if an object has already been linked to the set
Sounds familiar?
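For reference, that entire API fits in a few lines (a trivial demo using only standard WeakSet methods):
const ws = new WeakSet();
const obj = {};
ws.add(obj);              // weakly link an object into the set
console.log(ws.has(obj)); // true - check whether it has been linked
ws.delete(obj);           // remove the link
console.log(ws.has(obj)); // false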
In some applications, developers may need a quick way to iterate through a series of data that is polluted by lots and lots of redundancy, picking out only the items which have not been processed before (i.e. unique ones). A WeakSet can help you there. See the example below:
var processedBag = new WeakSet();
var nextObject = getNext();
while (nextObject !== null) {
    // Have we already processed this object?
    if (!processedBag.has(nextObject)) {
        // If not, process it and remember it
        process(nextObject);
        processedBag.add(nextObject);
    }
    nextObject = getNext();
}
One of the best data structures for the application above is a Bloom filter, which works very well for massive data sizes. However, you can use a WeakSet for this purpose as well.
A "weak" set or map is useful when you need to keep an arbitrary collection of things but you don't want their presence in the collection to prevent those things from being garbage-collected if memory gets tight. (If garbage collection does occur, the "reaped" objects silently disappear from the collection; since the collection is not enumerable, you can't actually tell that they're gone.)
They are excellent, for example, for use as a look-aside cache: "have I already retrieved this record, recently?" Each time you retrieve something, put it into the map, knowing that the JavaScript garbage collector will be the one responsible for "trimming the list" for you, and that it will automatically do so in response to prevailing memory conditions (which you can't reasonably anticipate).
The only drawback is that these types are not "enumerable." You can't iterate over a list of entries – probably because this would likely "touch" those entries and so defeat the purpose. But, that's a small price to pay (and you could, if need be, "code around it").
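A hedged sketch of that look-aside-cache idea, using a WeakMap keyed by the record object (expensiveSummary is a hypothetical function, not something from this answer):
const cache = new WeakMap();

function summarize(record) {
    if (!cache.has(record)) {
        // expensiveSummary is hypothetical - any costly derivation would do
        cache.set(record, expensiveSummary(record));
    }
    return cache.get(record); // the entry silently vanishes once `record` is unreachable
}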
WeakSet is a simplification of WeakMap for cases where your value is always going to be the boolean true. It allows you to tag JavaScript objects so as to do something with them only once, or to maintain their state in respect to a certain process. In theory, as it doesn't need to hold a value, it should use a little less memory and perform slightly faster than WeakMap.
var [touch, untouch] = (() => {
    var seen = new WeakSet();
    return [
        // touch: true if already seen; otherwise tag the value and report false
        value => seen.has(value) || (seen.add(value), !1),
        // untouch: true if never seen; otherwise un-tag the value and report false
        value => !seen.has(value) || (seen.delete(value), !1)
    ];
})();
function convert(object) {
    if (touch(object)) return;     // already converted once, skip
    extend(object, yunoprototype); // Made up.
}

function unconvert(object) {
    if (untouch(object)) return;   // never converted, skip
    del_props(object, Object.keys(yunoprototype)); // Never do this IRL.
}
Your console was probably showing those contents because garbage collection had not taken place yet; since the object wasn't garbage-collected, it still appeared in the WeakSet.
If you really want to see whether a WeakSet still holds a reference to a certain object, then use the WeakSet.prototype.has() method. This method, as the name implies, returns a boolean indicating whether the object still exists in the WeakSet.
Example:
var weakset = new WeakSet(),
    numbers = [1, 2, 3];
weakset.add(numbers);
weakset.add({name: "Charlie"});
console.log(weakset.has(numbers)); // true
numbers = undefined;
console.log(weakset.has(numbers)); // false - but only because has(undefined) is always false
Let me answer the first part, and try to avoid confusing you further.
The garbage collection of dereferenced objects is not observable! It would be a paradox, because you need an object reference to check if it exists in a map. But don't trust me on this, trust Kyle Simpson:
https://github.com/getify/You-Dont-Know-JS/blob/1st-ed/es6%20%26%20beyond/ch5.md#weakmaps
The problem with a lot of explanations I see here, is that they re-reference a variable to another object, or assign it a primitive value, and then check if the WeakMap contains that object or value as a key. Of course it doesn't! It never had that object/value as a key!
So the final piece to this puzzle: why does inspecting the WeakMap in a console still show all those objects there, even after you've removed all of your references to those objects? Because the console itself keeps persistent references to those Objects, for the purpose of being able to list all the keys in the WeakMap, because that is something that the WeakMap itself cannot do.
While searching for use cases of WeakSet, I found these points:
"The WeakSet is weak, meaning references to objects in a WeakSet are held weakly. If no other references to an object stored in the WeakSet exist, those objects can be garbage collected."
They are black boxes: we only get any data out of a WeakSet if we have both the WeakSet and a value.
Use cases:
1 - to avoid bugs
2 - it can be very useful in general to avoid any object being visited/set up twice
Reference: https://esdiscuss.org/topic/actual-weakset-use-cases
3 - the contents of a WeakSet can be garbage-collected
4 - the possibility of lowering memory utilization
Reference: https://www.geeksforgeeks.org/what-is-the-use-of-a-weakset-object-in-javascript/
Example on WeakSet: https://exploringjs.com/impatient-js/ch_weaksets.html
I advise you to learn more about the weak concept in JS: https://blog.logrocket.com/weakmap-weakset-understanding-javascript-weak-references/

What are the harmful effects of manipulating an object passed in as a parameter? [duplicate]

Eclipse has an option to warn on assignment to a method's parameter (inside the method), as in:
public void doFoo(int a){
    if (a < 0){
        a = 0; // this will generate a warning
    }
    // do stuff
}
Normally I try to activate (and heed) almost all available compiler warnings, but in this case I'm not really sure whether it's worth it.
I see legitimate cases for changing a parameter in a method (e.g.: Allowing a parameter to be "unset" (e.g. null) and automatically substituting a default value), but few situations where it would cause problems, except that it might be a bit confusing to reassign a parameter in the middle of the method.
Do you use such warnings? Why / why not?
Note:
Avoiding this warning is of course equivalent to making the method parameter final (only then it's a compiler error :-)). So this question Why should I use the keyword "final" on a method parameter in Java? might be related.
The confusing part is the reason for the warning. If you reassign a new value to a parameter inside the method (possibly conditionally), it is no longer clear what a stands for. That's why it is seen as good style to leave method parameters unchanged.
For me, as long as you do it early and clearly, it's fine. As you say, doing it buried deep in four conditionals half-way into a 30-line function is less than ideal.
You also obviously have to be careful when doing this with object references, since calling methods on the object you were given may change its state and communicate information back to the caller, but of course if you've subbed in your own placeholder, that information is not communicated.
The flip side is that declaring a new variable and assigning the argument (or a default if argument needs defaulting) to it may well be clearer, and will almost certainly not be less efficient -- any decent compiler (whether the primary compiler or a JIT) will optimize it out when feasible.
Assigning a method parameter is not something most people expect to happen in most methods. Since we read the code with the assumption that parameter values are fixed, an assignment is usually considered poor practice, if only by convention and the principle of least astonishment.
There are always alternatives to assigning method parameters: usually a local temporary copy is just fine. But generally, if you find you need to control the logic of your function through parameter reassignment, it could benefit from refactoring into smaller methods.
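For instance, the "local temporary copy" alternative might look like this (a sketch in JavaScript for consistency with the rest of the page; greet and its default are made up):
function greet(name) {
    // Default into a new local instead of reassigning the parameter,
    // so `name` still means "what the caller passed" everywhere below.
    const effectiveName = name == null ? "guest" : name;
    console.log("Hello, " + effectiveName);
}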
Reassigning to the method parameter variable is usually a mistake if the parameter is a reference type.
Consider the following code:
MyObject myObject = new MyObject();
myObject.Foo = "foo";
doFoo(myObject);
// what's the value of myObject.Foo here?

public void doFoo(MyObject myFoo){
    myFoo = new MyObject("Bar");
}
Many people will expect that after the call to doFoo, myObject.Foo will equal "Bar". Of course, it won't, because Java is not pass-by-reference but pass-by-reference-value; that is to say, a copy of the reference is passed to the method. Reassigning that copy only has an effect in the local scope, not at the call site. This is one of the most commonly misunderstood concepts.
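JavaScript behaves the same way here, so a hedged translation of the example above reads:
function doFoo(myFoo) {
    myFoo = { Foo: "Bar" }; // rebinds only the local copy of the reference
}

var myObject = { Foo: "foo" };
doFoo(myObject);
console.log(myObject.Foo); // still "foo" - the call site never sees the reassignment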
Different compiler warnings can be appropriate for different situations. Sure, some are applicable to most or all situations, but this does not seem to be one of them.
I would think of this particular warning as the compiler giving you the option to be warned about a method parameter being reassigned when you need it, rather than a rule that method parameters should not be reassigned. Your example constitutes a perfectly valid case for it.
I sometimes use it in situations like these:
void countdown(int n)
{
    for (; n > 0; n--) {
        // do something
    }
}
to avoid introducing a variable i in the for loop. Typically I only use these kinds of 'tricks' in very short functions.
Personally, I very much dislike 'correcting' parameters inside a function this way. I prefer to catch these cases with asserts and make sure that the contract is right.
I usually don't need to assign new values to method parameters.
As to best practices - the warning also avoids confusion when facing code like:
public void foo() {
    int a = 1;
    bar(a);
    System.out.println(a); // prints 1: bar() cannot change the caller's a
}

public void bar(int a) {
    a++;
}
You should write code with no side effects: every method should be a function that doesn't change any state. Otherwise it's a command, and it can be dangerous. See the definitions of command and function on the DDD website:
Function: an operation that computes and returns a result without observable side effects.
Command: an operation that effects some change to the system (for example, setting a variable); an operation that intentionally creates a side effect.
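A small sketch of that distinction, with illustrative names (again in JavaScript rather than Java):
// A function: computes a result, changes nothing observable.
function totalPrice(items) {
    return items.reduce(function(sum, item) { return sum + item.price; }, 0);
}

// A command: intentionally changes the system (mutates the cart).
function applyDiscount(cart, rate) {
    cart.items.forEach(function(item) { item.price *= (1 - rate); });
}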

Javascript persistent-sorting Object "class"

According to the specification, JSON elements (and JavaScript objects) are unordered. So even though, in almost all cases, when you iterate over a JavaScript object you get the elements in the same order they were defined, you definitely cannot trust that order, because the engine is allowed to alter it.
This is extremely rare. I have been able to observe it one time, but I can't find that code right now and I don't remember the exact version of the JS engine (it was Node, though). If I manage to find it, I will add it to this post.
That being said, the point is that code relying on this behaviour can (and should) be considered buggy because, even though it will work as expected in most engines, it may fail, typically because of internal engine optimisations.
For example:
"use strict";
var x = {
    b: 23,
    a: 7
};

function someSorting(input) {
    var output = {};
    Object.keys(input).sort().map(
        function(k) {
            output[k] = input[k];
        }
    );
    return output;
}
x = someSorting(x);
// Some smart engine could notice that input and output objects have the
// exact same properties and guess that, attending the unordered nature of
// javascript Object, there is no need to actually execute someSorting();
console.log(x);
// Usually this will display: { a: 7, b: 23 }
// But we could perfectly well get: { b: 23, a: 7 }
I know there is plenty of literature (even Stack Overflow questions) about this (NON-)issue and about "workarounds" to achieve the expected behaviour by sorting keys in a separate array.
But doing so makes the code much messier than simply trusting the key order.
I'm pretty sure this can be achieved in a more elegant fashion by implementing a so-called "sObject" alternative that has the native Object as its prototype but overloads its native iterator and setter so that:
When any new property is added, its key is appended to an Array index maintained under the hood.
When an sObject instance is iterated, our customized iterator uses that index to retrieve the elements in the right order.
In summary: the actual Object specification is right because, in most cases, property order doesn't matter. So I think that engine optimisations that could mess with it are wonderful.
But it would also be wonderful to have an alternative sObject with which we could do something like:
var x = new sObject({b: 23, a: 7});
...and trust that we can iterate it in that same exact order or, at least, perform some sorting task over it and trust that the order will not be altered.
Of course, I'm initializing it with a native JavaScript Object, so in fact we theoretically can't trust that it will be populated in the right order (even though I can't imagine why any engine optimisation would alter it before any operation occurs).
I used that notation for brevity and, I confess, because I expect it to always work in that case (even though I'm not really sure). However, we could also sort it later (which, in most cases, is what we will do) or use another kind of initialization, such as providing a JSON string or an array of objects (or arrays) with single key and value pairs.
My question is: does such a thing already exist? I wasn't able to find it, but surely I'm not the first guy to think of this...
I can try to implement it (I'm thinking about that). I think it's possible and that I could achieve it. But it's not that simple, so first I want to be sure that I'm not reinventing the wheel...
So any comments, suggestions, etc... will be welcome.
Sure, you could do all this. You will need some machinery such as Object.observe, which is currently only available in Chrome. We can define this as follows:
function myObject(object) {
    // If the object already has keys, bring them in in whatever order.
    var keys = Object.keys(object);

    // Expose a custom `keys` property that returns our ordered list.
    Object.defineProperty(object, 'keys', { get: function() { return keys; } });

    // Watch the object for new or deleted properties.
    // Add new ones at the end, to preserve order.
    Object.observe(object, function(changes) {
        changes.forEach(function(change) {
            if (change.type === 'add') keys.push(change.name);
            if (change.type === 'delete') keys = keys.filter(function(key) {
                return key !== change.name; // keep everything except the deleted key
            });
        });
    });
    return object;
}
Note that Object.observe is asynchronous, so even after you add a property to the object, it won't be reflected in the custom keys property until after a tick of the clock, although in theory you could use Object.deliverChangeRecords.
The above approach uses a function which adds the new ordered key functionality to an existing object. Of course there are other ways to design this.
This "solution" obviously cannot control the behavior of for...in loops.
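For what it's worth, on engines with ES6 Proxy a similar design can intercept key enumeration itself, for...in included, since both Object.keys and for...in go through the ownKeys trap. A minimal sketch under that assumption (orderedObject is a made-up name):
function orderedObject(init) {
    var keys = [];
    var proxy = new Proxy({}, {
        set: function(target, prop, value) {
            if (!(prop in target)) keys.push(prop); // remember insertion order
            target[prop] = value;
            return true;
        },
        deleteProperty: function(target, prop) {
            keys = keys.filter(function(k) { return k !== prop; });
            return delete target[prop];
        },
        ownKeys: function() {
            return keys.slice(); // enumeration follows our recorded order
        }
    });
    // Populate in the enumeration order of `init` (good enough in practice).
    Object.keys(init || {}).forEach(function(k) { proxy[k] = init[k]; });
    return proxy;
}

var x = orderedObject({ b: 23, a: 7 });
console.log(Object.keys(x)); // ["b", "a"] - insertion order preserved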

Detecting if an object is changed in javascript

I have a function that is called every 1 second.
var latestObject; // this is updated separately; it depends on user input, so it may not be different every second
var previousObject;

function Tick(object) {
    if (latestObject !== previousObject) { // Problem is here
        previousObject = latestObject;     // or here
        // do stuff with latestObject;
    }
}
However, when latestObject is updated, its properties are changed; the variable is not set to a different object. So previousObject and latestObject are always equal, and the "do stuff" never happens.
I could do:
function Tick(object) {
    var latestObjectString = JSON.stringify(latestObject);
    if (latestObjectString !== previousObject) { // Problem is here
        previousObject = latestObjectString;     // or here
        // do stuff with latestObject;
    }
}
But then I'm doing JSON.stringify once every second; this seems inefficient, especially as latestObject is quite big and quite deep.
Wouldn't it be better to set previousObject to be a copy of latestObject, so that when properties on latestObject are changed, previousObject stays the same, and the copying only happens when the objects are different, which is less often than every second? But wouldn't there be a problem, as copyOfObject == Object would never be true?
(the object is mostly properties, but has a few functions that don't ever change).
(No jQuery)
Description of the problem
The problem here is indeed related to the fact that the same object is assigned to two different variables. Even if you change it through one of them, the change is visible through the other as well.
This example shows you what really happens (jsFiddle: http://jsfiddle.net/tadeck/4hFC2/):
var objA = {'a':10, 'b': 20};
var objB = objA; // same instance assigned to both names
objB.a = 30; // instance is modified, its "a" property is changed
// now, both objA.a and objB.a show "30", as objA and objB is the same instance
However, having two different objects is not so ideal either, as comparing them is non-trivial (proof here: http://jsfiddle.net/tadeck/GN2m4/).
Solution no. 1. for comparing the objects
To solve this problem:
You need to use two different objects (e.g. by using some solution similar to jQuery's .extend() to construct a new object from the existing one). You currently achieve that part using unnecessary serialization.
You need to compare them in a little more complex way (pretty universal solution for that is here: https://stackoverflow.com/a/1144249/548696).
In comparison to this, your solution may look less complex (at least in terms of code). I suggest using some JS performance tests to find out which is more reasonable. JSON.stringify() is not always natively supported, so it may be doing things that are similarly complex (and resource-consuming) to the alternative solution I mentioned.
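To give a feel for what "a little more complex" means, here is a very small deep-equality sketch that handles plain data only (the linked answer covers the many edge cases this ignores):
function deepEqual(a, b) {
    if (a === b) return true; // same primitive value or same instance
    if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) {
        return false;
    }
    var keysA = Object.keys(a), keysB = Object.keys(b);
    if (keysA.length !== keysB.length) return false;
    return keysA.every(function(k) { return deepEqual(a[k], b[k]); });
}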
Solution no. 2. for solving the overall issue of detecting the changes
The other option is to restructure your script and use e.g. flags to mark the object as changed by user input. That would save you from processing whole objects every second and may result in large efficiency gains.
The things you need to do in this case are:
In your user-input handlers, set the flag whenever the user changes some part of the object,
Optionally, you could first compare the specific value with the original object (if the user has changed it quickly and then reverted the change, just mark the value as not changed),
To limit the processing of the changed object, you could even mark, which properties were changed (so you process only these properties, nothing else),
To achieve part of this solution, you could even use JavaScript setters and getters, as described by John Resig.
But, as I mentioned, it may require rebuilding your script (which we haven't seen, so we cannot say whether that is necessary or whether it can be applied rather easily).
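A hedged sketch of solution no. 2, with a simple dirty flag set by the input handlers (onUserInput and doStuffWith are made-up names):
var latestObject = {};
var isDirty = false;

function onUserInput(key, value) { // called from your input handlers
    latestObject[key] = value;
    isDirty = true;                // mark the object as changed
}

function Tick() {
    if (!isDirty) return;          // nothing changed since the last tick
    isDirty = false;
    doStuffWith(latestObject);     // hypothetical processing step
}

setInterval(Tick, 1000);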

Using the Javascript slice() method with no arguments

I'm currently reading through this jQuery masking plugin to try to understand how it works, and in numerous places the author calls the slice() function without passing any arguments to it. For instance, here the _buffer variable is slice()d, and _buffer.slice() and _buffer seem to hold the same values.
Is there any reason for doing this, or is the author just making the code more complicated than it should be?
//functionality fn
function unmaskedvalue($input, skipDatepickerCheck) {
    var input = $input[0];
    if (tests && (skipDatepickerCheck === true || !$input.hasClass('hasDatepicker'))) {
        var buffer = _buffer.slice();
        checkVal(input, buffer);
        return $.map(buffer, function(element, index) {
            return isMask(index) && element != getBufferElement(_buffer.slice(), index)
                ? element : null;
        }).join('');
    }
    else {
        return input._valueGet();
    }
}
The .slice() method makes a (shallow) copy of an array, and takes parameters to indicate which subset of the source array to copy. Calling it with no arguments just copies the entire array. That is:
_buffer.slice();
// is equivalent to
_buffer.slice(0);
// also equivalent to
_buffer.slice(0, _buffer.length);
EDIT: Isn't the start index mandatory? Yes. And no. Sort of. JavaScript references (like MDN) usually say that .slice() requires at least one argument, the start index. Calling .slice() with no arguments is like saying .slice(undefined). In the ECMAScript Language Spec, step 5 in the .slice() algorithm says "Let relativeStart be ToInteger(start)". If you look at the algorithm for the abstract operation ToInteger(), which in turn uses ToNumber(), you'll see that it ends up converting undefined to 0.
Still, in my own code I would always say .slice(0), not .slice() - to me it seems neater.
array.slice() = array shallow copy, and is just a shorter form of array.slice(0).
Is there any reason for doing this, or is the author just making the code more complicated than it should be?
Yes, there may be a reason in the following cases (for which we have no clue whether they apply to the provided code):
checkVal() or getBufferElement() modify the content of the arrays passed to them (as second and first argument respectively). In this case the code author wants to prevent the global variable _buffer's content from being modified when calling unmaskedvalue().
The function passed to $.map runs asynchronously. In this case the code author wants to make sure that the passed callback will access the array content as it was during unmaskedvalue() execution (e.g. Another event handler could modify _buffer content after unmaskedvalue() execution and before $.map's callback execution).
If none of the above is the case then, yes, the code would work equally well without .slice(). In that case, maybe the code author wants to play it safe and avoid bugs from future code changes that could result in unforeseen _buffer content modifications.
Note:
When saying "prevent the global variable _buffer's content from being modified", it means achieving the following:
_buffer[0].someProp = "new value" would reflect in the copied array.
_buffer[0] = "new value" would not reflect in the copied array.
(For preventing changes also in the first bullet above, an array deep clone can be used, but that is outside the discussed context.)
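A quick demonstration of those two bullets:
var _buffer = [{ someProp: "old" }, "second"];
var copy = _buffer.slice();

_buffer[0].someProp = "new value"; // shared element object: visible through the copy
_buffer[1] = "replaced";           // rebinding a slot: NOT visible through the copy

console.log(copy[0].someProp); // "new value"
console.log(copy[1]);          // "second"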
Note 2:
In ES6
var buffer = _buffer.slice();
can also be written as
var buffer = [..._buffer];
