So I have concerns about accessing a class from within a class. The pseudocode below is not identical but is similar to the structure I have.
class Parent {
    constructor(name, children) {
        this.name = name;
        this.children = children;
    }
    AddChild(Child) {
        this.children.push(Child);
    }
    RemoveChild(Child) {
        this.children = this.children.filter(child => child != Child);
    }
}
class Child {
    constructor(name, age) {
        this.name = name;
        this.age = age;
    }
    cycle() {
        setInterval(() => {
            if (this.age >= 18) {
                // Remove this child from parent
            }
            this.age++;
        }, 10000);
    }
}
For this example, I need to remove a child from the parent when the child is above the age of 18. However, I cannot access the parent from within the Child. I have tried referencing the parent as a property of the child, but I have never used circular referencing before and don't know how safe it is to do so.
My questions are as follows:
For this example, what is a way of accessing the parent method RemoveChild() from the Child class?
Is circular referencing a safe practice to use for this example?
In your example, as you pointed out, you definitely need a reference to the Parent in the Child class; otherwise you cannot call a method on it, because you don't have an instance of it.
About circular references, you also pointed out that they can be dangerous. If a Child can live without a Parent, you should always check, when accessing its parent field, that it is actually set to a value (not null).
The classic problem with circular references is memory leaks. A reference-counting scheme (such as a smart pointer in C++ or Rust, e.g. shared_ptr or Rc) frees an object as soon as nothing references it any more, so a Parent and a Child that reference each other are never freed: each one keeps the other alive even when no other part of the program can reach them. You can no longer access or use them, yet the program still holds their memory, and such leaks can make it run out of memory in the long term. Modern JavaScript engines, however, use tracing (mark-and-sweep) garbage collectors, which do handle cycles: once neither object is reachable from your code, both are collected. So in JavaScript the cycle itself is not the main danger; the practical risk is forgetting to break references you no longer need, for example an active setInterval timer that keeps a Child (and, through the back-reference, its Parent) alive.
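A minimal sketch of the back-reference approach, assuming AddChild is made responsible for setting a parent field on the Child and that the interval handle is stored so it can be cleared (the parent and timer property names are illustrative):

class Parent {
    constructor(name, children = []) {
        this.name = name;
        this.children = children;
    }
    AddChild(child) {
        child.parent = this;              // back-reference so the child can reach its parent
        this.children.push(child);
    }
    RemoveChild(child) {
        child.parent = null;              // break the cycle when detaching
        this.children = this.children.filter(c => c !== child);
    }
}

class Child {
    constructor(name, age) {
        this.name = name;
        this.age = age;
        this.parent = null;               // set by Parent.AddChild
    }
    cycle() {
        this.timer = setInterval(() => {
            if (this.age >= 18) {
                if (this.parent) this.parent.RemoveChild(this); // guard: child may be orphaned
                clearInterval(this.timer);                      // stop the timer; an active interval would keep this Child reachable
                return;
            }
            this.age++;
        }, 10000);
    }
}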
I'm playing around with ES6 classes in JavaScript and I'm wondering if it's possible for a child class to inherit private properties/methods from a superclass, while allowing the subclass to mutate this private property without using any "set" methods (to keep it read-only).
For example, say I want to create a static private property called #className in my superclass that is read-only, meaning you can only read from it using a method called getClassName() and you cannot access this property by doing class.className. Now, I make a new child class that extends this superclass. The child class will inherit #className and getClassName(), but, I would like #className to be initialized with a different value in the child class than in the superclass. So, in the superclass you could have: #className = 'Parent' but in the child class you would have #className = 'Child'.
My only problem is, it seems like you can't really do this. If I try declaring a new #className in the child class, the child class's getClassName() method still refers to the #className from the superclass. Here's the code I was playing with:
class Parent {
    #className = 'Parent' // create private className
    constructor() {}
    getClassName() {
        return this.#className; // return className from object
    }
}

class Child extends Parent {
    #className = 'Child' // re-define className for this child class
    constructor() { super(); } // inherit from Parent class
}

new Child().getClassName() // --> this prints 'Parent' when I want it to print 'Child'
Does anyone have a solution to this? Or an alternative that achieves a similar effect?
JavaScript does not support directly accessing private properties inherited from another class, which is how private members are supposed to work. You seem to want the functionality of protected properties. As of 2022, JavaScript does not support protected properties or members of any kind. Why that is, I can't imagine, since other OOP languages have allowed said functionality since time immemorial.
If you have control over the code of the parent class, you can simulate protected properties by using symbols.
const className = Symbol();

class Parent {
    [className] = 'Parent'; // create protected [className]
    getClassName() {
        return this[className]; // return [className] from object
    }
}

class Child extends Parent {
    [className] = 'Child'; // re-define [className] for this child class
}

console.log(new Child().getClassName()); // --> this prints 'Child'
I'm not sure why this snippet fails in the preview even with Babel. That exact code works in the console of every major browser I've tried.
The reason this works is that a Symbol in JavaScript is a primitive type that's guaranteed to be unique. Unlike other primitives, when used as a key in an object, it cannot be [easily] accessed or iterated over, effectively making it protected.
See: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol
Whether a property is truly "protected" is primarily determined by the scope within which you define the symbol (or whether you pass it around).
For instance, if the above code is module-scoped then that symbol will only be accessible to anything that's within that scope. As with any variable in JavaScript, you can scope a symbol within a function definition, or an if-statement, or any block. It's up to you.
There are a few cases where those symbols can be accessed. This includes Object.getOwnPropertySymbols and Object.getOwnPropertyDescriptors, to name a few. Thus, it's not a perfect system, but it's a step above the old-school way of creating "private" members in JavaScript by prefixing names with an underscore.
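For instance, even without access to the className symbol itself, a minimal sketch of that loophole (reusing the Child class from the symbol example above):

const child = new Child();
// Symbol-keyed properties are invisible to the usual enumeration mechanisms...
console.log(Object.keys(child));          // []
console.log(JSON.stringify(child));       // {}
// ...but reflection can still dig them out:
const [sym] = Object.getOwnPropertySymbols(child);
console.log(child[sym]);                  // 'Child'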
In my own work, I use this technique to avoid using private class member syntax because of the gotcha you describe. But that's because I have never cared about preventing that capability in subclasses. The only drawback to using symbols is it requires a bit more code. Since a symbol is still a value that can be passed around, that makes it possible to create "loopholes" in your code for testing purposes or the occasional edge-case.
To truly eliminate leakage of conceptually protected properties, a WeakMap within the scope of the class definitions can be used.
const protectedClassNames = new WeakMap();

class Parent {
    constructor() {
        protectedClassNames.set(this, 'Parent');
    }
    getClassName() {
        return protectedClassNames.get(this);
    }
}

class Child extends Parent {
    constructor() {
        super();
        protectedClassNames.set(this, 'Child'); // re-define className for this child class
    }
}

console.log(new Child().getClassName()); // --> this prints 'Child'
A WeakMap is a key-value store that takes an object as a key and holds that key weakly: the entry does not prevent the key object from being garbage collected, and the entry is dropped once the object is collected.
As long as the protectedClassNames WeakMap is scoped only to the class definitions that need it, its values cannot leak elsewhere. However, the downside is that you run into a problem similar to the original issue if a class outside the scope of the WeakMap tries to inherit from one of the classes that uses it.
A WeakMap isn't strictly necessary for this, but its weak-key behaviour is important for managing memory: a regular Map would keep every instance it has seen alive forever.
Unfortunately, there appears to be no proposal in progress for adding protected members to the JavaScript standard. However, decorator syntax combined with either of the approaches described here may be a convenient way of implementing protected members for those willing to use a JavaScript transpiler.
Stylistically, I prefer this structure:
var Filter = function( category, value ){
    this.category = category;
    this.value = value;
    // product is a JSON object
    Filter.prototype.checkProduct = function( product ){
        // run some checks
        return is_match;
    }
};
To this structure:
var Filter = function( category, value ){
    this.category = category;
    this.value = value;
};// var Filter = function(){...}

Filter.prototype.checkProduct = function( product ){
    // run some checks
    return is_match;
}
Functionally, are there any drawbacks to structuring my code this way? Will adding a prototypical method to a prototype object inside the constructor function's body (i.e. before the constructor function's expression statement closes) cause unexpected scoping issues?
I've used the first structure before with success, but I want to make sure I'm not setting myself for a debugging headache, or causing a fellow developer grief and aggravation due to bad coding practices.
Functionally, are there any drawbacks to structuring my code this way? Will adding a prototypical method to a prototype object inside the constructor function's body (i.e. before the constructor function's expression statement closes) cause unexpected scoping issues?
Yes, there are drawbacks and unexpected scoping issues.
Assigning the prototype method over and over to a locally defined function both repeats that assignment and creates a new function object each time. The earlier assignments will be garbage collected since they are no longer referenced, but it's unnecessary work in both runtime execution of the constructor and in terms of garbage collection compared to the second code block.
There are unexpected scoping issues in some circumstances. See the Counter example at the end of my answer for an explicit example. If you refer to a local variable of the constructor from the prototype method, then your first example creates a potentially nasty bug in your code.
There are some other (more minor) differences. Your first scheme prohibits the use of the prototype outside the constructor as in:
Filter.prototype.checkProduct.apply(someFilterLikeObject, ...)
And, of course, if someone used:
Object.create(Filter.prototype)
without running the Filter constructor, that would also create a different result which is probably not as likely since it's reasonable to expect that something that uses the Filter prototype should run the Filter constructor in order to achieve expected results.
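To make that last point concrete, a short sketch (assuming the first Filter structure from the question and that no Filter has been constructed yet; the constructor arguments are placeholders):

var bare = Object.create(Filter.prototype);
console.log(typeof bare.checkProduct); // "undefined" - the method does not exist yet

new Filter('category', 'value');       // running the constructor installs it on the prototype...
console.log(typeof bare.checkProduct); // "function" - ...so it now appears even on `bare`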
From a run-time performance point of view (performance of calling methods on the object), you would be better off with this:
var Filter = function( category, value ){
    this.category = category;
    this.value = value;
    // product is a JSON object
    this.checkProduct = function( product ){
        // run some checks
        return is_match;
    }
};
There are some Javascript "experts" who claim that the memory savings of using the prototype is no longer needed (I watched a video lecture about that a few days ago) so it's time to start using the better performance of methods directly on the object rather than the prototype. I don't know if I'm ready to advocate that myself yet, but it was an interesting point to think about.
The biggest disadvantage of your first method I can think of is that it's really, really easy to make a nasty programming mistake. If you happen to think you can take advantage of the fact that the prototype method can now see local variables of the constructor, you will quickly shoot yourself in the foot as soon as you have more than one instance of your object. Imagine this circumstance:
var Counter = function(initialValue){
    var value = initialValue;
    Counter.prototype.get = function() {
        return value++;
    }
};

var c1 = new Counter(0);
var c2 = new Counter(10);

console.log(c1.get()); // outputs 10, should output 0
Demonstration of the problem: http://jsfiddle.net/jfriend00/c7natr3d/
This is because, while it looks like the get method forms a closure and has access to the instance variables that are local variables of the constructor, it doesn't work that way in practice. Because all instances share the same prototype object, each new instance of the Counter object creates a new instance of the get function (which has access to the constructor local variables of the just created instance) and assigns it to the prototype, so now all instances have a get method that accesses the local variables of the constructor of the last instance created. It's a programming disaster as this is likely never what was intended and could easily be a head scratcher to figure out what went wrong and why.
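For completeness, a minimal sketch of the per-instance variant suggested above: defining get on this inside the constructor gives each counter its own closure, at the cost of one function object per instance:

var Counter = function(initialValue) {
    var value = initialValue;
    // each instance gets its own get(), closing over its own `value`
    this.get = function() {
        return value++;
    };
};

var c1 = new Counter(0);
var c2 = new Counter(10);
console.log(c1.get()); // 0
console.log(c2.get()); // 10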
While the other answers have focused on the things that are wrong with assigning to the prototype from inside the constructor, I'll focus on your first statement:
Stylistically, I prefer this structure
Probably you like the clean encapsulation that this notation offers - everything that belongs to the class is properly "scoped" to it by the {} block (of course, the fallacy is that it is actually scoped to each run of the constructor function).
I suggest you take a look at the (revealing) module patterns that JavaScript offers. You get a much more explicit structure: a standalone constructor declaration, class-scoped private variables, and everything properly encapsulated in a block:
var Filter = (function() {
    function Filter(category, value) { // the constructor
        this.category = category;
        this.value = value;
    }
    // product is a JSON object
    Filter.prototype.checkProduct = function(product) {
        // run some checks
        return is_match;
    };
    return Filter;
}());
The first example code kind of misses the purpose of the prototype. You will be recreating the checkProduct method for each instance. While it will be defined only on the prototype, and so will not consume extra memory per instance, creating it still takes time on every construction.
If you wish to keep the class encapsulated, you can check for the method's existence before defining checkProduct:
if (!Filter.prototype.checkProduct) {
    Filter.prototype.checkProduct = function( product ){
        // run some checks
        return is_match;
    }
}
There is one more thing you should consider. That anonymous function's closure now has access to all variables inside the constructor, so it might be tempting to access them, but that will lead you down a rabbit hole, as that function will only be privy to a single instance's closure. In your example it will be the last instance, and in my example it will be the first.
The biggest disadvantage of your code is that it closes off the possibility of overriding your methods.
If I write:
Filter.prototype.checkProduct = function( product ){
    // run some checks
    return different_result;
}

var a = new Filter(p1, p2);
a.checkProduct(product);
The result will be different than expected, as the original function will be called, not mine: constructing the Filter reassigns checkProduct on the prototype and wipes out my override.
In the first example, the Filter prototype is not filled with functions until Filter has been invoked at least once. What if somebody tries to inherit from Filter prototypically? Using either Node.js's
function ExtendedFilter() {};
util.inherits(ExtendedFilter, Filter);
or Object.create:
function ExtendedFilter() {};
ExtendedFilter.prototype = Object.create(Filter.prototype);
you always end up with an empty prototype in the prototype chain if you forgot, or didn't know, to invoke Filter first.
Just FYI, you cannot do this safely either:
function Constr(){
    const privateVar = 'this var is private';
    this.__proto__.getPrivateVar = function(){
        return privateVar;
    };
}
The reason is that Constr.prototype === this.__proto__, so you will get the same misbehavior.
I wonder which way is better at run time, in terms of memory footprint and when the memory is released.
outside class
const foo = {
    ...
};

class Base {
    // use foo below as const variables
}
bound to this as a property
class Base {
    constructor() {
        this.bar = {
            ...
        }
    }
    // use this.bar as property below
}
Which way above will occupy more memory?
If I follow the first one and define const variables outside the class, when will they be released from memory?
Thanks for your time :)
In the first case, a single foo object will be held in memory until the realm is shut down.
In the second case, there will be one this.bar object per instance of Base.
Keeping in mind the Garbage Collector:
Is not standardised and not even required for an implementation
Is not deterministic (can run at any point in time, or never)
Based on that we can speculatively state that:
First case is "better"* if at least one instance of Base is created.
And now addressing your very questions:
If I follow the first one and define const variables outside the class, when will they be released from memory?
An object would become garbage collectible when it's not reachable. So, when all references to it are lost. Then, the memory is "released" after the GC is triggered.
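A tiny sketch of both cases (foo, Base and this.bar are the names from the question; the large arrays are just illustrative payloads):

const foo = { big: new Array(1e6).fill(0) }; // module-scoped: stays reachable for the realm's lifetime

class Base {
    constructor() {
        this.bar = { big: new Array(1e6).fill(0) }; // one object per instance
    }
}

let b = new Base();
b = null; // the instance (and its this.bar) is now unreachable and may be reclaimed
          // whenever the GC runs; `foo` stays alive because the module scope still references it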
But practically, you must follow the performance optimisation rule of thumb: measure.
If something seems to be a problem, first take a debugger/profiler and prove that it actually is what you think it is.
*It all really depends
Say,
function Person(name) {
    this.name = name;
}

Person.prototype.share = [];
Person.prototype.printName = function() {
    alert(this.name);
}

var person1 = new Person('Byron');
var person2 = new Person('Frank');

person1.share.push(1);
person2.share.push(2);

console.log(person2.share); // [1, 2]
In the above, share can be used by all instances. I am wondering: is this specific to JavaScript? Is there anything similar in other OOP languages like C++, C# or Java?
Because that's how prototypes work.
In classical OOP (Java, C# etc.), classes are just "templates", and inheritance is just combining the "templates" for instances to be created from. In prototypal inheritance, an instance is an object whose parent is a live object. They don't just share definition, they share the same live instance of a parent.
So why does it work? That's because of how prototype lookups work. If some property isn't found in the instance, the engine looks for it in the parent. If still not there, it looks higher up in the prototype chain until it reaches the root object. If still not there, the engine can declare it undefined (if it's a property lookup) or throw an error (if you called a method).
instance -> parent -> parent -> parent -> Object
That's how JS "inherits". It basically just looks for the ancestor that has the thing you want, and operates at that level. This is why instances of Person can push to share: they have a common parent object.
person1
\
> Person.prototype (which has share[])
/
person2
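A quick way to confirm that shape in the console (using the Person example above):

console.log(person1.hasOwnProperty('share'));                     // false - not on the instance
console.log(Object.getPrototypeOf(person1) === Person.prototype); // true  - same live parent object
console.log(person1.share === person2.share);                     // true  - one shared array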
In order to prevent such sharing, one can override it by declaring a share property in the constructor (which when instantiated, creates one per instance). Another is just putting a share property on the instance. Descendant properties take precedence over ancestor properties, since lookup starts from the bottom.
function Person(name) {
    this.name = name;
    this.share = []; // Instances will have share, but not shared anymore
}

// or

person1.share = []; // Creates a share for person1 only
In my web application I have a custom object which I've defined with an object function constructor and applied various shared properties through the constructor's prototype.
MyObject = function (Name) {
    this.Name = Name;

    this.ListItem1 = document.createElement('li');
    this.ListItem1.onclick = this.SetActive;

    this.ListItem2 = document.createElement('li');
    this.ListItem2.onclick = this.SetActive;
}

MyObject.prototype.SetActive = function () {
    alert('SetActive: ' + this.Name);
}
Now this is a simplified example, my actual object has many more DOM elements attached to it, and each of those DOM elements have many different event listeners. My object also has many other properties and methods as well. I could also potentially have thousands of instances of this object, so code efficiency is important.
My issue right now is that when a DOM event is triggered, the event handler's 'this' property is set to the actual DOM element, not my object instance.
For example:
var HelloObj = new MyObject('Hello');

HelloObj.SetActive();
// This alerts 'SetActive: Hello'

HelloObj.ListItem1.click();
// This alerts 'SetActive: undefined' because 'this' in SetActive
// becomes ListItem1 and obviously ListItem1.Name is undefined
So how can I set the DOM element's event handlers to a function pointer (not a new function instance for each event handler which would be inefficient when there's a large number of object instances) but still retain the context of the object instance itself in regards to 'this'?
Try out bind() like this:
this.ListItem1.onclick = this.SetActive.bind(this);
//and so on
The solution I've come up with and am using for now is to attach a reference to the object instance to the DOM element and then using a wrapper function for the event handlers to call the desired function through that reference:
MyObject = function (Name) {
    this.Name = Name;

    this.ListItem1 = document.createElement('li');
    this.ListItem1.Context = this;
    this.ListItem1.onclick = this.SetActiveHandler;

    this.ListItem2 = document.createElement('li');
    this.ListItem2.Context = this;
    this.ListItem2.onclick = this.SetActiveHandler;
}

MyObject.prototype.SetActiveHandler = function () {
    this.Context.SetActive();
}

MyObject.prototype.SetActive = function () {
    alert('SetActive: ' + this.Name);
}
This solves all of my problems, although I'm not completely satisfied as there are a few pitfalls. Visually it's just not as pretty and is more convoluted. Programmatically it creates a circular reference, which I know is generally frowned upon, although in my situation I don't think it should cause any issues: I'm only worried about modern browsers, and I believe the circular reference is fully contained, such that if all of the DOM elements have been removed from the document and I delete my reference to the instance of MyObject itself, everything should be fully garbage collected (some feedback on this would be greatly appreciated). I also know it's considered bad practice to attach custom properties to a DOM element, although I don't know if that's still an issue with modern browsers (again, some feedback would be great). Finally, it breaks the object-oriented flow a little bit having to go through a second function; I would have liked to attach the event handler directly to SetActive().
Does anybody have any other solutions or could shed some light on whether or not I'm actually creating more problems than I'm solving?