I'm trying to refactor code that uses let to declare a module-scoped Auth instance which is later reassigned (in the same module) due to a configuration change. The original implementation looks like this.
let client = new Auth({ config });
// ...later in the same module
export const updateConfig = (_config) => { client = new Auth(_config) };
My first question: is the original client instance released after updateConfig()? How would you prove that?
Are there any drawbacks to this approach?
My proposed refactor aims to make this a little less magical by wrapping the Auth module in a singleton with an implicit constructor. It requires a getter for the instance but, in essence, it does the same thing: it reassigns a reference when a new configuration is applied.
function Client(options) {
  if (Client._instance) {
    return Client._instance;
  }
  if (!(this instanceof Client)) {
    return new Client(options);
  }
  this._instance = new Auth(options);
}
Client.prototype.getInstance = function() { return this._instance; };
Client.prototype.updateConfig = function(opts) {
  this._instance = new Client(opts).getInstance();
};
// usage
const client = new Client({ foo: 'bar'});
client.updateConfig({bar: 'baz'});
console.log(client.getInstance()); // eg. { bar: 'baz' } works!
The same questions apply: from a code-safety and memory-management perspective, which solution is more appropriate? These are authentication classes, so I want to make sure they are collected properly and not potentially abused.
My first question: is the original client instance released after updateConfig()?
Maybe, if client is the only variable that references it.
How would you prove that?
Take a heap snapshot in your browser's developer tools (Memory tab) and search for those client objects.
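If you want something more direct than eyeballing a heap snapshot, one option (in engines that support it, e.g. Node.js 14.6+ or current browsers) is a FinalizationRegistry. A minimal sketch, reusing the Auth class and config from the question:

// Register a callback that runs some time after a registered object is collected.
const registry = new FinalizationRegistry((label) => {
  console.log(label + ' was garbage collected');
});

let client = new Auth({ config });
registry.register(client, 'original Auth instance');

client = new Auth({ other: 'config' }); // drop the only reference to the original
// If nothing else references the original instance, the callback above may fire
// eventually; exactly when (or whether) it runs is up to the engine.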
Are there any drawbacks to this approach?
No, as long as nothing else is referencing the client which you expect to be replaced:
const referenced = client;
updateConfig();
console.log(referenced === client); // false
My proposed refactor aims to make this a little less magical ... but, in essence it does the same thing
Why is it "less magical" if you hide that change behind 20 lines of code? If I would be the reviewer, I would reject this change, because it introduces some unexpected behaviour, and provides no benefit whatsoever:
console.log(new Client === new Client); // true, wtf
How I would refactor that (good comments are underestimated):
// Note: client gets re-set in updateConfig(), do not reference this!
let client = new Auth({ config });
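If code outside the module needs the current instance, one possible variation (not part of the answer above; getClient is a hypothetical name) is to export an accessor instead of the instance itself, so callers can never end up holding a stale reference:

// Note: client gets re-set in updateConfig(), so only a getter is exported.
let client = new Auth({ config });

export const getClient = () => client;
export const updateConfig = (_config) => { client = new Auth(_config); };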
From a code safety and memory management perspective, which solution is more appropriate?
"but, in essence it does the same thing ". Wise words.
When we call new on a constructor function, it always returns a new object, which means that when client is reassigned later, it definitely holds the new value. That is one thing.
The other thing is that the JavaScript runtime's garbage collector looks for objects which are in memory but are no longer reachable from any reference and, if found, removes them.
So basically when I do this
let obj = { name: 'Hello' };
obj refers to some object at, say, memory address 2ABCS, and when I reassign it
obj = { name: 'World' };
it now refers to an object at address 1ABCS, which leaves 2ABCS orphaned, which means it will be removed by the garbage collector.
For more, read https://javascript.info/garbage-collection
In JavaScript, GC is not the big concern when it comes to potentially abusing the information available in objects; the objects themselves are. With modern developer tools, anyone can get into any part of front-end code and make sense of it unless it is obfuscated. IMO, obfuscation is pretty much necessary these days: first, it reduces the file size, and second, it makes it a bit more difficult for nerds poking at your code in production.
Now, coming to the actual question: once a new Auth instance is assigned to client, the old instance is no longer hard-referenced by client, so it is eligible for garbage collection provided no other references to it are held. There is no guarantee on how quickly the memory is reclaimed.
An advantage of using let is its scope: it is restricted to its block. However, it is not uncommon to have huge blocks. Compared to global variables, let gives you a smaller scope, so the value may become collectable soon after the block ends. The JavaScript runtime may also keep let bindings on the call stack, so as soon as the block (or method) ends, the stack frame is dropped and those references are dropped with it.
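For example (a sketch; the standalone block is just for illustration):

{
  let temp = new Auth({ config });
  // ... use temp inside the block ...
}
// temp is out of scope here; if nothing else captured a reference,
// the Auth instance it held becomes eligible for collection.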
Finally, it is absolutely fine to leave it the way it is; your implementation does not offer any advantage over the previous one.
Related
I am making a Node.js web server that handles large payloads, computations and copies; for example, I need to work with a deep copy of a large object:
const largeObject = { bla: "bla" } // ...
class Example {
  constructor() {
    this.copy = JSON.parse(JSON.stringify(largeObject))
    this.copy.bla = "blo" // in reality the changes will be different per request
  }
  doStuff(args) {
    // do stuff with the deep copy
  }
}
Now this works fine: with every request context I can create one new deep copy and work with that in the class. But my class is becoming big and unstructured, so I want to split it up into different classes. I've thought of implementing a base class with a static deep copy, so that on every request I can change the copy on the base class and use it from my other classes.
const largeObject = { bla: "bla" } // ...
class Example {
  static copy;
  constructor() {
    Example.copy = JSON.parse(JSON.stringify(largeObject))
    Example.copy.bla = "blo" // in reality the changes will be different per request
  }
}
class DoWork {
  constructor(someValue) {
    this.someValue = someValue
  }
  doStuff(args) {
    // do stuff with Example.copy
  }
}
I want to deep copy the object only once per request for performance reasons; there is no reason to deep copy it on every class initialization. But I'm scared that by using a "global" variable that technically outlives the request context, I will get issues with race conditions and overlapping contexts. Is this a real problem, or is the single-threaded environment of Node.js safe enough to handle this?
That code won't run into threading issues, no.
Node.js isn't single-threaded, but unless you create additional JavaScript threads via the worker_threads module, your code runs on a single thread. Even if you do create worker threads, those threads run isolated from one another and from the main thread (they don't share a global environment, but they can communicate via messaging and share memory in a very specific, bounded way via SharedArrayBuffer).
Side note: Using JSON to deep copy objects is not best practice. It's lossy (any non-enumerable property is dropped, any property whose value is undefined or a function is dropped, any property named with a Symbol is dropped, any inherited property is dropped, and prototypes are not maintained), it will fail for anything with circular references, and it makes an unnecessary round-trip through text. See this question's answers for various approaches to doing deep copy in JavaScript.
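For instance, in runtimes that provide it (Node.js 17+ and current browsers), structuredClone avoids the text round-trip and handles more cases than the JSON approach. A minimal sketch using the question's largeObject:

const largeObject = { bla: "bla" };

// structuredClone copies nested objects, Dates, Maps, Sets, and even circular
// structures, though it will still throw on functions and won't preserve custom prototypes.
const copy = structuredClone(largeObject);
copy.bla = "blo";

console.log(largeObject.bla); // "bla" (the original is untouched)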
Here's a basic example of what I'm trying to do:
ModuleA.js
module.exports = {
  doX () {
    console.log(data['a']);
  }
}
ModuleB.js
module.exports = {
  doX () {
    console.log(data['b']);
  }
}
server.js
let data = { a:'foo', b:'bar' };
let doX = {};
doX['a'] = require('./ModuleA.js').doX;
doX['b'] = require('./ModuleB.js').doX;
doX['a'](); // Should print 'foo'
doX['b'](); // Should print 'bar'
In the actual implementation there would be many more variables to pass in than just data, so passing that to the functions isn't a viable solution.
This almost works, but the functions in the modules need access to functions and variables at the top level of the server file. I know I could global.variable all of my variables and functions but I'd rather not, as I've only seen people recommend against that. Of course I could pass every single variable and function in each function call, but that would look ridiculous and brings up way too many potential problems. I was hoping I could pass a reference to the server's namespace, by passing this or something, but that didn't work. I could register every function and variable on some object and pass that around, but that's inconvenient and I'm trying to refactor for convenience and organization. I think I could read in the module files and eval them, as seen here, but I would much rather use the standard module.exports system if possible.
I'll summarize my comments into an answer.
Your data variable is local to server.js and is not accessible to your other two modules. I'd suggest you pass it to them when you load those modules as a means of sharing it with them. That design pattern is typically called a "module constructor" if you want to read more about it.
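A minimal sketch of that pattern using the question's files (the exact shape is an assumption):

ModuleA.js

// Export a factory that receives the shared data instead of reading a free variable.
module.exports = function (data) {
  return {
    doX () {
      console.log(data['a']);
    }
  };
};

server.js

let data = { a: 'foo', b: 'bar' };
const doX = {};
doX['a'] = require('./ModuleA.js')(data).doX;
doX['a'](); // prints 'foo'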
Passing data from one module to another is how you achieve shared data with separate modules without using globals. That's how you do it. Since you've now rejected the usual design pattern, there's not much else we can do without understanding a lot more about the real problem so we can go further outside your box and suggest a better design than the path you're down.
Abstracting hardware to have a common set of methods sounds like a perfect fit for subclasses where each piece of hardware has its own subclass, all with the same interface. Shared data could be in the base class.
You can pass a lot of variables at once if you make them properties of an object and pass just the object. Then, both places can reference the same properties on the same object and you can pass an infinite number of properties by passing one object. There is no way to pass a modules namespace. You have to create your own object with properties on it and pass that. You can create such an object and then set that object into the base class and then all your derived classes can have access to that object.
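A rough sketch of that shape (every name here is a placeholder, not something from the question):

// The base class holds the shared object; every device subclass can reach it.
class HardwareDevice {
  constructor(shared) {
    this.shared = shared;
  }
  read() {
    throw new Error('read() must be implemented by a subclass');
  }
}

class TemperatureSensor extends HardwareDevice {
  read() {
    // device-specific logic; this.shared is available here
    return this.shared.units === 'celsius' ? 21.5 : 70.7;
  }
}

const shared = { units: 'celsius' };
const sensor = new TemperatureSensor(shared);
console.log(sensor.read()); // 21.5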
In short:
module.exports = {
  doX () {
    console.log(data['a']);
    //          ^^^^ this variable is not available here.
    //               You should pass it in as an argument to make it available.
  }
}
Hopefully this question won't be flagged as too subjective, but I'm newish to OOP and struggling a bit when it comes to sharing data between parts of my code that I think should be separated to some extent.
I'm building a (non-geo) map thing (using leaflet.js which is superduper) which has a map (duh) and a sidebar that basically contains a UI (toggling markers both individually and en masse, searching said marker toggles as well as other standard UI behaviour). Slightly confused about organisation too (how modular is too modular but I can stumble through that myself I guess). I am using a simple JSON file for my settings for the time being.
I started with static methods stored in objects, which is essentially unusable (or rather un-reusable), so I went for nested constructors (kinda) so I could pass the parent scope around for easier access to my settings and states properties:
function MainThing(settings) {
  this.settings = settings;
  this.states = {};
}
function SubthingMaker(parent) {
  this.parent = parent;
}
SubthingMaker.prototype.method = function() {
  var data = this.parent.settings.optionOne;
  console.log(data);
  this.parent.states.isVisible = true;
};
MainThing.prototype.init = function() {
  this.subthing = new SubthingMaker(this);
  // and some other fun stuff
};
And then I could just create an instance of MainThing, call its init() method, and it should all work lovely. Like so:
var options = {
  "optionOne": "Hello",
  "optionTwo": "Goodbye"
}
var test = new MainThing(options);
test.init();
test.subthing.method();
Should I really be nesting in this manner or will it cause me problems in some way? If this is indeed okay, should I keep going deeper if needed (maybe the search part of my ui wants its own section, maybe the map controls should be separate from DOM manipulation, I dunno) or should I stay at this depth? Should I just have separate constructors and store them in an object when I create an instance of them? Will that make it difficult to share/reference data stored elsewhere?
As regards my data storage, is this an okay way to handle it, or should I be creating a controller for my data and sending requests and submissions to it when necessary, even if that data is then tucked away in simple JSON format? this.parent really does start to get annoying after a while; I suppose I should really be binding if I want to change my scope, but it just doesn't seem like an elegant way to access the overall state data of the application, especially since the UI needs to check the state for almost everything it does.
Hope you can help and I hope I don't come across as a complete idiot, thanks!
P.S. I think the code I posted works, but if it doesn't, it's the general idea I was hoping to capture, not this specific example. I created a much simpler version of my actual code because I don't want to incur the wrath of the SO gods with my first post. (Yes, I did just use a postscript.)
An object may contain as many other objects as are appropriate for doing its job. For example, an object may contain an Array as part of its instance data. Or, it may contain some other custom object. This is normal and common.
You can create/initialize these other objects that are part of your instance data in either your constructor or in some other method such as a .init() method whichever is more appropriate for your usage and design.
For example, you might have a Queue object:
function Queue() {
  this.q = [];
}
Queue.prototype.add = function(item) {
  this.q.push(item);
  return this;
}
Queue.prototype.next = function() {
  return this.q.shift();
}
var q = new Queue();
q.add(1);
q.add(2);
console.log(q.next()); // 1
This creates an Array object as part of its constructor and then uses that Array object in the performance of its function. There is no difference here whether this creates a built-in Array object or it calls new on some custom constructor. It's just another Javascript object that is being used by the host object to perform its function. This is normal and common.
One note is that what you are doing with your MainThing and SubthingMaker violates OOP principles, because they are too tightly coupled and have too much access to each other's internals:
SubthingMaker.prototype.method = function() {
  // it reads something from parent's settings
  var data = this.parent.settings.optionOne;
  console.log(data);
  // it changes parent state directly
  this.parent.states.isVisible = true;
};
A better idea would be to make them less dependent on each other.
It is probably OK for the MainThing to have several "subthings" as your main thing looks like a top-level object which will coordinate smaller things.
But it would be better to isolate these smaller things; ideally they should work even if there is no MainThing, or if you have some different main thing:
function SubthingMaker(options) {
  // no 'parent' here, it just receives its own options
  this.options = options;
}
SubthingMaker.prototype.method = function() {
  // use its own options instead of reading them through the MainThing
  var data = this.options.optionOne;
  console.log(data);
  // return the data from the method instead of
  // directly modifying something in MainThing
  return true;
};
MainThing.prototype.doSomething = function() {
  // MainThing calls the subthing and modifies its own data
  this.states.isVisible = this.subthing.method();
  // and some other fun stuff
};
Also, to avoid confusion, it is better not to use parent / child terms in this case. What you have here is aggregation or composition of objects, while parent / child are usually used to describe inheritance.
When doing extract-class refactorings, the new sub- or helper class requires a back-reference to its creator, and the creator needs a reference to its helper to make it accessible.
The issue with that structure is that all those references have to be torn down manually, which easily leads to memory leaks when one circular reference is forgotten.
Simplified Example:
function MasterClass(name) {
  this.name = name;
  this.extension = new MasterClassExtension(this);
}

function MasterClassExtension(masterClass) {
  this.masterClass = masterClass;
}

MasterClassExtension.prototype.beautifiedName = function () {
  return 'Beautiful ' + this.masterClass.name;
}
Usage:
new MasterClass('Tux').extension.beautifiedName(); // Returns "Beautiful Tux".
I know "Don't do work in constructor." but chose it for the sake of simplicity. Anyway, in an environment like PHP this does not matter as the process shuts down after the request is processed, but it does for continously running server structures like in Node.js or single page web apps.
Solution 1
Passing the reference every time.
var masterClass = new MasterClass('Tux');
MasterClassExtension.beautifiedName(masterClass); // Using singleton.
Pro:
No circular reference problem.
Contra:
Requires singletons (may cause unit testing issues).
The extension methods have to pass the reference and parameters back and forth to each other for more complex tasks, resulting in ugly function signatures.
Solution 2
Destructing.
var masterClass = new MasterClass('Tux');
masterClass.extension.beautifiedName();
masterClass.destruct(); // Sets this.extension = null;
masterClass = null;
Pro:
As handy to use as intended, except for the destructor part.
Contra:
The destruction is not enforced and can easily be forgotten causing memory leaks.
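For reference, the destruct() assumed in Solution 2 might look roughly like this (a sketch; it simply breaks the circular references by hand, per the comment above):

MasterClass.prototype.destruct = function () {
  if (this.extension) {
    this.extension.masterClass = null; // drop the back-reference
    this.extension = null;             // drop the forward reference
  }
};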
Solution 3 [???]
Is there a better solution / pattern / approach for solving this problem?
Probably many of you tried to achieve encapsulation in JavaScript. The two methods known to me are:
a bit more common I guess:
function myClass(){
  var prv // and all private stuff here
  // and we don't use the prototype, everything is created inside the scope
  return {publicFunc: sth};
}
and second one:
function myClass2(){
  var prv = { /* private stuff here */ }
  Object.defineProperty(this, 'prv', {value: prv})
  return {publicFunc: this.someFunc.bind(this)};
}
myClass2.prototype = {
  get prv(){ throw 'class must be created using new keyword' },
  someFunc: function(){
    console.log(this.prv);
  }
}
Object.freeze(myClass2)
Object.freeze(myClass2.prototype)
So, as the second option is WAY more convenient to me (specifically in my case, as it visually separates construction from workflow), the question is: are there any serious disadvantages / leaks in this case? I know it allows external code to access the arguments of someFunc via
myClass2.prototype.someFunc.arguments
but only in the case of sloppily executed callbacks (called synchronously inside the caller chain). Calling them with setTimeout(cb, 0) breaks the chain and prevents access to the arguments (as well as preventing a synchronous return value). At least as far as I know.
Did I miss anything? It's kind of important as code will be used by external, untrusted user provided code.
I like to wrap my prototypes in a module which returns the object; this way you can use the module's scope for any private variables, protecting consumers of your object from accidentally messing with your private properties.
var MyObject = (function (dependency) {

  // private (static) variables
  var priv1, priv2;

  // constructor
  var module = function () {
    // ...
  };

  // public interfaces
  module.prototype.publicInterface1 = function () {
  };

  module.prototype.publicInterface2 = function () {
  };

  // return the object definition
  return module;
})(dependency);
Then in some other file you can use it like normal:
obj = new MyObject();
Any more 'protecting' of your object is a little overkill for JavaScript imo. If someone wants to extend your object then they probably know what they're doing and you should let them!
As redbmk points out, if you need private instance variables you could use a map with some unique identifier of the object as the key.
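A sketch of that idea using a WeakMap keyed by the instance itself, so each entry can be collected along with its object (the Counter name is only for illustration):

var Counter = (function () {
  // WeakMap keyed by instance; entries go away when the instance is collected.
  var privates = new WeakMap();

  function Counter() {
    privates.set(this, { count: 0 });
  }

  Counter.prototype.increment = function () {
    var priv = privates.get(this);
    priv.count += 1;
    return priv.count;
  };

  return Counter;
})();

var c = new Counter();
console.log(c.increment()); // 1
console.log(c.count);       // undefined (not reachable from outside)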
So, as second option is WAY more convenient to me (specifically in my case as it visually separates construction from workflow) the question is - are there any serious disadvantages / leaks in this case?
Hm, it doesn't really use the prototype. There's no reason to "encapsulate" anything here, as the prototype methods will only be able to use public properties, which your untrusted code can access just as well. A simple
function myClass2(){
  var prv = // private stuff here
  Object.defineProperty(this, 'prv', {value:prv})
  // optionally bind the publicFunc if you need to
}
myClass2.prototype.publicFunc = function(){
  console.log(this.prv);
};
should suffice. Or you use the factory pattern, without any prototypes:
function myClass2(){
  var prv = // private stuff here
  return {
    prv: prv,
    publicFunc: function(){
      console.log(this.prv); // or even just `prv`?
    }
  };
}
I know it allows external code to access arguments of someFunc by
myClass2.prototype.someFunc.arguments
Simply use strict mode; this "feature" is disallowed there.
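For example (a sketch; the exact error message varies by engine):

'use strict';

function someFunc(secret) {
  // ...
}

someFunc('token');
console.log(someFunc.arguments);
// TypeError: 'caller', 'callee', and 'arguments' properties may not be
// accessed on strict mode functions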
It's kind of important as code will be used by external, untrusted user provided code.
They will always get your secrets if the code is running in the same environment. Always. You might want to try Web Workers instead, but notice that they're still CORS-privileged.
On enforcing encapsulation in a language that doesn't properly support private, protected, and public class members, I say "Meh."
I like the cleanliness of the Foo.prototype = { ... }; syntax. Making methods public also allows you to unit test all the methods in your "class". On top of that, I just simply don't trust JavaScript from a security standpoint. Always have security measures on the server protecting your system.
Go for "ease of programming and testing" and "cleanliness of code." Make it easy to write and maintain, so whichever you feel is easier to write and maintain is the answer.