Assigning dependencies in controllers using the ES6 class syntax

In a video from ng-conf 2015 (Angular 1.3 meets Angular 2.0), the syntax for using ES6 classes as controllers is shown as:
class UnicornHype {
  constructor(unicornHorn, $q) {
    this.$q = $q;
    this.horn = unicornHorn;
  }

  findUnicorn() {
    return this.$q((resolve, reject) => {
      // ...
      this.horn.thrust();
      // ...
    });
  }
}
I see that the injected dependencies are assigned as instance properties and I'm wondering if that's a good way to do that. Since the controller's dependencies are usually singleton services, shouldn't they be shared by the instances?
The reason they've done it like this is that methods that were previously on $scope (and therefore in the constructor function's body) are now on the object's shared prototype. John Papa's style guide actually assigns them directly to this (though he's not using ES6 classes, but that shouldn't really matter, since they're just syntactic sugar over constructor functions and prototypes). Is that a good idea?
Another approach would be to keep methods on the prototype but assign the dependencies to local variables (assuming each controller is in its own module file). Something like:
var q, horn;

class UnicornHype {
  constructor(unicornHorn, $q) {
    [q, horn] = [$q, unicornHorn];
  }

  findUnicorn() {
    return q(/* ... */);
  }
}
Is this better? If yes, would const actually be better than var here? Does this approach have any drawbacks?
A third method (using WeakMaps) is described here: Writing AngularJS Apps Using ES6. Should I forget everything I said above and do it this way?

I don't really understand why they use WeakMaps.
I quote:
Reason behind choosing WeakMap is, the entries of WeakMap that have objects as keys are removed once the object is garbage collected.
But aren't services long-lived? So why would you need to ensure garbage collection?
In JavaScript, all non-primitive values are references to the original object, so the dependencies are always shared. So why would the instance-variable approach not be a good idea?
Anyway, I think the instance-variable approach seems the most future-proof way to go.
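For reference, the WeakMap approach referenced above usually looks roughly like this (a minimal sketch using the names from the question; the article's exact code may differ):
const horns = new WeakMap();
const qs = new WeakMap();

class UnicornHype {
  constructor(unicornHorn, $q) {
    // Keyed by the controller instance; the entry goes away if the instance
    // is ever garbage collected, and the dependency never shows up as a
    // public property on the instance.
    horns.set(this, unicornHorn);
    qs.set(this, $q);
  }

  findUnicorn() {
    return qs.get(this)((resolve, reject) => {
      horns.get(this).thrust();
      resolve();
    });
  }
}
For singleton services the garbage-collection aspect matters little; the practical benefit is simply keeping the dependencies off the public instance.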


Allow modules access to functions/variables defined in server that requires them?

Here's a basic example of what I'm trying to do:
ModuleA.js
module.exports = {
  doX () {
    console.log(data['a']);
  }
}
ModuleB.js
module.exports = {
  doX () {
    console.log(data['b']);
  }
}
server.js
let data = { a:'foo', b:'bar' };
let doX = {};
doX['a'] = require('./ModuleA.js').doX;
doX['b'] = require('./ModuleB.js').doX;
doX['a'](); // Should print 'foo'
doX['b'](); // Should print 'bar'
In the actual implementation there would be many more variables to pass in than just data, so passing that to the functions isn't a viable solution.
This almost works, but the functions in the modules need access to functions and variables at the top level of the server file. I know I could global.variable all of my variables and functions but I'd rather not, as I've only seen people recommend against that. Of course I could pass every single variable and function in each function call, but that would look ridiculous and brings up way too many potential problems. I was hoping I could pass a reference to the server's namespace, by passing this or something, but that didn't work. I could register every function and variable on some object and pass that around, but that's inconvenient and I'm trying to refactor for convenience and organization. I think I could read in the module files and eval them, as seen here, but I would much rather use the standard module.exports system if possible.
I'll summarize my comments into an answer.
Your data variable is local to server.js and is not accessible to your other two modules. I'd suggest you pass it to them when you load those modules as a means of sharing it with them. That design pattern is typically called a "module constructor" if you want to read more about it.
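A minimal sketch of that pattern, reusing the file names from the question (ModuleB.js would look the same, just logging data['b']):
ModuleA.js
// Export a factory ("module constructor") that receives the shared data.
module.exports = function (data) {
  return {
    doX () {
      console.log(data['a']);
    }
  };
};
server.js
let data = { a: 'foo', b: 'bar' };
let doX = {};
doX['a'] = require('./ModuleA.js')(data).doX;
doX['b'] = require('./ModuleB.js')(data).doX;
doX['a'](); // prints 'foo'
doX['b'](); // prints 'bar'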
Passing data from one module to another is how you achieve shared data between separate modules without using globals. That's how you do it. Since you've now rejected the usual design pattern, there's not much else we can do without understanding a lot more about the real problem; only then can we go further outside your box and suggest a better design than the path you're on.
Abstracting hardware to have a common set of methods sounds like a perfect fit for subclasses where each piece of hardware has its own subclass, all with the same interface. Shared data could be in the base class.
You can pass a lot of variables at once if you make them properties of an object and pass just the object. Then both places can reference the same properties on the same object, and you can pass any number of properties by passing one object. There is no way to pass a module's namespace. You have to create your own object with properties on it and pass that. You can create such an object, set it on the base class, and then all your derived classes have access to it.
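As a sketch of that subclass idea (class and property names here are made up for illustration, not taken from the question):
// Base class holds the shared object; every subclass reads the same reference.
class HardwareDevice {
  static setShared(shared) {
    HardwareDevice.shared = shared;
  }
}

class Thermometer extends HardwareDevice {
  read() {
    console.log(HardwareDevice.shared.units);
  }
}

HardwareDevice.setShared({ units: 'celsius' });
new Thermometer().read(); // prints 'celsius'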
In short:
module.exports = {
  doX () {
    console.log(data['a']);
    //          ^^^^ this variable is not available here. You should pass it as an argument to make it available.
  }
}

NG1: Class controller constructor vs $onInit for variable initialization and method bindings

Before marking this question as a duplicate... I know what you are thinking: this has been asked countless times, but not exactly.
I know from various sources during my research (including the official docs, Angular gurus and evangelists) that the $onInit block is commonly reserved for initialization work/logic that relies on Angular having finished all of its bindings.
However, variable initialization does not really fit this "work/logic" definition, especially variables which don't have any Angular logic in them. For that reason, the ES6 constructor seems to be a better fit for variable initialization. The same goes for method bindings that require a lexically bound this for callbacks, like so:
class myController {
  constructor() {
    this.myVariableOne = 1;
    this.myVariableTwo = 2;
    this.myVariableThree = 3;
    this.myMethod = this.myMethod.bind(this);
  }

  $onInit() { }

  myMethod() {
    console.log(this.myVariableOne, this.myVariableTwo, this.myVariableThree);
  }
}
And while this looks good at following "the Angular way" of doing things, as far as only using the $onInit block for initialization work/logic goes, I've also seen plenty of people who say that Angular controller class constructors should only be used for Dependency Injection setup.
So, this has me confused. The constructor seems to be the best suited block for variable initialization and method bindings, and $onInit seems like it doesn't really fit that role, but it really isn't clear what I should use then. Can someone please help me figure out where I should be placing my variable definition and method bindings?
This totally depends on what these properties are. For initial static values (like in the code above), the constructor is the proper place.
$onInit is intended for DOM and data-binding initialization code; it is a direct counterpart of the pre-1.5 pre-link function. Other initialization code can be placed in $onInit for testability reasons, too.
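A sketch of that split (the config binding here is made up for illustration):
class ExampleController {
  constructor() {
    // Static defaults and method binding: nothing here depends on Angular.
    this.items = [];
    this.onClick = this.onClick.bind(this);
  }

  $onInit() {
    // Bindings are guaranteed to be populated by the time $onInit runs,
    // so binding-dependent initialization belongs here.
    this.title = this.config.title;
  }

  onClick() { /* ... */ }
}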
Consider an instance (not prototype) method that is called on initialization:
constructor() {
  this.method = () => { /* ... */ };
}

$onInit() {
  this.method();
}
It can be tested like this:
const ctrl = $controller('...');
spyOn(ctrl, 'method').and...;
ctrl.$onInit();
expect(ctrl.method).toHaveBeenCalled();
It wouldn't be possible to spy on or mock it if it were called in the constructor.
This concern affects non-modular ES5 apps to a greater degree, because their methods are usually defined as this.method = ..., and the controller prototype can't be easily reached because there's no way to import the controller constructor.
I agree with your general assessment. I keep my constructors pretty light, but if I am doing things at instantiation that aren't really angular related, I've been putting them into the constructor. I haven't had any issues with them. I just looked at a dozen or so of them and I am basically not doing anything but initializing properties and assigning dependency injections to properties. I only have one controller where it calls any external code at all.
Writing about Angular 1.5 is very sparse. If you haven't already seen it, have a look at https://toddmotto.com/rewriting-angular-styleguide-angular-2 . I think it's the best style guide out there for "modern AngularJS."

Injected dependencies not accessible in class methods for AngularJS

I'm using ES6 classes for Angular controllers and I'm running into a slight annoyance with dependency injection. Take this controller, for example (obviously not a real class).
class PersonController {
  constructor(MyDependency) {
    MyDependency.test(); // this works
  }

  sendEvent() {
    MyDependency.test(); // MyDependency is undefined
  }
}
PersonController.$inject = ['MyDependency'];
export default PersonController;
When the constructor is run, the dependency is found fine. However if I call the sendEvent method (or any other class method), the dependency is no longer defined. I've gotten around this before by just attaching it to this in the constructor e.g. this.MyDependency = MyDependency;.
Is there any other way that this can be achieved without hanging the dependency off this?
It is because MyDependency is not accessible in your methods. Do this.MyDependency = MyDependency in the constructor and then call this.MyDependency.test() in your method.
EDIT: Below is an alternative approach:
let _myDependency;

class PersonController {
  constructor(MyDependency) {
    _myDependency = MyDependency;
    _myDependency.test(); // this works
  }

  sendEvent() {
    _myDependency.test(); // this will work
  }
}
Other ways to emulate private properties are using Symbols/WeakMaps, or creating a closure around the whole class and storing MyDependency within that closure.
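For example, a Symbol-keyed property is a minimal sketch of the first option (the property is still discoverable via Object.getOwnPropertySymbols, so this hides the dependency rather than truly securing it):
const MY_DEPENDENCY = Symbol('MyDependency');

class PersonController {
  constructor(MyDependency) {
    this[MY_DEPENDENCY] = MyDependency;
  }

  sendEvent() {
    this[MY_DEPENDENCY].test(); // works in any method
  }
}
PersonController.$inject = ['MyDependency'];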
Seems like there is no other way to do this, so I will stick to using this!

Encapsulation in JavaScript with prototypes

Probably many of you tried to achieve encapsulation in JavaScript. The two methods known to me are:
a bit more common I guess:
function myClass() {
  var prv; // and all private stuff here
  // and we don't use the prototype, everything is created inside the scope
  return {publicFunc: sth};
}
and second one:
function myClass2() {
  var prv = {/* private stuff here */};
  Object.defineProperty(this, 'prv', {value: prv});
  return {publicFunc: this.someFunc.bind(this)};
}

myClass2.prototype = {
  get prv() {throw 'class must be created using new keyword'},
  someFunc: function() {
    console.log(this.prv);
  }
};

Object.freeze(myClass2);
Object.freeze(myClass2.prototype);
So, as the second option is WAY more convenient to me (specifically in my case, as it visually separates construction from workflow), the question is: are there any serious disadvantages / leaks in this case? I know it allows external code to access the arguments of someFunc by
myClass2.prototype.someFunc.arguments
but only in the case of sloppily executed callbacks (called synchronously inside the caller chain). Calling them with setTimeout(cb, 0) breaks the chain and prevents reading the arguments, as does simply returning a value synchronously. At least as far as I know.
Did I miss anything? It's kind of important, as the code will be used by external, untrusted, user-provided code.
I like to wrap my prototypes in a module which returns the object; this way you can use the module's scope for any private variables, protecting consumers of your object from accidentally messing with your private properties.
var MyObject = (function (dependency) {

  // private (static) variables
  var priv1, priv2;

  // constructor
  var module = function () {
    // ...
  };

  // public interfaces
  module.prototype.publicInterface1 = function () {
  };

  module.prototype.publicInterface2 = function () {
  };

  // return the object definition
  return module;
})(dependency);
Then in some other file you can use it like normal:
obj = new MyObject();
Any more 'protecting' of your object is a little overkill for JavaScript imo. If someone wants to extend your object then they probably know what they're doing and you should let them!
As redbmk points out, if you need private instance variables you could use a map with some unique identifier of the object as the key.
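A sketch of that idea, using a WeakMap keyed by the instance itself rather than a separate identifier (names are illustrative):
var Counter = (function () {
  // One private record per instance; removed when the instance is collected.
  var privates = new WeakMap();

  function Counter() {
    privates.set(this, { count: 0 });
  }

  Counter.prototype.increment = function () {
    var state = privates.get(this);
    state.count += 1;
    return state.count;
  };

  return Counter;
})();

var c = new Counter();
c.increment(); // 1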
So, as the second option is WAY more convenient to me (specifically in my case, as it visually separates construction from workflow), the question is: are there any serious disadvantages / leaks in this case?
Hm, it doesn't really use the prototype. There's no reason to "encapsulate" anything here, as the prototype methods will only be able to use public properties, which your untrusted code can access just as well. A simple
function myClass2() {
  var prv = {/* private stuff here */};
  Object.defineProperty(this, 'prv', {value: prv});
  // optionally bind the publicFunc if you need to
}

myClass2.prototype.publicFunc = function() {
  console.log(this.prv);
};
should suffice. Or you use the factory pattern, without any prototypes:
function myClass2() {
  var prv = {/* private stuff here */};
  return {
    prv: prv,
    publicFunc: function() {
      console.log(this.prv); // or even just `prv`?
    }
  };
}
I know it allows external code to access the arguments of someFunc by
myClass2.prototype.someFunc.arguments
Simply use strict mode; this "feature" is disallowed there.
It's kind of important, as the code will be used by external, untrusted, user-provided code.
They will always get your secrets if the code is running in the same environment. Always. You might want to try WebWorkers instead, but notice that they're still CORS-privileged.
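A minimal sketch of the worker idea (file name is illustrative): the worker runs in a separate global scope with no access to your variables, and you only exchange messages.
// worker.js
self.onmessage = function (e) {
  self.postMessage(e.data.toUpperCase());
};

// main.js
var worker = new Worker('worker.js');
worker.onmessage = function (e) { console.log(e.data); }; // logs 'HELLO'
worker.postMessage('hello');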
On enforcing encapsulation in a language that doesn't properly support private, protected, and public class members, I say "Meh."
I like the cleanliness of the Foo.prototype = { ... }; syntax. Making methods public also allows you to unit test all the methods in your "class". On top of that, I simply don't trust JavaScript from a security standpoint. Always have security measures on the server protecting your system.
Go for "ease of programming and testing" and "cleanliness of code." Make it easy to write and maintain, so whichever you feel is easier to write and maintain is the answer.

Closures in TypeScript (Dependency Injection)

I'm getting my butt kicked trying to use TypeScript in a functional style with dependencies. Let's say I want to make a module that depends on another module.
If I wasn't using Dependency Injection, it would look like this (in Node).
var SomeOtherModule = require("SomeOtherModule");

exports.doSomething = function() {
  SomeOtherModule.blah();
};
This is how I do it with Dependency Injection
module.exports = function(SomeOtherModule) {
  function doSomething() {
    SomeOtherModule.blah();
  }

  return {doSomething: doSomething};
};
In TypeScript, if you define a concrete class or module, you can just type the functions as you export them or include them in the class. It's all right next to each other.
But since I can't define a module inside the DI function, the only way to do this that I can see would be to define an interface for the object I'm returning separately, which is annoying, because I want to have the type annotations in line with the definitions.
What's a better way to do this?
This will probably give you a good start: http://blorkfish.wordpress.com/2012/10/23/typescript-organizing-your-code-with-amd-modules-and-require-js/
I don't know if this is the best way to set it up. But I got it to work.
I ended up dropping AMD on my project, since I'm also using AngularJS and they step on each other's toes. I did keep using that same DI pattern though, so it looks like this in the end.
I'm pretty happy with it. I experimented with using classes instead (you can get really close if you keep your module stateless and have the constructor be the injector function), but I didn't like having to use this for all the dependencies.
Also, classes don't actually buy me anything, because if I were coding to an interface I'd have to define the types twice anyway.
interface IMyService {
  doSomething();
}

module.exports = function(SomeOtherModule) {
  return {doSomething: doSomething};

  function doSomething() {
    SomeOtherModule.blah();
  }
};
