I'm using ES6 classes for Angular controllers and I'm running into a slight annoyance with dependency injection. Take this controller, for example (obviously not a real class).
class PersonController {
  constructor(MyDependency) {
    MyDependency.test(); // this works
  }

  sendEvent() {
    MyDependency.test(); // MyDependency is undefined
  }
}
PersonController.$inject = ['MyDependency'];
export default PersonController;
When the constructor is run, the dependency is found fine. However if I call the sendEvent method (or any other class method), the dependency is no longer defined. I've gotten around this before by just attaching it to this in the constructor e.g. this.MyDependency = MyDependency;.
Is there any other way that this can be achieved without hanging the dependency off this?
It is because MyDependency is not accessible in your methods. Assign this.myDependency = MyDependency in the constructor and then call this.myDependency.test() in your method.
EDIT: Below is an alternative approach:
let _myDependency;

class PersonController {
  constructor(MyDependency) {
    _myDependency = MyDependency;
    _myDependency.test(); // this works
  }

  sendEvent() {
    _myDependency.test(); // this will work
  }
}
Other ways to emulate private properties are using Symbols/WeakMaps, or creating a closure around the whole class and storing MyDependency within that closure.
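For illustration, the WeakMap variant might look like this (a minimal sketch, not from the original answer; the controller registration is unchanged):

// Module-level WeakMap keyed by controller instance, so each instance
// keeps its own reference without exposing it as a public property.
const deps = new WeakMap();

class PersonController {
  constructor(MyDependency) {
    deps.set(this, MyDependency);
    deps.get(this).test(); // works
  }

  sendEvent() {
    deps.get(this).test(); // also works
  }
}

PersonController.$inject = ['MyDependency'];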
Seems like there is no other way to do this, so I will stick to using this!
I have a class function (not originally developed by me)...
function Project()
{
  Project.super.call(this);
  // this._some_var = null;
  /* other initial vars */
  ...

  return DefensiveObject.create(this); // see comment below
}

function initialize()
{
  //*******Initialize Project************
}

...

return Project;
This function is part of a module called "Project.js" included by running node main.js.
return DefensiveObject.create(this); // not return Object.create(this)
DefensiveObject is a class that prevents objects from getting or setting properties that are not explicitly set up in the class.
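The real DefensiveObject isn't shown in the question, but as an illustration, a minimal sketch of such a wrapper could be built with a Proxy that rejects unknown properties:

// Hypothetical sketch of a "defensive" wrapper: any property that was not
// set up on the target before wrapping is rejected on both get and set.
var DefensiveObject = {
  create: function (target) {
    return new Proxy(target, {
      get: function (obj, prop) {
        if (!(prop in obj)) throw new Error('Unknown property: ' + String(prop));
        return obj[prop];
      },
      set: function (obj, prop, value) {
        if (!(prop in obj)) throw new Error('Unknown property: ' + String(prop));
        obj[prop] = value;
        return true;
      }
    });
  }
};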
The main.js calls Project.initialize() which resides within my Project class.
My question is why would there be a need to call "Project.super.call(this);"?
In JavaScript the reserved word super is used in ES6 classes for referencing the parent of a child class; it doesn't make sense to use it when referencing a plain function.
Please read this article, where the usage of super is explained:
https://jordankasper.com/understanding-super-in-javascript/
This Medium article can also help:
https://medium.com/beginners-guide-to-mobile-web-development/super-and-extends-in-javascript-es6-understanding-the-tough-parts-6120372d3420
The Project.super.call(this) line was a way to allow use of a dispose method from a "Disposable" class (not included in the original question), which allowed code to be cleaned up from memory.
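For illustration, a minimal sketch of how such a super property could be wired up (the inherits helper and Disposable base below are hypothetical stand-ins; the question's actual framework isn't shown):

// Hypothetical base class providing the cleanup behaviour
function Disposable() {
  this._disposables = [];
}
Disposable.prototype.dispose = function () {
  this._disposables.length = 0; // release references so they can be collected
};

// Hypothetical helper that records the parent constructor on a "super" property
function inherits(Child, Parent) {
  Child.prototype = Object.create(Parent.prototype);
  Child.prototype.constructor = Child;
  Child.super = Parent; // what Project.super.call(this) resolves to
}

function Project() {
  Project.super.call(this); // runs Disposable's constructor on this instance
}
inherits(Project, Disposable);

new Project().dispose(); // cleanup behaviour inherited from Disposable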
Before marking this question as a duplicate... I know what you are thinking: this has been asked countless times, but not exactly.
I know from various sources during my research (including official docs, Angular gurus, and evangelists) that the $onInit block is commonly reserved for initialization work/logic that relies on Angular having finished all of its bindings.
However, variable initialization does not really fit this "work/logic" definition, especially for variables which don't have any Angular logic in them. For that reason, the ES6 constructor seems to be a better fit for variable initialization. The same goes for method bindings that require a lexically bound scope for callbacks, like so:
class MyController {
  constructor() {
    this.myVariableOne = 1;
    this.myVariableTwo = 2;
    this.myVariableThree = 3;
    this.myMethod = this.myMethod.bind(this);
  }

  $onInit() { }

  myMethod() {
    console.log(this.myVariableOne, this.myVariableTwo, this.myVariableThree);
  }
}
And while this looks good at following "the Angular way" of doing things, as far as only using the $onInit block for initialization work/logic, I've also seen plenty of people say that Angular controller class constructors should only be used for dependency injection setup.
So, this has me confused. The constructor seems to be the best suited block for variable initialization and method bindings, and $onInit seems like it doesn't really fit that role, but it really isn't clear what I should use then. Can someone please help me figure out where I should be placing my variable definition and method bindings?
This totally depends on what these properties are. For initial static values (like in the code above) the constructor is the proper place.
$onInit is intended for DOM and data binding initialization code; it is a direct counterpart of the pre-1.5 pre-link function. Other initialization code can also be placed in $onInit for testability reasons.
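For example, a minimal component sketch (the userCard component and its user binding are hypothetical; this assumes Angular 1.6+ defaults, where one-way bindings are only guaranteed to be populated by the time $onInit runs):

angular.module('app').component('userCard', {
  bindings: { user: '<' },
  controller: class UserCardController {
    constructor() {
      // static, non-Angular initialization is safe here
      this.greeting = 'Hello';
    }
    $onInit() {
      // binding-dependent initialization belongs here
      this.title = this.greeting + ', ' + this.user.name;
    }
  }
});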
Considering that there is some instance (not prototype) method that is called on initialization:
constructor() {
  this.method = () => ...;
}

$onInit() {
  this.method();
}
It can be tested like this:
const ctrl = $controller('...');
spyOn(ctrl, 'method').and...;
ctrl.$onInit();
expect(ctrl.method).toHaveBeenCalled();
It wouldn't be possible to spy or mock it if it were called in constructor.
This concern affects non-modular ES5 apps to a greater degree, because their methods are usually defined as this.method = ..., and the controller prototype can't easily be reached because there's no way to import the controller constructor.
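To illustrate the point (a sketch, assuming a controller that is registered inline rather than exported from a module):

// Registered inline, so nothing exports the constructor function and a
// spec cannot patch or spy on a prototype method; the only handle a test
// has is an instance obtained through $controller, as shown above.
angular.module('app').controller('PersonController', function () {
  this.method = function () { /* ... */ };
});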
I agree with your general assessment. I keep my constructors pretty light, but if I am doing things at instantiation that aren't really angular related, I've been putting them into the constructor. I haven't had any issues with them. I just looked at a dozen or so of them and I am basically not doing anything but initializing properties and assigning dependency injections to properties. I only have one controller where it calls any external code at all.
Writing about Angular 1.5 is very sparse. If you haven't already seen this: https://toddmotto.com/rewriting-angular-styleguide-angular-2 I think it's the best style guide out there for "modern AngularJS."
In a video from ng-conf 2015 (Angular 1.3 meets Angular 2.0), the syntax for using ES6 classes as controllers is shown as:
class UnicornHype {
  constructor(unicornHorn, $q) {
    this.$q = $q;
    this.horn = unicornHorn;
  }

  findUnicorn() {
    return this.$q((resolve, reject) => {
      ...
      this.horn.thrust();
      ...
    });
  }
}
I see that the injected dependencies are assigned as instance properties and I'm wondering if that's a good way to do that. Since the controller's dependencies are usually singleton services, shouldn't they be shared by the instances?
The reason they've done it like this is that methods that were previously on $scope (and therefore in the constructor function's body) are now on the object's shared prototype. John Papa's style guide actually assigns them directly to this (though he's not using ES6 classes - but that shouldn't really matter since they're just syntactic sugar on the constructor function prototype stuff). Is that a good idea?
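To make the contrast concrete, a small illustrative sketch (not from the talk; the names are hypothetical):

// Pre-1.5 style: every instance gets its own copy of the function on $scope.
function OldUnicornCtrl($scope, unicornHorn) {
  $scope.findUnicorn = function () { return unicornHorn.thrust(); };
}

// Class style: a single findUnicorn lives on the shared prototype, but it
// needs a per-instance this.horn reference to reach the injected service.
class UnicornHype {
  constructor(unicornHorn) { this.horn = unicornHorn; }
  findUnicorn() { return this.horn.thrust(); }
}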
Another approach would be to keep methods on the prototype but assign the dependencies to local variables (assuming each controller is in its own module file). Something like:
var q, horn;

class UnicornHype {
  constructor(unicornHorn, $q) {
    [q, horn] = [$q, unicornHorn];
  }

  findUnicorn() {
    return q(...);
  }
}
Is this better? If yes, would const actually be better than var here? Does this approach have any drawbacks?
A third method (using WeakMaps) is described here: Writing AngularJS Apps Using ES6. Should I forget everything I said above and do it this way?
I don't really understand why they use WeakMaps.
I quote:
Reason behind choosing WeakMap is, the entries of WeakMap that have objects as keys are removed once the object is garbage collected.
But aren't services long-lived? So why would you need to ensure garbage collection?
In JavaScript all non-primitives are references to the original instance, so the dependencies are always shared anyway. So why would the instance-variable approach not be a good idea?
Anyway, I think the instance-variable approach seems the most future-proof way to go.
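A quick sketch of that point (hypothetical service and controllers, just to show that assigning an injected singleton to this only stores another reference to the same object):

// The singleton the injector hands out
const unicornHorn = { thrust() { /* ... */ } };

class CtrlA { constructor(horn) { this.horn = horn; } }
class CtrlB { constructor(horn) { this.horn = horn; } }

const a = new CtrlA(unicornHorn);
const b = new CtrlB(unicornHorn);

console.log(a.horn === b.horn); // true: both properties point at the same service object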
I'm getting my butt kicked trying to use TypeScript in a functional style with dependencies. Let's say I want to make a module that depends on another module.
If I wasn't using Dependency Injection it would look like this (in node).
var SomeOtherModule = require("SomeOtherModule")

exports.doSomething = function() {
  SomeOtherModule.blah()
}
This is how I do it with Dependency Injection
module.exports = function(SomeOtherModule) {
  function doSomething() {
    SomeOtherModule.blah()
  }

  return {doSomething: doSomething};
}
In TypeScript, if you define a concrete class or module, you can just type the functions as you export them or include them in the class. It's all right next to each other.
But since I can't define a module inside the DI function, the only way to do this that I can see would be to define an interface for the object I'm returning separately, which is annoying because I want to have the type annotations inline with the definitions.
What's a better way to do this?
This will probably give you a good start: http://blorkfish.wordpress.com/2012/10/23/typescript-organizing-your-code-with-amd-modules-and-require-js/
I don't know if this is the best way to set it up. But I got it to work.
I ended up dropping AMD on my project, since I'm also using AngularJS and they step on each other's toes. I did keep using that same DI pattern throughout, so it looks like this in the end.
I'm pretty happy with it. I experimented with using classes instead (you can get really close if you keep your module stateless and have the constructor be the injector function), but I didn't like having to use this for all the dependencies.
Also, classes don't actually buy me anything, because if I were coding to an interface I'd have to define the types twice anyway.
interface IMyService {
  doSomething(): void;
}

module.exports = function(SomeOtherModule): IMyService {
  return {doSomething: doSomething}

  function doSomething() {
    SomeOtherModule.blah()
  }
}
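For comparison, the class-based variant mentioned above might look like this (a sketch using the same hypothetical SomeOtherModule dependency, shown without extra type annotations; the drawback is that every dependency has to be reached through this):

class MyService {
  constructor(someOtherModule) {
    this.someOtherModule = someOtherModule; // the constructor doubles as the injector function
  }

  doSomething() {
    this.someOtherModule.blah(); // dependencies must be accessed via this
  }
}

module.exports = function(SomeOtherModule) {
  return new MyService(SomeOtherModule);
};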
Perhaps this is a bit of a novice jQuery question, but:
Proper jQuery plugins are written inside a closure, so only the methods defining the plugin interface are accessible from the outside. Sometimes (or many times) one may need helper methods that it doesn't make sense to expose as part of the plugin interface (for example because they alter internal state). How do those get unit-tested?
For example, looking at the blockUI plugin, how can the install, remove, and reset methods be unit-tested?
To draw a parallel, in Java I would:
create a BlockUI interface containing the public methods only (by definition)
create a BlockUIImpl class implementing the above interface. This class would contain the install(), remove(), and reset() methods, which could be public or (package) protected
So I would unit-test the Impl, but client programmers would interact with the plugin via the BlockUI interface.
The same applies here as with any other language and testing privates: To test private methods, you should exercise them via the public interface. In other words, by calling your public methods, the private methods get tested in the process because the public methods rely on the privates.
Generally private methods are not tested separately from the public interface - the entire point is that they are implementation details, and tests should generally not know too much about the specifics of the implementation.
Code written inside a function in JavaScript, or closure as you called it, is not necessarily isolated from the outside of that function.
It is useful to know that functions have visibility of the scope in which they are defined. Any closure you create carries the scope, and therefore functions, of the code that contains it.
This simple example with a jQuery plugin and an artificial "namespace" might serve to prove this assumption:
// Initialise this only when running tests
my_public_test_namespace = function(){};

jQuery.fn.makeItBlue = function() {
  makeItBlue(this);

  function makeItBlue(object) {
    object.css('color', 'blue');
  }

  if (typeof my_public_test_namespace != "undefined") {
    my_public_test_namespace.testHarness = function() {
      return {
        _makeItBluePrivateFn: makeItBlue
      }
    };
  }
};

$("#myElement").makeItBlue(); // make something blue, initialise plugin
console.debug(my_public_test_namespace.testHarness()._makeItBluePrivateFn);
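In a spec, the exposed harness can then be exercised directly. A minimal QUnit sketch (the fixture element and the expected rgb() string are assumptions):

QUnit.test("makeItBlue turns an element blue", function(assert) {
  var $el = $("<span>hello</span>").appendTo("#qunit-fixture");
  $el.makeItBlue(); // initialise the plugin so the harness gets registered

  // reach the "private" helper through the test namespace
  my_public_test_namespace.testHarness()._makeItBluePrivateFn($el);

  // most browsers report the computed colour as an rgb() string
  assert.equal($el.css("color"), "rgb(0, 0, 255)");
});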
But don't forget you shouldn't really test privates. ;)
I came up with the same question, and after searching and finding answers that don't really apply, here's what I ended up doing to solve a similar problem.
Problem: "I have a widget with behavior I want to test to ensure it's working as expected. Some of the methods are called internally because they handle internal behavior, and exposing them as public does not make sense because they won't be called from outside; but testing only the public methods means the internals of the widget won't be tested. So what can I do?"
Solution: "Create a test widget that exposes the methods you are interested in testing and use it in QUnit. Here is the example:"
// Namespaces to avoid having conflicts with other things defined similarly
var formeditortest = formeditortest || {};

// Widget that inherits from the container I want to test
$.widget("app.testcontainer", $.app.container, {
  executeDrop: function(drop, helper) {
    var self = this;
    self._executeDrop(drop, helper);
  }
});

// Test cases
formeditortest.testDropSimple = function(assert) {
  var container = $("<div />");
  container.testcontainer();
  container.testcontainer("drop", 0, 3);
  assert.equal(true, $(innerDiv.children()[0]).hasClass("droparea"));
};

QUnit.test(name, function(assert) {
  formeditortest.testDropSimple(assert);
  formeditortest.testDropBottom(assert);
});
Using this method, the inherited testcontainer can include whatever preparation is required to test elements, and then QUnit handles the test. This solved my problem; I hope it works for someone else who is having trouble approaching these kinds of tests.
Critiques? You're welcome to comment; I want to improve this if I'm doing something silly!