So I have an existing application which uses IIFEs extensively in the browser. I'm trying to introduce some unit testing into the code and keep to the IIFE pattern for new updates to the code base. Except I'm having trouble even writing a test which gives me a handle to the code. For example, I see this type of logic all over the code base:
var Router = (function (router) {
    router.routeUser = function (user) {
        console.log("I'm in! --> " + user);
    };
    return router;
})(Router || {});
Then the JS file is included in a script tag in the markup:
<script src="js/RouteUser.js"></script>
and called like this in the production code:
Router.routeUser(myUser)
So my question is, how do I write a test which tests the method routeUser? I've tried this in my Mocha Test:
var router = require('../../main/resources/public/js/RouteUser');
suite('Route User Tests', function () {
    test('Route The User', function () {
        if (!router)
            throw new Error("failed!");
        else {
            router.routeUser("Me");
        }
    });
});
But I get an exception:
TypeError: router.routeUser is not a function
at Context.<anonymous> (src\test\js\RouteUser.test.js:8:20)
Then I tried returning the method, which gives the same error:
var Router = (function (router) {
    return {
        routeUser: function (user) {
            console.log("I'm in! --> " + user);
        }
    };
})(Router || {});
Can anyone point me in the right direction here?
It sounds like...
- you have a codebase of scripts that are only used in the browser context (the usage of IIFEs suggests this)
- you'd like to introduce browserless unit tests (Jest, Mocha?) using Node.js (good idea!)
- but you probably don't want to migrate the whole codebase to a different coding style at this moment in time (which can be a lot of work depending on the size of your codebase)
Given these assumptions, the problem is that you want your code to...
- act as a script when used in production (setting a global window.Router etc.)
- act as a module when used in unit tests, so that you can require() it in the tests
UMD
UMD, or universal module definition, used to be a common way to write code so that it can work in multiple environments. Interesting approach, but very cumbersome, and I like to think UMD is a thing of the past nowadays...
I'll leave it here for completeness.
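For completeness, here is a rough sketch of what a UMD-style wrapper around the Router could look like (an illustration of the idea rather than the full canonical UMD boilerplate):

(function (root, factory) {
    if (typeof module === "object" && module.exports) {
        // Node / CommonJS (e.g. Mocha tests)
        module.exports = factory({});
    } else {
        // browser: attach to the global object
        root.Router = factory(root.Router || {});
    }
})(typeof self !== "undefined" ? self : this, function (router) {
    router.routeUser = function (user) {
        console.log("I'm in! --> " + user);
    };
    return router;
});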
Just take UMD's idea
If the only thing you want for now is to make a specific script act as a module too, so that it's importable in tests, you could make a small tweak:
var Router = (function (router) {
    router.routeUser = function (user) {
        console.log("I'm in! --> " + user);
    };
    if (typeof exports === "object") {
        module.exports = router;
        // now the Mocha tests can import it!
    }
    return router;
})(Router || {});
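With that in place, the original test should be able to get hold of the module via require (assuming the relative path matches your project layout):

var router = require('../../main/resources/public/js/RouteUser');

suite('Route User Tests', function () {
    test('Route The User', function () {
        router.routeUser("Me"); // no longer a TypeError
    });
});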
Long term solution
In the long run, you can get a lot of benefits by rewriting all your code to use ONLY modules and letting a tool like webpack package it for you. The above idea is a small step in that direction that gives you one specific benefit (testability). But it is not a long-term solution, and you'll have some trouble handling dependencies (what if your Router expects some globals to be in place?).
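As a rough sketch of where that rewrite leads (the file names and bundling setup here are assumptions, not part of your current codebase), the Router would eventually become a plain module:

// router.js
function routeUser(user) {
    console.log("I'm in! --> " + user);
}

module.exports = { routeUser: routeUser };

// app.js, bundled for the browser with webpack or similar
var Router = require('./router');
Router.routeUser(myUser); // myUser being whatever user object you have in scope

The tests then require('./router') directly, with no globals involved.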
If you intend to run your Mocha tests in the browser, you do not have to alter your existing code.
Let's walk through the IIFE pattern, because based on your code, I think you may misunderstand how it works. The basic shape is this:
var thing = (function () {
    return 1;
})();

console.log(thing); // '1'
It's a var declaration setting thing equal to the value on the right side of the equals sign. On the right, the first set of parens wraps a function. Then, a second set of parens sits right next to it, at the end. The second set invokes the function expression contained in the first set of parens. That means the return value of the function will be the right-side value in the var statement. So thing equals 1.
In your case, that means that the outer Router variable is set equal to the router variable returned by your function. That means you can access it as Router in your tests, after including the script in the DOM:
suite('Route User Tests', function () {
    test('Route The User', function () {
        if (!Router) // <- Notice the capital 'R'
            throw new Error("failed!");
        else {
            Router.routeUser("Me"); // <- capital 'R'
        }
    });
});
If you intend to run your tests with node, see Kos's answer.
Update
I've completely rewritten this question based on subsequent investigation. Hopefully this will generate some answers.
I'm new to Postman, and trying to figure out how to most efficiently build a collection of tests for a REST application. There are a bunch of utility functions that I'd like to have accessible in each of my test scripts, but cut-and-pasting them into each test script seems like a horrible solution.
In looking at the various "scopes" in which Postman allows you to squirrel away data (e.g. globals, environment, collection), it seems that all of these are merely string/number stores. In other words, they store things properly only if you stringify the values first; they don't actually allow you to store proper objects or functions. This makes sense, since each script seems to run as a separate execution, so the idea of sharing pointers to things between different scripts doesn't apply.
It seems like the accepted way to share utility functions is to toString() the function in the defining script (e.g. the Collection Pre-Req script), and then eval() that stringified version in the test script. For instance:
Collection Pre-Req Script
const utilFunc = () => { console.log("I am a utility function"); };
pm.environment.set("utilFunc",utilFunc.toString() );
Test Script
const utilFunc = eval(pm.environment.get("utilFunc"));
utilFunc();
The test script will successfully print "I am a utility function" to the console.
I've seen people do more complicated things where, if they have more than one utility function, they put them into an object like utils.func1 and utils.func2 and have the overall function return the utils object, so the test script still only needs a single line at the top to import the whole thing, as in the sketch below.
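Roughly like this (a sketch; the factory and function names are just illustrative):

// Collection Pre-Req Script
const makeUtils = () => {
    const utils = {};
    utils.func1 = (msg) => console.log("func1: " + msg);
    utils.func2 = (msg) => console.log("func2: " + msg);
    return utils;
};
pm.environment.set("makeUtils", makeUtils.toString());

// Test Script
const utils = eval(pm.environment.get("makeUtils"))();
utils.func1("hello");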
The problem I'm running into is scoping: since the literal text of the function is executed in the Test Script, everything that the utility function needs must be in that code, or must otherwise exist at eval() time in the Test Script. For instance, if I do:
Collection Pre-Req Script
const baseUtilFunc = (foo) => { console.log(foo); };
const utilFunc1 = (param) => { baseUtilFunc("One: " + param); };
const utilFunc2 = (param) => { baseUtilFunc("Two: " + param); };
pm.environment.set("utilFunc1",utilFunc1.toString() );
pm.environment.set("utilFunc2",utilFunc2.toString() );
Test Script
const utilFunc1 = eval(pm.environment.get("utilFunc1"));
const utilFunc2 = eval(pm.environment.get("utilFunc2"));
utilFunc1("Test");
This fails because, in the Test Script, baseUtilFunc does not exist. Obviously, in this example, it'd be easy to fix. But in a more complicated world where the utility functions I expect to use in my Test Scripts are themselves built on top of underlying helper functions, it gets more difficult.
So what is the right way to handle this issue? Do people just cram all the relevant logic into one big function that they then call toString() on? Do they embed an extraction-from-environment-and-then-eval in each util function within its definition, so that it works in the Test Script context? Do they export each individual method?
There are different ways to do it. The way I did it recently for one of my projects was to create a Git repository and then use the raw URL to fetch the code. I have a sample at the repo below:
https://github.com/tarunlalwani/postman-utils
To load the file you will need to add the code below at the collection level:
if (typeof pmutil == "undefined") {
var url = "https://raw.githubusercontent.com/tarunlalwani/postman-utils/master/pmutils.js";
if (pm.globals.has("pmutiljs"))
eval(pm.globals.get("pmutiljs"))
else {
console.log("pmutil not found. loading from " + url);
pm.sendRequest(url, function (err, res) {
eval(res.text());
pm.globals.set('pmutiljs', res.text())
});
}
}
Then, later in your Tests or Pre-request scripts, run the line below to load it:
eval(pm.globals.get("pmutiljs"))
And then you can use the functions easily in your tests.
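For example (the helper name below is hypothetical; it depends on what your pmutils.js actually defines):

// Test script
eval(pm.globals.get("pmutiljs")); // bring the shared helpers into scope

// 'assertStatusOk' is a hypothetical helper defined in pmutils.js
pm.test("status is OK", function () {
    assertStatusOk(pm.response);
});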
I have my code:
var User = function() {
...
}
and the test code using IIFE:
(function () { // test new user
    var user = new User();
    if (typeof user === 'undefined') {
        console.log("Error: user undefined");
    }
    ...
}());
both in the same js file. Works great! But as the program grows, this is becoming too unwieldy for me to manage, as I have a piece of test code for every piece of business logic.
I've been taught to keep all my JS in the same file (minified is good) in production, but is there a practical best way to keep my test code in a separate file during development?
I was thinking I could use a shell script to append the test code to the production code when I want to run the tests, but I'd prefer a cross-platform solution.
I don't want or need a framework solution, I want to keep it light -- does node have anything built-in for this sort of thing?
Node has two constructs for this. First:
module.exports = name_of_module;
which exports something from a module, for example a function, an object, or anything similar. And the second:
var module_name = require('Path/to/module');
to import it from another file. If you want to export an IIFE, assign its result to a variable and set module.exports to that variable.
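A minimal sketch of that, assuming files named user.js and user.test.js:

// user.js
var User = (function () {
    function User(name) {
        this.name = name;
    }
    return User;
}());

module.exports = User;

// user.test.js
var User = require('./user.js');
var u = new User("Alice");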
What's the difference between modularisation of JavaScript code (with Browserify, for example) and dependency injection?
Are they synonyms? Do the two go together? Or am I missing some point?
Modularisation refers to breaking code into individual, independent "packages".
Dependency injection refers to not hardcoding references to other modules.
As a practical example, you can write modules which are not using dependency injection:
import { Foo } from 'foo';

export function Bar() {
    return Foo.baz();
}
Here you have two modules, but this module imports a specific, hardcoded module.
The same module written using dependency injection:
export function Bar(foo) {
    return foo.baz();
}
Then somebody else can use this as:
import { Foo } from 'foo';
import { Bar } from 'bar';
Bar(Foo());
You inject the Foo dependency at call time, instead of hardcoding the dependency.
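One practical payoff is testability: in a unit test you can inject a stand-in instead of the real Foo (a small sketch):

// in a test, no real Foo is needed
const fakeFoo = { baz: () => 'stubbed' };
Bar(fakeFoo); // returns 'stubbed'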
You can refer to this article:
Modules are code fragments that implement certain functionality and are written by using specific techniques. There is no out-of-the-box modularization scheme in the JavaScript language. The upcoming ECMAScript 6 specification tends to resolve this by introducing the module concept in the JavaScript language itself. This is the future.
and Dependency injection in JavaScript
The goal
Let's say that we have two modules. The first one is a service which makes Ajax requests and the second one is a router.
var service = function () {
    return { name: 'Service' };
};

var router = function () {
    return { name: 'Router' };
};
We have another function which needs these modules.
var doSomething = function (other) {
    var s = service();
    var r = router();
};
And to make things a little bit more interesting, the function needs to accept one more parameter. Sure, we could use the above code, but that's not really flexible. What if we want to use ServiceXML or ServiceJSON? Or what if we want to mock some of the modules for testing purposes? We can't just edit the body of the function. The first thing which we all come up with is to pass the dependencies as parameters to the function, i.e.:
var doSomething = function (service, router, other) {
    var s = service();
    var r = router();
};
By doing this we are passing the exact implementation of the module which we want. However, this brings a new problem. Imagine if we have doSomething all over our code. What will happen if we need a third dependency? We can't edit all the function's calls. So we need an instrument which will do that for us. That's what dependency injectors are trying to solve. Let's write down a few goals which we want to achieve:
- we should be able to register dependencies
- the injector should accept a function and should return a function which somehow gets the needed resources
- we should not write a lot, we need short and nice syntax
- the injector should keep the scope of the passed function
- the passed function should be able to accept custom arguments, not only the described dependencies
A nice list, isn't it? Let's dive in.
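As a rough illustration of those goals, a toy injector could look something like this (just a sketch of the idea, not the article's actual implementation; it reuses the service and router functions defined above):

var injector = {
    dependencies: {},
    register: function (name, dep) {
        this.dependencies[name] = dep;
    },
    resolve: function (deps, func, scope) {
        var self = this;
        return function () {
            // prepend the registered dependencies, then pass through custom args
            var args = deps.map(function (name) {
                return self.dependencies[name];
            }).concat(Array.prototype.slice.call(arguments));
            return func.apply(scope || {}, args);
        };
    }
};

injector.register('service', service);
injector.register('router', router);

var doSomething = injector.resolve(['service', 'router'], function (service, router, other) {
    var s = service();
    var r = router();
    console.log(s.name, r.name, other); // Service Router extra
});

doSomething('extra');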
I am currently setting up some mocha tests using Node and in general they work. I now came across a problem I am not able to resolve.
I have a JS file containing the following: MyClass.js
(General CoffeeScript output for class MyClass + constructor: ->)
EDIT: This is browser code, I just want to use Node to test it. (Is that even desirable?)
(function () {
    window.MyClass = (function () {
        function MyClass() {
            // Do something cool here
        }
        return MyClass;
    })();
}).call(this);
I now require MyClass.js in my test file. Once I run it, it immediately throws an error.
Testfile:
var myclass = require('MyClass.js');
...
describe('MyClass', function() { ... });
Error:
ReferenceError: window is not defined.
So far, I understand why this is happening, window does not exist in Node. But I cannot come up with a solution. I actually do not need the real window object specifically, so I thought mocking it would be enough. But it is not...
var window = {},
myclass = require('myclass.js');
...
describe('MyClass', function() { ... });
This command is also not helping: $ mocha --globals window
I still end up with the same error.
Any idea is much appreciated!
You don't actually want the window object; what you want is the global object. Here is some code that can get it in the browser (in which case it will be the same as 'window') or in Node (in which case it will be the same as 'global').
var global = Function('return this')();
Then set things on that rather than on 'window'.
Note: there are other ways of getting the global object, but this has the benefit that it will work inside strict mode code too.
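Applied to the MyClass example, that might look roughly like this (a sketch of the idea):

(function () {
    // resolves to window in the browser and to global in Node,
    // even inside strict mode code
    var globalObject = Function('return this')();

    globalObject.MyClass = (function () {
        function MyClass() {
            // Do something cool here
        }
        return MyClass;
    })();
}).call(this);

In a Node test you can then require the file and read MyClass off global.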
With the following code you can use your class-like object in a web-browser environment and in Node.js without modification. (Sorry, I don't know how to translate that to CoffeeScript.)
(function (exports) {
    var MyClass = (function () {
        function MyClass() {
            // Do something cool here
        }
        return MyClass;
    })();

    exports(MyClass);
})(function (exported) {
    if (typeof module !== 'undefined' && module.exports) {
        module.exports = exported;
    } else if (typeof window !== 'undefined') {
        window.MyClass = exported;
    } else {
        throw new Error('unknown environment');
    }
});
As you already have a scope which doesn't pollute the global namespace, you could reduce it to:
(function (exports) {
    function MyClass() {
        // Do something cool here
    }

    exports(MyClass);
})(function (exported) {
    // see above
});
I'm not an expert in AMD, require.js and other module loaders, but I think it should be easy to extend this pattern to support other environments as well.
Edit
In a comment you said that the above solution is not working when translated back to CoffeeScript. Therefore, I suggest another solution. I haven't tried it but maybe this could be a way to solve your problem:
global.window = {}; // <-- should be visible in your myclass.js

require('myclass.js');
var MyClass = global.window.MyClass;

describe('MyClass', function () {
    var my = new MyClass();
    ...
});
It's a horrible piece of code, but if it works, maybe for testing purposes it's sufficient.
Due to the module loading behaviour of node.js this only works if your require('myclass.js') is the first require of this file in the node process. But in case of testing with Mocha this should be true.
1) What you are looking for is module.exports to expose things in Node:
http://openmymind.net/2012/2/3/Node-Require-and-Exports/
2) Also, you don't need an IIFE in Node; you can drop the (function() {...
3) You can always look at popular Node repos on GitHub to see examples. Look at the Mocha code since you're using it; you'll learn a thing or two.
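Taken together, points 1) and 2) would turn MyClass.js into something like this (a sketch, assuming you are happy to drop the browser global entirely):

// MyClass.js
function MyClass() {
    // Do something cool here
}

module.exports = MyClass;

// MyClass.test.js
var MyClass = require('./MyClass.js');

describe('MyClass', function () {
    it('can be constructed', function () {
        var instance = new MyClass();
    });
});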
Something like jsdom is lighter than PhantomJS and yet provides quite a few things you need to test code that expects to be running with a proper window. I've used it with great success to test code that navigates up and down the DOM tree.
You ask:
This is browser code, I just want to use Node to test it. (Is that even desirable?)
It is very desirable. There's a point at which a solution like jsdom won't cut it, but as long as your code is within the limits of what jsdom handles, you might as well use it and keep the cost of launching a test environment to the minimum needed.
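A minimal Mocha setup with jsdom might look roughly like this (the exact API depends on the jsdom version you install; this assumes jsdom 10 or later, and the file path is just an example):

// test/setup.js
const { JSDOM } = require('jsdom');

const dom = new JSDOM('<!DOCTYPE html><html><body></body></html>');
global.window = dom.window;
global.document = dom.window.document;

// browser code that attaches things to window can now be required
require('../js/MyClass.js');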
#hgoebl: As I'm not the OP, I cannot add his original CoffeeScript code, but here is my example:
pubsub.coffee:
window.PubSub = window.PubSub || {}
PubSub.subscribe = ( subject, callback )->
now the test:
assert = require "assert"
pubsub = require './pubsub.coffee'
describe "pubsub.http interface", ->
it "should perform a http request", ->
PubSub.subscribe 1, 2
What works for me so far is:
window.PubSub = window.PubSub || {}
window.PubSub.subscribe = ( subject, callback )->
and the test:
`window = {}`
assert = require "assert"
pubsub = require './pubsub.coffee'

describe "pubsub.http interface", ->
  it "should perform a http request", ->
    window.PubSub.subscribe 1, 2
The main drawback of this solution is that I have to explicitly mention the window object in both the implementation and the test. User code executed in a browser should be able to omit it.
I have now come up with another solution:
window = window || exports
window.PubSub = window.PubSub || {}
PubSub = PubSub || window.PubSub
PubSub.subscribe = ( subject, callback )->
and then in the test, simply requiring the PubSub namespace:
PubSub = require( './pubsub.coffee' ).PubSub
And finally, the solution from kybernetikos applied looks like this:
global = `Function('return this')()`
global.PubSub = global.PubSub || {}
PubSub.subscribe = ( subject, callback )->
As the PubSub namespace is now in the global namespace, only a simple require is needed in the file that contains the mocha tests:
require( './pubsub.coffee' )
After a brief romance with the revealing module pattern, I've come to realise a setback when it comes to unit-testing modules. I cannot, however, decide whether it is my approach to testing a module that is at fault or whether there is some form of work-around.
Consider the following code:
var myWonderfulModule = (function () {
    function publicMethodA(condition) {
        if (condition === 'b') {
            publicMethodB();
        }
    }

    function publicMethodB() {
        // ...
    }

    return {
        publicMethodA: publicMethodA,
        publicMethodB: publicMethodB
    };
}());
If I wanted to test (using Jasmine) the various paths leading through publicMethodA to publicMethodB, I might write a small test like so:
it("should make a call to publicMethodB when condition is 'b'", function() {
spyOn(myWonderfulModule , 'publicMethodB');
myWonderfulModule.publicMethodA('b');
expect(myWonderfulModule.publicMethodB).toHaveBeenCalled();
});
If I understand correctly, there's a copy of publicMethodB within the closure that cannot be changed. Even if I change myWonderfulModule.publicMethodB afterwards:
myWonderfulModule.publicMethodB = undefined;
calling myWonderfulModule.publicMethodA will still run the original version of B.
The example above is of course simplified but there are plenty of scenarios I can think of where it would be convenient to unit test conditional paths through a method.
Is this a limitation of the revealing module pattern or simply a misuse of unit testing? If not, what work-arounds are available to me? I'm considering moving to something like RequireJS or reverting back to non-modular code.
Any advice appreciated!
You can't test the internal methods of a closure, and you also shouldn't spy on them. Think of your module as a black box: you put something in and you get something out. All you should test is that what comes out of your module is what you expect.
Spying on methods inside your module doesn't make much sense. Think about it: you spy on a method and the test passes. Now you change the functionality so that it creates a bug; the test still passes, because the function is still called, but you never notice the bug. If you just test what comes out, you don't need to spy on internal methods, because the fact that they are called is implicit when the outcome of the module is what you expect.
In your case, nothing goes in and nothing comes out. That doesn't make much sense on its own, so I assume your module interacts with the DOM or makes an Ajax call. Those are things that you can test (DOM) or should spy on (Ajax).
You should also make yourself familiar with Inversion of Control and Dependency Injection. These are patterns that will make your modules much easier to test, as in the sketch below.
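For instance, turning the module into a small factory that receives its collaborators makes it easy to test with fakes (a sketch; realAjaxService and the '/route' endpoint are made up for illustration):

// a factory instead of a singleton, so tests can inject fakes
function createWonderfulModule(ajax) {
    function publicMethodA(condition) {
        if (condition === 'b') {
            ajax.post('/route', { condition: condition });
        }
    }

    return {
        publicMethodA: publicMethodA
    };
}

// production
var myWonderfulModule = createWonderfulModule(realAjaxService);

// in a Jasmine test
var fakeAjax = jasmine.createSpyObj('ajax', ['post']);
var testModule = createWonderfulModule(fakeAjax);
testModule.publicMethodA('b');
expect(fakeAjax.post).toHaveBeenCalled();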
If you use the keyword "this" when you call publicMethodB() from publicMethodA(), it will work. For example:
var myWonderfulModule = (function () {
    function publicMethodA(condition) {
        if (condition === 'b') {
            this.publicMethodB();
        }
    }

    function publicMethodB() {
        // ...
    }

    return {
        publicMethodA: publicMethodA,
        publicMethodB: publicMethodB
    };
}());