How to mock an external dependency method's callback params in Node.js? - javascript

Let's say that I have the following:
lib/modules/module1.js
var m2 = require('module2');
module.exports = function(){
    return {
        // ...
        get: function(cb){
            m2.someMethod(params, function(error, data){
                if(error){
                    // return here so cb isn't called a second time below
                    return cb(error);
                }
                cb(null, data);
            });
        },
        // ...
    };
};
Now let's say that I have a set of tests in another dir, e.g. tests/testModule1.js. From this file, I create an instance of module1 to perform some tests.
I would like to mock the objects passed by m2.someMethod to its callback function (not the cb function), from the file testModule1.js.
I've looked into Sinon.js, but I couldn't figure out a way to do this. Actually, is that even possible?
Thanks.

You could use something like proxyquire, but I'm not a fan of modifying the built-in require.
Personally, I would suggest rewriting your code to use dependency injection:
module.exports = function(m2){
    return {
        // ...
        get: function(cb){
            m2.someMethod(params, function(error, data){
                if(error){
                    // return here so cb isn't called a second time below
                    return cb(error);
                }
                cb(null, data);
            });
        },
        // ...
    };
};
Note that I moved m2 to be a parameter in your exported function. Then somewhere else (app, or main, or whatever), you could do this:
app.js
var module1Creator = require('module1');
var module2 = require('module2');
var module1 = module1Creator(module2);
Then when you need to test it...
testModule1.js
var module1Creator = require('module1');
// inject the "fake" version containing test data, spies, etc
var module2Mocked = require('module2mock');
var module1 = module1Creator(module2Mocked);
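For completeness, module2mock might look something like this (a minimal sketch; the file path and test data are assumptions):
tests/module2mock.js
module.exports = {
    someMethod: function(params, callback){
        // hand the callback whatever test data module1 should see
        callback(null, { fake: 'data' });
    }
};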

I would normally agree about changing the design and, as suggested by #dvlsg, DI would be my choice as well.
However, the project I'm working on is already on the move and has a considerable size. Making this change would imply a huge manpower cost that, in this case specifically, might not be worth it.
As a solution, I've realized that once you do a require('someModule'), someModule is loaded and stored as a singleton in some sort of global cache (I don't fully understand this mechanism, but I'll look into it), and no matter which file you require('someModule') from, you receive the cached version.
So, if in lib/modules/module1.js I do require('module2'), module2 is loaded and stored in this cache, and I can require('module2') and mock it in tests/testModule1.js. The mock will be in effect when get() from lib/modules/module1.js is called.
For that, I used Sinon.js to create the mocks in the test files.
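For illustration, the stubbing looked something like this (a minimal sketch; the paths and fakeData are assumptions):
tests/testModule1.js
var sinon = require('sinon');
var m2 = require('module2'); // the same cached instance that module1 received
var module1 = require('../lib/modules/module1')();

var fakeData = { some: 'data' }; // hypothetical test data
// make someMethod invoke its callback with (null, fakeData)
var stub = sinon.stub(m2, 'someMethod').yields(null, fakeData);

module1.get(function(error, data){
    // data is fakeData here: the stub controlled the callback params
    stub.restore(); // put the real someMethod back after the test
});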
The procedure above actually solved my problem, in a way that I didn't have to change the whole design, and I was able to do the tests. That's why I'm posting this as an answer. However, I'll not set this as the correct answer because, as I said before, I don't fully understand this mechanism, and changing required modules is not a good practice.
I would like to see what other devs have to say about this and, if the discussion leads to acceptance, I'll ultimately set this as correct.

Postman: how to set up library of (semi-)complicated reusable scripts for collection

Update
I've completely rewritten this question based on subsequent investigation. Hopefully this will generate some answers.
I'm new to Postman, and trying to figure out how to most efficiently build a collection of tests for a REST application. There are a bunch of utility functions that I'd like to have accessible in each of my test scripts, but cutting and pasting them into each test script seems like a horrible solution.
In looking at the various "scopes" that Postman allows you to squirrel away data in (e.g. globals, environment, collection), it seems that all of these are merely string/number stores. In other words, they properly store data if you can/do stringify it, but they don't actually allow you to store proper objects or functions. This makes sense, since each script seems to be run as a separate execution, so the idea of sharing pointers between different scripts doesn't make sense.
It seems like the accepted way to share utility functions is to toString() the function in the defining script (e.g. the Collection Pre-Req script), and then eval() that stringified version in the test script. For instance:
Collection Pre-Req Script
const utilFunc = () => { console.log("I am a utility function"); };
pm.environment.set("utilFunc",utilFunc.toString() );
Test Script
const utilFunc = eval(pm.environment.get("utilFunc"));
utilFunc();
The test script will successfully print "I am a utility function" to the console.
I've seen people do more complicated things where, if they have more than one utility function, they put them into an object like utils.func1 and utils.func2, and have the overall function return the utils object, so the test script still only has to have a single line at the top importing the whole thing.
The problem I'm running into is scoping - since the literal text of the function is executed in the Test Script, everything the utility function needs must be contained in that code, or must otherwise exist at eval() time in the Test Script. For instance, if I do:
Collection Pre-Req Script
const baseUtilFunc = (foo) => { console.log(foo); };
const utilFunc1 = (param) => { baseUtilFunc("One: " + param); };
const utilFunc2 = (param) => { baseUtilFunc("Two: " + param); };
pm.environment.set("utilFunc1",utilFunc1.toString() );
pm.environment.set("utilFunc2",utilFunc2.toString() );
Test Script
const utilFunc1 = eval(pm.environment.get("utilFunc1"));
const utilFunc2 = eval(pm.environment.get("utilFunc2"));
utilFunc1("Test");
This fails because, in the Test Script, baseUtilFunc does not exist. Obviously, in this example, it'd be easy to fix. But in a more complicated world where the utility functions I use in my Test Scripts are themselves built on top of underlying helper functions, it gets more difficult.
So what is the right way to handle this issue? Do people just cram all the relevant logic into one big function that they then call toString() on? Do they embed an extraction-from-environment-and-then-eval step in each util function's definition, so that it works in the Test Script context? Do they export each individual method?
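For concreteness, the cram-everything-into-one-function option would look something like this (a sketch; makeUtils is a hypothetical name):
Collection Pre-Req Script
const makeUtils = () => {
    // every helper lives inside this one closure, so nothing is missing at eval() time
    const baseUtilFunc = (foo) => { console.log(foo); };
    return {
        utilFunc1: (param) => baseUtilFunc("One: " + param),
        utilFunc2: (param) => baseUtilFunc("Two: " + param)
    };
};
pm.environment.set("makeUtils", makeUtils.toString());
Test Script
const utils = eval(pm.environment.get("makeUtils"))();
utils.utilFunc1("Test"); // works: baseUtilFunc travels along inside the closure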
There are different ways to do it. The way I did it recently for one of my projects was to create a Git repository and fetch the code via its raw URL. I have a sample created at the repo below:
https://github.com/tarunlalwani/postman-utils
To load the file, you will need to add the below code at the collection level:
if (typeof pmutil == "undefined") {
    var url = "https://raw.githubusercontent.com/tarunlalwani/postman-utils/master/pmutils.js";
    if (pm.globals.has("pmutiljs")) {
        eval(pm.globals.get("pmutiljs"));
    } else {
        console.log("pmutil not found. loading from " + url);
        pm.sendRequest(url, function (err, res) {
            eval(res.text());
            pm.globals.set("pmutiljs", res.text());
        });
    }
}
Then, in the Tests or Pre-Request scripts, run the below line of code to load it:
eval(pm.globals.get("pmutiljs"))
And then you can use the functions easily in your tests.

Best approach to passing variables between multi-file Node.js modules?

I have a Node.js module that I have kept as a single file up to this point. It's getting rather large, though, and has a lot of functionality in it that might be better separated into other modules. For example, separating out logging initialization and functionality into its own module.
My module has a lot of (I want to say "global", but not really) top-level variables that lots of different functions access, use, and modify. If I separate out functionality into separate files/modules and require them into my primary module, what is the proper approach to passing those variables between the modules?
For example, with everything in one module/file, it's easy to do this:
// everything in one file: the top-level logger is visible to all functions
const logger = { info: (message) => { /* ...write to the log... */ } };
const makeRequestHandler = (url, filepath) => {
    // ....
    logger.info('some message here');
    // ....
};
So it's pretty easy to access top-level systems like the logger. But, if I decided to split my logger and makeRequestHandler into their own modules/files, how would I handle this?
let logger = require('./mylogger') // Custom module
let makeRequest = require('./makerequest') // Another custom module
makeRequest.handler(url, filepath, logger)
This would work, but it doesn't seem elegant or optimal. It would get even weirder if I had a lot of different variables that I needed to pass in:
makeRequest.handler(url, filepath, logger, profiler, reportingBuffer, compressionHandler)
I've also considered passing stuff into the modules when requiring:
let makeRequest = require('./makeRequest')(logger)
or better yet:
let makeRequest = require('./makeRequest')(this) // I can access all variables made in my primary/top-level module
Is there an approach here that is more proper and better/easier to maintain? Is the last one the best approach?
What about the global locator pattern, or the service locator/service provider pattern pointed out in the comments? You can have something like a service registry and include these services in any module you want to use them in.
I am not sure it is the best solution of all, but it is easy to implement and feels like a neater solution than passing the this context around between modules.
//logger.js
module.exports = class Logger {
    info(message) { /* ...write to the log... */ }
};
Now, the app file is where you can initialize the logger and other service instances and register them in the global locator:
let Logger = require('./mylogger'); // Custom module
function init() {
    // init the logger and register it in the global locator
    global.logger = new Logger();
    ...
}
And this is how you can use it in the code that makes the request:
let logger = global.logger;
const makeRequestHandler = (url, filepath) => {
    // ....
    logger.info('some message here');
    // ....
};
What I feel is the problem with these solutions:
//Solution 1: As you pointed out yourself, this can get messy as the number of parameters increases, and it is not very readable or understandable.
let logger = require('./mylogger')
let makeRequest = require('./makerequest')
makeRequest.handler(url, filepath, logger)
//Solution 2: Passing around the `this` context is never a good idea, for the sake of scope isolation and keeping sensitive data independent.
let makeRequest = require('./makeRequest')(this)
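If you would rather not touch global at all, the same idea works as a small registry module that everything requires explicitly. A minimal sketch (services.js and its register/get API are assumptions, not an existing library):
// services.js - a tiny hand-rolled service registry
const services = {};
module.exports = {
    register(name, instance) { services[name] = instance; },
    get(name) {
        if (!(name in services)) throw new Error('Service not registered: ' + name);
        return services[name];
    }
};

// app.js - register concrete instances once at startup
const services = require('./services');
const Logger = require('./mylogger');
services.register('logger', new Logger());

// makerequest.js - look the service up where it is needed
const logger = require('./services').get('logger');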
Note: this article explains some aspects of this solution in detail, for your consideration. There are also some npm modules which provide these features, like Service Locator. HTH

Difference between require('module')() and const mod = require('module') mod() in node/express

I have two files: server.js and db.js
server.js looks as such:
...
const app = express();
app.use('/db', db());
app.listen(3000, () => {
    console.log('Server started on port 3000');
});
...
and db.js as such:
...
function init() {
    const db = require('express-pouchdb')(PouchDB, {
        mode: 'minimumForPouchDB'
    });
    return db;
}
...
This works just fine, and I am able to reach the pouchdb http-api from my frontend. But before, I had const PouchDBExpress = require('pouchdb-express'); at the top of db.js, and the first line in init() looked like this: const db = PouchDBExpress(PouchDB, {. This gave an error in one of the internal files in pouchdb saying cannot set property query on req which only has getters (paraphrasing).
So this made me copy the examples from pouchdb-server's GitHub examples, which require and invoke express-pouchdb directly, and everything worked fine. Is there an explanation for this? I'm glad it works now, but I'm sort of confused as to what could cause this.
The only difference between:
require('module')()
and
const mod = require('module');
mod();
is that in the second case, you retain a reference to the module exports object (perhaps for other uses) whereas in the first one you do not.
Both cases load the module and then call the exported object as a function. But, if the module export has other properties or other methods that you need access to then, obviously, you need to retain a reference to it as in the second option.
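As a quick illustration (a sketch; the module and its properties are made up):
// mod.js - a hypothetical module whose export is callable and has extra properties
module.exports = function () { return 42; };
module.exports.version = '1.0.0';

// consumer.js
require('./mod')(); // loads and calls the export; the reference is discarded

const mod = require('./mod'); // loads (from the module cache if already loaded)...
mod(); // ...calls the same export...
console.log(mod.version); // ...and the kept reference exposes the other properties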
For us to comment in more detail about the code scenario that you said did not work, you will have to show us that exact code scenario. Describing what is different in words rather than showing the actual code makes it too hard to follow and impossible to spot anything else you may have inadvertently done wrong to cause your problem.
In require('module')(), you don't retain a reference to the imported module.
With const mod = require('module'); mod(), you retain a reference and can use that same reference later in your code.
Your problem might be due to some other reason, such as:
Are you using some other, global instance of the db? Your code may work in the given case because you are making a local instance.
Some other code-dependent scenario.
Please provide more details for the same.

RequireJs extend module initialize begin and end

I have created a JavaScript library which can be used for logging purposes.
I also want to support the logging of requirejs.
Which functions/events of requirejs can I prototype/wrap so that I can log when a module starts initializing, and when it finishes initializing and returns the initialized object?
For instance, if I call require(["obj1", "obj2", "obj3"], function(obj1, obj2, obj3){}),
I would like to know when requirejs begins initializing each of the objects, and when each object is completely initialized.
I looked into the documentation/code, but could not find any useful functions I could access on the requirejs object or the require object.
Note: I do not want to change the existing code of requirejs; I wish to add functionality from the outside by either prototyping or wrapping.
What I have tried (the problem is that this only captures the begin and end of the entire batch of modules):
var oldrequire = require;
require = function (deps, callback, errback, optional) {
    console.log("start");
    var callbackWrapper = function () {
        console.log("end");
        // copy the arguments object into a real array before forwarding
        var args = [];
        for (var i = 0; i < arguments.length; i++) {
            args.push(arguments[i]);
        }
        callback.apply(this, args);
    };
    oldrequire.call(this, deps, callbackWrapper, errback, optional);
};
This is a "better than nothing answer", not a definitive answer, but it might help you look in another direction. Not sure if that's good or bad, certainly it's brainstorming.
I've looked into this recently for a single particular module I had to wrap. I ended up writing a second module ("module-wrapper") for which I added a path entry with the name of the original module ("module"). I then added a second entry ("module-actual") that references the actual module which I require() as a dependency in the wrapper.
I can then add code before and after initialization, and finally return the actual module. This is transparent to user modules as well as the actual module, and very clean and straightforward from a design standpoint.
However, it is obviously not practical in your case to create a wrapper per module manually, but you might be able to generate them dynamically with some trickery. Or you could somehow figure out, from within the (unique) wrapper module, what name was used to import it, so that it can in turn dynamically import the associated actual module (with an async require, which wouldn't be transparent to user code).
Of course, it would be best if requirejs provided official hooks. I've never seen such hooks in the docs, but you might want to go through them again if you're not more certain than me.
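To make the wrapper idea concrete, here is a minimal sketch (all of the names - somelib, somelib-actual, the paths - are assumptions):
// require config: the public name points at the wrapper,
// and a second entry points at the real implementation
require.config({
    paths: {
        'somelib': 'wrappers/somelib-wrapper',
        'somelib-actual': 'lib/somelib'
    }
});

// wrappers/somelib-wrapper.js
define(['somelib-actual'], function (actual) {
    // by the time this factory runs, the actual module has loaded;
    // pre/post logging around any explicit init step can go here
    console.log('somelib: init begin');
    // e.g. actual.init && actual.init(); // hypothetical init hook
    console.log('somelib: init end');
    return actual; // transparent to user modules and to the actual module
});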

RequireJS dependency override for configurable dependency injection

I'm working with something that seems a perfect fit for DI, but it's being added to an existing framework that did not have that in mind when it was written. The config that defines dependencies is coming from a back-end model. Really, it's not a full config at this point; it basically contains a key that can be used to determine whether a particular view should be available or not.
I'm using require, so the dependency looks something like this:
// Dependency
define(['./otherdependencies'], function(Others) {
    return {
        dep: "I'm a dependency"
    };
});
And right now the injector looks something like this
// view/injector
define([
    './abackendmodel',
    './dependency'
], function(Model, Dependency) {
    return {
        show: function() {
            if (Model.showDependency) {
                var dep = new Dependency();
                this.$el.append(dep);
            }
        }
    };
});
This is a far stretch from the actual code, but the important part is how require works. Notice that in the injector code the dependency is required and used in the show method, but only if the model says it should be shown. The problem is that there may be additional things required by the dependency that aren't available when it shouldn't be shown. So what I'd really like to do is not specify that dependency at all unless model.showDependency is true. I've come up with a couple of ideas, but nothing that I like.
Idea one
Have another async require call based on that model attribute. The injector would then look like this:
// Idea 1 view/injector
define([
    './abackendmodel'
], function(Model) {
    var Dep1 = null;
    if (Model.showDependency) {
        require([
            './dependency'
        ], function(Dependency) {
            Dep1 = Dependency;
        });
    }
    return {
        show: function() {
            if (Dep1) {
                var dep = new Dep1();
                this.$el.append(dep);
            }
        }
    };
});
Obviously this has issues. If show is called before the async require call has finished, then Dep1 will still be null, so we don't actually show the dependency (which is the goal here), and obviously there are JS errors that will be thrown in this case. Also, we're still using an if check in show, which I don't like; but the use case is that a dependency may or may not be present, and we just don't want to require it if it's not needed, so I might not be able to get around that. Also keep in mind that model.showDependency is not actually a boolean value. It can have multiple values, which would call for different dependencies to be required. I'm just stripping it down here for simplicity of understanding the basic issue.
Idea two
This is less solidified, i.e. I don't think this will even work, but I've considered playing with the require config.paths options. My idea was basically having two configs, so that './dependency' pointed to different places. The problem with that is that, whatever the model.showDependency value is, there is a single require config, so I can't change it at run-time. Maybe there's some magic that could be done here, like having separate view directory paths defined and using a factory-type object to return the one we care about, but since that would ultimately result in the same async behaviour as idea one, I don't think it buys me anything (it's basically the same).
Idea three
Have the dependency itself return null based on the model.showDependency attribute.
This might be the best solution right now. I'm still stuck with some ifs, but I don't think that's going away. Also, this prevents any initialization code from being called.
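For concreteness, idea three would look roughly like this (a sketch):
// ./dependency decides for itself whether it should exist
define(['./abackendmodel', './otherdependencies'], function(Model, Others) {
    if (!Model.showDependency) {
        return null; // the injector still needs its if (Dependency) check
    }
    return function Dependency() {
        // ...real initialization...
    };
});
Note that ./otherdependencies still gets loaded either way, which is part of the original problem.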
Any better ideas?
Why not try using a promise for loading the dependency?
You have two choices, depending on how your code needs to work.
Option 1)
Return a promise for the result of the 'view/injector' module; the result of this promise would be the object shown above.
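A rough sketch of Option 1 (using a jQuery Deferred here; when.js or Q would work the same way):
// Option 1: the injector module's export is a promise for the view object
define([
    './abackendmodel'
], function(Model) {
    var deferred = $.Deferred();
    if (Model.showDependency) {
        require(['./dependency'], function(Dependency) {
            deferred.resolve({
                show: function() {
                    this.$el.append(new Dependency());
                }
            });
        });
    } else {
        // nothing to show: resolve with a no-op view
        deferred.resolve({ show: function() {} });
    }
    return deferred.promise();
});
The trade-off is that consumers must then() the module's export before they can call show.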
Option 2)
Use a promise to load the dependency, and then execute the logic once the promise has been resolved.
Below is an example of Option 2, using jQuery-style deferreds. I typically prefer when.js or Q. This example might fall apart if the order of appending is important.
// Option 2
define([
    './abackendmodel'
], function(Model) {
    var dep1Promise = null;
    if (Model.showDependency) {
        var dep1Deferred = $.Deferred();
        dep1Promise = dep1Deferred.promise();
        require([
            './dependency'
        ], function(Dependency) {
            dep1Deferred.resolve(Dependency);
        }, dep1Deferred.reject); // optionally reject the promise if there is an error loading the dependency
    }
    return {
        show: function() {
            var self = this; // keep the view context for use inside the promise callback
            if (dep1Promise) {
                dep1Promise.then(function(Dep1) {
                    var dep = new Dep1();
                    self.$el.append(dep);
                });
            }
        }
    };
});
