javascript: module calls a function in the file that requires the module

My first time writing a js library. The library is intended to execute, at specific times, functions in the file that required the library. Kind of like Angular executes user-implemented hooks such as $onInit, except that, in my case, the user can define an arbitrary number of functions to be called by my library. How can I implement that?
One way I have in mind is to define a registerFunction(name, function) method, which maps function names to implementations. But can the user just give me an array of names, and I automatically register the corresponding functions for them?

Unless you have a specific requirement that it do so, your module does not need to know the names of the functions it is provided. When your module invokes those functions, it will do so by acting on direct references to them rather than by using their names.
For example:
// my-module.js
module.exports = function callMyFunctions( functionList ) {
  functionList.forEach( fn => fn() )
}
// main application
const myFunc1 = () => console.log('Function 1 executing')
const myFunc2 = () => console.log('Function 2 executing')
const moduleThatInvokesMyFunctions = require('./my-module.js')
// instruct the module to invoke my 2 cool functions
moduleThatInvokesMyFunctions([ myFunc1, myFunc2 ])
//> Function 1 executing
//> Function 2 executing
See that the caller provides direct function references to the module, which the module then uses -- without caring or even knowing what those functions are called. (Yes, you can obtain their names by inspecting the function references, but why bother?)
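(For the curious, if you ever did need a name, it is available on the reference itself via the function's built-in name property:)
// name is a built-in property of function references
const myFunc1 = () => console.log('Function 1 executing')
console.log(myFunc1.name)  //> "myFunc1"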
If you want a more in-depth answer or explanation, it would help to know more about your situation. What environment does your library target: browsers? nodejs? Electron? react-native?
The library is intended to execute, at specific times, functions in the file that required the library
The "at specific times" suggests to me something that is loosely event-based. So, depending on what platform you're targeting, you could actually use a real EventEmitter. In that case, you'd invent unique names for each of the times that a function should be invoked, and your module would then export a singleton emitter. Callers would then assign event handlers for each of the events they care about. For callers, that might look like this:
const lifecycleManager = require('./your-module.js')
lifecycleManager.on( 'boot', myBootHandler )
lifecycleManager.on( 'config-available', myConfigHandler )
// etc.
A cruder way to handle this would be for callers to provide a dictionary of functions:
const orchestrateJobs = require('./your-module.js')
orchestrateJobs({
  'boot': myBootHandler,
  'config-available': myConfigHandler
})
If you're not comfortable working with EventEmitters, this may be appealing. But going this route requires that you consider how to support other scenarios like callers wanting to remove a function, and late registration.
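One minimal sketch of such a registry, supporting both removal and late registration (the names and structure here are my own invention, not something your module must follow):
// your-module.js -- a minimal registry sketch
const handlers = new Map()

exports.on = function on( eventName, fn ) {   // late registration: call any time
  if (!handlers.has(eventName)) handlers.set(eventName, new Set())
  handlers.get(eventName).add(fn)
}

exports.off = function off( eventName, fn ) { // removal
  const set = handlers.get(eventName)
  if (set) set.delete(fn)
}

// called internally by your module "at specific times"
function fire( eventName, ...args ) {
  const set = handlers.get(eventName)
  if (set) set.forEach( fn => fn(...args) )
}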
Quick sketch showing how to use apply with each function:
// my-module.js
module.exports = function callMyFunctions( functionList ) {
  // thisValue and arrayOfArguments are placeholders for whatever context
  // and arguments your module chooses to supply at call time
  functionList.forEach( fn => fn.apply( thisValue, arrayOfArguments ) )
}
Note that this module still has no idea what names the caller has assigned to these functions. Within this scope, each routine bears the moniker "fn."
I get the sense you have some misconceptions about how execution works, and that's led you to believe that the parts of the program need to know the names of other parts of the program. But that's not how continuation-passing style works.
Since you're firing caller functions based on specific times, it's possible the event model might be a good fit. Here's a sketch of what that might look like:
// caller
const AlarmClock = require('./your-module.js')

function doRoosterCall( exactTime ) {
  console.log('I am a rooster! Cock-a-doodle-doo!')
}

function soundCarHorn( exactTime ) {
  console.log('Honk! Honk!')
}

AlarmClock.on('sunrise', doRoosterCall)
AlarmClock.on('leave-for-work', soundCarHorn)
// etc
To accomplish that, you might do something like...
// your-module.js
const EventEmitter = require('events')
const singletonClock = new EventEmitter()

function checkForEvents() {
  const currentTime = new Date()

  // check for sunrise, which we'll define as 6:00am +/- 10 seconds
  if (nowIs('6:00am', 10 * 1000)) {
    singletonClock.emit('sunrise', currentTime)
  }

  // check for "leave-for-work": 8:30am +/- 1 minute
  if (nowIs('8:30am', 60 * 1000)) {
    singletonClock.emit('leave-for-work', currentTime)
  }
}

setInterval( checkForEvents, 1000 )

module.exports = singletonClock
(nowIs is some handwaving for time comparisons. When doing cron-like work, you should assume your heartbeat function will almost never fire when the time value is an exact match, so you'll need something that provides "close enough" comparisons. I didn't provide an impl because (1) it seems like a peripheral concern here, and (2) I'm sure Momentjs, date-fns, or some other package provides something great, so you won't need to implement it yourself.)
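Purely as illustration, a minimal hand-rolled nowIs might look like this, assuming h:mm(am|pm) strings and a tolerance in milliseconds; a date library would handle edge cases far better:
// nowIs('6:00am', toleranceMs) -- rough stand-in, not a vetted implementation
function nowIs( timeString, toleranceMs ) {
  const m = timeString.match(/^(\d{1,2}):(\d{2})(am|pm)$/)
  if (!m) throw new Error(`unparseable time: ${timeString}`)
  let hours = Number(m[1]) % 12
  if (m[3] === 'pm') hours += 12
  const now = new Date()
  const target = new Date(now)
  target.setHours(hours, Number(m[2]), 0, 0)
  return Math.abs(now - target) <= toleranceMs
}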

Related

Sinon stub a function defined in the same file

I have code along the lines of:
// example.js
export function doSomething() {
  if (!testForConditionA()) {
    return;
  }
  performATask();
}

export function testForConditionA() {
  // tests for something and returns true/false
  // let's say this function hits a service or a database and can't be run in tests
  ...
}

export function performATask() {
  ...
}
// example.test.js
import * as example from 'example';

it('validates performATask() runs when testForConditionA() is true', () => {
  const testForConditionAStub = sinon.stub(example, 'testForConditionA').returns(true);
  const performATaskSpy = sinon.stub(example, 'performATask');
  example.doSomething();
  expect(performATaskSpy.called).to.equal(true);
});
(I know, this is a contrived example, but I tried to keep it short)
I haven't found a way to mock testForConditionA() using Sinon.
I know there are workarounds, like
A) place everything that's in example.js into a class, and then the functions of the class can be stubbed.
B) move testForConditionA() (and other dependencies) out of example.js into a new file, and then use proxyquire
C) inject the dependencies into doSomething()
However, none of these options are viable - I'm working in a large codebase, and many files would need a rewrite & overhaul. I've searched on this topic, and I see several other posts, like this Stubbing method in same file using Sinon, but outside of refactoring code into a separate class (or a factory as one person suggested), or refactoring into a separate file and using proxyquire, I haven't found a solution. I've used other testing & mocking libraries before in the past, so it's surprising that Sinon isn't able to do this. Or is it? Any suggestions on how to go about stubbing a function without refactoring the code it's trying to test?
This bit from a very related answer (mine) shows why it is not really that surprising:
ES modules are not mutable by default, which means Sinon can't do zilch.
The EcmaScript spec dictates this, so the only current way to mutate the exports is for the runtime to not adhere to the spec. This is essentially what Jest does: it provides its own runtime, translates the import calls into equivalent CJS (require) calls, and provides its own require implementation in that runtime that hooks into the loading process. The resulting "module" usually has mutable exports that you can overwrite (i.e. stub).
Jest does not support native (as in no transpilation/modification of source) ESM either. Track issues 4842 and 9430 to see how complex this is (it requires changes to Node).
So, no, Sinon cannot do this on its own. It is only a stubbing library. It does not touch the runtime or do anything magic, as it must work regardless of environment.
Now back to your original issue: testing your module. The only way I see this happening is through some sort of dependency injection mechanism (which you touch upon in alternative C). You obviously have some (internal/external) state your module depends on, so that means you need a way to change that state from the outside or inject a test double (which is what you are trying to do).
One easy way is just to create a setter strictly meant for testing:
function callNetworkService(...args) {
  // do something slow or brittle
}

let _doTestForConditionA = callNetworkService;

export function __setDoTestForConditionA(fn) {
  _doTestForConditionA = fn;
}

export function __reset() {
  _doTestForConditionA = callNetworkService;
}

export function testForConditionA(...args) {
  return _doTestForConditionA(...args);
}
You would then test your module simply like this:
afterEach(() => {
  example.__reset();
});

test('that my module calls the outside and returns X', async () => {
  const fake = sinon.fake.resolves({ result: 42 });
  example.__setDoTestForConditionA(fake);
  const pendingPromise = example.doSomething();
  expect(fake.called).to.equal(true);
  expect((await pendingPromise).result).to.equal(42);
});
Yes, you do modify your SUT to allow testing, but I have never found that all that offensive. The technique works regardless of framework (Jasmine, Mocha, Jest) or runtime (browser, Node, JVM) and reads fine.
Optionally injected dependencies
You do mention injecting the dependencies into the function actually depending on them, and that has some issues that would propagate all over the codebase.
I would like to challenge that by showing a technique I have used in the past. See this comment (by me) on the Sinon issue tracker: https://github.com/sinonjs/sinon/issues/831#issuecomment-198081263
I use this example to show how you can inject stubs into a constructor in a way that none of the usual consumers of the constructor need to care about. It does require that you use some kind of options object, so as not to add extra parameters, of course.
/**
 * Request proxy to intercept and cache outgoing http requests
 *
 * @param {Number} opts.maxAgeInSeconds how long a cached response should be valid before being refreshed
 * @param {Number} opts.maxStaleInSeconds how long we are willing to use a stale cache in case of failing service requests
 * @param {boolean} opts.useInMemCache default is false
 * @param {Object} opts.stubs for dependency injection in unit tests
 * @constructor
 */
function RequestCacher (opts) {
  opts = opts || {};
  this.maxAge = opts.maxAgeInSeconds || 60 * 60;
  this.maxStale = opts.maxStaleInSeconds || 0;
  this.useInMemCache = !!opts.useInMemCache;
  this.useBasicToken = !!opts.useBasicToken;
  this.useBearerToken = !!opts.useBearerToken;

  if (!opts.stubs) {
    opts.stubs = {};
  }

  this._redisCache = opts.stubs.redisCache || require('./redis-cache');
  this._externalRequest = opts.stubs.externalRequest || require('../request-helpers/external-request-handler');
  this._memCache = opts.stubs.memCache || SimpleMemCache.getSharedInstance();
}
(see the issue tracker for expanded comments)
There is nothing forcing anyone to provide stubs, but a test can provide them to override how the dependencies work.
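For instance, a test might inject fakes roughly like this (the stub shapes below are invented for illustration; match whatever your real redis-cache and request handler actually expose):
// hedged sketch -- stub shapes are made up for illustration
var fakeRedis = { get: sinon.stub().yields(null, null), set: sinon.stub() };
var fakeRequest = sinon.stub().resolves({ statusCode: 200, body: 'hello' });

var cacher = new RequestCacher({
  maxAgeInSeconds: 10,
  stubs: {
    redisCache: fakeRedis,
    externalRequest: fakeRequest
  }
});
// ...exercise cacher, then assert against fakeRedis / fakeRequest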

How to call an API dynamically in functional programming paradigm

My application receives http requests from human clients.
My application needs to call exactly one API, out of 12 possible APIs, depending on a specific field in the input it receives.
My first thought was of course
// requestPrice.js
const service = req.body.service

const APIs = {
  ser1: callAPI1,
  ser2: callAPI2,
  ser3: callAPI3,
  // ...
  ser12: callAPI12,
}

return APIs[service](req.body)
This works fine, but I guess it needs some refactoring to be SOLID compliant.
Normally in OOP I would go with one of the design patterns, such as Strategy or maybe Chain of Responsibility.
However, I'm using functional programming, so it's a bit different.
I thought of doing the following:
// ser1.js
export default function callAPI(data) {
  // code 1
}

// ser2.js
export default function callAPI(data) {
  // code 2
}

// ser3.js
export default function callAPI(data) {
  // code 3
}

// ...

// ser12.js
export default function callAPI(data) {
  // code 12
}

// requestPrice.js
const service = req.body.service
const api = require(`./${service}`)
return api(req.body)
This looks much better than the first version, as it follows the Single Responsibility Principle more closely. It also follows the Open/Closed Principle, I guess, as requestPrice.js won't change if a 13th API is added.
On the other hand, I should be able to easily unit test even requestPrice.js, as the req can be injected.
Is this compliant with SOLID principles, or is there a better and cleaner way?
I would suggest a factory method (implemented as a curried function in FP), so that the decision of which service to call is separated from what each service does. req.body should be passed to the returned implementation function.
function createService(body) {
  if (checkInput(body) == [something]) return service1;
  else if (checkInput(body) == [something2]) return service2;
  // ...
}

function service1(body) { /* ... */ }
function service2(body) { /* ... */ }
// ...

let service = createService(req.body);
service(req.body);
I haven't put these in different files, but you may do so. Now createService can be in a different module, and each implementation (service1, service2, etc.) can be in its own separate file. The caller of service doesn't need to know which implementation it is calling, hence maintaining Dependency Inversion: the higher-level module doesn't know about the lower-level modules. :)
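A sketch of how that might be laid out across files, keeping the asker's lookup-table idea inside the factory (the file names and error handling are illustrative):
// createService.js -- the only module that knows the mapping
import ser1 from './ser1'
import ser2 from './ser2'
// ... up to ser12

const services = { ser1, ser2 /* , ... */ }

export default function createService(body) {
  const impl = services[body.service]
  if (!impl) throw new Error(`unknown service: ${body.service}`)
  return impl
}

// requestPrice.js -- higher level, knows nothing about the individual APIs
import createService from './createService'

export default function requestPrice(req) {
  return createService(req.body)(req.body)
}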

Postman:how to set up library of (semi-)complicated reusable scripts for collection

Update
I've completely rewritten this question based on subsequent investigation. Hopefully this will generate some answers.
I'm new to Postman, and trying to figure out how to most efficiently build a collection of tests for a REST application. There are a bunch of utility functions that I'd like to have accessible in each of my test scripts, but cut-and-pasting them into each test script seems like a horrible solution.
In looking at the various "scopes" that Postman allows you to squirrel away data (e.g. globals, environment, collection), it seems that all of these are merely string/number stores. In other words, it properly stores them if you can/do stringify the results. But it doesn't actually allow you to store proper objects or functions. This makes sense, since each script seems to be run as a separate execution, so the idea of sharing pointers to things between different scripts doesn't make sense.
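For example, object values survive only if you serialize them yourself (a small illustration using Postman's pm API):
// objects survive only as strings; serialize on set, parse on get
pm.environment.set('user', JSON.stringify({ id: 1, name: 'Ada' }));
const user = JSON.parse(pm.environment.get('user'));
console.log(user.name); // "Ada"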
It seems like the accepted way to share utility functions is to toString() the function in the defining script (e.g. the Collection Pre-Req script), and then eval() that stringified version in the test script. For instance:
Collection Pre-Req Script
const utilFunc = () => { console.log("I am a utility function"); };
pm.environment.set("utilFunc",utilFunc.toString() );
Test Script
const utilFunc = eval(pm.environment.get("utilFunc"));
utilFunc();
The test script will successfully print to console "I am a utility function".
I've seen people do more complicated things where, if they have more than one utility function, they put them into an object like utils.func1 and utils.func2, and have the overall function return the utils object, so the test script still only needs a single line at the top importing the whole thing.
The problem I'm running into is scoping -- since the literal text of the function is executed in the Test Script, everything the utility function needs must be in that code, or must otherwise exist at eval() time in the Test Script. For instance, if I do:
Collection Pre-Req Script
const baseUtilFunc = (foo) => { console.log(foo); };
const utilFunc1 = (param) => { baseUtilFunc("One: " + param); };
const utilFunc2 = (param) => { baseUtilFunc("Two: " + param); };
pm.environment.set("utilFunc1",utilFunc1.toString() );
pm.environment.set("utilFunc2",utilFunc2.toString() );
Test Script
const utilFunc1 = eval(pm.environment.get("utilFunc1"));
const utilFunc2 = eval(pm.environment.get("utilFunc2"));
utilFunc1("Test");
This fails because, in the Test Script, baseUtilFunc does not exist. Obviously, in this example, it'd be easy to fix. But in a more complicated world where the utility functions I expect to use in my Test Scripts are themselves built on top of underlying helper functions, it gets more difficult.
So what is the right way to handle this issue? Do people just cram all the relevant logic into one big function that they then call toString() on? Do they embed an extraction-from-environment-and-then-eval in each util function within its definition, so that it works in the Test Script context? Do they export each individual method?
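For concreteness, the "one big function" approach would presumably look something like this sketch (my own illustration): a single factory closes over all helpers, so dependencies travel together in one stringified unit.
// Collection Pre-Req Script -- one factory closes over all helpers
const makeUtils = () => {
  const baseUtilFunc = (foo) => { console.log(foo); };
  return {
    utilFunc1: (param) => { baseUtilFunc("One: " + param); },
    utilFunc2: (param) => { baseUtilFunc("Two: " + param); }
  };
};
pm.environment.set("makeUtils", makeUtils.toString());

// Test Script -- one eval, then everything is in scope
const utils = eval(pm.environment.get("makeUtils"))();
utils.utilFunc1("Test"); // baseUtilFunc exists inside the closure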
There are different ways to do it. The way I did it recently for one of my projects was to create a project in Git and then use the raw URL to fetch the code. I have a sample created in the repo below:
https://github.com/tarunlalwani/postman-utils
To load the file, you will need to add the below code at the collection level:
if (typeof pmutil == "undefined") {
  var url = "https://raw.githubusercontent.com/tarunlalwani/postman-utils/master/pmutils.js";
  if (pm.globals.has("pmutiljs")) {
    eval(pm.globals.get("pmutiljs"));
  } else {
    console.log("pmutil not found. loading from " + url);
    pm.sendRequest(url, function (err, res) {
      eval(res.text());
      pm.globals.set('pmutiljs', res.text());
    });
  }
}
Then, later, in the Tests or Pre-Request scripts, run the below line of code to load it:
eval(pm.globals.get("pmutiljs"))
And then you can use the functions easily in tests.
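(Usage afterwards is just normal function calls; the helper name below is hypothetical, so substitute whatever pmutils.js actually defines:)
eval(pm.globals.get("pmutiljs")); // bring the library into this script's scope
pmutil.someHelper(pm.response);   // hypothetical helper name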

RxJS Testing Observable sequence without passing scheduler

I have problems attempting to test a piece of code that is similar to the following function.
Basically the question boils down to: is it possible to change the Scheduler for the debounce operator without passing a separate Scheduler to the function call?
The following example should explain the use case a bit more concretely. I am trying to test a piece of code similar to the one below, and I want to test the chain in the function (using a TestScheduler) without having to pass a scheduler to the debounce() operator.
// Production code
function asyncFunctionToTest(subject) {
  subject
    .tap((v) => console.log(`Tapping: ${v}`))
    .debounce(1000)
    .subscribe((v) => {
      // Here it would call ReactComponent.setState()
      console.log(`onNext: ${v}`)
    });
}
The test file would contain the following code to invoke the function and make sure the subject emits the values.
// Testfile
const testScheduler = new Rx.TestScheduler();
const subject = new Rx.Subject();
asyncFunctionToTest(subject);
testScheduler.schedule(200, () => subject.onNext('First'));
testScheduler.schedule(400, () => subject.onNext('Second'))
testScheduler.advanceTo(1000);
The test code above still takes one actual second to do the debounce. The only solution I have found is to pass the TestScheduler into the function and pass it to the debounce(1000, testScheduler) method. This will make the debounce operator use the test scheduler.
My initial idea was to use observeOn or subscribeOn to change the defaultScheduler that is used throughout the operation chain by changing
asyncFunctionToTest(subject);
to be something like asyncFunctionToTest(subject.observeOn(testScheduler)); or asyncFunctionToTest(subject.subscribeOn(testScheduler));
That does not give me the result I expected; however, I presume I might not exactly understand how the observeOn and subscribeOn operators work. (My guess now is that these operators change the scheduler the whole operation chain is run on, but operators still pick their own schedulers unless one is specifically passed?)
The following JSBin contains the runnable example where I passed in the scheduler. http://jsbin.com/kiciwiyowe/1/edit?js,console
No, not really, unless you actually patch the RxJS library. I know this was brought up recently as an issue, and there may be support for changing the default scheduler at some point in the future, but at this time it can't be reliably done.
Is there any reason why you can't include the scheduler? All the operators that accept Schedulers already do so optionally and have sensible defaults, so it really costs you nothing, given that your production code can simply ignore the parameter.
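For instance, a light sketch of the production function with an optional scheduler parameter, assuming (as RxJS 4's time-based operators do) that a missing scheduler falls through to the default:
// Production code -- tests can pass a TestScheduler; normal callers omit it
function asyncFunctionToTest(subject, scheduler) {
  subject
    .tap((v) => console.log(`Tapping: ${v}`))
    .debounce(1000, scheduler) // undefined falls back to the default scheduler
    .subscribe((v) => console.log(`onNext: ${v}`));
}

// Test file
const testScheduler = new Rx.TestScheduler();
asyncFunctionToTest(subject, testScheduler);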
As a more general aside, the reason simply adding observeOn or subscribeOn doesn't fix it is that both of those operators only affect how events are propagated after they have been received by that operator.
For instance you could implement observeOn by doing the following:
Rx.Observable.prototype.observeOn = function (scheduler) {
  var source = this;
  return Rx.Observable.create((observer) => {
    source.subscribe(
      x => {
        // Reschedule this for a later propagation
        scheduler.schedule(x, (s, state) => observer.onNext(state));
      },
      // Errors get forwarded immediately
      e => observer.onError(e),
      // Delay completion
      () => scheduler.schedule(null, () => observer.onCompleted())
    );
  });
};
All the above is doing is rescheduling the incoming events; if operators downstream or upstream have other delays, this operator has no effect on them. subscribeOn has a similar behavior, except that it reschedules the subscription, not the events.
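And, under the same assumptions, a comparable sketch of subscribeOn, just to show the contrast (disposal handling is simplified here):
Rx.Observable.prototype.subscribeOn = function (scheduler) {
  var source = this;
  return Rx.Observable.create((observer) => {
    // Reschedule the act of subscribing; events then flow through untouched
    return scheduler.schedule(null, () => source.subscribe(observer));
  });
};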

What is the benefit of this pattern: loose augmentation

So, I have the need for a singleton. It really is a rather large "do something" object: it processes information, etc. It could be extended, and some methods could or might even be inherited, but overall there doesn't need to be more than one of them. So, I read a bit here, and I love the concept: http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html
I am thinking more in terms of leveraging the sub module behavior.
But I'd like to break my object into sub-modules, and I am not seeing the need to pass in the parent sub-module, as the "return" on that parent gives me access anyway. Perhaps I am missing the "robustness" or real usage here.
For example.
var a = {};

a.m = function () {
  var conf = {
    a: 'aaa',
    b: 'bbb'
  }
  var funcs = {
    func1: function () {
      console.log('a.m sub object func1');
    }
  }
  return { // doing this gives me access
    conf: conf,
    funcs: funcs
  };
}()
// this sub module obj WILL need some behaviors/methods/vals in a.m
a.anothersub = (function (m) {
  var anotherSub = m;
  anotherSub.funcs.func1(); // access to a.m methods -- do I even need to pass it in?
  a.m.funcs.func1();        // also access to a.m methods
}( a.m || {} ))

// is a better approach to extend a.anothersub with a.m?
// jQuery.extend(a.anothersub, a.m);
If both "m" and "anothersub" are part of object 'a'. Is there a need for loose or tight augmentation here and for sake of keeping code compartmentalized and of same function behavior, I am creating these "sub objects".
I read that article and felt I could leverage its power, but I'm not really sure this is the best approach here, or even needed. Your thoughts?
This all comes down to how tightly-coupled your modules/submodules actually are, and how much you can expect them to exist in all places around your application (ie: every page of a site, or at the global level of an application, et cetera).
It's also broaching a couple of different topics.
The first might be the separation of concerns, and another might be dependency-inversion, while another, tied to both, might be code organization/distribution.
Also, it depends on how cohesive two submodules might be...
If you had something like:
game.playerFactory = (function () {
  return {
    makePlayer : function () { /*...*/ }
  };
}());

game.playerManager = (function (factory) { return { /*...*/ }; }(game.playerFactory));
It might make sense to have the factory passed into the manager as an argument.
At that point, attaching both to game is really just a convenient place to make both accessible to the global scope.
Calling game from inside of one or the other, however, is problematic, in large systems, systems with lots of submodules, or systems where the interface is still in flux (when are they not?).
// input-manager.js
game.inputManager = (function () {
  var jumpKey = game.playerManager.players.player1.inputConfig.jump;
}());
If all of your buttons are mapped out and bound in that way, for every button for every player, then all of a sudden you've got 40 lines of code that are very tightly bound to:
The global name of game
The module name of playerManager
The module-interface for playerManager (playerManager.players.player1)
The module-interface for player (player.inputConfig.jump)
If any one of those things changes, then the whole submodule breaks.
The only one the input-manager should actually care about is the object that has the .inputConfig interface.
In this case, that's a player object... ...in another case, it might be completely decoupled or stuck on another interface.
You might be half-way through implementing one gigantic module, and realize that it should be six smaller ones.
If you've been passing in your objects, then you really only need to change what you're passing in:
game.inputManager = (function (hasInput) {
  var jumpKey = hasInput.inputConfig.jump;
}(game.playerManager.players.player1));
Can easily become
game.inputManager = (function (hasInput) {
  /*...*/
}(game.playerManager.getPlayer("BobTheFantastic").config));
and only one line of code changed, rather than every line referencing game.
The same can be said for the actual global-reference:
// my-awesome-game.js
(function (ns, root) {
  root[ns] = { };
}( "MyAwesomeGame", window ));

// player-factory.js
(function (ns, root) {
  root[ns] = {
    make : function () { /*...*/ }
  };
}("playerFactory", MyAwesomeGame));
// player-manager.js
(function (ns, root, factory) {
  var manager = {
    players : [],
    addPlayer : function () { manager.players.push(factory.make()); }
  };
  root[ns] = manager;
}("playerManager", MyAwesomeGame, MyAwesomeGame.playerFactory));
Your code isn't impervious to change, but what you have done is minimize the amount of change that any one submodule needs to make, based on external changes.
This applies directly to augmentation, as well.
If you need to override some piece of some software, in a completely different file, 20,000 lines of code down the page, you don't want to have to suffer the same fate as changing interfaces elsewhere...
(function (override, character) {
  override.jump = character.die;
}( MyAwesomeGame.playerManager.get(0), MyAwesomeGame.playerManager.get(1) ));
Now, every time player 1 tries to jump, player 2 dies.
Fantastic.
If the interface for the game changes in the future, only the actual external call has to change, for everything to keep working.
Even better.
