How to call an API dynamically in functional programming paradigm - javascript

My application receives HTTP requests from human clients.
My application needs to call exactly one API out of 12, depending on a specific piece of data in the input it receives.
My first thought was of course
// requestPrice.js
const service = req.body.service
const APIs = {
  ser1: callAPI1,
  ser2: callAPI2,
  ser3: callAPI3,
  // ...
  ser12: callAPI12,
}
return APIs[service](req.body)
This works fine, but I guess it needs some refactoring to make it SOLID compliant.
Normally, in OOP, I would go with one of the design patterns, such as Strategy or maybe Chain of Responsibility.
However, I'm using functional programming, so it's a bit different.
I thought of doing the following:
// ser1.js
export default function callAPI(data) {
  // code 1
}
// ser2.js
export default function callAPI(data) {
  // code 2
}
// ser3.js
export default function callAPI(data) {
  // code 3
}
//...
// ser12.js
export default function callAPI(data) {
  // code 12
}
// requestPrice.js
const service = req.body.service
const api = require(`./${service}`)
return api(req.body)
This looks much better than the first version, as it follows the Single Responsibility Principle more closely. Plus it follows the Open/Closed Principle as well, I guess, as requestPrice.js won't change if a 13th API is to be added.
On the other hand, I should be able to easily unit test even requestPrice.js, as req can be injected.
Is doing so compliant with the SOLID principles, or is there a better and cleaner way?

I would suggest a factory method (implemented as a curried function in FP) so that the decision of which service to call and what each service does are kept separate. req.body should be passed to the returned implementation function.
function createService(body) {
  if (checkInput(body) == [something]) return service1;
  else if (checkInput(body) == [something2]) return service2;
  // ...
}
function service1(body) { /* ... */ }
function service2(body) { /* ... */ }
// ...
let service = createService(req.body);
service(req.body);
I haven't put it in different files, but you may do so. createService can live in its own module, and each implementation (service1, service2, etc.) can be in its own separate file. The caller of service doesn't need to know which implementation to call, hence maintaining Dependency Inversion: the higher-level module doesn't know about the lower-level modules. :)
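For what it's worth, split across files this could look something like the sketch below (assuming CommonJS here; the file names are just illustrative, and the lookup inside the factory mirrors the map from the question):
// createService.js
const callAPI1 = require('./ser1');
const callAPI2 = require('./ser2');
// ...
const services = {
  ser1: callAPI1,
  ser2: callAPI2,
  // ...
};
// the factory: picks an implementation based purely on the request body
module.exports = function createService(body) {
  return services[body.service];
};
// requestPrice.js
const createService = require('./createService');
function requestPrice(req) {
  const service = createService(req.body);
  return service(req.body);
}
requestPrice.js never changes when a 13th service is added; only createService.js (or the services it requires) does.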

Related

Sinon stub a function defined in the same file

I have code along the lines of:
// example.js
export function doSomething() {
  if (!testForConditionA()) {
    return;
  }
  performATask();
}
export function testForConditionA() {
  // tests for something and returns true/false
  // let's say this function hits a service or a database and can't be run in tests
  // ...
}
export function performATask() {
  // ...
}
// example.test.js
import * as example from 'example';
it('validates performATask() runs when testForConditionA() is true', () => {
  const testForConditionAStub = sinon.stub(example, 'testForConditionA').returns(true);
  const performATaskSpy = sinon.stub(example, 'performATask');
  example.doSomething();
  expect(performATaskSpy.called).to.equal(true);
});
(I know, this is a contrived example, but I tried to keep it short)
I haven't found a way to mock testForConditionA() using Sinon.
I know there are workarounds, like
A) place everything that's in example.js into a class, and then the functions of the class can be stubbed.
B) move testForConditionA() (and other dependencies) out of example.js into a new file, and then use proxyquire
C) inject the dependencies into doSomething()
However, none of these options are viable - I'm working in a large codebase, and many files would need a rewrite & overhaul. I've searched on this topic, and I see several other posts, like this Stubbing method in same file using Sinon, but outside of refactoring code into a separate class (or a factory, as one person suggested), or refactoring into a separate file and using proxyquire, I haven't found a solution. I've used other testing & mocking libraries in the past, so it's surprising that Sinon isn't able to do this. Or is it? Any suggestions on how to go about stubbing a function without refactoring the code it's trying to test?
This bit from a very related answer (mine) shows why it is not really that surprising:
ES modules are not mutable by default, which means Sinon can't do anything.
The EcmaScript spec dictates this, so the only current way to mutate the exports is for the runtime to not adhere to the spec. This is essentially what Jest does: it provides its own runtime, translates the import calls into equivalent CJS (require) calls, and provides its own require implementation in that runtime that hooks into the loading process. The resulting "module" usually has mutable exports that you can overwrite (i.e. stub).
Jest does not support native ESM (as in no transpilation/modification of source) either. Track issues 4842 and 9430 for how complex this is (it requires changes to Node).
So, no, Sinon cannot do this on its own. It is only a stubbing library. It does not touch the runtime or do anything magic, as it must work regardless of environment.
Now back to your original issue: testing your module. The only way I see this happening is through some sort of dependency injection mechanism (which you touch upon in alternative C). You obviously have some (internal/external) state your module depends on, so you need a way to change that state from the outside or inject a test double (which is what you are trying to do).
One easy way is just to create a setter strictly meant for testing:
function callNetworkService(...args) {
  // do something slow or brittle
}
let _doTestForConditionA = callNetworkService;
export function __setDoTestForConditionA(fn) {
  _doTestForConditionA = fn;
}
export function __reset() {
  _doTestForConditionA = callNetworkService;
}
export function testForConditionA(...args) {
  return _doTestForConditionA(...args);
}
You would then test your module simply like this:
afterEach(() => {
  example.__reset();
});
test('that my module calls the outside and returns X', async () => {
  const fake = sinon.fake.resolves({ result: 42 });
  example.__setDoTestForConditionA(fake);
  const pendingPromise = example.doSomething();
  expect(fake.called).to.equal(true);
  expect((await pendingPromise).result).to.equal(42);
});
Yes, you do modify your SUT to allow testing, but I have never found that all that offensive. The technique works regardless of framework (Jasmine, Mocha, Jest) or runtime (browser, Node, JVM) and reads fine.
Optionally injected dependencies
You do mention injecting the dependencies into the function actually depending on them, and that has some issues that would propagate all over the codebase.
I would like to challenge that a bit by showing a technique I have used a bit in the past. See this comment (by me) on the Sinon issue tracker: https://github.com/sinonjs/sinon/issues/831#issuecomment-198081263
I use this example to show how you can inject stubs in a constructor that none of the usual consumers of this constructor need to care about. It does require that you use some kind of options object so as not to add additional parameters, of course.
/**
 * Request proxy to intercept and cache outgoing http requests
 *
 * @param {Number} opts.maxAgeInSeconds how long a cached response should be valid before being refreshed
 * @param {Number} opts.maxStaleInSeconds how long we are willing to use a stale cache in case of failing service requests
 * @param {boolean} opts.useInMemCache default is false
 * @param {Object} opts.stubs for dependency injection in unit tests
 * @constructor
 */
function RequestCacher (opts) {
  opts = opts || {};
  this.maxAge = opts.maxAgeInSeconds || 60 * 60;
  this.maxStale = opts.maxStaleInSeconds || 0;
  this.useInMemCache = !!opts.useInMemCache;
  this.useBasicToken = !!opts.useBasicToken;
  this.useBearerToken = !!opts.useBearerToken;
  if (!opts.stubs) {
    opts.stubs = {};
  }
  this._redisCache = opts.stubs.redisCache || require('./redis-cache');
  this._externalRequest = opts.stubs.externalRequest || require('../request-helpers/external-request-handler');
  this._memCache = opts.stubs.memCache || SimpleMemCache.getSharedInstance();
}
(see the issue tracker for expanded comments)
There is nothing forcing anyone to provide stubs, but a test can provide them to override how the dependencies work.
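A test could then pass fakes through opts.stubs (a rough sketch; the shapes of the fake redis cache and external request handler are my assumptions about what those modules expose):
// in a unit test (sketch only)
const fakeRedisCache = { get: sinon.stub().yields(null, null), set: sinon.stub() };
const fakeExternalRequest = sinon.stub().resolves({ statusCode: 200, body: '{}' });
const cacher = new RequestCacher({
  maxAgeInSeconds: 10,
  stubs: {
    redisCache: fakeRedisCache,
    externalRequest: fakeExternalRequest
  }
});
// exercise cacher here and assert against the fakes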

javascript: module calls a function in the file that requires the module

This is my first time writing a JS library. The library is intended to execute, at specific times, functions in the file that required the library. Kind of like how Angular executes user-implemented hooks such as $onInit, except that, in my case, the user can define an arbitrary number of functions to be called by my library. How can I implement that?
One way I have in mind is to define a registerFunction(name, function) method, which maps function names to implementations. But can the user just give me an array of names so that I automatically register the corresponding functions for them?
Unless you have a specific requirement that it do so, your module does not need to know the names of the functions it is provided. When your module invokes those functions, it will do so by acting on direct references to them rather than by using their names.
For example:
// my-module.js
module.exports = function callMyFunctions( functionList ) {
  functionList.forEach( fn => fn() )
}
// main application
const myFunc1 = () => console.log('Function 1 executing')
const myFunc2 = () => console.log('Function 2 executing')
const moduleThatInvokesMyFunctions = require('./my-module.js')
// instruct the module to invoke my 2 cool functions
moduleThatInvokesMyFunctions([ myFunc1, myFunc2 ])
//> Function 1 executing
//> Function 2 executing
See that the caller provides direct function references to the module, which the module then uses -- without caring or even knowing what those functions are called. (Yes, you can obtain their names by inspecting the function references, but why bother?)
If you want a more in-depth answer or explanation, it would help to know more about your situation. What environment does your library target: browsers? nodejs? Electron? react-native?
The library is intended to execute, at specific times, functions in the file that required the library
The "at specific times" suggests to me something that is loosely event-based. So, depending on what platform you're targeting, you could actually use a real EventEmitter. In that case, you'd invent unique names for each of the times that a function should be invoked, and your module would then export a singleton emitter. Callers would then assign event handlers for each of the events they care about. For callers, that might look like this:
const lifecycleManager = require('./your-module.js')
lifecycleManager.on( 'boot', myBootHandler )
lifecycleManager.on( 'config-available', myConfigHandler )
// etc.
A cruder way to handle this would be for callers to provide a dictionary of functions:
const orchestrateJobs = require('./your-module.js')
orchestrateJobs({
  'boot': myBootHandler,
  'config-available': myConfigHandler
})
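For illustration only, the module side of that dictionary variant could be as small as storing the map and looking handlers up by name when each phase occurs (orchestrateJobs is from the snippet above; runPhase is a name I'm inventing):
// your-module.js (dictionary variant, sketch only)
let handlers = {}
module.exports = function orchestrateJobs( handlerMap ) {
  handlers = handlerMap
}
// called internally by the module whenever a named phase occurs
module.exports.runPhase = function( name, ...args ) {
  if (typeof handlers[name] === 'function') handlers[name]( ...args )
}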
If you're not comfortable working with EventEmitters, this may be appealing. But going this route requires that you consider how to support other scenarios like callers wanting to remove a function, and late registration.
Quick sketch showing how to use apply with each function:
// my-module.js
module.exports = function callMyFunctions( functionList ) {
  functionList.forEach( fn => fn.apply( thisValue, arrayOfArguments ) )
}
Note that this module still has no idea what names the caller has assigned to these functions. Within this scope, each routine bears the moniker "fn."
I get the sense you have some misconceptions about how execution works, and that's led you to believe that the parts of the program need to know the names of other parts of the program. But that's not how continuation-passing style works.
Since you're firing caller functions based on specific times, it's possible the event model might be a good fit. Here's a sketch of what that might look like:
// caller
const AlarmClock = require('./your-module.js')
function doRoosterCall( exactTime ) {
  console.log('I am a rooster! Cock-a-doodle-doo!')
}
function soundCarHorn( exactTime ) {
  console.log('Honk! Honk!')
}
AlarmClock.on('sunrise', doRoosterCall)
AlarmClock.on('leave-for-work', soundCarHorn)
// etc
To accomplish that, you might do something like...
// your-module.js
const EventEmitter = require('events')
const singletonClock = new EventEmitter()
function checkForEvents() {
  const currentTime = new Date()
  // check for sunrise, which we'll define as 6:00am +/- 10 seconds
  if (nowIs('6:00am', 10 * 1000)) {
    singletonClock.emit('sunrise', currentTime)
  }
  // check for "leave-for-work": 8:30am +/- 1 minute
  if (nowIs('8:30am', 60 * 1000)) {
    singletonClock.emit('leave-for-work', currentTime)
  }
}
setInterval( checkForEvents, 1000 )
module.exports = singletonClock
(nowIs is some handwaving for time-comparisons. When doing cron-like work, you should assume your heartbeat function will almost never be fired when the time value is an exact match, and so you'll need something to provide "close enough" comparisons. I didn't provide an impl because (1) it seems like a peripheral concern here, and (2) I'm sure Momentjs, date-fns, or some other package provides something great so you won't need to implement it yourself.)
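(If you're curious, a crude version of that "close enough" check could look like the sketch below; the 'h:mmam'/'h:mmpm' parsing is my own assumption and it ignores the 12am/12pm edge cases.)
// crude "close enough" time comparison, sketch only
function nowIs( timeString, toleranceMs, now = new Date() ) {
  const isPm = timeString.includes('pm')
  const [ hours, minutes ] = timeString.replace(/am|pm/, '').split(':').map(Number)
  const target = new Date(now)
  target.setHours( isPm && hours !== 12 ? hours + 12 : hours, minutes, 0, 0 )
  return Math.abs(now - target) <= toleranceMs
}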

Postman: how to set up a library of (semi-)complicated reusable scripts for a collection

Update
I've completely rewritten this question based on subsequent investigation. Hopefully this will generate some answers.
I'm new to Postman, and trying to figure out how to most efficiently build a collection of tests for a REST application. There are a bunch of utility functions that I'd like to have accessible in each of my test scripts, but cut-and-pasting them into each test script seems like a horrible solution.
In looking at the various "scopes" in which Postman allows you to squirrel away data (e.g. globals, environment, collection), it seems that all of these are merely string/number stores. In other words, it properly stores them if you can/do stringify the results. But it doesn't actually allow you to store proper objects or functions. This makes sense, since each script seems to be run as a separate execution, so the idea of sharing pointers to things between different scripts doesn't make sense.
It seems like the accepted way to share utility functions is to toString() the function in the defining script (e.g. the Collection Pre-Req script), and then eval() that stringified version in the test script. For instance:
Collection Pre-Req Script
const utilFunc = () => { console.log("I am a utility function"); };
pm.environment.set("utilFunc",utilFunc.toString() );
Test Script
const utilFunc = eval(pm.environment.get("utilFunc"));
utilFunc();
The test script will successfully print "I am a utility function" to the console.
I've seen people do more complicated things where, if they have more than one utility function, they put them into an object like utils.func1 and utils.func2 and have the overall function return the utils object, so the test script still only needs a single line at the top importing the whole thing.
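For illustration, that utils-object pattern tends to look something like this (buildUtils and the function names are hypothetical):
Collection Pre-Req Script
const buildUtils = () => {
  const func1 = () => console.log("util one");
  const func2 = (msg) => console.log("util two: " + msg);
  return { func1, func2 };
};
pm.environment.set("utils", buildUtils.toString());
Test Script
const utils = eval(pm.environment.get("utils"))();
utils.func1();
utils.func2("hello");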
The problem I'm running into is scoping - since the literal text of the function is executed in the Test Script, everything the utility function relies on must be in that code, or otherwise exist at eval() time in the Test Script. For instance, if I do:
Collection Pre-Req Script
const baseUtilFunc = (foo) => { console.log(foo); };
const utilFunc1 = (param) => { baseUtilFunc("One: " + param); };
const utilFunc2 = (param) => { baseUtilFunc("Two: " + param); };
pm.environment.set("utilFunc1",utilFunc1.toString() );
pm.environment.set("utilFunc2",utilFunc2.toString() );
Test Script
const utilFunc1 = eval(pm.environment.get("utilFunc1"));
const utilFunc2 = eval(pm.environment.get("utilFunc2"));
utilFunc1("Test");
This fails because, in the Test Script, baseUtilFunc does not exist. Obviously, in this example, it'd be easy to fix. But in a more complicated world where the utility functions I expect to use in my Test Scripts are themselves built on top of underlying helper functions, it gets more difficult.
So what is the right way to handle this issue? Do people just cram all the relevant logic into one big function that they then call toString() on? Do they embed an extraction-from-environment-and-then-eval in each util function within its definition, so that it works in the Test Script context? Do they export each individual method?
There are different ways to do it. The way I did it recently for one of my projects was to create a project in Git and then use the raw URL to fetch the code. I have a sample created at the repo below:
https://github.com/tarunlalwani/postman-utils
To load the file you will need to associate the below code at collection level
if (typeof pmutil == "undefined") {
  var url = "https://raw.githubusercontent.com/tarunlalwani/postman-utils/master/pmutils.js";
  if (pm.globals.has("pmutiljs")) {
    eval(pm.globals.get("pmutiljs"))
  } else {
    console.log("pmutil not found. loading from " + url);
    pm.sendRequest(url, function (err, res) {
      eval(res.text());
      pm.globals.set('pmutiljs', res.text())
    });
  }
}
Then later, in the Tests or Pre-Request scripts, you will run the line of code below to load it:
eval(pm.globals.get("pmutiljs"))
And then you can use the functions easily in your tests.

Difference between JavaScript modularisation and dependency injection

What's the difference between the modularisation of JavaScript code (with Browserify, for example) and dependency injection?
Are they synonyms? Do the two go together? Or am I missing some point?
Modularisation refers to breaking code into individual, independent "packages".
Dependency injection refers to not hardcoding references to other modules.
As a practical example, you can write modules which are not using dependency injection:
import { Foo } from 'foo';
export function Bar() {
  return Foo.baz();
}
Here you have two modules, but this module imports a specific other hardcoded module.
The same module written using dependency injection:
export function Bar(foo) {
  return foo.baz();
}
Then somebody else can use this as:
import { Foo } from 'foo';
import { Bar } from 'bar';
Bar(Foo());
You inject the Foo dependency at call time, instead of hardcoding the dependency.
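One practical payoff (a small sketch of my own, assuming a test context): because the dependency is passed in at call time, a test can hand Bar a stand-in instead of the real Foo.
import { Bar } from 'bar';
// a hand-rolled test double standing in for Foo
const fakeFoo = { baz: () => 'stubbed result' };
console.log(Bar(fakeFoo)); // logs 'stubbed result' without touching the real foo module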
You can refer to this article:
Modules are code fragments that implement certain functionality and are written by using specific techniques. There is no out-of-the-box modularization scheme in the JavaScript language. The upcoming ECMAScript 6 specification tends to resolve this by introducing the module concept in the JavaScript language itself. This is the future.
and Dependency injection in JavaScript
The goal
Let's say that we have two modules. The first one is a service which makes Ajax requests and the second one is a router.
var service = function() {
  return { name: 'Service' };
}
var router = function() {
  return { name: 'Router' };
}
We have another function which needs these modules.
var doSomething = function(other) {
  var s = service();
  var r = router();
};
And to make things a little bit more interesting, the function needs to accept one more parameter. Sure, we could use the above code, but that's not really flexible. What if we want to use ServiceXML or ServiceJSON? Or what if we want to mock up some of the modules for testing purposes? We can't just edit the body of the function. The first thing which we all come up with is to pass the dependencies as parameters to the function. I.e.:
var doSomething = function(service, router, other) {
  var s = service();
  var r = router();
};
By doing this we are passing the exact implementation of the module which we want. However, this brings a new problem. Imagine if we have doSomething all over our code. What will happen if we need a third dependency? We can't edit all the function's calls. So, we need an instrument which will do that for us. That's what dependency injectors are trying to solve. Let's write down a few goals which we want to achieve:
we should be able to register dependencies
the injector should accept a function and should return a function which somehow gets the needed resources
we should not write a lot, we need short and nice syntax
the injector should keep the scope of the passed function
the passed function should be able to accept custom arguments, not only the described dependencies
A nice list, isn't it? Let's dive in.
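A minimal injector along those lines might look like the sketch below (my own illustration rather than the article's exact code; register and resolve are names I chose):
var injector = {
  dependencies: {},
  register: function(name, dep) {
    this.dependencies[name] = dep;
  },
  // wraps fn so the named dependencies are prepended to its arguments
  resolve: function(deps, fn, scope) {
    var self = this;
    return function() {
      var resolved = deps.map(function(name) { return self.dependencies[name]; });
      var custom = Array.prototype.slice.call(arguments);
      return fn.apply(scope || null, resolved.concat(custom));
    };
  }
};
injector.register('service', service);
injector.register('router', router);
var doSomething = injector.resolve(['service', 'router'], function(service, router, other) {
  var s = service();
  var r = router();
  console.log(s.name, r.name, other); // Service Router extra
});
doSomething('extra');
It registers dependencies, returns a wrapped function that receives them, keeps the scope, and still lets callers pass custom arguments, which covers the goals above.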

js - How to decorate/proxy/spy on all functions? For creating a runtime profiler

So I have this decorate function that takes an object and a method-name and wraps it with external logic.
function decorate(object, methodName) {
  var originalMethod = object[methodName];
  object[methodName] = function () {
    // pre-logic
    var retVal = originalMethod.apply(this, arguments);
    // post-logic
    return retVal;
  };
}
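For context, wrapping a single method with it looks like this (math and square are just example names):
// usage sketch
var math = { square: function (x) { return x * x; } };
decorate(math, 'square');
math.square(4); // pre-logic runs, then the original square, then post-logic; returns 16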
Now I want to wrap ALL of the functions in my application, i.e.
All the public functions of the object, traversed recursively
All private scope functions
All anonymous functions
Anything else I might have forgotten.
My purpose in doing this is to implement a "JS Profiler" that will run alongside my application during automated testing, and output performance data to logs.
I need this for testing purposes, so the solution must have minimal changes to the actual code of my application.
Possible solutions I've considered:
Public methods can be easily traversed and replaced using a recursive object traversal function (roughly sketched below, after this list).
Some hack using eval() to be able to access private methods.
Ideally, to handle all cases, I could use a proxy HTTP server (Node.js for example) that will transform each javascript file before sending it to the browser. This way my codebase will remain clean, but my tests will have the necessary changes.
The first 2 are only partial solutions, and the last one seems like overkill and also a potential "bug factory"...
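For reference, the kind of recursive traversal I mean in the first option could be roughly the following (wrapAll is a name I made up; the seen set just guards against cycles):
function wrapAll(object, seen) {
  seen = seen || new Set();
  if (object === null || typeof object !== 'object' || seen.has(object)) return;
  seen.add(object);
  Object.keys(object).forEach(function (key) {
    if (typeof object[key] === 'function') {
      decorate(object, key); // wrap public methods in place
    } else if (typeof object[key] === 'object') {
      wrapAll(object[key], seen); // recurse into nested namespaces
    }
  });
}
// wrapAll(myApp); // myApp is a placeholder for the application's root object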
Does anyone have any other ideas on how to achieve what I need?
