A more elegant way to test time passing in mocha - javascript

I recently had to fix a bug in an old React component at work. Generally, when fixing a bug, we write a test along with it. The problem is that the React component does an asynchronous fetch from the server (I could refactor the code so the async action is moved into Redux, but we don't do refactoring as part of bug fixes here), so when testing what the component is supposed to render, I have to wait 500ms to allow the promise to resolve.
I know I wouldn't have to use a setTimeout if I created an instance of the component and called the method directly; I could just do a .then. But we like to test the input/output of the component without calling internal methods.
Is there a more elegant solution than having to set a timeout? Here's the current code:
it('autofills and locks all the fields where user data is present', function(done) {
  const emailInput = $wrapper.find('#email');
  emailInput.value = 'email@example.com';
  emailInput.simulate('blur');

  // - autofill does an async request
  // - we need to wait for the promise to resolve before
  //   checking if the inputs are disabled or not
  setTimeout(() => {
    const identifierInput = $wrapper.find('#id');
    const lastNameInput = $wrapper.find('#last-name');
    const phoneNumberInput = $wrapper.find('#phone-number');
    const firstNameInput = $wrapper.find('#first-name');

    expect(firstNameInput.html().includes('disabled')).to.be.true;
    expect(lastNameInput.html().includes('disabled')).to.be.true;
    expect(phoneNumberInput.html().includes('disabled')).to.be.true;
    expect(identifierInput.html().includes('disabled')).to.be.true;
    done();
  }, 500);
});

" we don't do refactoring as part of bug fixes here"... Wow.
Anyway, if you NEED to write a test to complete your job, you will need to mock your API call. That 500ms timeout will fail any time the network is a bit slow, so it's a very, very fragile test.
Since you didn't specify much about how the request is made, I can't tell you anything more specific, but you will find mocking libraries for all HTTP request flavors: fetch-mock for fetch, nock... There are tons.
The idea is that you should set up a fake answer upfront and return it immediately (with a Promise.resolve() if needed).
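For example, a minimal sketch using fetch-mock, assuming the component autofills via window.fetch against a /user-data endpoint (both the library choice and the endpoint are assumptions here, not from the question):

const fetchMock = require('fetch-mock');

beforeEach(() => {
  // pre-programmed, already-resolved response for the autofill request
  fetchMock.get('/user-data', {
    firstName: 'Jane',
    lastName: 'Doe',
    phoneNumber: '555-0100',
    id: 'abc-123'
  });
});

afterEach(() => fetchMock.restore());

it('autofills and locks all the fields where user data is present', function() {
  const emailInput = $wrapper.find('#email');
  emailInput.value = 'email@example.com';
  emailInput.simulate('blur');

  // wait for the mocked fetch to settle instead of an arbitrary 500ms
  return fetchMock.flush().then(() => {
    expect($wrapper.find('#first-name').html().includes('disabled')).to.be.true;
    expect($wrapper.find('#last-name').html().includes('disabled')).to.be.true;
  });
});

Returning the promise from the it callback lets mocha wait for it, so both the done callback and the timeout disappear.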

What's the advantage of fastify-plugin over a normal function call?

This answer to a similar question does a great job of explaining how fastify-plugin works and what it does. After reading the explanation, I still have one question remaining: how is this different from a normal function call instead of using the .register() method?
To clarify with an example, how are the two approaches below different from each other:
const app = fastify();

// Register a fastify-plugin that decorates app
const myPlugin = fp((app: FastifyInstance) => {
  app.decorate('example', 10);
});
app.register(myPlugin);

// Just decorate the app directly
const decorateApp = (app: FastifyInstance) => {
  app.decorate('example', 10);
};
decorateApp(app);
By writing a decorateApp function you are creating your own "API" to load your application.
That said, the first burden you will face soon is sync vs. async:
decorateApp is a sync function
decorateAppAsync would be an async function
For example, suppose you need to preload something from the database before you can start your application.
const decorateApp = (app) => {
  app.register(require('@fastify/mongodb'))
};
const businessLogic = async (app) => {
  const data = await app.mongo.db.collection('data').find({}).toArray()
}
decorateApp(app)
businessLogic(app) // whoops: it is async
In this example you need to change a lot of code (see the sketch after this list):
the decorateApp function must be async
the mongodb registration must be awaited
the main code that loads the application must be async
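A sketch of what the plain-function version has to become (the mongo URL and the main() bootstrap are illustrative assumptions, not part of the answer):

const fastify = require('fastify');

const decorateApp = async (app) => {
  // the registration must now be awaited
  await app.register(require('@fastify/mongodb'), { url: 'mongodb://localhost/test' });
};

const businessLogic = async (app) => {
  const data = await app.mongo.db.collection('data').find({}).toArray();
  return data;
};

// ...and the main code that loads the application must be async too
const main = async () => {
  const app = fastify();
  await decorateApp(app);
  await businessLogic(app);
};

main();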
Instead, by using fastify's approach, you only need to update the plugin that loads the database:
const applicationConfigPlugin = fp(
+  async function (fastify) {
-  function (fastify, opts, next) {
-    fastify.register(require('@fastify/mongodb'))
-    next()
+    await fastify.register(require('@fastify/mongodb'))
  }
)
PS: note that the question's fastify-plugin example code omits the next callback since it is written as a sync function.
The next bad pattern is hidden coupling between functions.
Every application needs a config. Usually, the fastify instance is decorated with it.
So, you will have something like:
decorateAppWithConfig(app);
decorateAppWithSomethingElse(app);
Now, decorateAppWithSomethingElse will need to know that it is loaded after decorateAppWithConfig.
Instead, by using fastify-plugin, you can write:
const applicationConfigPlugin = fp(
  async function (fastify) {
    fastify.decorate('config', 42);
  },
  {
    name: 'my-app-config',
  }
)

const applicationBusinessLogic = fp(
  async function (fastify) {
    // ...
  },
  {
    name: 'my-app-business-logic',
    dependencies: ['my-app-config']
  }
)
// note the WRONG order of the plugins
app.register(applicationBusinessLogic);
app.register(applicationConfigPlugin);
Now, you will get a nice error instead of a Cannot read properties of undefined error when the config decorator is missing:
AssertionError [ERR_ASSERTION]: The dependency 'my-app-config' of plugin 'my-app-business-logic' is not registered
So, basically, writing a series of functions that use/decorate the fastify instance is doable, but it adds a new convention to your code that will have to manage the loading of the plugins. This job is already implemented by fastify, and fastify-plugin adds many validation checks on top of it.
So, considering the question's example: there is no difference, but applying that approach to a bigger application will lead to more complex code:
sync/async loading functions
poor error messages
hidden dependencies instead of explicit ones

Difference between fake, spy, stub and mock of sinon library (sinon fake vs spy vs stub vs mock)

I tried to understand the difference between the sinon library's fake, spy, stub and mock, but I'm not able to understand it clearly.
Can anybody help me understand it?
Just for understanding purposes, call:
FuncInfoCollector = a function that records arguments, return value, the value of this (context) and exceptions thrown (if any) for all of its calls.
(FuncInfoCollector is a dummy name given by me; it is not present in the sinon lib.)
Fake = FuncInfoCollector + can only create a fake function. To replace a function that already exists in the system under test, you call sinon.replace(target, fieldname, fake). You can wrap an existing function like this:
const org = foo.someMethod;
sinon.fake((...args) => org(...args));
A fake is immutable: once created, its behavior can't be changed.
var fakeFunc = sinon.fake.returns('foo');
fakeFunc();
// call count of fakeFunc (it will be 1 here)
fakeFunc.callCount;
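A short sketch of sinon.replace with a fake (the user object and fetchName method are illustrative names, not from sinon):

const user = {
  fetchName: () => { /* real implementation hitting the network */ }
};

const fake = sinon.fake.returns('Jane');
sinon.replace(user, 'fetchName', fake); // user.fetchName now points to the fake

user.fetchName();   // 'Jane'
fake.callCount;     // 1

sinon.restore();    // puts the original fetchName back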
Spy = FuncInfoCollector + can create a new function + can wrap a function that already exists in the system under test.
A spy is a good choice whenever the goal of a test is to verify that something happened.
// Can be passed as a callback to an async function to verify whether the callback was called
const spyFunc = sinon.spy();
// Creates a spy for the ajax method of the jQuery lib
sinon.spy(jQuery, "ajax");
// Tells whether the jQuery.ajax method was called exactly once
jQuery.ajax.calledOnce
Stub = Spy + it stubs the original function (can be used to change the behaviour of the original function).
var err = new Error('Ajax Error');
// So whenever jQuery.ajax is called in the code, it throws this error
sinon.stub(jQuery, "ajax").throws(err);
// Call the stubbed method and assert that it threw the expected error
try { jQuery.ajax(); } catch (e) { /* expected */ }
sinon.assert.threw(jQuery.ajax, err);
Mock = Stub + pre-programmed expectations.
var mk = sinon.mock(jQuery)
// Should be called at least 2 times and at most 5 times
mk.expects("ajax").atLeast(2).atMost(5);
// It throws the following exception when called (the assert used above is not needed now)
mk.expects("ajax").throws(new Error('Ajax Error'))
// Verifies that all the expectations above were met, hence separate assertions aren't needed
mk.verify();
Please also have a look at this related question: sinon.replace vs sinon.stub just to replace return value?
Just to add some more info to the otherwise good answer: we added the Fake API to Sinon because of shortcomings of the original APIs (Stub and Spy). The fact that those APIs are chainable led to constant design issues and recurring user problems, and they were bloated to cater to quite unimportant use cases, which is why we opted for creating a new immutable API that would be simpler to use, less ambiguous and cheaper to maintain. It was built on top of the Spy and Stub APIs to let Fakes be somewhat recognizable, and it has an explicit method for replacing props on objects (sinon.replace(obj, 'prop', fake)).
Fakes can essentially be used anywhere a stub or spy can be used, so I have not used the old APIs myself in 3-4 years; code using the more limited fakes is simpler for other people to understand.

Jasmine Async test generation

Let's imagine we have a promise that does a large amount of work and returns helper functions.
A banal example:
const testPromise = testFn => () => {
  // I/O promise that returns a helper for testing
  const helper = Promise.resolve({ testHelper: () => 'an helper function' });
  return helper.then(testFn).finally(() => console.log('tear down'));
};

// This describe would work as expected
describe('Async test approach', () => {
  it('A test', testPromise(async ({ testHelper }) => {
    expect(testHelper()).toBe('an helper function');
  }));
});

// This part doesn't work
describe('Async describe approach', testPromise(async ({ testHelper }) => {
  it('Test 1', () => {
    expect(testHelper()).toBe('an helper function');
  });
  it('Test 2', () => {
    expect(testHelper()).not.toBe('A chair');
  });
}));
What I would like to achieve is something like the second example where I can use async code within describe without re-evaluating testPromise.
describe doesn't handle async, so I am not even able to loop and create dynamic tests properly.
I did read many comments saying that describe should only be a simple way to group tests, but... then... how can someone generate tests asynchronously based on an I/O result?
Thanks
= ADDITIONAL CONSIDERATION =
Regarding all the comments you guys kindly added, I should have added a few additional details...
I am well aware that tests must be defined synchronously :), and that is exactly where the problems start. I totally disagree with that constraint, and I am trying to find an alternative that avoids before/after hooks and doesn't rely on an external variable. Among the Jest issues there was an open one to address this; it seems they did agree on making describe async, but they won't do it. The reason is that Jest uses Jasmine's implementation of describe, so this "fix" would have to be made there.
I wanted to avoid beforeAll and afterAll as much as I could. My purpose was to create an easy (and neat) way to define integration tests tailored to my needs, without making users worry about initialization and tear-down. I will continue to use the style of the first example above; that seems the best solution to me, even if it is clearly a longer process.
Take a look at Defining Tests. The doc says:
Tests must be defined synchronously for Jest to be able to collect your tests.
This is the principle for defining test cases, which means the it calls must be registered synchronously. That's why your second example doesn't work.
Some I/O operations should be done in beforeAll, afterAll, beforeEach, afterEach methods to prepare your test doubles and fixtures. The test should be isolated from the external environment as much as possible.
If you must do this, maybe you can write the dynamically obtained testHelper function to a static js file, and then test it in a synchronous way.
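A rough sketch of that idea, assuming Jest's globalSetup hook and illustrative file names (none of this is from the answer):

// jest.config.js (assumed): module.exports = { globalSetup: './global-setup.js' };

// global-setup.js: resolve the I/O promise once, persist the helper's result
const fs = require('fs');

module.exports = async () => {
  const { testHelper } = await Promise.resolve({ testHelper: () => 'an helper function' });
  fs.writeFileSync(
    './test-helper.generated.js',
    `module.exports = { testHelper: () => ${JSON.stringify(testHelper())} };`
  );
};

// In a spec file, the generated helper can then be required synchronously:
// const { testHelper } = require('./test-helper.generated.js');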
As it was noted, describe serves to group tests.
This can be achieved with beforeAll. Since beforeAll should be called anyway, it can be moved into testPromise:
const prepareHelpers = (testFn) => {
  beforeAll(() => {
    ...
    return helper.then(testFn);
  });
}

describe('Async describe approach', () => {
  let testHelper;
  prepareHelpers(helpers => { testHelper = helpers.testHelper });
  ...
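A fuller sketch of this beforeAll-based pattern, reusing the stand-in promise from the question (the helper body is illustrative, not real I/O):

const prepareHelpers = (testFn) => {
  beforeAll(() => {
    const helper = Promise.resolve({ testHelper: () => 'an helper function' });
    return helper.then(testFn); // Jasmine/Jest wait for the returned promise
  });
};

describe('Async describe approach', () => {
  let testHelper;
  prepareHelpers((helpers) => { testHelper = helpers.testHelper; });

  it('Test 1', () => {
    expect(testHelper()).toBe('an helper function');
  });

  it('Test 2', () => {
    expect(testHelper()).not.toBe('A chair');
  });
});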

RxJS Testing Observable sequence without passing scheduler

I have problems attempting to test a piece of code that is similar to the following function.
Basically the question boils down to: is it possible to change the Scheduler for the debounce operator without passing a separate Scheduler to the function call?
The following example should explain the use case a bit more concretely. I am trying to test a piece of code similar to the following, and I want to test the chain in the function (using a TestScheduler) without having to pass a scheduler to the debounce() operator.
// Production code
function asyncFunctionToTest(subject) {
  subject
    .tap((v) => console.log(`Tapping: ${v}`))
    .debounce(1000)
    .subscribe((v) => {
      // Here it would call ReactComponent.setState()
      console.log(`onNext: ${v}`)
    });
}
The test file would contain the following code to invoke the function and make sure the subject emits the values.
// Testfile
const testScheduler = new Rx.TestScheduler();
const subject = new Rx.Subject();
asyncFunctionToTest(subject);
testScheduler.schedule(200, () => subject.onNext('First'));
testScheduler.schedule(400, () => subject.onNext('Second'))
testScheduler.advanceTo(1000);
The test code above still takes one actual second to do the debounce. The only solution I have found is to pass the TestScheduler into the function and hand it to debounce(1000, testScheduler). This makes the debounce operator use the test scheduler.
My initial idea was to use observeOn or subscribeOn to change the default scheduler that is used throughout the operator chain, by changing
asyncFunctionToTest(subject);
to something like asyncFunctionToTest(subject.observeOn(testScheduler)); or asyncFunctionToTest(subject.subscribeOn(testScheduler));
That does not give me the result I expected; however, I presume I might not exactly understand the way the observeOn and subscribeOn operators work. (My guess now is that these operators change the scheduler the whole chain is run on, but individual operators still pick their own schedulers unless one is specifically passed?)
The following JSBin contains the runnable example where I passed in the scheduler: http://jsbin.com/kiciwiyowe/1/edit?js,console
No, not really, unless you actually patch the RxJS library. I know this was brought up recently as an issue, and there may be support for, say, changing the default scheduler at some point in the future, but at this time it can't be reliably done.
Is there any reason why you can't include the scheduler? All the operators that accept schedulers already do so optionally and have sensible defaults, so it really costs you nothing, given that your production code could simply omit the parameter.
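A minimal sketch of that suggestion, assuming RxJS 4's debounce falls back to its default scheduler when none is supplied:

function asyncFunctionToTest(subject, scheduler) {
  subject
    .tap((v) => console.log(`Tapping: ${v}`))
    .debounce(1000, scheduler) // undefined in production -> default scheduler
    .subscribe((v) => console.log(`onNext: ${v}`));
}

// Production: asyncFunctionToTest(subject);
// Test:       asyncFunctionToTest(subject, testScheduler);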
As a more general aside, the reason simply adding observeOn or subscribeOn doesn't fix it is that both of those operators only affect how events are propagated after they have been received by that operator.
For instance you could implement observeOn by doing the following:
Rx.Observable.prototype.observeOn = function (scheduler) {
  var source = this;
  return Rx.Observable.create((observer) => {
    source.subscribe(
      x => {
        // Reschedule this for a later propagation
        scheduler.schedule(x, (s, state) => observer.onNext(state));
      },
      // Errors get forwarded immediately
      e => observer.onError(e),
      // Delay completion
      () => scheduler.schedule(null, () => observer.onCompleted())
    );
  });
};
All the above does is reschedule the incoming events; if operators downstream or upstream have other delays, this operator has no effect on them. subscribeOn has similar behavior, except that it reschedules the subscription, not the events.

Deps autorun in Meteor JS

Decided to test out Meteor JS today to see if I would be interested in building my next project with it and decided to start out with the Deps library.
To get something up extremely quickly to test this feature out, I am using the 500px API to simulate changes. After reading through the docs quickly, I thought I would have a working example of it on my local box.
The function seems to only autorun once, which is not how it is supposed to work based on my initial understanding of this feature in Meteor.
Any advice would be greatly appreciated. Thanks in advance.
if (Meteor.isClient) {
  var Api500px = {
    dep: new Deps.Dependency,
    get: function () {
      this.dep.depend();
      return Session.get('photos');
    },
    set: function (res) {
      Session.set('photos', res.data.photos);
      this.dep.changed();
    }
  };

  Deps.autorun(function () {
    Api500px.get();
    Meteor.call('fetchPhotos', function (err, res) {
      if (!err) Api500px.set(res);
      else console.log(err);
    });
  });

  Template.photos.photos = function () {
    return Api500px.get();
  };
}

if (Meteor.isServer) {
  Meteor.methods({
    fetchPhotos: function () {
      var url = 'https://api.500px.com/v1/photos';
      return HTTP.call('GET', url, {
        params: {
          consumer_key: 'my_consumer_key_here',
          feature: 'fresh_today',
          image_size: 2,
          rpp: 24
        }
      });
    }
  });
}
Welcome to Meteor! A couple of things to point out before the actual answer...
Session variables have reactivity built in, so you don't need to use the Deps package to add Deps.Dependency properties when you're using them. This isn't to suggest you shouldn't roll your own reactive objects like this, but if you do, then their get and set functions should return and update a normal JavaScript property of the object (like value, for example) rather than a Session variable, with the reactivity being provided by the depend and changed methods of the dep property. The alternative would be to just use the Session variables directly and not bother with the Api500px object at all.
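A minimal sketch of such a self-contained reactive object, storing a plain value property instead of a Session variable (same pre-1.0 Deps API as the question):

var Api500px = {
  dep: new Deps.Dependency(),
  value: [], // plain JavaScript property instead of a Session variable
  get: function () {
    this.dep.depend();   // registers the current computation as a dependent
    return this.value;
  },
  set: function (res) {
    this.value = res.data.photos;
    this.dep.changed();  // invalidates every dependent computation
  }
};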
It's not clear to me what you're trying to achieve reactively here - apologies if it should be. Are you intending to run fetchPhotos repeatedly in an infinite loop, such that every time a result is returned the function gets called again? If so, it's really not the best way to do things - it would be much better to subscribe to a server publication (using Meteor.subscribe and Meteor.publish), have that publication function run the API call at whatever regularity is required, and then publish the results to the client. That would dramatically reduce client-server communication with the same net result.
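A sketch of that publish/subscribe alternative (the 'photos' publication name, the 60-second interval and the client-side collection are all illustrative assumptions):

if (Meteor.isServer) {
  Meteor.publish('photos', function () {
    var self = this;
    var published = {};

    var fetchAndPublish = function () {
      var res = HTTP.call('GET', 'https://api.500px.com/v1/photos', {
        params: { consumer_key: 'my_consumer_key_here', feature: 'fresh_today' }
      });
      res.data.photos.forEach(function (photo) {
        var id = String(photo.id);
        if (published[id]) self.changed('photos', id, photo);
        else { self.added('photos', id, photo); published[id] = true; }
      });
      self.ready();
    };

    fetchAndPublish();
    var handle = Meteor.setInterval(fetchAndPublish, 60 * 1000);
    self.onStop(function () { Meteor.clearInterval(handle); });
  });
}

if (Meteor.isClient) {
  Photos = new Meteor.Collection('photos'); // client-side mirror of the publication
  Meteor.subscribe('photos');
  Template.photos.photos = function () {
    return Photos.find();
  };
}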
Having said all that, why would it only be running once? The two possible explanations that spring to mind are that an error is being returned (and thus Api500px.set is never called), or the fact that a Session.set call doesn't actually fire a dependency-changed event if the new value is the same as the existing value. However, in the latter case I would still expect your function to run repeatedly, as you have your own depend and changed structure surrounding the Session variable, which does not implement that self-limiting logic; so having Api500px.get in the autorun should mean that it reruns when Api500px.set is called even if the Session.set inside it isn't actually doing anything. If it's not the former diagnosis, I'd just log everything in sight and the answer should present itself.
