RxJS Testing Observable sequence without passing scheduler - javascript

I am having trouble testing a piece of code similar to the following function.
Basically the question boils down to: is it possible to change the scheduler for the debounce operator without passing a separate Scheduler to the function call?
The following example should make the use case a bit more concrete. I want to test the chain in the function (using a TestScheduler) without having to pass a scheduler to the debounce() operator.
// Production code
function asyncFunctionToTest(subject) {
  subject
    .tap((v) => console.log(`Tapping: ${v}`))
    .debounce(1000)
    .subscribe((v) => {
      // Here it would call ReactComponent.setState()
      console.log(`onNext: ${v}`);
    });
}
The test file would contain the following code to invoke the function and make sure the subject emits the values.
// Test file
const testScheduler = new Rx.TestScheduler();
const subject = new Rx.Subject();
asyncFunctionToTest(subject);
testScheduler.schedule(200, () => subject.onNext('First'));
testScheduler.schedule(400, () => subject.onNext('Second'));
testScheduler.advanceTo(1000);
The test code above still takes one actual second to do the debounce. The only solution I have found is to pass the TestScheduler into the function and hand it to the debounce(1000, testScheduler) call, which makes the debounce operator use the test scheduler.
My initial idea was to use observeOn or subscribeOn to change the default scheduler used throughout the operator chain, by changing
asyncFunctionToTest(subject);
to something like asyncFunctionToTest(subject.observeOn(testScheduler)); or asyncFunctionToTest(subject.subscribeOn(testScheduler));
That does not give the result I expected, but I presume I do not exactly understand how the observeOn and subscribeOn operators work. (My guess now is that these operators change the scheduler the chain is observed/subscribed on, but that individual operators still pick their own schedulers unless one is passed explicitly?)
The following JSBin contains the runnable example where I passed in the scheduler: http://jsbin.com/kiciwiyowe/1/edit?js,console

No, not really, unless you actually patched the RxJS library. I know this was brought up recently as an issue, and there may be support for changing the default scheduler at some point in the future, but at this time it can't be reliably done.
Is there any reason why you can't include the scheduler? All the operators that accept schedulers already do so optionally and have sensible defaults, so it really costs you nothing, given that your production code can simply ignore the parameter.
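For example, a minimal sketch of that shape (the production caller simply omits the argument, so debounce should fall back to its default scheduler):
// Production code: scheduler is optional and normally left undefined
function asyncFunctionToTest(subject, scheduler) {
  subject
    .tap((v) => console.log(`Tapping: ${v}`))
    .debounce(1000, scheduler) // undefined in production -> default scheduler
    .subscribe((v) => console.log(`onNext: ${v}`));
}

// In the test, the TestScheduler is passed explicitly
asyncFunctionToTest(subject, testScheduler);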
As a more general aside, the reason simply adding observeOn or subscribeOn doesn't fix it is that both of those operators only affect how events are propagated after they have been received by that operator.
For instance you could implement observeOn by doing the following:
// Note: a regular function is needed here so that `this` refers to the source Observable
Rx.Observable.prototype.observeOn = function (scheduler) {
  var source = this;
  return Rx.Observable.create((observer) => {
    return source.subscribe(
      (x) => {
        // Reschedule this for a later propagation
        scheduler.schedule(x, (s, state) => observer.onNext(state));
      },
      // Errors get forwarded immediately
      (e) => observer.onError(e),
      // Delay completion
      () => scheduler.schedule(null, () => observer.onCompleted()));
  });
};
All the above is doing is rescheduling the incoming events; if operators downstream or upstream have other delays, this operator has no effect on them. subscribeOn has similar behavior, except that it reschedules the subscription, not the events.
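For comparison, a stripped-down subscribeOn could look something like this (a sketch that ignores proper disposal bookkeeping):
Rx.Observable.prototype.subscribeOn = function (scheduler) {
  var source = this;
  return Rx.Observable.create((observer) => {
    // Only the act of subscribing is scheduled; events then flow through untouched
    return scheduler.schedule(null, () => {
      source.subscribe(observer);
    });
  });
};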


Cypress how to store values yielded by a function into a variable

This has been bugging me for a long time. I am not well versed in JavaScript. Here it goes:
How do I store the return value of a function in a variable?
lenValue = cy.get(selector).children().length
The above line of code returns undefined. But when I try the following in the Cypress test runner console, I get a valid output:
cy.$$(selector).children().length --> gives me the correct number
How do I return a value from inside a then function and catch it for reuse later?
file1.js
function a(selector, attrName) {
  cy.get(selector).then(function ($el) {
    return $el.attr(attrName);
  });
}
file2.js
state = file1Obj.a('#name','name')
What you're doing makes complete sense, but simply put, you cannot (per the docs).
https://docs.cypress.io/guides/core-concepts/variables-and-aliases/#Return-Values
You can, however, use aliases to accomplish what (I think) you're after.
https://docs.cypress.io/guides/core-concepts/variables-and-aliases/#Aliases
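A minimal sketch of the alias approach (the selector and alias name are just illustrative):
// Store the attribute value under an alias instead of a plain variable
cy.get('#name').invoke('attr', 'name').as('nameAttr')

// Later in the same test, read it back
cy.get('@nameAttr').then((nameAttr) => {
  // use nameAttr here
})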
#aeischeid shows you the wrong way to do it.
His code works only for a static site, but web pages are rarely static. As soon as API fetches are involved, lucky timing goes out the window and the test bombs.
This is why Cypress commands have automatic retry. Otherwise we could just build tests with jQuery.
Since cy.$$(selector).children().length gives you the correct number, use that inside the helper function.
function a(selector, attrName) {
  return cy.$$(selector).attr(attrName); // jQuery methods used
}
Or
function a(selector, attrName) {
  return Cypress.$(selector).attr(attrName); // jQuery methods used
}
But be aware that jQuery only handles static pages; it does not retry if the attribute that you want to query arrives slowly.
For that, use a command:
cy.get('#name')
  .should('have.attr', 'name')  // retries until name exists
  .then(name => {               // guaranteed to have a value
    // use name here
  })
Here is an example from a Cypress test I have that seems pretty relevant:
let oldDescription;
cy.get('input#description').should(($input) => {
  oldDescription = $input.val();
});

let randomDescription = Math.random().toString(36).substring(7);
cy.get('input#description').clear().type(randomDescription);

cy.get('input#description')
  .parents('.ant-table-row')
  .contains('Save').click();

cy.get('input#description').should('not.exist');
cy.contains(`${randomDescription}`);
cy.contains(`${oldDescription}`).should('not.exist');
Because oldDescription is set inside an asynchronous callback, it isn't safe to expect it to be set further down in the code outside of that callback. But in a lot of cases with Cypress you end up having some other .get call or something else that waits, effectively pausing the code long enough that you can get away with not worrying about it.
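For what it's worth, a sketch of the same flow that avoids the external variable by using an alias (the alias name is illustrative):
// Capture the current value under an alias instead of a closure variable
cy.get('input#description').invoke('val').as('oldDescription')

// ... clear, type the random description and save, as above ...

// Read the alias back when asserting
cy.get('@oldDescription').then((oldDescription) => {
  cy.contains(oldDescription).should('not.exist')
})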

Difference between fake, spy, stub and mock of sinon library ( sinon fake vs spy vs stub vs mock )

I tried to understand the difference between the sinon library's fake, spy, stub and mock, but I am not able to understand it clearly.
Can anybody help me understand them?
Just for understanding purposes, call:
FuncInfoCollector = a function that records arguments, return value, the value of this (context) and exception thrown (if any) for all of its calls.
(FuncInfoCollector is a dummy name given by me; it is not present in the Sinon lib.)
Fake = FuncInfoCollector + can only create a fake function. To replace a function that already exists in the system under test you call sinon.replace(target, fieldname, fake). You can wrap an existing function like this:
const org = foo.someMethod;
const wrapped = sinon.fake((...args) => org(...args));
A fake is immutable: once created, its behavior can't be changed.
var fakeFunc = sinon.fake.returns('foo');
fakeFunc();
// call count of fakeFunc (it will be 1 here)
fakeFunc.callCount;
Spy = FuncInfoCollector + can create a new function + can wrap a function that already exists in the system under test.
Spy is a good choice whenever the goal of a test is to verify something happened.
// Can be passed as a callback to an async function to verify whether the callback gets called
const spyFunc = sinon.spy();
// Creates a spy for the ajax method of the jQuery lib
sinon.spy(jQuery, "ajax");
// Tells whether jQuery.ajax was called exactly once
jQuery.ajax.calledOnce
Stub = Spy + it replaces the original function (can be used to change the behaviour of the original function).
var err = new Error('Ajax Error');
// Whenever jQuery.ajax is called in the code under test it now throws this Error
sinon.stub(jQuery, "ajax").throws(err);
// Assert that jQuery.ajax threw that Error (note: pass the stub itself, not the result of calling it)
sinon.assert.threw(jQuery.ajax, err);
Mock = Stub + pre-programmed expectations.
var mk = sinon.mock(jQuery)
// Should be called at least 2 times and at most 5 times
mk.expects("ajax").atLeast(2).atMost(5);
// It throws the following exception when called (the assert used above is not needed now)
mk.expects("ajax").throws(new Error('Ajax Error'))
// Checks whether all of the above expectations were met, hence separate assertions aren't needed
mk.verify();
Please also have a look at this related question: sinon.replace vs sinon.stub just to replace return value?
Just to add some more info to the otherwise good answer: we added the Fake API to Sinon because of shortcomings of the other original APIs (Stub and Spy). The fact that those APIs are chainable led to constant design issues and recurring user problems, and they were bloated to cater to quite unimportant use cases, which is why we opted for creating a new immutable API that would be simpler to use, less ambiguous and cheaper to maintain. It was built on top of the Spy and Stub APIs to let fakes be somewhat recognizable, and it has an explicit method for replacing props on objects (sinon.replace(obj, 'prop', fake)).
Fakes can essentially be used anywhere a stub or spy can be used, and so I have not used the old APIs myself in 3-4 years, as code using the more limited fakes is simpler for other people to understand.
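A minimal sketch of that style, using sinon.replace with a fake (the userService object and its method are made up for illustration):
const sinon = require('sinon');

// Hypothetical object under test
const userService = { fetchUser: (id) => { /* real I/O here */ } };

// Replace the real method with an immutable fake returning a canned value
const fakeFetch = sinon.fake.returns({ id: 1, name: 'Ada' });
sinon.replace(userService, 'fetchUser', fakeFetch);

userService.fetchUser(1);

// The fake records its calls just like a spy
console.log(fakeFetch.calledOnceWith(1)); // true

// Restore everything that was replaced via sinon.replace
sinon.restore();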

Jasmine Async test generation

Let's imagine we have a promise that does a large amount of operations and returns helper functions.
A trivial example:
const testPromise = testFn => () => {
  // I/O promise that returns a helper for testing
  const helper = Promise.resolve({ testHelper: () => 'an helper function' })
  return helper.then(testFn).finally(() => console.log('tear down'))
}

// This describe would work as expected
describe('Async test approach', () => {
  it('A test', testPromise(async ({ testHelper }) => {
    expect(testHelper()).toBe('an helper function')
  }))
})

// This part doesn't work
describe('Async describe approach', testPromise(async ({ testHelper }) => {
  it('Test 1', () => {
    expect(testHelper()).toBe('an helper function')
  })
  it('Test 2', () => {
    expect(testHelper()).not.toBe('A chair')
  })
}))
What I would like to achieve is something like the second example, where I can use async code within describe without re-evaluating testPromise.
describe doesn't handle async, so I am not even able to loop and create dynamic tests properly.
I have read many comments around saying that describe should only be a simple way to group tests but... then... how can someone make async generated tests based on an I/O result?
Thanks
= ADDITIONAL CONSIDERATION =
Regarding all the comments you kindly added, I should have added a few more details...
I am well aware that tests must be defined synchronously :), and that is exactly where the problem starts. I totally disagree with that constraint and I am trying to find an alternative that avoids before/after hooks and does not rely on an external variable. Among the Jest issues there was an open one to address exactly this; it seems they did agree on making describe async, but they won't do it, the reason being that Jest uses Jasmine's implementation of describe and this "fix" would have to be done there.
I wanted to avoid beforeAll and afterAll as much as I could. My purpose was creating an easy (and neat) way to define integration tests tailored to my needs, without making users worry about initialization and tear-down. I will continue to use the style of Example 1 above; that seems the best solution to me, even if it is clearly a longer process.
Take a look at Defining Tests. The doc says:
Tests must be defined synchronously for Jest to be able to collect your tests.
This is the principle for defining test cases, which means the it function should be called synchronously. That's why your second example doesn't work.
Some I/O operations should be done in beforeAll, afterAll, beforeEach, afterEach methods to prepare your test doubles and fixtures. The test should be isolated from the external environment as much as possible.
If you must do this, maybe you can write the dynamically obtained testHelper function to a static js file, and then test it in a synchronous way
As was noted, describe serves to group tests.
This can be achieved with beforeAll. Since beforeAll should be called anyway, it can be moved into testPromise:
const prepareHelpers = (testFn) => {
  beforeAll(() => {
    ...
    return helper.then(testFn);
  })
}

describe('Async describe approach', () => {
  let testHelper;
  prepareHelpers(helpers => { testHelper = helpers.testHelper });
  ...
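For completeness, a sketch of how that could look end to end, assuming the helper promise from the question (returning the promise from beforeAll makes the framework wait before running the tests):
const prepareHelpers = (testFn) => {
  beforeAll(() => {
    const helper = Promise.resolve({ testHelper: () => 'an helper function' })
    // Returning the promise makes Jasmine/Jest wait for it
    return helper.then(testFn)
  })
}

describe('Async describe approach', () => {
  let testHelper
  prepareHelpers(helpers => { testHelper = helpers.testHelper })

  it('Test 1', () => {
    expect(testHelper()).toBe('an helper function')
  })

  it('Test 2', () => {
    expect(testHelper()).not.toBe('A chair')
  })
})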

javascript: module calls a function in the file that requires the module

This is my first time writing a JS library. The library is intended to execute, at specific times, functions defined in the file that required the library. Kind of like Angular executes user-implemented hooks such as $onInit, except that in my case the user can define an arbitrary number of functions to be called by my library. How can I implement that?
One way I have in mind is to define a registerFunction(name, function) method, which maps function names to implementations. But can the user just give me an array of names so that I automatically register the corresponding functions for them?
Unless you have a specific requirement that it do so, your module does not need to know the names of the functions it is provided. When your module invokes those functions, it will do so by acting on direct references to them rather than by using their names.
For example:
// my-module.js
module.exports = function callMyFunctions( functionList ) {
  functionList.forEach( fn => fn() )
}

// main application
const myFunc1 = () => console.log('Function 1 executing')
const myFunc2 = () => console.log('Function 2 executing')

const moduleThatInvokesMyFunctions = require('./my-module.js')

// instruct the module to invoke my 2 cool functions
moduleThatInvokesMyFunctions([ myFunc1, myFunc2 ])

//> Function 1 executing
//> Function 2 executing
See that the caller provides direct function references to the module, which the module then uses -- without caring or even knowing what those functions are called. (Yes, you can obtain their names by inspecting the function references, but why bother?)
If you want a more in-depth answer or explanation, it would help to know more about your situation. What environment does your library target: browsers? nodejs? Electron? react-native?
The library is intended to execute, at specific times, functions in the file that required the library
The "at specific times" suggests to me something that is loosely event-based. So, depending on what platform you're targeting, you could actually use a real EventEmitter. In that case, you'd invent unique names for each of the times that a function should be invoked, and your module would then export a singleton emitter. Callers would then assign event handlers for each of the events they care about. For callers, that might look like this:
const lifecycleManager = require('./your-module.js')
lifecycleManager.on( 'boot', myBootHandler )
lifecycleManager.on( 'config-available', myConfigHandler )
// etc.
A cruder way to handle this would be for callers to provide a dictionary of functions:
const orchestrateJobs = require('./your-module.js')
orchestrateJobs({
  'boot': myBootHandler,
  'config-available': myConfigHandler
})
If you're not comfortable working with EventEmitters, this may be appealing. But going this route requires that you consider how to support other scenarios like callers wanting to remove a function, and late registration.
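A rough sketch of what the module side of that dictionary approach could look like (the registry and the runJob trigger are illustrative, not part of any existing library):
// your-module.js (sketch)
const registeredJobs = {}

module.exports = function orchestrateJobs( jobs ) {
  // Remember the caller-supplied handlers by job name
  Object.assign( registeredJobs, jobs )
}

// Called internally whenever one of the "specific times" arrives
function runJob( name, ...args ) {
  const handler = registeredJobs[ name ]
  if (typeof handler === 'function') handler( ...args )
}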
Quick sketch showing how to use apply with each function:
// my-module.js
module.exports = function callMyFunctions( functionList ) {
  functionList.forEach( fn => fn.apply( thisValue, arrayOfArguments ) )
}
Note that this module still has no idea what names the caller has assigned to these functions. Within this scope, each routine bears the moniker "fn."
I get the sense you have some misconceptions about how execution works, and that's led you to believe that the parts of the program need to know the names of other parts of the program. But that's not how continuation-passing style works.
Since you're firing caller functions based on specific times, it's possible the event model might be a good fit. Here's a sketch of what that might look like:
// caller
const AlarmClock = require('./your-module.js')

function doRoosterCall( exactTime ) {
  console.log('I am a rooster! Cock-a-doodle-doo!')
}

function soundCarHorn( exactTime ) {
  console.log('Honk! Honk!')
}

AlarmClock.on('sunrise', doRoosterCall)
AlarmClock.on('leave-for-work', soundCarHorn)
// etc
To accomplish that, you might do something like...
// your-module.js
const EventEmitter = require('events')
const singletonClock = new EventEmitter()

function checkForEvents() {
  const currentTime = new Date()

  // check for sunrise, which we'll define as 6:00am +/- 10 seconds
  if (nowIs('6:00am', 10 * 1000)) {
    singletonClock.emit('sunrise', currentTime)
  }

  // check for "leave-for-work": 8:30am +/- 1 minute
  if (nowIs('8:30am', 60 * 1000)) {
    singletonClock.emit('leave-for-work', currentTime)
  }
}

setInterval( checkForEvents, 1000 )

module.exports = singletonClock
(nowIs is some handwaving for time comparisons. When doing cron-like work, you should assume your heartbeat function will almost never fire when the time value is an exact match, so you'll need something that provides "close enough" comparisons. I didn't provide an implementation because (1) it seems like a peripheral concern here, and (2) I'm sure Moment.js, date-fns, or some other package provides something great, so you won't need to implement it yourself.)

How to wait until multiple EventEmitter instances have emitted same event in Node.js?

I'm working on a Node.js module/utility which will allow me to scaffold some directories/files. Long story short, right now I have a main function which looks something like this:
util.scaffold("FileName")
This "scaffold" method returns an EventEmitter instance, so, when using this method I can do something like this:
util.scaffold("Name")
.on("done", paths => console.log(paths)
In other words, when all the files are created, the event "done" will be emitted with all the paths of the scaffolded files.
Everything good so far.
Right now, I'm trying to do some tests and benchmarks with this method, and I'm trying to find a way to perform some operations (assertions, logs, etc.) after this "scaffold" method has been called multiple times with different "name" arguments. For example:
const names = ["Name1", "Name2", "Name3"]
const emitters = names.map(name => {
return util.scaffold(name)
})
If I was returning a Promise instead of an EventEmitter, I know that I could do something like this:
Promise.all(promises).then(() => {
  // perform assertions, logs, etc
})
However, I'm not sure how can I do the equivalent using EventEmitters. In other words, I need to wait until all these emitters have emitted this same event (i.e. "done") and then perform another operation.
Any ideas/suggestions how to accomplish this?
Thanks in advance.
With Promise.all you have a single piece of information about when "everything" is done.
Of course, that is when all promises inside are fulfilled/rejected.
If you have an EventEmitter, the information about when "everything" is done cannot be stored inside your EventEmitter logic, because it doesn't know where or how often the event is emitted.
So the first solution would be to manage an external "everything-done" state, and when it changes to true you perform the other operation.
So, like Promise.all, you have to wrap around it.
The second approach I could imagine is a factory that builds your EventEmitters and keeps track of the instances. Then this factory could provide the information whether all instances have fired. But this approach could fail on many levels: one instance -> many calls; one instance -> no call; ...
Just my 5 cents, and I would be happy to see another solution.
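As an aside, one compact way to "wrap around it" is Node's built-in events.once helper (available since Node 11.13), which turns a single event emission into a promise. A sketch under that assumption:
const { once } = require('events')

const names = ['Name1', 'Name2', 'Name3']

// once(emitter, 'done') resolves with the array of arguments passed to emit()
const allDone = names.map(name => once(util.scaffold(name), 'done'))

Promise.all(allDone).then(results => {
  // results is an array like [[paths1], [paths2], [paths3]]
  // perform assertions, logs, etc.
})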
The simplest approach, as mentioned by others, is to return promises instead of EventEmitter instances. However, pursuant to your question, you can write your callback for the done event as follows:
const names = ['Name1', 'Name2', 'Name3']
let count = 0

names.forEach(name => {
  util.scaffold(name).on('done', (paths) => {
    count += 1
    if (count < names.length) {
      // There is unfinished scaffolding
    } else {
      // All scaffolding complete
    }
  })
})
I ended up doing what #theGleep suggested and wrapping each of those emitters inside a Promise, like this:
const names = ["Name1", "Name2", "Name3"]
const promises = names.map(name => {
return new Promise((resolve) => {
util.scaffold(name).on("done", paths => {
resolve(paths)})
})
})
// and then
Promise.all(promises).then(result => {
// more operations
})
It seems to be doing what I need so far, so I'll just use this for now. Thanks everyone for your feedback :)
