I need to share an object (db connection pool) between multiple test suites in Jest.
I have read about globalSetup and it seems to be the right place to initialize this object, but how can I pass that object from globalSetup context to each test suite context?
I'm dealing with a very similar issue, although in my case I was trying to share a single DB container (I'm using dockerode) that I spin up once before all tests and remove afterwards, to avoid the overhead of spinning up a container for each suite.
Unfortunately for us, after going over a lot of documentation and GitHub issues, my conclusion is that this is by design: Jest's philosophy is to run tests sandboxed, and under that restriction I totally get why they chose not to support this.
Specifically, for my use case I ended up spinning up a container in a globalSetup script, tagging it with a unique identifier for the current test run (say, a timestamp) and removing it at the end in globalTeardown (that works since the global setup/teardown steps can share state between them).
This looks roughly like:
const options: ContainerCreateOptions = {
    Image: POSTGRES_IMAGE,
    Tty: false,
    HostConfig: {
        AutoRemove: true,
        PortBindings: {'5432/tcp': [{HostPort: `${randomPort}`}]}
    },
    Env: [`POSTGRES_DB=${this.env['DB_NAME']}`,
          `POSTGRES_USER=${this.env['DB_USERNAME']}`,
          `POSTGRES_PASSWORD=${this.env['DB_PASSWORD']}`]
};
options.Labels = {testContainer: `${CURRENT_TEST_TIMESTAMP}`};

let container = await this.docker.createContainer(options);
await container.start();
where this.env in my case holds the contents of a .env file I preload based on the test.
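The matching globalTeardown can then find everything tagged for this run and tear it down. A rough sketch only (it assumes the timestamp is also reachable there, e.g. via process.env; since AutoRemove is set in the options above, stopping the container is enough):

// globalTeardown.ts (sketch)
import Docker from 'dockerode';

export default async function globalTeardown() {
  const docker = new Docker();
  // find every container labelled by globalSetup for this test run
  const containers = await docker.listContainers({
    filters: { label: [`testContainer=${process.env.CURRENT_TEST_TIMESTAMP}`] },
  });
  // stopping is enough: AutoRemove deletes the containers afterwards
  await Promise.all(containers.map(c => docker.getContainer(c.Id).stop()));
}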
To get the container port/IP (or whatever else I'm interested in) into my tests, I use a custom environment that my DB-requiring tests use; it exposes the relevant information on a global variable (reminder: you can only put primitives and plain objects on global, not instances).
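A stripped-down sketch of such an environment (the global names __DB_HOST__/__DB_PORT__ are just examples, and the exact import shape varies with your Jest version):

// dbEnvironment.js (sketch)
const NodeEnvironment = require('jest-environment-node');

class DbEnvironment extends NodeEnvironment {
  async setup() {
    await super.setup();
    // only primitives and plain objects survive the copy into the sandbox
    this.global.__DB_HOST__ = '127.0.0.1';
    this.global.__DB_PORT__ = Number(process.env.TEST_DB_PORT);
  }
}

module.exports = DbEnvironment;

Suites that need the DB then point testEnvironment at this file and read global.__DB_PORT__ to build their own pool.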
In your case, the requirement to pass a connection pool between suites is probably just not suited to Jest, since it will never let you pass an instance between different suites (at least that's my understanding based on all of the links I shared). You can, however, try either:
Put all tests that need the same connection pool into a single suite/file (not great, but it would answer your needs)
Do something similar to what I suggested (set up the DB once in globalSetup, construct the connection pool in a custom environment, and then at least each suite gets its own pool); that way you get a lot of code reuse for the setup, which is nice. Something similar is discussed here.
Miscellaneous: I stumbled across testdeck, which might answer your needs as well (set up an abstract test class with your connection pool and inherit from it in all the tests that need it). I didn't try it, so I don't know how it will behave with Jest, but when I was writing in Java that's how we used to achieve similar functionality with TestNG.
In each test file you can define a function to load the necessary data or to mock the data. Use the beforeEach() function for this. Documentation example here!
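For example (a minimal sketch; createPool is a hypothetical helper of your own, not a Jest API):

// someFeature.test.js
const { createPool } = require('./testDbHelpers'); // hypothetical helper module

let pool;

beforeEach(async () => {
  pool = await createPool(); // or seed/mock whatever data this file needs
});

afterEach(async () => {
  await pool.end(); // clean up so tests stay isolated
});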
Are there any advantages (or disadvantages) to using @nestjs/config instead of dotenv to retrieve env vars? In both cases I could create a class that is responsible for all env vars, but should I?
I know @nestjs/config uses dotenv under the hood, but is there any reason why one should choose one over the other?
The two big advantages are the ability to use Joi, class-validator, or whatever else you want as a schema validator, so your env values are checked up front instead of blowing up when you access them at runtime (an earlier feedback loop means fewer failures later on), and the use of DI, which makes it easier (usually) to mock an env variable's value in your test cases rather than having to assign to process.env itself. There are also slight speed improvements, as Nest caches the value, so reading it again doesn't hit process.env, but other than that there's not too much to mention. If you don't want to use it, don't feel like you have to. There is also the disadvantage of not being able to use the ConfigService inside a decorator.
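A hedged sketch of both points (MyService is a made-up consumer):

import * as Joi from 'joi';
import { Injectable } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { Test } from '@nestjs/testing';

// 1. Validate env vars at bootstrap instead of failing at first access
//    (this forRoot() call goes into your AppModule imports):
ConfigModule.forRoot({
  validationSchema: Joi.object({
    DB_HOST: Joi.string().required(),
    DB_PORT: Joi.number().default(5432),
  }),
});

@Injectable()
class MyService {
  constructor(private readonly config: ConfigService) {}
  dbHost() { return this.config.get<string>('DB_HOST'); }
}

// 2. In a test, stub ConfigService through DI instead of mutating process.env:
it('reads the host from config', async () => {
  const moduleRef = await Test.createTestingModule({
    providers: [
      MyService,
      { provide: ConfigService, useValue: { get: () => 'localhost' } },
    ],
  }).compile();

  expect(moduleRef.get(MyService).dbHost()).toBe('localhost');
});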
My understanding is that using @nestjs/config makes it easy to manage your config/env vars as a module in your project, so it can be easily swapped in different places:
e.g. if you need a different set of config for tests, you don't have to actually modify process.env.xxx or use a different .env file.
However, if you do that, it requires all/most of your other services to adopt this pattern as well. It wouldn't be so helpful if all your other services are pure function exports.
I'm trying to set webdriverio's maxInstances > 1 for our product's automated tests, but I cannot find a good way to make the different test instances use different users. Sometimes different instances will log in with the same user, which causes the first instance's session to time out.
Does anybody know how to lock/unlock users for this kind of scenario?
Let me get this straight: you are trying to run multiple instances and have different parameters for each worker being spawned, right? (e.g. in your case, using different, non-conflicting user accounts for the flows being run). Doesn't it seem like you zugzwang'ed yourself?
In theory, every test case should have these three characteristics:
small
atomic
autonomous
❒ Possible solutions:
Wait for wdio-shared-store-service to be reviewed > approved > merged > released
This can take a week, a month, or several months. It might not even end up being shipped at all, as there was quite a lot of concern around being able to share data across checks (which people will eventually end up doing).
Currently there is no other functionality packaged inside the framework that will allow you to achieve the same outcome. If you are able to, maybe try to look for solutions by shifting your testing approach.
Reorganise your testing logic:
have specific test accounts associated with specific checks (test cases) inside your feature files (test suites)
if you want to be extremely serious about your checks being atomic, then have each user account created for a specific test suite (as a test suite will only be run on one instance, per regression run)
organise your test suites such that only specific user accounts are used for a specific run, and make your runs specific by adding the --suite switch to your command:
suites: {
    user: [
        './test/features/user/AnalyticsChecks.js',
        './test/features/user/SitemapChecks.js'
    ],
    superuser: [
        './test/features/superuser/HeaderChecks.js',
        './test/features/superuser/FooterChecks.js'
    ],
    admin: [
        './test/features/admin/DropUserRoles.js',
        './test/features/admin/ClearCache.js'
    ],
    batman: [
        './i/am/the/Darkness.js'
    ],
}
Create your users dynamically (pre-test, or during test run)
If you're lucky and have access to some backend calls to push users with specific configs to the DB (ask for support from your backend devs. Bring them candy!), then create a script that produces such users,
Alternatively, you can create a custom command that achieves the same, and call it in your before/beforeEach hooks (see the sketch after the drawbacks list below).
!Note: Don't forget to clean up after yourself in the after/afterEach hooks.
Drawbacks:
contradicts the small principle
you won't be able to run them against PRODUCTION
you will probably be flagged by your backend for continuously polluting the DBs with new users (especially if you run 3-5 full regressions a day)
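A rough sketch of the custom command idea (the /api/test-users endpoint, its payload and RUN_ID are all made up; use whatever HTTP client and backend hook you actually have):

// wdio.conf.js (sketch)
exports.config = {
  // ...
  before: async () => {
    // each worker provisions its own throw-away account
    browser.addCommand('createTestUser', async (role) => {
      const res = await fetch('https://staging.example.com/api/test-users', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ role, runId: process.env.RUN_ID }),
      });
      return res.json(); // e.g. { username, password }
    });
  },
  after: async () => {
    // the matching cleanup call (e.g. DELETE /api/test-users?runId=...) goes here
  },
};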
You can surely find a solution to remediate this framework limitation. Be creative. Everything goes. Cheers!
Let's imagine, in an OO world, that I want to build a Torrent object which listens to the network and lets me interact with it. It would inherit from EventEmitter and would look something like this:
var torrent = new Torrent(opts)
torrent.on('ready', cb) // add torrent to the UI
torrent.on('metadata', cb) // update data in the UI
and I can also make it do things:
torrent.stop()
torrent.resume()
Then of course if I want to delete the torrent from memory I can call torrent.destroy().
The cool thing about this OO approach is that I can easily package this functionality in its own npm module, test the hell out of it, and give users a nice clean reusable API.
My question is, how do I achieve this with Cycle.js apps?
If I create a driver, it's unclear how I would go about creating many torrents, each with its own independent listeners. Also consider that I'd like to package the functionality in a way that others can easily reuse it in other Cycle.js apps.
It seems to me that you are trying to solve this problem by thinking about it the way you would write imperative code.
I think creating Torrent instances with their own listeners is not something you should be doing in Cycle components.
I would go about it differently: create a Torrent module and figure out what its sources and sinks would be. If this module should be reusable and published, you can create it as a function that receives streams as arguments; maybe something similar to the TodoMVC Task component (which is then used in its parent component).
Since this module can be created as a pure function, testing it should be at least just as easy.
This implementation of course depends on your requirements, but communication with the module would then happen only through streams, and since it would be declarative there would be no need for methods like stop() and destroy() that you call from elsewhere.
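A very rough sketch of what that shape could look like (the stream names torrentEvent$ and stopClick$ are made up; the real sources would come from your drivers):

import xs, { Stream } from 'xstream';

interface TorrentSources {
  torrentEvent$: Stream<{ type: 'ready' | 'metadata' }>; // from a torrent driver
  stopClick$: Stream<null>;                               // from the DOM
}

interface TorrentSinks {
  state$: Stream<{ status: string }>; // consumed by the parent / view
}

// The "Torrent" is just a pure function from source streams to sink streams.
function Torrent(sources: TorrentSources): TorrentSinks {
  const status$ = xs
    .merge(
      sources.torrentEvent$.map(ev => (ev.type === 'ready' ? 'ready' : 'updating')),
      sources.stopClick$.mapTo('stopped')
    )
    .startWith('connecting');

  return { state$: status$.map(status => ({ status })) };
}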
How do I test it?
In Cycle.js you'd write a component with intent, model and view functions.
You'd test that intent(), for given input streams, produces the streams of actions you want. For the model, you'd test that, given HTTP and action streams, you get the state you want; and for the view, you'd test that, given a state, you get the VDOM you want.
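For example, a sketch of testing an intent() in isolation (the 'STOP_TORRENT' action and the click stream are hypothetical):

import xs, { Stream } from 'xstream';

// hypothetical intent(): turns stop-button clicks into actions
function intent(stopClick$: Stream<unknown>): Stream<{ type: string }> {
  return stopClick$.mapTo({ type: 'STOP_TORRENT' });
}

// push one synthetic click through and assert on what comes out
const actions: Array<{ type: string }> = [];
intent(xs.of({})).addListener({
  next: a => actions.push(a),
  error: err => { throw err; },
  complete: () => {
    console.assert(actions.length === 1 && actions[0].type === 'STOP_TORRENT');
  },
});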
One tricky bit with Cycle.js is that since it passes functions around, normal JavaScript objects that use the 'this' keyword are not worth the trouble, due to 'this' context problems. If you are working with Cycle.js and you think you might write a JS class for use with isolate, Onionify, or Collections, you are most likely going in the wrong direction. See the MDN docs about 'this'.
how I would go about creating many torrents
The Cycle.js people have several ways to deal with groups of things like this.
This ticket describes some things that might work for that:
Wrap subapp in Web Component
Stanga and similar libraries.
Cycle Collections
Cycle Onionify
I am investigating the pub/sub pattern because I am reading a book that strongly advocates event-driven architecture for the sake of loose coupling. But I feel that the loose coupling is only achieved by sacrificing readability/transparency.
I'm having trouble understanding how to write easily-understood pub/sub code. The way I currently write my code results in a lot of one-to-one channels, and I feel like doing so is a bad practice.
I'm using require.js AMD modules, which means that I have many smaller-sized files, so I feel like it would be very difficult for someone to follow the flow of my publishes.
In my example code below, there are three different modules:
The UI / Controller module, handling user clicks
A translator module
A data storage module
The gist is that a user submits text, it gets translated to english, then stored into a database. This flow is split into three modules in their own file.
// Main Controller Module
define(['pubSub'], function(pubSub) {
    submitButton.onclick(function() {
        var userText = textArea.val();
        pubSub.publish("userSubmittedText", userText);
    });
});

// Translator module
define(['pubSub'], function(pubSub) {
    function toEnglish(text) {
        // Do translation
        pubSub.publish("translatedText", translatedText);
    };

    pubSub.subscribe("userSubmittedText", toEnglish);
});

// Database module
define(['pubSub'], function(pubSub) {
    function store(text) {
        // Store to database
    };

    pubSub.subscribe("translatedText", store);
});
For a reader to see the complete flow, he has to switch between the three modules. But how would you make it clear where the reader should look next, after seeing the first pubSub.publish("userSubmittedText", userText);?
I feel like publishes are like a cliffhanger: the reader wants to know what is triggered next, but he has to go and find the modules with the subscribed functions.
I could comment EVERY publish, explaining what modules contain the functions that are listening, but that seems impractical. And I don't think that is what other people are doing.
Furthermore, the above code uses one-to-one channels, which I think is bad style, but I'm not sure. Only the Translator module's toEnglish() function will ever subscribe to the "userSubmittedText" channel, yet I have to create a new channel for what is basically a single function call. While this way my Controller module doesn't need Translator as a dependency, it just doesn't feel like true decoupling.
This lack of function flow transparency is concerning to me, as I have no idea how someone reading such source code would know how to follow along. Clearly I must be missing something important. Maybe I'm not using a helpful convention, or maybe my publish event names are not descriptive enough?
Is the loose coupling of pub/sub only achieved by sacrificing flow transparency?
The idea of the publish subscribe pattern is that you don't make any assumptions about who has subscribed to a topic or who is publishing. From Wikipedia (http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern):
[...] Instead, published messages are characterized into classes,
without knowledge of what, if any, subscribers there may be.
Similarly, subscribers express interest in one or more classes, and
only receive messages that are of interest, without knowledge of what,
if any, publishers there are.
If your running code doesn't make any assumptions, your comments shouldn't either. If you want a more readable way for modules to communicate, you can use RequireJS' dependency injection instead, which you already do for your pubSub module. That would make the code easier to read (while bringing other disadvantages). It all depends on what you want to achieve...
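For comparison, the same flow with plain dependency injection (a sketch reusing the module names from the question; here toEnglish() returns the translated text instead of publishing it):

// Main Controller Module -- the whole flow is readable in one place
define(['translator', 'database'], function(translator, database) {
    submitButton.onclick(function() {
        var translated = translator.toEnglish(textArea.val());
        database.store(translated);
    });
});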
I recently finished up https://github.com/mercmobily/JsonRestStores. I feel a little uneasy, as I haven't written any unit tests yet.
The module is tricky at best to test: it allows you to create Json REST stores, AND interact with the store using the API directly.
So, the unit test should:
Start a web server that implements a number of stores. Ideally, I suppose I should have one store for each tested feature
Test the results while manipulating that store, both using HTTP calls and direct API calls
The problem is that each store can override a lot of functions. To make things more complicated, the store has a range of database drivers it can use (well, potentially -- at the moment I only have the MongoDB driver). So, wanting to test the module with MongoDB, I would have to first create a collection and then test things using each DB layer...
I mean, it would be a pretty epic task. Can anybody shed some light on how to make something like this simpler? It seems to have all of the ingredients of the Unit Testing from Hell (API calls, direct calls, a database, different configurable DB drivers, a highly configurable class which encourages method overriding...)
Help?
You could write the unit tests first instead of starting with system tests.
When you add unit tests, you will need to learn about mocking.
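As a tiny sketch of what that looks like here (the store/driver shapes below are made up, not JsonRestStores' real API): replace the DB layer with an in-memory fake, so a store's logic can be tested without MongoDB or a running web server.

// fake DB layer: records what the store asked it to do
const fakeDriver = {
  inserted: [],
  async insert(collection, doc) {
    this.inserted.push({ collection, doc });
    return doc;
  },
};

// hypothetical store that takes its DB layer as a dependency
function makeStore(driver) {
  return {
    async post(body) {
      return driver.insert('people', body); // the logic under test
    },
  };
}

test('POST delegates to the DB layer', async () => {
  const store = makeStore(fakeDriver);
  await store.post({ name: 'Tony' });
  expect(fakeDriver.inserted).toHaveLength(1);
});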