How to handle login users when using maxInstances with webdriverio - javascript

I'm trying to set webdriverio's maxInstances > 1 for our product's automation tests, but I cannot find a good way to make the different test instances use different users. Sometimes different instances log in with the same user, which causes the first instance's session to time out.
Does anybody know how to lock/unlock users for this kind of scenario?

Let me get this straight: you are trying to run multiple instances and have different parameters for each spawned worker, right? (e.g. in your case, different, non-conflicting user accounts for the flows being run). Doesn't it seem like you've zugzwang'ed yourself?
In theory, every test case should have these three characteristics:
small
atomic
autonomous
❒ Possible solutions:
Wait for wdio-shared-store-service to be reviewed > approved > merged > released
This can take a week, a month, or several months. It might not even ship at all, as there was quite a lot of concern around being able to share data across checks (which people will eventually end up doing).
Currently there is no other functionality packaged inside the framework that will let you achieve the same outcome. If you are able to, try to find a solution by shifting your test approach.
Reorganise your testing logic:
have specific test accounts associated with specific checks (test cases) inside your feature files (test suites)
if you want to be extremely serious about your checks being atomic, then have each user account created for a specific test suite (as a test suite will only be run on one instance, per regression run)
organise your test suites such that only specific user accounts are used for a specific run, and make your runs specific by adding the --suite switch to your command:
suites: {
    user: [
        './test/features/user/AnalyticsChecks.js',
        './test/features/user/SitemapChecks.js'
    ],
    superuser: [
        './test/features/superuser/HeaderChecks.js',
        './test/features/superuser/FooterChecks.js'
    ],
    admin: [
        './test/features/admin/DropUserRoles.js',
        './test/features/admin/ClearCache.js'
    ],
    batman: [
        './i/am/the/Darkness.js'
    ],
}
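With a layout like this, each run only ever touches its own accounts, and you can kick it off with something like wdio wdio.conf.js --suite superuser (the exact invocation depends on how your runner is set up).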
Create your users dynamically (pre-test, or during test run)
If you're lucky enough to have access to backend calls that push users with specific configs to the DB (ask your Backend devs for support. Bring them candy!), then create a script that produces such users.
Alternatively, you can create a custom_command that achieves the same and call it in your before/beforeEach hooks (see the sketch after the drawbacks list below).
!Note: Don't forget to clean up after yourself in the after/afterEach hooks.
Drawbacks:
contradicts the small principle
you won't be able to run them against PRODUCTION
you will probably be flagged by your backend for continuously polluting the DBs with new users (especially if you run 3-5 full regressions a day)
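As a rough illustration of the dynamic approach, the hooks in wdio.conf.js could create and destroy a per-worker account. This is only a sketch: the /api/test-users endpoint, its payload, and the staging URL are all made up here; substitute whatever your backend actually exposes.
// wdio.conf.js (sketch -- endpoint and payload are hypothetical)
const fetch = require('node-fetch');
const BASE_URL = 'https://staging.example.com';

exports.config = {
    // ...
    before: async function () {
        // give each worker its own throwaway account
        const res = await fetch(`${BASE_URL}/api/test-users`, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ prefix: `worker-${process.pid}` }),
        });
        global.testUser = await res.json();
    },
    after: async function () {
        // clean up after yourself
        await fetch(`${BASE_URL}/api/test-users/${global.testUser.id}`, {
            method: 'DELETE',
        });
    },
};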
You can surely find a solution to work around this framework limitation. Be creative. Anything goes. Cheers!

Related

Custom commands or utility classes/functions?

I'm working with a team that has a project in Nightwatch.js. They are defining commands for almost everything:
Datetime functions as commands (return current day, array of days in a period, etc.)
SQL queries to get data as commands
To me, some of these fit better in utility classes, but they prefer to have them as commands so they can access them via browser.Command.
What's the correct approach or recommendation? Having everything as commands looks odd to me, but the code also looks cleaner, as you don't need to do imports.
Thanks,
This question will probably get closed for being opinion based, but anyway.
I would agree that not all utilities should be commands. I would generally see the scope of a command as acting on things in the browser the way a user would, or as a way to group user actions. E.g. in a typical system I would see commands like:
login(), which would group clicking into the text fields, entering the username and password, and eventually clicking the login button.
addUser(), which would be a small but repetitive group of commands to, for example, add a user to an admin panel.
dragTo(x), which would maybe handle some drag and drop functionality.
etc.
I wouldn't see date utils etc. as commands but, as you say, just utility functions that are about the underlying implementation of the test rather than how the user would interact with the system.
Although it's not Nightwatch, maybe take some advice from Cypress's Best Practices for custom commands: https://docs.cypress.io/api/cypress-api/custom-commands#Best-Practices
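For what it's worth, a grouping-style custom command in Nightwatch might look roughly like this (the selectors are hypothetical):
// commands/login.js -- picked up via custom_commands_path
module.exports = {
    command: function (username, password) {
        return this
            .waitForElementVisible('input[name=username]')
            .setValue('input[name=username]', username)
            .setValue('input[name=password]', password)
            .click('button[type=submit]')
            .waitForElementVisible('#dashboard');
    }
};
Tests would then call it as browser.login('some.user', 'secret').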

Cypress Across Multiple Machines [Azure Devops]

I have 20 Windows 10 VMs and about 700 standalone Cypress tests, and I want a way to spread all these tests across the 20 VMs so they run a lot faster. Is this possible, and how? Also bear in mind that I need to run a JAR as well, which is what builds the website the Cypress tests run against. Any suggestions to speed up these processes? I am also using Azure DevOps, as that is the company standard for running these automation tests.
Without using any additional paid services (like Cypress.io's dashboard service), I think the strategy is to:
divide your 700 tests into several subsets, then
run these subsets on multiple agents in parallel.
To accomplish #1, you could either use several Cypress configuration files and hardcode a specific subset of tests using the testFiles property, or you could create a script that dynamically determines the tests (based on some parameter) and then uses the Cypress module API to actually run them.
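For instance, a small runner script along these lines would pick a subset based on a parameter (the TEST_SET variable and the spec globs are illustrative):
// run-subset.js
const cypress = require('cypress');

const subsets = {
    testSet1: 'cypress/integration/set1/**/*.spec.js',
    testSet2: 'cypress/integration/set2/**/*.spec.js',
    testSet3: 'cypress/integration/set3/**/*.spec.js',
};

cypress
    .run({ spec: subsets[process.env.TEST_SET] })
    .then((results) => process.exit(results.totalFailed > 0 ? 1 : 0))
    .catch((err) => {
        console.error(err);
        process.exit(1);
    });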
Regardless of which approach you take for #1, you'll have to use ADO multi-configurations (see the SO post here, which gives a guide on how to set them up and use them) in order to accomplish #2. Essentially, ADO multi-configurations let you define the same set of tasks to be run on multiple agents while passing a Multiplier argument to each of those agents. This Multiplier argument is just a comma-separated string, for example 'testSet1,testSet2,testSet3', but each agent receives only one of those values: the first agent might receive 'testSet1', the second 'testSet2', and so on.
So now, you can imagine that each agent performs the setup of your application and then runs a subset of the Cypress tests depending on which argument it receives ('testSet1', 'testSet2', etc.).
Last, you might want to collate the test results so that you can publish them. I'd recommend outputting the test results of each run into a generated, deterministic folder on a shared network drive. For example, if you had 3 agents running the 700 tests, they could publish their test result XMLs into the //shared-drive/cypress/results/<date> folder. Then a final, separate agent job would collate and publish the results.
If you can and want to set up the infrastructure, there is sorry-cypress, which you should check out. It has many of the features of Cypress's paid service, most importantly parallelization with load balancing.
In Azure DevOps you can set up agents to handle this. Then you can define a pipeline job and tell ADO to run it in parallel, pointing at your pool of agents.
Additionally, you will need to pay for Cypress.io's dashboard service, which will enable parallel test execution.
Once everything is in place, you'll need to run the tests with the --record and --parallel flags. Cypress.io's service will act as the orchestrator, telling each machine which tests to run and combining all the test results.
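From each agent, that would look something like the following (the record key placeholder and the ADO build variable are just for illustration):
npx cypress run --record --key <your-dashboard-key> --parallel --ci-build-id $(Build.BuildNumber)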

How to test a series of interactions?

I've been learning how to write automated tests for a React app, and it's brought up questions on how best to test a series of interactions with the app.
Let's say that I have a React app that allows the user to assign students to seats in a classroom: Here's an image of what such an app would look like.
I'd like to test creating a new student, adding that student to a seat in the classroom, and then moving that student back to the area of unseated students in the middle column (I'll call it the roster).
What would be the best way to write out the test(s) for this? Would it make sense to have a single large test case, which would include assertions for each of the interactions I'm testing (creating a new student, moving it between the classroom and roster)?
Or should I create multiple cases, where one depends on the other? In other words...
Test 1) Check that a new student is created successfully when submitting the form
Test 2) Using the student created in test 1, move the student to a seat in the classroom
Test 3) With the student now in a seat (due to test 2), move it back to the roster area
Another question I have is whether React Test Library would be sufficient for this purpose. Or would this be considered an end-to-end test that should be written using Cypress?
Your feedback is much appreciated!
The process you described can surely be tested with react-testing-library as an integration test. The docs/FAQ encourage testing a component high enough up the component tree, without mocking child components, in order to simulate and test what an actual user interaction would be like.
Also check out this Kent C. Dodds blog post.
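To make that concrete, a single flow-style test might look roughly like this. It's only a sketch: the ClassroomApp component, labels, and test ids are hypothetical, and it assumes @testing-library/jest-dom plus user-event v14, and a click-to-place UI rather than drag and drop.
import React from 'react';
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import '@testing-library/jest-dom';
import ClassroomApp from './ClassroomApp'; // hypothetical component

test('creates a student, seats them, then moves them back to the roster', async () => {
    const user = userEvent.setup();
    render(<ClassroomApp />);

    // 1) create a new student via the form
    await user.type(screen.getByLabelText(/student name/i), 'Ada');
    await user.click(screen.getByRole('button', { name: /add student/i }));
    expect(screen.getByTestId('roster')).toHaveTextContent('Ada');

    // 2) move the student into a seat
    await user.click(screen.getByText('Ada'));
    await user.click(screen.getByTestId('seat-1'));
    expect(screen.getByTestId('seat-1')).toHaveTextContent('Ada');

    // 3) move the student back to the roster
    await user.click(screen.getByText('Ada'));
    await user.click(screen.getByTestId('roster'));
    expect(screen.getByTestId('roster')).toHaveTextContent('Ada');
});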
You want your tests to be as granular as possible, and you have already split them into 3 different cases. This will make your code easier for everyone to read.
As for the testing library, I would go with Cypress which advertises itself as a JavaScript End to End Testing Framework.
You are testing end-to-end functionality here by manipulating what seem to be multiple components. React Testing Library, on the other hand, focuses on testing individual components with a more low-level approach; there's a chance it will falter in the long run, but it probably gets the job done as well.

How to share an object between multiple test suites in Jest?

I need to share an object (db connection pool) between multiple test suites in Jest.
I have read about globalSetup and it seems to be the right place to initialize this object, but how can I pass that object from globalSetup context to each test suite context?
I'm dealing with a very similar issue, although in my case I was trying to share a single DB container (I'm using dockerode) that I spin up once before all tests and remove after, to avoid the overhead of spinning up a container for each suite.
Unfortunately for us, after going over a lot of documentation and GitHub issues, my conclusion is that this is by design: Jest's philosophy is to run tests sandboxed, and under that restriction I totally get why they chose not to support this.
Specifically, for my use case I ended up spinning up a container in a globalSetup script, tagging it with a unique identifier for the current test run (say, a timestamp) and removing it at the end in a globalTeardown (that works since the global steps can share state between them).
This looks roughly like:
const options: ContainerCreateOptions = {
    Image: POSTGRES_IMAGE,
    Tty: false,
    HostConfig: {
        AutoRemove: true,
        PortBindings: { '5432/tcp': [{ HostPort: `${randomPort}/tcp` }] }
    },
    Env: [
        `POSTGRES_DB=${this.env['DB_NAME']}`,
        `POSTGRES_USER=${this.env['DB_USERNAME']}`,
        `POSTGRES_PASSWORD=${this.env['DB_PASSWORD']}`
    ]
};
options.Labels = { testContainer: `${CURRENT_TEST_TIMESTAMP}` };

let container = await this.docker.createContainer(options);
await container.start();
where this.env in my case is a .env file I preload based on the test.
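The matching globalTeardown then removes whatever was tagged for this run; roughly like this (docker and CURRENT_TEST_TIMESTAMP come from the same context as the setup above, and filters is passed as a JSON string for compatibility across dockerode versions):
// global-teardown.js (sketch)
module.exports = async () => {
    const containers = await docker.listContainers({
        all: true,
        filters: JSON.stringify({ label: [`testContainer=${CURRENT_TEST_TIMESTAMP}`] }),
    });
    for (const info of containers) {
        await docker.getContainer(info.Id).remove({ force: true });
    }
};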
To get the container port/IP (or whatever else I'm interested in) into my tests, I use a custom environment that my DB-requiring tests rely on to expose the relevant information on a global variable (reminder: you can only put primitives and plain objects on global, not instances).
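A custom environment along those lines might be (assuming globalSetup stashed the port in process.env.TEST_DB_PORT; on Jest 28+ the class is require('jest-environment-node').TestEnvironment):
// db-environment.js
const NodeEnvironment = require('jest-environment-node');

class DbEnvironment extends NodeEnvironment {
    async setup() {
        await super.setup();
        // only primitives and plain objects survive this boundary -- no instances
        this.global.__DB_PORT__ = Number(process.env.TEST_DB_PORT);
    }
}

module.exports = DbEnvironment;
Point testEnvironment in your Jest config at this file (or use a @jest-environment docblock in the suites that need it).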
In your case, the requirement to pass a connection pool between suites is probably just not suited to Jest, since it will never let you pass an instance between different suites (at least that's my understanding based on all of the links I shared). You can, however, try either:
Put all tests that need the same connection pool into a single suite/file (not great, but it would answer your needs)
Do something similar to what I suggested (set up the DB once in globalSetup, construct the connection pool in a custom environment, and then at least each suite gets its own pool); that way you get a lot of code reuse for the setup, which is nice. Something similar is discussed here.
Miscellaneous: I stumbled across testdeck, which might answer your needs as well (set up an abstract test class with your connection pool and inherit from it in all required tests). I didn't try it, so I don't know how it will behave with Jest, but when I was writing in Java that's how we used to achieve similar functionality with TestNG
In each test file you can define a function to load the necessary data or to mock the data. Use the beforeEach() function for this. Documentation example here!
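Building on option 2 above, each test file could then build its own pool from the exposed global in a hook (pg and the __DB_PORT__ global from the environment sketch are assumptions; beforeAll/afterAll avoid rebuilding the pool for every single test):
const { Pool } = require('pg');

let pool;

beforeAll(() => {
    pool = new Pool({ host: 'localhost', port: global.__DB_PORT__ });
});

afterAll(async () => {
    await pool.end();
});

test('can reach the shared database', async () => {
    const { rows } = await pool.query('SELECT 1 AS ok');
    expect(rows[0].ok).toBe(1);
});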

Writing unit tests for a module that creates JSON REST services

I recently finished https://github.com/mercmobily/JsonRestStores. I feel a little uneasy, as I haven't yet written any unit tests.
The module is tricky at best to test: it allows you to create JSON REST stores AND interact with the store using the API directly.
So, the unit test should:
Start a web server that implements a number of stores. Ideally, I should have one store for each tested feature, I suppose
Test the results while manipulating that store, both using HTTP calls and direct API calls
The problem is that each store can override a lot of functions. To make things more complicated, the store has a range of database drivers it can use (well, potentially -- at the moment I only have the MongoDB driver). So, wanting to test the module with MongoDB, I would have to first create a collection, and then test things using each DB layer...
I mean, it would be a pretty epic task. Can anybody shed some light on how to make something like this simpler? It seems to have all of the ingredients of unit-testing hell (API calls, direct calls, a database, different configurable DB drivers, a highly configurable class that encourages method overriding...)
Help?
You can start by writing unit tests instead of starting with system tests. When you add unit tests, you will need to learn about mocking.
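To sketch what mocking buys you here (everything below is hypothetical -- makeStore stands in for a real JsonRestStores store, not its actual API): injecting a fake DB driver lets you unit-test store logic without MongoDB or a running web server.
const assert = require('assert');

// stand-in for a real store that delegates persistence to a driver
function makeStore(driver) {
    return {
        create: async (record) => driver.insert(record),
    };
}

it('delegates writes to the injected driver instead of a real DB', async () => {
    const calls = [];
    const fakeDriver = {
        insert: async (record) => {
            calls.push(record);
            return { id: 1, ...record };
        },
    };
    const store = makeStore(fakeDriver);
    await store.create({ name: 'test' });
    assert.strictEqual(calls.length, 1);
});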
