Cypress Across Multiple Machines [Azure DevOps]

I have 20 Windows 10 VMs and about 700 individual Cypress tests, and I want a way to spread all of these tests across the 20 VMs so the tests run a lot faster. Is this possible, and how? Also keep in mind that I need to run a JAR as well, which is what builds the website that the Cypress tests run against. Any suggestions to speed up these processes? I am also using Azure DevOps, as that is the company standard for running these automation tests.

Without using any additional paid services (like Cypress.io's dashboard service), I think the strategy is to:
1. divide your 700 tests into several subsets, then
2. run these subsets on multiple agents in parallel.
To accomplish #1, you could either use several Cypress configuration files and hardcode a specific subset of tests using the testFiles property, or you could create a script that dynamically determines the tests to run (based on some parameter) and then uses the Cypress Module API to actually run them.
Regardless of which approach you take with #1, you'll have to use ADO multi-configurations (see SO post here, which gives a guide on how to set up and use them) in order to accomplish #2. Essentially, ADO multi-configurations allow you to define the same set of tasks that can be run on multiple agents, while passing a Multiplier argument to each of those agents. This Multiplier argument is just a comma-separated string, for example, 'testSet1,testSet2,testSet3'. However, each agent will receive only 1 of those values, for example, the first agent might receive 'testSet1', the second agent might receive 'testSet2', etc.
So now, you can imagine that each agent performs the setup of your application and then runs a subset of the Cypress tests depending on which argument it receives ('testSet1', 'testSet2', etc.).
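As a rough sketch of what each agent could run (the TEST_SET variable name, the spec paths, and the pre-Cypress-10 cypress/integration layout are all assumptions, not prescriptions), a small Node script can map the multiplier value it received to a subset of specs and hand them to the Cypress Module API:

// run-subset.js -- illustrative sketch only
const cypress = require('cypress');

// Each ADO agent receives one multiplier value, surfaced here as a
// hypothetical TEST_SET environment variable.
const testSets = {
  testSet1: 'cypress/integration/set1/**/*.spec.js',
  testSet2: 'cypress/integration/set2/**/*.spec.js',
  testSet3: 'cypress/integration/set3/**/*.spec.js',
};

const setName = process.env.TEST_SET;

cypress
  .run({ spec: testSets[setName] })
  .then((results) => {
    // totalFailed is undefined if Cypress itself failed to start, so treat that as a failure too
    process.exit(results.totalFailed === 0 ? 0 : 1);
  })
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });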
Last, you might want to collate the test results so that you can publish them. I'd recommend outputting the test results into a generated, deterministic folder on a shared network drive for each test run. For example, if you had 3 agents running the 700 tests, they could publish their test result XMLs into the //shared-drive/cypress/results/<date> folder. Then, you would have a final, separate agent job that collates the test results and publishes them.
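A minimal sketch of that collation step (the share path, the RUN_DATE variable, and the per-agent subfolder layout are assumptions) could simply sweep every agent's JUnit XMLs into one folder for a single publish task to pick up:

// collate-results.js -- sketch only; adjust the paths to your environment
const fs = require('fs');
const path = require('path');

// e.g. //shared-drive/cypress/results/<date>, with one subfolder per agent
const resultsRoot = `//shared-drive/cypress/results/${process.env.RUN_DATE}`;
const outDir = path.join(__dirname, 'collated-results');

fs.mkdirSync(outDir, { recursive: true });

for (const agentDir of fs.readdirSync(resultsRoot)) {
  const agentPath = path.join(resultsRoot, agentDir);
  if (!fs.statSync(agentPath).isDirectory()) continue;

  // Copy each agent's result XMLs, prefixed with the agent folder name to avoid collisions
  for (const file of fs.readdirSync(agentPath)) {
    if (file.endsWith('.xml')) {
      fs.copyFileSync(path.join(agentPath, file), path.join(outDir, `${agentDir}-${file}`));
    }
  }
}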

If you can/want to set up the infrastructure, there is sorry-cypress, which you should check out. It has many of the features of Cypress's paid Dashboard service, most importantly parallelization with load balancing.

In Azure DevOps you can set up agents to handle this. Then you can define a pipeline job and tell ADO to run it in parallel, pointing at your pool of agents.
Additionally, you will need to pay for Cypress.io's dashboard service, which will enable parallel test execution.
Once everything is in place, you'll need to run the tests with the --record and --parallel flags. Cypress.io's service will act as the orchestrator to tell each machine which tests to run, and to combine all the test results together.
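If you drive Cypress from a Node script rather than the CLI, the same flags are (to the best of my knowledge) exposed through the Module API; the record key, build id, and group name below are placeholders:

// parallel-run.js -- sketch; all values are placeholders, not real keys
const cypress = require('cypress');

cypress.run({
  record: true,                         // equivalent of --record
  key: process.env.CYPRESS_RECORD_KEY,  // project record key from the Dashboard
  parallel: true,                       // equivalent of --parallel
  ciBuildId: process.env.BUILD_BUILDID, // ties all agents of one pipeline run together
  group: 'windows-vms',                 // arbitrary label for this group of machines
});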

Related

Identify which thread a test is running in in vitest

I would like to use an in-memory database for some integration test suites in vitest. As part of that, I'd like to restore the database to a specific state in a beforeEach block. However, I want to make sure that tests running in parallel do not conflict with one another. I'd rather not create a new database for every test, so it seems like the right thing is to set up a database for each thread that's running. Is there a way, from inside of the vitest test, to know which thread it is in?
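A sketch of one possible approach, assuming Vitest exposes a per-worker id via process.env.VITEST_POOL_ID (check the docs for your version); restoreDatabase is a hypothetical helper, not a real API:

// db-setup.js -- sketch; VITEST_POOL_ID and restoreDatabase are assumptions
import { beforeEach } from 'vitest';
import { restoreDatabase } from './test-helpers'; // hypothetical helper

// One database per worker thread, so parallel files never clobber each other
const workerId = process.env.VITEST_POOL_ID ?? '0';
const dbName = `test_db_worker_${workerId}`;

beforeEach(async () => {
  await restoreDatabase(dbName); // reset this worker's database to a known state
});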

How to handle login users when use maxInstances for webdriverio

Now I am trying to set webdriverio's maxInstances > 1 for our product automation tests, but I cannot find a good way to make the different test instances use different users. Sometimes different instances log in with the same user, which causes the first instance's session to time out.
Does anybody know how to lock/unlock a user for this kind of scenario?
Let me get this straight: you are trying to run multiple instances and have different parameters for each worker being spawned, right? (e.g., in your case, different, non-conflicting user accounts for the flows being run). Doesn't it seem like you zugzwang'ed yourself?
In theory, every test case should have these three characteristics:
small
atomic
autonomous
❒ Possible solutions:
Wait for wdio-shared-store-service to be reviewed > approved > merged > released
This can take a week, or it can take a month, or several months. It might not even be a feature being shipped altogether as there was quite a lot of concern around being able to share data across checks (which people will eventually end up doing).
Currently there is no other functionality packaged inside the framework that will allow you to achieve the same outcome. If you are able to, maybe try to look for solutions by shifting the test approach.
Reorganise your testing logic:
have specific test accounts associated with specific checks (test cases) inside your feature files (test suites)
if you want to be extremely serious about your checks being atomic, then have each user account created for a specific test suite (as a test suite will only be run on one instance, per regression run)
organise your test suites so that only specific user accounts are used for a specific run, and make your runs specific by adding the --suite switch to your command:
suites: {
  user: [
    './test/features/user/AnalyticsChecks.js',
    './test/features/user/SitemapChecks.js'
  ],
  superuser: [
    './test/features/superuser/HeaderChecks.js',
    './test/features/superuser/FooterChecks.js'
  ],
  admin: [
    './test/features/admin/DropUserRoles.js',
    './test/features/admin/ClearCache.js'
  ],
  batman: [
    './i/am/the/Darkness.js'
  ],
}
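With a layout like this, one account family only ever runs inside its own suite; a run for a given role would then be started with something like npx wdio wdio.conf.js --suite user (the exact invocation depends on your WebdriverIO version).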
Create your users dynamically (pre-test, or during test run)
If you're lucky and have access to some backend calls to push users with specific configs to the DB (ask for support from your backend devs. Bring them candy!), then create a script that produces such users.
Alternatively, you can create a custom command that achieves the same, and call it in your before/beforeEach hooks (see the sketch after the drawbacks list below).
!Note: Don't forget to cleanup after yourself in the after/afterEach hooks.
Drawbacks:
contradicts the small principle
you won't be able to run them against PRODUCTION
you will probably be flagged by your backend for continuously polluting the DBs with new users (especially if you run 3-5 full regressions a day)
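If you do go down the dynamic-user route, a rough wdio.conf.js sketch might look like the following; the /api/test-users endpoint, the axios dependency, and the selectors are my own assumptions, not part of the original setup:

// wdio.conf.js (excerpt) -- sketch only
const axios = require('axios');

exports.config = {
  // ...
  before: async function (capabilities, specs) {
    // Create a throwaway user for this worker so parallel instances never share a login
    const { data: user } = await axios.post('https://staging.example.com/api/test-users', {
      role: 'user',
    });
    browser.testUser = user; // stash it for the specs to read

    // Optional convenience command for the specs
    browser.addCommand('loginAsTestUser', async () => {
      await browser.url('/login');
      await $('#username').setValue(user.name);
      await $('#password').setValue(user.password);
      await $('button[type="submit"]').click();
    });
  },

  after: async function () {
    // Clean up after yourself, as noted above
    await axios.delete(`https://staging.example.com/api/test-users/${browser.testUser.id}`);
  },
};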
You can surely find a solution to remediate this framework limitation. Be creative. Everything goes. Cheers!

How to get total count of specs (total and disabled) before test run when use sharding

Is there a way to find out how many tests in total Protractor is going to run/ignore before the run starts?
Since I'm using sharding, spec reporters do not work well because they only count tests within a single spec file.
Simply counting it/xit functions is not going to work, as I have many parametrized tests.

How to share an object between multiple test suites in Jest?

I need to share an object (db connection pool) between multiple test suites in Jest.
I have read about globalSetup and it seems to be the right place to initialize this object, but how can I pass that object from globalSetup context to each test suite context?
I'm dealing with a very similar issue, although in my case I was trying to share a single DB container (I'm using dockerode) that I spin up once before all tests and remove afterwards, to avoid the overhead of spinning up a container for each suite.
Unfortunately for us, after going over a lot of documentation and GitHub issues, my conclusion is that this is by design: Jest's philosophy is to run tests sandboxed, and under that restriction I totally get why they chose not to support this.
Specifically, for my use case I ended up spinning a container in a globalSetup script, tagging it with some unique identifier for the current test run (say, timestamp) and removing it at the end with a globalTeardown (that works since global steps can share state between them).
This looks roughly like:
const options: ContainerCreateOptions = {
  Image: POSTGRES_IMAGE,
  Tty: false,
  HostConfig: {
    AutoRemove: true,
    PortBindings: { '5432/tcp': [{ HostPort: `${randomPort}/tcp` }] }
  },
  Env: [
    `POSTGRES_DB=${this.env['DB_NAME']}`,
    `POSTGRES_USER=${this.env['DB_USERNAME']}`,
    `POSTGRES_PASSWORD=${this.env['DB_PASSWORD']}`
  ]
};
// Label the container so globalTeardown can find and remove it later
options.Labels = { testContainer: `${CURRENT_TEST_TIMESTAMP}` };
let container = await this.docker.createContainer(options);
await container.start();
where this.env in my case is a .env file I preload based on the test.
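The matching globalTeardown can then use that label to find and stop the container. A sketch (assuming the timestamp is shared via an environment variable, and that AutoRemove takes care of deletion once the container stops):

// global-teardown.js -- sketch only
const Docker = require('dockerode');

module.exports = async function globalTeardown() {
  const docker = new Docker();

  // Find the container(s) tagged by globalSetup for this test run
  const containers = await docker.listContainers({
    filters: { label: [`testContainer=${process.env.CURRENT_TEST_TIMESTAMP}`] },
  });

  // AutoRemove: true means stopping is enough to clean them up
  await Promise.all(containers.map((info) => docker.getContainer(info.Id).stop()));
};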
To get the container port/IP (or whatever else I'm interested in) for my tests, I use a custom environment that my DB-requiring tests use to expose the relevant information on a global variable (reminder: you can only put primitives and plain objects on global, not instances).
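A minimal sketch of such a custom environment (the global names and the TEST_DB_PORT variable are arbitrary; in newer Jest versions the class is exported as { TestEnvironment } rather than the default export):

// db-environment.js -- sketch only
const NodeEnvironment = require('jest-environment-node');

class DbEnvironment extends NodeEnvironment {
  async setup() {
    await super.setup();
    // Only primitives and plain objects survive into the test sandbox,
    // so expose connection details rather than a live pool instance.
    this.global.__DB_HOST__ = 'localhost';
    this.global.__DB_PORT__ = Number(process.env.TEST_DB_PORT);
  }
}

module.exports = DbEnvironment;

Test files would opt in via the testEnvironment config option (or a @jest-environment docblock) and build their own pool from those exposed values.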
In your case, the requirement to pass a connection pool between suites is probably just not suited for Jest, since it will never let you pass an instance around between different suites (at least that's my understanding based on all of the links I shared). You can, however, try either:
Put all tests that need the same connection pool into a single suite/file (not great but would answer your needs)
Do something similar to what I suggested (setup DB once in global, construct connection pool in custom env and then at least each suite gets its own pool) that way you have a lot of code reuse for setup which is nice. Something similar is discussed here.
Miscellaneous: I stumbled across testdeck, which might answer your needs as well (set up an abstract test class with your connection pool and inherit from it in all required tests). I didn't try it, so I don't know how it will behave with Jest, but when I was writing in Java that's how we used to achieve similar functionality with TestNG.
In each test file you can define a function to load the necessary data or to mock the data. Use the beforeEach() function for this. Documentation example here!

Writing unit testing for a module that creates Json REST services

I recently finished up https://github.com/mercmobily/JsonRestStores. I feel a little uneasy as I haven't yet written any unit testing.
The module is tricky at best to test: it allows you to create Json REST stores, AND interact with the store using the API directly.
So, the unit test should:
Start a web server that implements a number of stores. Ideally, I should have one store for each tested feature I suppose
Test the results while manipulating that store, both using HTTP calls and direct API calls
The problem is that each store can override a lot of functions. To make things more complicated, the store has a range of database drivers it can use (well, potentially -- at the moment I only have the MongoDB driver). So, wanting to test the module with MongoDB, I would have to first create a collection, and then test things using each DB layer...
I mean, it would be a pretty epic task. Can anybody shed some light on how to make something like this simpler? It seems to have all of the ingredients to make the Unit Testing from hell (API calls, direct calls, database, different configurable DB drivers, highly configurable class which encourages method overriding...)
Help?
You could write the unit tests first instead of starting with system tests.
When you go to add unit tests, you will need to learn about mocking.
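To make the mocking suggestion concrete, here is a minimal, self-contained sketch; the Store and driver below are invented stand-ins, not JsonRestStores' real API. The idea is to inject a fake DB layer so the store logic can be exercised without MongoDB:

// sketch only -- Store here is a tiny stand-in, not JsonRestStores' real API
const assert = require('assert');

// A fake DB layer that keeps rows in memory instead of touching MongoDB
function makeFakeDriver() {
  const rows = new Map();
  return {
    async insert(id, body) { rows.set(id, body); },
    async fetch(id) { return rows.get(id) ?? null; },
  };
}

// Stand-in store that receives its DB layer by injection, which is what
// makes the logic testable without a real database
class Store {
  constructor({ driver }) { this.driver = driver; }
  async put(id, body) { await this.driver.insert(id, body); return body; }
  async get(id) { return this.driver.fetch(id); }
}

async function testPutThenGet() {
  const store = new Store({ driver: makeFakeDriver() });
  await store.put('42', { name: 'Tony' });
  assert.deepStrictEqual(await store.get('42'), { name: 'Tony' });
}

testPutThenGet().then(() => console.log('ok'), (err) => { console.error(err); process.exit(1); });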
