I want to test my API with Jest. Before executing the tests in a test file, I want to reset my database using an npm command, like this:
import { execSync } from 'child_process'
import prisma from 'lib/prisma'
import { foods } from 'prisma/seed-data'
import { getFoods } from './foods'
beforeAll(() => {
execSync('npm run db:test:reset')
})
afterEach(async () => {
await prisma.$disconnect()
})
test('getFoods', async () => {
const fetchedFoods = await getFoods()
expect(fetchedFoods).toEqual(foods)
})
The npm run db:test:reset command executes this script:
echo "DROP DATABASE foodplanet WITH (FORCE); CREATE DATABASE foodplanet;" | sudo docker exec -i foodplanet_test psql -U postgres
cat prisma/backups/test/backup.sql | sudo docker exec -i foodplanet_test psql -U postgres foodplanet
Everything works fine if I run this test file by itself. However, if I execute all tests, multiple test files end up running this script at the same time. Running the tests sequentially with -i still results in errors like:
ERROR: relation "Account" already exists
or
ERROR: database "foodplanet" does not exist
I get different errors every time I run my tests, so I think there are still multiple instances trying to execute this script at the same time.
What could be the reason that the tests don't get executed sequentially?
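One pattern that might avoid the race (a sketch, assuming every test file can share a single reset per run rather than needing its own): move the reset into Jest's globalSetup, which runs exactly once per jest invocation, before any workers start. File names below are made up.

// jest.config.js (sketch)
module.exports = {
  globalSetup: './jest.global-setup.js',
}

// jest.global-setup.js (sketch): runs once before the whole suite
const { execSync } = require('child_process')

module.exports = async () => {
  execSync('npm run db:test:reset', { stdio: 'inherit' })
}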
I'm using Detox with Cucumber, and the setup works great until the very last moment: calling the AfterAll hooks.
In the hook I do the following:
AfterAll(async () => {
console.log('CLEANING DETOX');
await detox.cleanup();
});
To run cucumber I use the following script from package.json:
"bdd": "cucumber-js --require-module #babel/register",
The problem happens when I interrupt the Cucumber run for any reason: the AfterAll hook won't run in that case, and Detox won't clear its stale files.
That wouldn't be a problem, except that Detox uses the ~/Library/Detox/device.registry.state.lock file to track which simulators are being used by the runner. Essentially, this leads to Detox constantly launching new simulator devices, as this file never gets cleared.
I thought I could just create a simple wrapper script:
const { execSync } = require('child_process');
const detox = require('detox');
const stdout = execSync('cucumber-js --require-module @babel/register');
process.on('SIGINT', async function () {
console.log('Cleaning up detox');
await detox.cleanup();
});
console.log(stdout);
However, that didn't work either, as detox.cleanup() only removes the file when detox.device has been set up, which happens in the BeforeAll hook:
BeforeAll({ timeout: 120 * 1000 }, async () => {
const detoxConfig = { selectedConfiguration: 'ios.debug2' };
// console.log('CONFIG::', detoxConfig);
await detox.init(detoxConfig);
await detox.device.launchApp();
});
My only idea left is to clear the file manually. I should be able to grab the lock file path from Detox internals somehow, but my worry about this approach is the tight dependency on Detox's implementation, which is why I would rather call detox.cleanup().
EDIT:
Ended up doing this workaround for now:
const { unlink } = require('fs')
// getDeviceLockFilePathIOS (from Detox internals) resolves the path to
// ~/Library/Detox/device.registry.state.lock
BeforeAll({ timeout: 120 * 1000 }, async () => {
await startDetox();
});
async function startDetox() {
const detoxConfig = { selectedConfiguration: 'ios.debug' };
const lockFile = getDeviceLockFilePathIOS();
unlink(lockFile, (err) => {
if (err) {
console.debug('Lock file was not deleted:::', err);
}
});
await detox.init(detoxConfig);
await detox.device.launchApp();
}
I wonder if anyone has a better idea?
Have you tried adding detox clean-framework-cache && detox build-framework-cache into your command, like so:
detox clean-framework-cache && detox build-framework-cache && cucumber-js --require-module @babel/register
These are Detox commands that clear the framework cache and rebuild it, so Detox starts from fresh every time. They barely add any time to the execution, so they can be run on every test; it means the first thing the build does before starting the simulator is clear the cache and start fresh. I had the exact same issue with our Cucumber Detox builds, and this fixed it for us.
After fiddling with workarounds for way too long, I ended up migrating to jest-cucumber.
It works much better with Detox and doesn't require all these workarounds.
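For reference, a minimal jest-cucumber sketch (the feature path, step names, and testIDs are all made up; device, element, and by are Detox's globals under its Jest integration):

const { defineFeature, loadFeature } = require('jest-cucumber');

// hypothetical feature file
const feature = loadFeature('./features/login.feature');

defineFeature(feature, (test) => {
  test('Successful login', ({ given, when, then }) => {
    given('the app is launched', async () => {
      await device.launchApp();
    });
    when('the user taps the login button', async () => {
      await element(by.id('login-button')).tap();
    });
    then('the home screen is visible', async () => {
      await expect(element(by.id('home-screen'))).toBeVisible();
    });
  });
});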
I'm trying to get the commit information details into the Cypress Dashboard. I haven't been able to accomplish that just yet, but I have made some advances...
I'll describe what I have done so far:
Installed the commit-info npm package by running the command:
npm install --save @cypress/commit-info
Import the plugin in the plugins/index.js file like so:
const { commitInfo } = require('@cypress/commit-info');
module.exports = on => {
on('file:preprocessor', file => {
commitInfo().then(console.log);
});
};
Now I get all the information (author, branch, commit & message) in the terminal! :)
However, I still don't have the information details linked to my Cypress Dashboard.
This is what I currently get: [screenshot omitted]
What are the next steps? The documentation is not clear to me...
In our case we run everything inside a Docker container. We copy our code into the container but do not copy the .git directory: it's large, time-consuming to copy, and we don't need it. @cypress/commit-info assumes there is a .git directory, so since there isn't one, it doesn't work.
We overcame this by explicitly setting the values Cypress expects in the cypress run command in our Jenkinsfile:
def commitMessage = sh(script:"git log --format=%B -n 1 ${env.GIT_COMMIT}", returnStdout:true).trim()
def commitAuthor = sh(script:"git log --format='%an' -n 1 ${env.GIT_COMMIT}", returnStdout:true).trim()
def commitEmail = sh(script:"git log --format='%ae' -n 1 ${env.GIT_COMMIT}", returnStdout:true).trim()
def cypressVars = "COMMIT_INFO_BRANCH=${env.GIT_BRANCH} COMMIT_INFO_SHA=${env.GIT_COMMIT} COMMIT_INFO_REMOTE=${env.GIT_URL} COMMIT_INFO_MESSAGE=\"${commitMessage}\" COMMIT_INFO_AUTHOR=\"${commitAuthor}\" COMMIT_INFO_EMAIL=${commitEmail}"
// call cypress however you do and include cypressVars as part of the command
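For example, the final step might look like this (a hypothetical invocation; the record key and any other flags depend on your setup):

// hypothetical: prefix the env vars onto whatever cypress command you run
sh "${cypressVars} npx cypress run --record"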
Not a big problem per se, but I'm curious as to what's causing this behavior. I'm writing some very basic code to learn how to do testing. I'm using Jest to test a Node/Express application, and am presently testing the development version of my project locally. All versions are up to date (the most current available).
In my package.json I have the following setup:
"scripts": {
  "test": "./node_modules/.bin/env-cmd -f ./config/test.env jest --watch"
},
"jest": {
  "testEnvironment": "node",
  "verbose": true
}
And my environment configuration (as referenced above by env-cmd):
PORT=3000
SENDGRID_API_KEY=<API KEY>
JWT_SECRET=<JWT SECRET>
MONGODB_URL=mongodb://127.0.0.1:27017/task-manager-api-test
The --watch flag is supposed to work sort of like nodemon: whenever I save my test file, it re-runs the tests. The problem seems to be that whenever I save the file, some of the tests fail (it's fairly inconsistent as to which tests fail), but if I manually re-run the tests (--watch gives me a CLI that allows me to re-run tests with a keypress), the tests pass.
I'm using the following in my test file to make sure that the DB instance has no data in it before running the tests:
// User to seed DB
const testUserUID = new mongoose.Types.ObjectId()
const testUser = {
_id: testUserUID,
name: 'firstName lastName',
email: 'automatedTest@test.com',
password: 'test1234',
tokens: [{
token: jwt.sign({ _id: testUserUID }, process.env.JWT_SECRET)
}]
}
// Setup
beforeEach(async () => {
await User.deleteMany()
await new User(testUser).save()
})
An Example of one of my tests:
test('Should signup a user', async () => {
await request(app)
.post('/users')
.send({
name: 'hardcodeFirst hardcodeLast',
email: 'hardcodeTest#test.com',
password: 'test1234'
})
.expect(201)
})
One of the more common errors I am getting is a MongoError:
MongoError: E11000 duplicate key error collection: task-manager-api-test.users index: email_1 dup key: { : "automatedtest@test.com" }
The other errors that are being thrown are related to the tests failing - so I'm getting values that the test does not expect.
I've tried googling things related to testing async code with Jest, but I haven't found anything that isn't shown in the documentation about how to use promises or async/await with Jest. I've verified that my environment variables aren't pointing at my remote DB instance. I've run the tests in my normal (non-VS Code) terminal. I've also verified that the tests always pass when re-run using the --watch CLI (pressing Enter or a repeatedly); the tests only fail when I save the test file and it automatically re-runs due to the --watch flag.
Talking to one of my developer buddies, he suggested that I've possibly somehow created some sort of race condition. That would be a new situation for me if that's the case!
Thanks in advance for taking a look/any help offered!
EDIT: Included .env for my test environment
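One aside that may be worth checking (my note, not part of the original post): whether the suite ever closes its mongoose connection when it finishes; an open handle can let a previous watch run overlap the next one. A minimal teardown sketch:

const mongoose = require('mongoose')

// close the connection once all tests in the file are done,
// so a re-run can't overlap a previous run's open handles
afterAll(async () => {
  await mongoose.connection.close()
})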
The --watch flag only works in git repositories. You should use --watchAll instead:
"test": "env-cmd -f ./config/test.env jest --watchAll"
The rest of your code looks fine.
I am trying to run multiple Karma test files in parallel from inside a Node script and find out which tests are passing or failing. Right now what I have is this:
const exec = require("child_process").exec;
exec("karma start " + filename, (error, stdout, stderr) => {
// handle errors and test results...
});
The code above works well, and I can get the information on tests passed or failed from stdout. However, it requires having installed Karma and all of the associated dependencies (reporters, browser launchers, etc.) globally. I am looking for a solution that doesn't require me to install all dependencies globally.
My first thought was this:
const karma = require("karma");
const server = new karma.Server(config, () => {
// some logic
});
However, when trying this other approach, I have been unable to gather the test results programmatically.
When using new karma.Server(), is there any way in which I could know which tests have passed or failed (and, ideally, a stack trace of the error)? Alternatively, is there any other way in which I can execute my tests and get the desired information programmatically without the need to install dependencies globally?
Actually, changing the exec line to this seems to do the trick:
exec("node node_modules/karma/bin/karma start " + filename, (error, stdout, stderr) => {
It turns out I only needed to run the locally installed version of Karma instead of the global one. :-)
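As a follow-up note (not from the original answer): Karma's programmatic API can also report results without a global install. A sketch, assuming karma is a local dependency and config is built as in the question:

const karma = require('karma')

// the Server instance is an EventEmitter; 'run_complete' fires with
// aggregate results once all browsers finish
const server = new karma.Server(config, (exitCode) => {
  console.log('Karma exited with code', exitCode)
})

server.on('run_complete', (browsers, results) => {
  console.log('passed:', results.success, 'failed:', results.failed)
})

server.start()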
I am using knex for seeding, and I have a folder called development where I have all the seed files.
What I want is to seed a single file.
The command I am using is: knex seed:run --env=development
But this command seeds all the files, and I get duplicate rows in the db.
Suppose I created seed files yesterday and seeded them; today I want to add another seed file and seed only this new file, not yesterday's files.
An example from Laravel is: php artisan db:seed --class=ProductTableSeeder
Thanks
For those of you checking this in 2019+
According to the knex documentation
To run a specific seed file, execute:
$ knex seed:run --specific=seed-filename.js
Just move all the seed files except the desired one to another folder, run the seed, then move the files back.
The seed API only has two commands, make and run. This is from the docs:
knex.seed.run([config]): runs all seed files for the current environment.
So all scripts will be executed on each run.
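(Side note: newer knex versions also accept a specific option through the programmatic API, which may not have existed when this answer was written. A sketch, with the knexfile path assumed:)

const knex = require('knex')(require('./knexfile').development)

// run a single named seed file; resolves with the list of files that ran
knex.seed.run({ specific: 'seed-filename.js' })
  .then(([seedsRun]) => console.log('ran:', seedsRun))
  .finally(() => knex.destroy())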
If knex seed:run --specific=seed-filename.js doesn't work for you:
Create a knex_seeds_lock table in the DB.
Then add this at the top of the seed function:
const path = require('path')

const file_name = path.basename(__filename)
const seedIsExist = await knex('knex_seeds_lock').where({ file_name }).first()
if (seedIsExist) {
  return
}
And add this to the end of the file:
await knex('knex_seeds_lock').insert({ file_name })
As a result, the database keeps a record of all the seeds that have already run.
Personally, I just wrap my promise in exports.up/down in an if (boolean) {...} and switch the ones I don't want to run to false; see the sketch below.
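A sketch of that toggle approach (RUN_THIS_SEED is a made-up name; flip it to false once the seed should no longer execute):

const RUN_THIS_SEED = true

exports.seed = async (knex) => {
  // guard: skip the whole seed when the toggle is off
  if (!RUN_THIS_SEED) {
    return
  }
  await knex('users_types').insert([{ name: 'client' }])
}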
The solution from Artem works fine:
You just need to create the table knex_seeds_lock.
The code I used is here:
const path = require('path')

exports.seed = async (knex) => {
  try {
    const file_name = path.basename(__filename)
    const seedIsExist = await knex('knex_seeds_lock').where({ file_name }).first()
    if (seedIsExist) {
      return
    }
    await knex('users_types').del();
    await knex('knex_seeds_lock').insert({ file_name })
    await knex('users_types').insert([
      { name: 'client' },
      { name: 'driver' },
      { name: 'admin' }
    ]);
  } catch (err) {
    console.log("ERROR SEEDING")
    console.log(err);
  }
}
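One detail this answer assumes is that the knex_seeds_lock table already exists. A sketch of a migration that would create it (column names guessed from the inserts above):

exports.up = (knex) =>
  knex.schema.createTable('knex_seeds_lock', (table) => {
    table.increments('id')
    table.string('file_name').notNullable().unique()
  })

exports.down = (knex) => knex.schema.dropTable('knex_seeds_lock')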
@Madison_Lai's answer is correct, but similar to one comment on that answer, my --specific flag was also being ignored.
TLDR: Adding the -- separator solved my issue.
npm run knex seed:run -- --specific=seed-filename.js
Details:
In my case, the knex script in my package.json included a passed argument, which ended up causing ordering issues when I tried to pass additional arguments via CLI.
In package.json
"scripts": {
"knex": "./path/to/knex --knexfile=./path/to/knexfile.js"
}
In CLI,
npm run knex seed:run --specific=seed-filename.js
was getting parsed as below, completely losing the --specific argument
./path/to/knex --knexfile=./path/to/knexfile.js "seed:run"
Prefixing -- before the additional argument (as recommended here) solved the issue for me:
npm run knex seed:run -- --specific=seed-filename.js