Recent versions of IntelliJ IDEA support the execution of Jest tests.
I couldn't find an option (or even better a shortcut) to update snapshot tests within IntelliJ IDEA.
Is there an option/shortcut to update snapshots within IntelliJ IDEA?
What I have been doing is to right-click the failing Jest test and select the Create option in the pop-up menu, which creates a new run configuration for just that failing test.
I then add -u to the Jest options and run that specific test (once) to update the snapshot.
It is far from ideal, but you can keep these configurations around, if you like, and re-run them with the -u option whenever a snapshot needs updating.
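For reference, such a run configuration boils down to a plain Jest invocation, so the command-line equivalent (the test path and test name below are placeholders) is:

    jest path/to/my.test.js -u
    jest -t "renders correctly" -u

Here -u is the short form of --updateSnapshot, and -t restricts the run to tests whose names match the given pattern.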
I was wondering the same thing, so I asked JetBrains. They said this feature has been requested, and you can track its status here: https://youtrack.jetbrains.com/issue/WEB-26008
I'm not sure when it will be complete, but it looks like it is on their radar.
The -u option can also be set in the default Jest run configuration if you want it to be active for all Jest tests.
Related
Is there an option in JavaScript's Mocha test runner to only run the tests which failed on the previous run? If not, is there an easy way to implement that? There is a lot written about retrying flaky tests, but that's not what I want: I want to run the tests, see the failures, update the code, and then automatically re-run only the previously failed tests to see if my changes fixed them.
Found this: https://github.com/segmentio/mocha-broken, but its integration into WebStorm wasn't streamlined enough, so I dropped it...
It might fit your needs, though; worth a try.
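If you want to roll it yourself, one approach is to capture the failed tests' titles with Mocha's built-in JSON reporter and re-run them via --grep. A minimal Node sketch (the file name is an assumption; it also assumes your tests don't write to stdout, and titles containing regex metacharacters would need escaping):

    // rerun-failed.js: run the suite once, then rerun only what failed
    const { spawnSync } = require('child_process');

    // 1) full run with Mocha's JSON reporter, capturing its output
    const run = spawnSync('npx', ['mocha', '--reporter', 'json'], { encoding: 'utf8' });
    const failures = JSON.parse(run.stdout).failures.map(t => t.fullTitle);

    // 2) re-run just the failed tests by matching their full titles
    if (failures.length > 0) {
      spawnSync('npx', ['mocha', '--grep', failures.join('|')], { stdio: 'inherit' });
    }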
I see that WebStorm supports Jest testing, and I see that I can toggle auto-test on code changes. But that's not the same as Jest's --watch mode, which is a lot faster.
Is it possible to configure watch mode somehow?
EDIT
I found out that I can pass the --watch option to each test configuration, but I would have to do it for every file, and it's also not possible to use Jest watch options such as "filter".
You can edit the default Jest configuration and add --watch to the Jest options. Every new configuration will then contain this option.
By the way, could you please explain how you are going to use --watch in every configuration? As far as I know, this option is usually used to run all tests from a single configuration.
Also, please provide more details and documentation on the "filter" option you mentioned.
Jest --watch support will be provided in 2017.3. Please see https://youtrack.jetbrains.com/issue/WEB-26205
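Until the IDE support lands, you can run watch mode from the built-in terminal instead of a run configuration; Jest's interactive filters are available there:

    jest --watch

Once it is running, press p to filter by a filename regex, or t to filter by a test-name regex.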
I started learning ReactJS yesterday (to be used in my next product) and am setting up my dev environment, but now I'm stuck with Jest...
Having a Bluetooth lightbulb on my desk (already operational via scripts, etc.), I want it to show a red light when my tests launched with jest --watch fail (see create-react-app from the FB devs here).
The problem is that I don't know how to run a callback after the tests; it seems like no one has run into this issue on the interwebz, and I haven't found a solution yet.
Update:
I am currently using a log file to grep:
lamp.rb
def ci
  if File.readlines("path/jest.log").grep(/failed/).any?
    File.truncate('path/jest.log', 0) # reset the log so the failure is only reported once
    fail_jest # This method updates my lightbulb :) (red blink)
  end
rescue # File.readlines raises if the log file does not exist yet
  puts 'No jest log found :('
end
I am launching my jest tests like this: unbuffer npm run test |& tee tmp/jest.log
I am still looking for a better solution!
Thanks for your help
Your problem is not specific to React or Jest. When you run Jest tests, you are basically running a Node/npm command, and when the tests fail the process exits with an unsuccessful exit code. This is the same mechanism used to make automated CI builds fail when tests don't pass. So I'd suggest you start your research from there; depending on your lightbulb's API, it should be straightforward to make it fire events whenever the process fails, regardless of the reason.
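As a concrete sketch of that idea (notifyLamp below is a placeholder for whatever script already drives your bulb; note this covers single runs, since --watch keeps the process alive and never exits):

    // run-and-signal.js: run the test command once and react to its exit code
    const { spawn } = require('child_process');

    // placeholder: replace the body with the call that actually drives your bulb
    function notifyLamp(color) {
      console.log(`lamp -> ${color}`);
    }

    const tests = spawn('npm', ['test'], {
      stdio: 'inherit',
      // CI=true makes create-react-app's test script run once instead of watching
      env: { ...process.env, CI: 'true' }
    });
    tests.on('close', (code) => {
      // any non-zero exit code means at least one test failed
      notifyLamp(code === 0 ? 'green' : 'red');
    });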
I'm trying to use webdriverio with the Jasmine test framework. I can run my test by typing jasmine at the command line. However, when I do wdio wdio.conf.js, it opens a bunch of extra browsers which don't do anything. I'm just wondering what the point of the wdio.conf.js file is when I can just run jasmine at the command line. Ultimately it's the same thing, right? However, I can't get the wdio.conf.js file to work in the same manner, so it's useless to me. Perhaps I'm not managing the browser clients correctly, but I don't see any guidelines on how this is commonly done. I read the documentation, but it's pretty vague beyond auto-generating the wdio.conf file so that 'everything just works'. Am I supposed to use grunt or gulp to run my tests, or are those tools separate from the wdio.conf idea?
I'm just trying to get my head around all these different tools. All I need to do is write multiple automated tests for a website. Thanks for your help.
This may help: https://github.com/webdriverio/webdriverio/blob/master/examples/standalone/webdriverio.with.jasmine.spec.js
I asked a similar question, which was answered by the main contributor, here: Running WebdriverIO 'spec' tests as node file
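On the extra browsers: the wdio test runner spawns one session per entry in the capabilities array of wdio.conf.js (and runs spec files in parallel up to maxInstances), so trimming that section down may be all you need. A minimal sketch, with paths and values as assumptions to adjust for your project:

    // wdio.conf.js
    exports.config = {
      specs: ['./test/specs/**/*.js'],  // which spec files the runner picks up
      maxInstances: 1,                  // how many spec files run in parallel
      capabilities: [
        { browserName: 'chrome' }       // one entry here = one browser session
      ],
      framework: 'jasmine',             // run the specs through Jasmine
      jasmineNodeOpts: {
        defaultTimeoutInterval: 60000   // ms before a spec is considered hung
      }
    };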
I am having a small issue with WebStorm that I am hoping someone has experienced and solved before.
I am using WebStorm to build an angular.js app, and I have it set up to use Karma to run my tests. This is fine for the most part: I have a Karma configuration set up, and I can get it to run the tests or debug them with no issue.
My problem is that when I try to run a test individually by clicking on one of the tests in the "Test Run" tree, it goes off to a Node configuration, tries to run it, and fails (because it's looking for JS dependencies). After that I just go back to my 'karma config' and it runs through all of the tests with no problem.
Does anyone know how I can get the IDE hooked up so that I can trigger my tests from the UI?
Running tests from the file right-click menu is only supported for those runners that allow executing individual tests (JSTestDriver, for example). There is currently no such possibility for Karma (WEB-13173). See the discussion at https://github.com/karma-runner/karma/issues/1235.
To run individual test files, you can have several Karma configuration files with different sets of tests included (a sketch follows below). Plus, you can rename individual tests/suites in the way mentioned in https://github.com/karma-runner/karma/issues/553
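As a sketch of that first suggestion, a second configuration file that only pulls in one spec might look like this (the paths are assumptions):

    // karma.single.conf.js: same as the main config, but with a narrower files list
    module.exports = function (config) {
      config.set({
        frameworks: ['jasmine'],
        files: [
          'src/**/*.js',            // the app code the spec depends on
          'test/unit/foo.spec.js'   // only the spec you want to run
        ],
        browsers: ['Chrome'],
        singleRun: true             // run once and exit instead of watching
      });
    };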