The Blanket.js website says that Mocha is supported. I followed the documented procedure: I included blanket.js on my test page and put the data-cover attribute on the appropriate script elements, but no coverage report appears on the generated test page.
I have only two unit tests: one for a Backbone.js Model and one for a Backbone.js Collection. Can somebody help me get the code coverage report working?
Never mind, I solved it: you also need to include mocha-blanket.js, the Mocha adapter for Blanket.
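For reference, a minimal sketch of the include order on the test page; all file paths here are placeholders for your own layout:

```html
<script src="lib/mocha.js"></script>
<script src="lib/blanket.js"></script>
<!-- the Mocha adapter must be loaded after blanket.js -->
<script src="lib/mocha-blanket.js"></script>

<!-- source files to instrument carry data-cover -->
<script src="js/models/todo.js" data-cover></script>
<script src="js/collections/todos.js" data-cover></script>

<!-- the test files themselves are not instrumented -->
<script src="test/model.test.js"></script>
<script src="test/collection.test.js"></script>
```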
I would like to run only the Cypress tests related to the application files I changed, but every time I make a change, all of the integration tests run. I've already looked at a tool called cypress-grep, but it maps tests via tags, which is neither dynamic nor automatic: I have to set the tags in the tests and also add them manually in the pipeline.
Do you know an alternative tool that solves this?
We are using Nx with Angular and its affected command works quite well: https://nx.dev/using-nx/affected. I'm not sure whether it suits your project, or whether it's worth setting up all of Nx just for this.
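For illustration, the affected command looks roughly like this (the target name and base branch are assumptions for your setup):

```sh
# run only the e2e targets of projects affected by changes since main
npx nx affected --target=e2e --base=main
```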
You'll want to use find-cypress-specs, which can give you a list of changed specs, their count, and filter them by tags.
You'll then have to form the Cypress run command yourself in your YML file, as sketched below.
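Something along these lines, as a hedged sketch (I'm not certain of the exact flags, so check the find-cypress-specs README; the base branch here is an assumption):

```sh
# list the spec files changed against the main branch
# (verify the flag names against the package version you install)
SPECS=$(npx find-cypress-specs --branch main)

# run only those specs; cypress run --spec takes a comma-separated list
npx cypress run --spec "$SPECS"
```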
When I generate a coverage report using nyc, it always gives per-file coverage results (what percentage of each file is covered). Is there any way to figure out which test case names contributed to the coverage of a given file?
I don't think it is possible to see which tests hit the lines of code, or which tests provide the coverage for a specific file. nyc can be integrated with coverage services like Codecov and Coveralls, but I haven't found such an option in those tools either.
I don't know why you want to view the test names per file, but for debugging purposes there is also a Visual Studio Code extension for Mocha that lets you debug your tests: https://marketplace.visualstudio.com/items?itemName=maty.vscode-mocha-sidebar.
When Intern tests don't load some source files (0% covered), those files don't show up in the (lcov) coverage report (running in Node.js).
This is typically a problem JS tools struggle with, I think.
Jest, for example, has a simple workaround.
I'm looking for the simplest workaround for Intern, ideally with v3.
Since Intern uses Istanbul under the covers, I wonder whether Istanbul's --include-all-sources flag works here and can be passed through easily?
Is there a standard recipe to make the loader aware of all files?
I also have files that don't load well in Node.js; can they be included too?
Taking a look at the Intern project itself, its config schema has an option called coverage, which is defined as:
An array of file paths or globs that should be instrumented for code
coverage, or false to completely disable coverage. This property
should point to the actual JavaScript files that will be executed, not
pre-transpiled sources (coverage results will still be mapped back to
original sources). Coverage data will be collected for these files
even if they're not loaded by Intern for tests, *allowing a test
writer to see which files haven't been tested*, as well as coverage
on files that were tested. When this value is unset, Intern will still
look for coverage data on a global coverage variable, and it will
request coverage data from remote sessions. Explicitly setting
coverage to false will prevent Intern from even checking for coverage
data. This property replaces the excludeInstrumentation property
used in previous versions of Intern, which acted as a filter rather
than an inclusive list.
(Emphasis mine; I just wanted to highlight that sentence.)
coverage uses globs just as Istanbul does, so you could specify something like coverage: ['src/**/*.js'].
I noticed this because Intern itself uses this configuration to collect its own coverage, and it seems to work for them.
Edit: as pointed out in the comments, this feature only appears in Intern v4.
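In Intern 4 that goes into intern.json; a minimal sketch (the suites glob is an assumption for illustration):

```json
{
  "suites": "tests/unit/**/*.js",
  "coverage": ["src/**/*.js"]
}
```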
I am trying to set up automated unit tests for my JavaScript code using PhantomJS and QUnit, and also to generate code coverage with JSCover - basically as described here: http://julianhigman.com/blog/2013/07/23/testing-javascript-with-qunit-phantomjs-and-jscover/
The thing is that this page, and others I have seen on the subject, assume you only have a single HTML page that will load and run all of the QUnit tests in your project.
In my case I have something like 50 JavaScript source files, and a corresponding .js unit test file for each. I was going to go down the route of having a separate HTML page per unit test file, so that during development I could run tests for a specific file in-browser individually. But apart from the overhead of having to maintain 50 (admittedly very basic) HTML files, I am not sure how to make this work nicely with JSCover (without generating 50 coverage reports).
Is it best practice for me to just include all of the 50 unit test files into a single HTML page?
I would choose to include all 50 unit test files in a single HTML page.
You can use QUnit modules to break up your tests into groups if you don't want to run all of the tests all of the time. You can then run tests one module at a time by using the drop-down list at the top-right of the QUnit page or by using a query-string parameter in the URL, for example http://localhost:8000/qunit-tests.html?module=SomeModuleName or file:///path/to/your/qunit-tests.html?module=SomeModuleName.
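A minimal sketch of such a module (the module name and assertion are placeholders):

```js
// tests grouped under a module; run only this group via ?module=SomeModuleName
QUnit.module("SomeModuleName");

QUnit.test("a test in this module", function (assert) {
  assert.ok(true, "replace with a real assertion");
});
```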
Including all 50 test files in the same HTML page means you can run all of the tests in one go and then generate code coverage for all your code. I also can't help but feel that running PhantomJS on 50 HTML pages one after the other would be considerably slower than running it on a single, larger HTML page.
The homepage for Blanket.js says I only need to add a data-cover attribute to my script tags.
But how is blanket.js supposed to modify the files before they are run?
The documentation on GitHub says its first step is:
Loading your source files using a modified RequireJS/Require script.
Do I have to use RequireJS to get blanket.js to work?
(I cannot find this documented anywhere.)
Currently I do not use RequireJS, and Blanket.js is not working. Could this be the cause of the problem?
A while ago, Blanket used RequireJS internally; I don't know whether that is still true. But require.js was bundled inside the blanket.js file itself, so no, you don't need to include require.js yourself.
What you may need to include is an adapter for your unit test framework: https://github.com/alex-seville/blanket/tree/master/src/adapters
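For example (a hedged sketch with placeholder paths), a Jasmine page would look something like this, with the adapter taken from that adapters directory:

```html
<script src="lib/jasmine.js"></script>
<script src="lib/blanket.min.js"></script>
<!-- adapter from blanket's src/adapters directory -->
<script src="lib/jasmine-blanket.js"></script>

<script src="src/calculator.js" data-cover></script> <!-- instrumented -->
<script src="spec/calculator.spec.js"></script>      <!-- not instrumented -->
```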