When source files aren't loaded by any Intern test (so they sit at 0% coverage), they don't show up in the (lcov) coverage report at all (running in Node.js).
Typically a problem JS tools struggle with, I think.
E.g. Jest has a simple workaround.
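If I remember correctly, in Jest it's just a config entry (the glob below is illustrative):
// jest.config.js - Jest instruments matching files even if no test loads them
module.exports = {
  collectCoverage: true,
  collectCoverageFrom: ['src/**/*.js']
};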
I'm looking for the simplest workaround for intern, ideally with v3.
Since Intern uses istanbul under the covers, I wonder whether istanbul's --include-all-sources flag works and can be passed through easily?
Is there a standard recipe to make the loader aware of all files?
I also have files that don't load well in Node.js; can those be included?
Taking a look at the Intern project itself, its config schema has an option called coverage, which is defined as:
An array of file paths or globs that should be instrumented for code
coverage, or false to completely disable coverage. This property
should point to the actual JavaScript files that will be executed, not
pre-transpiled sources (coverage results will still be mapped back to
original sources). Coverage data will be collected for these files
even if they're not loaded by Intern for tests, ALLOWING A TEST
WRITER TO SEE WHICH FILES HAVEN'T BEEN TESTED, as well as coverage
on files that were tested. When this value is unset, Intern will still
look for coverage data on a global coverage variable, and it will
request coverage data from remote sessions. Explicitly setting
coverage to false will prevent Intern from even checking for coverage
data. This property replaces the excludeInstrumentation property
used in previous versions of Intern, which acted as a filter rather
than an inclusive list.
Sorry for the uppercase; it was just supposed to highlight that sentence.
coverage uses globs just as istanbul does, so you could specify something like coverage: ['src/**/*.js'].
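For example, a minimal intern.json sketch for v4 (the paths are placeholders):
{
  "suites": "tests/unit/**/*.js",
  "coverage": ["src/**/*.js"]
}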
I realized this because Intern itself uses this configuration to collect its own coverage, and it seems to work for them.
Edit: As pointed out in the comments, this feature only appears in v4 of Intern.
Some background context:
I'm working on a game written in Babylon.js, which renders 3D graphics inside an HTML5 canvas using WebGL. That is to say, this project is not a typical web UI where I need to test DOM elements like button clicks or form submits. Babylon.js has its own way of simulating pointer events in the context of a 3D scene: you can pass it a pointerInfo object that mocks whether a mesh was hit, etc., and that's what I would like to use.
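For a rough idea, the kind of call I'd like to exercise looks something like this (written from memory, so treat the exact Babylon.js names as assumptions):
// sketch: fake a pick so the scene's pointer handling runs against myMesh
const pickInfo = new BABYLON.PickingInfo();
pickInfo.hit = true;
pickInfo.pickedMesh = myMesh;
scene.simulatePointerDown(pickInfo); // feeds scene.onPointerObservable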
My project came bootstrapped with esbuild. I love it because it's fast at transpiling TypeScript, bundles everything into a single artifact, and doesn't produce JavaScript artifacts next to my TypeScript files, so my directories stay clean.
I started testing with Jest. This was fine until I started running into issues where
Window is not defined
would crop up. It was because the code under test pulled in other code that was inspecting window.navigator... for attributes to see whether this was a mobile device or not. I could try to mock the Window object, but it is a pain. Also, when trying to simulate a click, PointerEvent was not defined. I tried adding jsdom, but that didn't seem to help and I wasn't able to get unblocked. It just seemed like I was trying to use a tool built for Node, when I should just test in a real browser.
But Googling "browser based testing" usually turns up results I don't want. I'm not looking for full end-to-end user-interaction testing. I don't want Selenium/ChromeDriver-style testing because:
it's slow
my project is not a traditional website, I don't have many HTML elements for a user to interact with
I don't want to test the whole stack of logging in, dealing with authentication etc.
I just want to test classes and functions at the small, unit-test level, but I need access to Window and PointerEvent and all the goodies that come with a browser for free.
Next I looked at Jasmine. Jasmine standalone has a browser based SpecRunner.html. It's a single HTML page that you modify. It includes its own jasmine boot scripts that are loaded with script tags, and then your source code files and your test files are also imported as JS files with script tags.
This seemed promising in that the tests run in a browser, so presumably they have access to the window object. However, both my specs and my source code are written in TypeScript, not JavaScript, so how do I get my code and tests into SpecRunner.html?
esbuild only emits a single bundled artifact. If, instead of using esbuild, I used the tsc command and polluted my directories with JavaScript artifacts, then yes, the Jasmine SpecRunner.html would have access to the JavaScript files, but tsc is slower and JS files everywhere is messy.
But before I get too far down this path, I think the downside to this approach is that for every test file I write, I need to manually modify SpecRunner.html to include all the source code under test and all the test files, which will be annoying to maintain whenever file paths or file names change.
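The best workaround I can think of so far (a sketch, assuming a single entry file that imports every *.spec.ts) is to let esbuild bundle the specs too, so SpecRunner.html only ever references one script:
// bundle-specs.mjs - hypothetical helper; paths are placeholders
import { build } from 'esbuild';

await build({
  entryPoints: ['spec/all-specs.ts'], // a file that imports each *.spec.ts
  bundle: true,
  outfile: 'spec/specs.bundle.js',
  sourcemap: true
});
But that still leaves me maintaining the import list in all-specs.ts by hand, which is the same chore in a different file.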
TL;DR:
Any advice on a good solution for running unit tests in a real browser (not Selenium-style) when using TypeScript and esbuild? I don't have a strong preference for any particular test framework.
I bundle all my JS assets into one minified, uglified file via r.js (part of RequireJS).
If any unhandled errors occur in the browser, I use raygun (like Airbrake) to report them back to me. The only problem is that the line number I get in my error message refers to the bundled, minified file, which doesn't help much.
Is there a way to correctly map the line number of my single minified and uglified bundled asset, into the individual JS file with the correct line number?
The first thing you need to do is have r.js generate a source map of the bundle. To do this, in the options you pass to r.js you need to set generateSourceMaps to true, and you must set the optimize option to "uglify2", or to "closure" ("with a closure compiler jar build after r1592 (20111114 release)").
(I'm citing from this documentation.) I've done it with optimize set to "uglify2" and was able to get decent references to the original source code in Chrome.
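For reference, the relevant part of the build config I used looked roughly like this (file names are illustrative; note that r.js also requires preserveLicenseComments to be false when generating source maps):
({
  baseUrl: "js",
  name: "main",
  out: "dist/main.min.js",
  optimize: "uglify2",
  generateSourceMaps: true,
  preserveLicenseComments: false
})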
The logging service must also support it. This post over at the raygun forums suggests that raygun does not yet support source maps.
Someone created a list of such services as a gist over at github. Some of the services are marked as supporting source maps. I can't vouch for its accuracy but it could be a good starting point to find a service that supports it.
If Protractor is replacing the Angular Scenario Runner for E2E testing, does that mean I will still be able to use it with Karma as my E2E testing framework?
Not recommended by the current maintainer of Protractor:
https://github.com/angular/protractor/issues/9#issuecomment-19927049
Protractor and Karma should not be used together; instead they provide separate systems for running tests. Protractor and Karma cover different aspects of testing - Karma is intended mostly for unit tests, while Protractor should be used for end to end testing.
Protractor is built on top of WebDriverJS, which uses a Selenium/WebDriver server to provision browsers and drive test execution. Examples of pure WebDriverJS can be found here: http://code.google.com/p/selenium/wiki/WebDriverJs
And
https://github.com/angular/protractor/issues/9#issuecomment-19931154
Georgios - I think it makes sense to keep Protractor and Karma separate - for end to end tests, you want the native event driving and flexibility of webdriver, while for unit tests you want fast execution and autowatching of files.
UPDATE: Here is a simple package I've created, min-karma, which adds a minimal Karma setup to any project with a single command: npm install min-karma.
I'd like to clarify some possible misconceptions about Karma and Protractor. The Karma FAQ actually does refer to the Adapter for Angular's Scenario Runner, which, however, seems to be abandoned, with Protractor being recommended instead.
Karma
Karma is a test runner that will run the JavaScript files specified in your configuration file, either listed explicitly or via node-glob patterns. (For non-JavaScript external templates, Angular's unit-testing guide recommends using the Karma html preprocessor to compile them into JavaScript first.)
These can be all of your source files, some of them, some of them plus additional files, or even files irrelevant to your project that only provide some extra configuration - you name it!
You can have multiple karma config files for different purposes, which you can run in parallel or one-by-one. Each karma process launches its own set of browsers (these are currently available).
This ability of Karma to run only a chosen set of files is what makes it perfect for fast tests running in the background upon each source file edit, giving immediate feedback, which is brilliant! The only negative is the "noisy" error reporting, which will hopefully improve!
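A minimal config sketch (the globs and browser choice are placeholders):
// karma.conf.js
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: ['src/**/*.js', 'test/**/*.spec.js'], // explicit files or globs
    browsers: ['Chrome'],
    autoWatch: true, // rerun tests on every file edit
    singleRun: false
  });
};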
Karma is not only for unit tests
A unit test is for a single unit of your source code. In Angular's case a typical unit is an Angular component (Service, Factory, Provider, Controller, Filter, Directive, etc.). Remember to keep your Controllers thin, so too many unit tests for the latter is a red flag.
In a unit test, every other unit of code on which this unit depends (the unit's so-called dependencies) should not be tested at the same time. Instead they should be "mocked", e.g. replaced by something simple like dummy instances. Angular provides great mock-environment support. Ideally you want to see all those mocks directly inside your tests, so you never need to wonder where all those dependencies come from.
Karma is just as useful for integration tests, where a group of source code units is tested together, with only some of their dependencies being mocked. It is important to remember that any dependency is by default provided by your source code modules, as long as those modules are either injected directly in your tests or are dependencies of other injected modules (in which case you don't need to inject them, though there's no harm in doing so). The mocked dependencies will override the provided ones.
Running fast and frequently is the main feature of Karma. This means you want to avoid any server requests, any database queries - anything that can take longer than a fraction of a second. (Otherwise it will NOT be fast!) Those long-running processes are the ones you want to mock. This also explains why it is bad practice to put raw low-level services like $http directly inside your controllers or any complicated business-logic units. By wrapping those low-level outside-communication services into smaller dedicated services, you make it much easier to "mock them away".
What Karma does not do is run your site as it is, which is what end-to-end (E2E) testing is. In principle, you could use Angular's internal methods to recreate the site or its pieces. For small pieces this can be useful, e.g. as a fast way to test directives.
It is, however, not recommended to throw complicated code inside your tests. The more you do it, the greater the chance that you make errors in that code instead of in what you are actually testing.
That is why I personally dislike the often-mentioned, complicated way of testing methods that use low-level services like $http. It is cleaner to isolate any reference to low-level methods into dedicated methods of your own, whose single responsibility is to make HTTP requests. These dedicated methods should be able to work with a real backend, not a fake one! And you can easily test them - manually, or even perfectly well with Karma running under another, special config, as long as you don't mix that config with the one usually used to run Karma regular and fast.
Now, having your dedicated small services tested, you can safely and easily mock them to test your other logic and put these tests into your regular Karma setup.
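A sketch of what I mean, using Jasmine 2 syntax (the module, controller, and service names are made up for illustration):
// the dedicated 'api' service is mocked away; only the controller logic is tested
describe('ScoresController', function () {
  beforeEach(module('app'));

  it('shows scores fetched by the api service', inject(function ($controller, $q, $rootScope) {
    var apiMock = { fetchScores: jasmine.createSpy('fetchScores').and.returnValue($q.when([42])) };
    var ctrl = $controller('ScoresController', { api: apiMock }); // mock overrides the real service
    $rootScope.$digest(); // resolve the mocked promise
    expect(ctrl.scores).toEqual([42]);
  }));
});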
To summarize: use Karma to run any set of JavaScript files. It is (or should be) fast. You don't see your complete app, so you can't test the final result effectively and reliably. Would I run it with Protractor? Why would I? Running Protractor would slow down my tests, defeating the purpose of Karma. It is easy to run Protractor separately.
Protractor
Protractor is:
an end-to-end test framework for AngularJS applications. Protractor runs tests against your application running in a real browser, interacting with it as a user would.
So Protractor does exactly what Karma doesn't - run your real, final application. This reveals both its power and its limitations:
Running the complete application is the only reliable final test that your application works as expected. You can write complete user story scenarios and put them into your tests!
But it is harder to track errors without isolating individual units of your source code. This is why you still need Karma to test your JavaScript code first.
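For flavor, a typical Protractor spec drives the app through the browser like this (adapted from the standard examples; the URL and model names are illustrative):
describe('todo list', function () {
  it('should add a todo the way a user would', function () {
    browser.get('http://localhost:8000/index.html');
    element(by.model('todoText')).sendKeys('write an e2e test');
    element(by.css('[value="add"]')).click();

    var todos = element.all(by.repeater('todo in todos'));
    expect(todos.last().getText()).toEqual('write an e2e test');
  });
});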
Now would I want to run Protractor with Karma? I surely can run them in separate terminal windows, in parallel. I could, in principle, have them share test files if I need to, but normally I'd rather not. Why? Because I want to keep my tests small with single dedicated purpose.
The only exception would be a file defining testing macros useful for both runners. This, however, would not be a test file but a macro definition file.
Other than that, I like a clear separation between my tests: those to be run frequently and fast, and those for the complete app. That makes a clear separation between when to use Karma and when to use Protractor.
Karma is a test runner provided by the Angular team. Karma will execute your tests in multiple browsers, which helps ensure that your application is compatible across browsers.
Unit tests for AngularJS can use Karma + Jasmine.
Jasmine is a JavaScript unit-testing framework that provides utilities to test our application. It works well with the Angular framework, and thus it is our choice of automated unit-testing tool.
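For example, a minimal Jasmine spec:
describe('calculator', function () {
  it('adds two numbers', function () {
    expect(1 + 2).toBe(3);
  });
});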
https://github.com/shahing/testingangularjs
And Protractor is an end-to-end test framework for Angular and AngularJS applications.
Protractor runs tests against your application in a real browser or a headless browser, supports cross-browser testing, and can be hosted on Sauce Labs.
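A minimal conf.js sketch (the spec paths and Selenium address are placeholders):
// conf.js - run with: protractor conf.js
exports.config = {
  framework: 'jasmine',
  seleniumAddress: 'http://localhost:4444/wd/hub',
  specs: ['e2e/**/*.spec.js'],
  capabilities: { browserName: 'chrome' }
};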
https://github.com/shahing/Protractor-Web-Automation
Yes, you can use Karma and Protractor together. Karma is used for unit testing: you can test the components you created with the Angular CLI using Karma.
Protractor is used for end-to-end tests; it is mainly used for UI testing.
I think the question says most of it. I have an autogenerated ManualSpecRunner.html file as created by the Maven jasmine plugin, and I've got it to put itself into the deployable .war by using:
<jasmineTargetDir>${basedir}/pathForMyWebapp</jasmineTargetDir>
However, all the links to JS files within the ManualSpecRunner.html are hard-coded file:/// references - this is a bit mental; I want them to just be relative paths to the files that are also in the webapp.
Currently it gives me this path:
file:///home/username/code/HEAD/pathForMyWebapp/js/yui.js
whereas I need it to have the far simpler
/pathForMyWebapp/js/yui.js
I have tried changing two other variables in the Maven script, but neither configuration option does what I need (the second seemingly has no effect at all):
<jsSrcDir>/pathForMyWebapp</jsSrcDir>
nor
<jsTestSrcDir>/pathForMyWebapp</jsTestSrcDir>
I've looked through the documentation but think I must be missing something. (Also, more notes on what the various config params listed in https://github.com/searls/jasmine-maven-plugin/blob/master/src/main/java/com/github/searls/jasmine/AbstractJasmineMojo.java are meant to do would be helpful, so I can work out whether I'm doing it wrong or whether it's not possible!)
Any suggestions?
[p.s. I've changed some of the path names as they've got sensitive info in them, so please ignore their oddness!]
I think I understand the source of your confusion. It looks like you're trying to direct the target of the jasmine-maven-plugin to a directory inside your project's packaged *.war file so that you can run your specs against the code after it's deployed to a server, is that correct?
Unfortunately, the plugin wasn't designed with that use in mind. The jasmineTargetDir directory is usually left at its default value of target/jasmine and wasn't intended to be bundled with your application (it's analogous to the target/surefire-reports directory generated by maven-surefire-plugin for Java unit tests). So the reason the script tags in ManualSpecRunner.html point to invalid locations is that the file is generated to be run in a browser from the local filesystem of the workstation that's building the project (to facilitate TDD).
All of that to say, if I'm reading your intention right, I think it'd be a cool feature to build a third spec runner that could be deployed with the app and executed remotely. (Especially if the project's Jasmine specs are functional/integration as opposed to isolated unit tests.) Unfortunately that's not something the project does yet.
I'm afraid that for now, if you needed to bundle the jasmine tests and execute them on the deployed server, you would need to copy ManualSpecRunner.html and jasmine into your src/main/webapp, fix the script tag references, and then manually maintain it as files are added and removed.
Make sense?
Our project has more than 300 JSP files and more than 200 JavaScript files. I'd like to do some cleanup, removing unnecessary JS files. Even if a JSP includes a JS file, maybe none of its functions are used. The goal is to reduce both complexity and the time needed to load the page. My IDE is Eclipse. Given the dynamic nature of JavaScript, I guess this will be hard or even impossible.
If it's conceivable that the application can be tested with a lot of coverage (i.e. going through every dialog, error message, and situation imaginable), you may be able to work with your access log files - compare the list of JS files to those fetched after period x of heavy use.
An alternative implementation of this would be setting up a "honeypot" (see my answer to this question).
Both these methods are of course "soft", in that their quality relies on how thoroughly the application is actually exercised during testing time.
If you have any way of grepping all script references, that would be preferable. Maybe you can do a global search on {anything}.js; that would match most ways of embedding a JS file.
To find out which functions and JavaScript files are used in a project, you need code-coverage tools like JSCoverage or Code Coverage for Firebug. These tools will report the functions and files used. Using them with an automated test suite like Selenium, or with randomized testing, should give you a fairly good idea of which files are loaded.
If the files are loaded dynamically, you can also use Firebug or Fiddler to log the requests for the JS files.
Unfortunately, if you want certainty - not just the extremely high likelihood you get with the above tools - you would have to generate a call graph for your entire webapp, maybe using a JavaScript compiler like Rhino...