Get JSON data/report from jasmine tests - javascript

I am building an editable tester. I am running Jasmine tests and I would like to get the result of the tests (JSON data) and present it on my web page.
Is this possible, and if so, how?
I do not want to save the results somewhere and then read them back. I want the test run itself to return a response that tells me what happened with the tests.

You are asking about jasmine reporters. There are so many of them now, for instance:
jasmine-reporters
karma-jasmine-html-reporter
jasmine-jsreporter
jasmine-json-test-reporter
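If you would rather not pull in a third-party reporter, Jasmine also lets you register a custom reporter and collect the results in memory as plain objects, which you can then serialize to JSON and render on your page without writing anything to disk. A minimal sketch (the results array and the rendering line are placeholders for your own code):

var results = [];

jasmine.getEnv().addReporter({
    specDone: function (result) {
        // Collect each spec's outcome as a plain object.
        results.push({
            description: result.fullName,
            status: result.status,               // 'passed', 'failed' or 'pending'
            failures: result.failedExpectations
        });
    },
    jasmineDone: function () {
        // All specs have finished; nothing was saved to disk.
        // Render the data however you like, e.g.:
        // document.getElementById('report').textContent = JSON.stringify(results, null, 2);
    }
});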

Related

Can I get performance testing in Node.js like unit testing

I need performance testing/tuning in Node.js.
It should run in CI from the CLI, like a unit test (targeting function calls, not networking).
I use Mocha's timeout() now:
var dictionary_handle;

it("Dictionary.init_dictionary timeout", function(done) {
    dictionary_handle = Dictionary.init_dictionary(dictionary_data);
    done();
}).timeout(1000);

it("Linad.initialize timeout", function(done) {
    Linad.initialize(function(err) {
        done();
    });
}).timeout(6000);
But this is not enough. I need something that:
can be used in CI,
executes the code multiple times,
outputs performance metric information.
I believe you're looking for some form of microbenchmark module. There are a number of options and your requirements match them all, so I cannot pick the best candidate; you will need to do your own investigation.
However, given that you have the performance-testing tag added, I can give you a generic piece of advice: when it comes to any form of performance testing, you need to make sure that your load test mimics your application's real-life usage as closely as possible.
If your application under test is a NodeJS-based web application, there are a lot of factors to consider apart from the performance of single functions. If that is the case, I would recommend a protocol-level load testing tool: if you want to stick to JavaScript you can use something like k6, or consider another standalone free/open-source load testing solution that can simulate real users closely enough with minimal effort on your side.
Dmitri T is correct: you need to be careful with what and how you test. That being said, https://github.com/anywhichway/benchtest requires almost no work to instrument existing unit tests, so it may be worth using.
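If all you need is something quick that runs in CI, executes the code several times, and prints a metric, a hand-rolled timing loop inside a Mocha test can already get you part of the way there before you commit to a benchmarking library. A rough sketch using Node's built-in process.hrtime.bigint() (the function under test and the 1000 ms threshold are taken from the question and are only illustrative):

it("Dictionary.init_dictionary average runtime", function () {
    this.timeout(20000);                          // leave room for repeated runs
    var runs = 10;
    var totalNs = 0n;
    for (var i = 0; i < runs; i++) {
        var start = process.hrtime.bigint();
        Dictionary.init_dictionary(dictionary_data);    // function under test
        totalNs += process.hrtime.bigint() - start;
    }
    var avgMs = Number(totalNs / BigInt(runs)) / 1e6;
    console.log("Dictionary.init_dictionary average: " + avgMs.toFixed(2) + " ms");   // the metric output
    if (avgMs > 1000) {
        throw new Error("Too slow: " + avgMs.toFixed(2) + " ms");                     // fails the CI run
    }
});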

WebDriver.io no console output

I am using the following stack to run several tests:
NodeJs
Selenium standalone
geckodriver (though I use Chrome)
webdriver.io
mocha
chai
So after all that, my first_test.js is:
describe('Website url test', () => {
    it('should have a title', () => {
        browser.call((done) => {
            browser.url('http://webdriver.io');
            var title = browser.getTitle();
            expect(title).to.be.equal('WebdriverIO - WebDriver bindings for Node.js');
            done();
        });
    });
});
And the output in the console is: [screenshot: incorrect console output]
But it should be like this for the passing tests as well: [screenshot: correct console output]
Is there something in the Mocha config that I should change so that passing tests produce the same visual output?
This behavior was caused by the reporter chosen (in my case dot).
I changed to spec and I have a very verbose output now.
WebdriverIO supports a great variety of reporters:
Dot: which is the default reporter for WDIO, a lightweight console reporter that outputs a green or red dot ('.') for a passing or failing test case, respectively;
Spec: which just outputs in the console a step-by-step breakdown of the test cases you previously ran. This output will reside strictly in the console, unless you want to pipe your entire console log-stack via the logOutput: './<yourLogFolderPath>/' attribute from the wdio.conf.js file;
Json: which generates a .json report of the tests you previously ran. It's very well suited for people who already have a test results dashboard where they analyze their regression results (passing tests, failing tests, run-time, etc.) and just need to parse the data from somewhere. You can configure the path where you want the .json report to be generated via:
reporterOptions: {
    outputDir: './<yourLogFolderPath>'
}
Note: The Json reporter will populate the path given with WDIO-<timestamp>.json reports. If you want to pipe said .json to some other software for parsing, then you will need to go inside the library and change the naming convention so that you always get your results in the same file as opposed to a dynamically generated one.
Allure: Allure is one of the best reporter choices, especially if you don't have the makings of a test results dashboard in place as it generates one for you. You can check out this answer for a step-by-step breakdown;
!!! BUT as a best practice, no reporter should outweigh the importance of setting your logLevel (inside the wdio.conf.js file) to debug (logLevel: 'debug') for wdio-v5, or verbose (logLevel: 'verbose') for wdio-v4.
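For reference, the relevant fragment of a wdio.conf.js might look something like this - take it as a sketch, since option names differ slightly between wdio-v4 and wdio-v5:

// wdio.conf.js (fragment)
exports.config = {
    // 'dot' is the default; 'spec' gives the verbose step-by-step output
    reporters: ['spec', 'json'],
    reporterOptions: {
        outputDir: './logs'        // where the WDIO-<timestamp>.json reports will land
    },
    logLevel: 'verbose'            // 'verbose' for wdio-v4, 'debug' for wdio-v5
    // ...the rest of your config (specs, capabilities, framework, etc.)
};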
When debugging (I presume that was the purpose of the reporting), it's crucial that you get to the root of the problem as fast as possible, and that means looking at the REST calls made by your tests during run-time.
Hope this gives a clearer overview to people starting with WebdriverIO who need more info on which of these reporters is best suited to which scenario/situation.
Cheers!

Passing cmd parameters to Jasmine tests

I would like to pass command line parameters to my Node.js/Jasmine tests. At the moment I have basic Jasmine file/dir structure set up (jasmine.json file and specs - as in the example: Jasmine documentation).
I run the specs from the command line by executing the following command: jasmine.
I would like to pass some command line parameter so that I can use it in my specs. I would run the tests with a command like: jasmine --param value or jasmine param_value.
Is this possible (and how)?
The parameter I want to pass is a password and I don't want to hardcode it. Maybe you can suggest a better solution?
Thanks in advance!
First off, in general, if you want to send parameters to a test in Jasmine, you would use jasmine -- --param value or jasmine -- param_value. No, that extra -- isn't a typo: it tells Jasmine to ignore everything after it and pass it on.
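Inside the spec the extra arguments are then available on process.argv (the jasmine CLI runs your specs in the same Node process, so the arguments are still there). A hypothetical spec reading a --param value might look like this:

// Run with: jasmine -- --param value
describe('command line parameter', function () {
    it('is readable from process.argv', function () {
        var args = process.argv;
        var idx = args.indexOf('--param');
        var value = idx !== -1 ? args[idx + 1] : undefined;
        expect(value).toBe('value');
    });
});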
That said, passwords and command-line parameters don't mix. Virtually every shell has some form of history, and that history records every command just as it's typed. This is a major security no-no: it leaves an unencrypted password recorded on disk.
The test itself should ask for the password if it needs to be supplied without hard-coding. That at most creates a transient, in-memory temporary holding the value, which is much safer than having it sit in a history file somewhere. Node has modules such as readline that can do this without generating visible output for the password being entered.
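A small sketch of that idea with readline - note that plain readline.question echoes what is typed, so fully hiding the input takes a bit of extra work (or a small helper library); the point here is only that the value lives in memory instead of in your shell history:

var readline = require('readline');

function askPassword(callback) {
    var rl = readline.createInterface({ input: process.stdin, output: process.stdout });
    rl.question('Password: ', function (answer) {
        rl.close();
        callback(answer);    // the value only ever lives in memory
    });
}

// e.g. in a Jasmine helper, before the specs run:
// beforeAll(function (done) {
//     askPassword(function (password) { global.testPassword = password; done(); });
// });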

Protractor Accessibility reporting

I am trying to use the Accessibility plugin that comes with Protractor. From what I can see, it only checks accessibility for the last page I end up on.
Is there a way to have 2 test scripts executed one after another and get separate reports, or one report with the results kept separate?
Example:
access.js
access1.js
Output file:
resultJsonOutputFile: 'result/result.json'
I tried it this way in conf.js:
specs: ['../test/access.js', '../test/access1.js'],
or
specs: ['../test/access*.js'],
but I still only get results for the last script executed.
I also tried creating suites:
suites: {
    homepage: '../test/homepage/access.js',
    catalogpage: '../test/catalogpage/access1.js'
},
but when I check the JSON file, if both scripts are executed, the first one shows no issues and errors are reported only for the second script. However, if I run the first script alone, Protractor does report errors for it.
I also tried putting everything in one js file as different scenarios, but I still get the same issue.
With the current implementation, the accessibility plugin is set to run exactly once per invocation of the Protractor runner, on the last page. So unfortunately, no modification of the suites or test files will make it run more than once.
You can create separate configuration files for each set of test files you'd like to run, or use shardTestFiles to make sure that each file is run in its own process. See the referenceConf for more details on sharding.
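Sharding is just a capabilities tweak in the Protractor config; a hedged fragment (the spec paths mirror the ones from the question, the rest is illustrative):

// protractor conf.js (fragment)
exports.config = {
    specs: ['../test/access*.js'],
    capabilities: {
        browserName: 'chrome',
        shardTestFiles: true,    // run each spec file in its own process
        maxInstances: 2          // how many files may run in parallel
    }
    // ...plugins, resultJsonOutputFile, etc.
};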
Alternatively, you could use aXe to do your accessibility testing. In order to use it with e2e tests in protractor and Webdriver, do the following:
npm install --save-dev axe-webdriverjs
Then in your e2e test files, you do:
var AxeBuilder = require('path_to_the/axe-webdriverjs');
to get hold of the AxeBuilder, and then wherever you need to run a test:
AxeBuilder(browser.driver)
    .analyze(function (results) {
        expect(results.violations.length).toBe(0);
    });
The above example is using Jasmine but you can extrapolate for any other assertion library.
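Put together in a Jasmine spec it could look like the sketch below (the URL is a placeholder, the require path depends on where the module lives, and the callback form mirrors the snippet above; analyze is asynchronous, so the spec signals completion via done):

var AxeBuilder = require('axe-webdriverjs');          // adjust the path for your setup

describe('home page accessibility', function () {
    it('has no aXe violations', function (done) {
        browser.get('http://localhost:8080/');        // placeholder URL
        AxeBuilder(browser.driver)
            .analyze(function (results) {
                expect(results.violations.length).toBe(0);
                done();
            });
    });
});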
Also: there is a sample project you can clone and run here https://github.com/dylanb/UITestingFramework
Disclaimer: I am associated with the aXe project and therefore not neutral
I ran into that problem too - as another poster says, the plugin isn't really designed to operate that way.
I wrote a derivative of that plugin which does what you're looking for - protractor-axe-report-plugin.
You make a call to runAxeTest (or runAxeTestWithSelector) whenever you have a page open in the browser that you want to test, and it generates reports using the aXe engine.
Continuum can be used for your use case, where it seems the accessibility plugin that comes with Protractor cannot. Here's some documentation on a Protractor-based sample project that uses Continuum; it can be downloaded from webaccessibility.com under 'Continuum for Protractor'. If you look at the source code of the sample project, it basically boils down to this:
const continuum = require('../js/Continuum.js').Continuum;

continuum.setUp(driver, "../js/AccessEngine.community.js");

continuum.runAllTests().then(() => {
    const accessibilityConcerns = continuum.getAccessibilityConcerns();
    // accessibilityConcerns.length will be 0 if no accessibility concerns are found
});
(For more information on the above, you can check out the API documentation.)
You can execute that continuum.runAllTests bit wherever you like in your tests, including multiple times within the same test, which if I understand correctly is ultimately what you're after.
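For example, following the pattern of the snippet above, each test can navigate to its page and run the scan again (routes and assertions here are placeholders):

it('homepage has no accessibility concerns', function () {
    browser.get('/');                                 // placeholder route
    return continuum.runAllTests().then(function () {
        expect(continuum.getAccessibilityConcerns().length).toBe(0);
    });
});

it('catalog page has no accessibility concerns', function () {
    browser.get('/catalog');                          // placeholder route
    return continuum.runAllTests().then(function () {
        expect(continuum.getAccessibilityConcerns().length).toBe(0);
    });
});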
Of course, no automated accessibility testing tool is a replacement for manual accessibility testing. It seems like you're just looking to get a baseline level of compliance right now though, so Continuum seems appropriate for your use case to tackle the low-hanging fruit.

Testing a Breeze application

I am building an application with Durandal, Breeze, and Knockout. I have started to implement some tests. The first problem I have had is deciding what I should test and what not. I know that ideally I should test everything, but that is not always possible in a small company.
My second problem is how to test my calls to the server. I have seen some information about testing on the Breeze page, and I have also looked at the DocCode example. But I would like to hear more opinions about how to do that.
My questions are:
What should I test in the Breeze calls?
I would like to test this while emulating the backend. Is that possible? Any example?
Any advice or comment would be great.
Wow ... that's a big question!
There's a little on this subject in the documentation. Not nearly enough, to be sure.
I'm guessing that you are fairly new to JavaScript testing. If you've seen DocCode, you know that we use QUnit around here. Many prefer Jasmine, Mocha or something else; I can only speak to QUnit.
The first step is to learn QUnit. It's not hard. QUnit's own intro is good. I like Ben Alman's slide deck.
Next I'd practice with small tests of your business logic that do NOT go over the wire. That could be any interesting logic in a ViewModel or perhaps some calculated property in a model (entity) object.
You can test a VM's interaction with a "DataContext" quite easily without going over the wire. Create a "FakeDataContext" and inject that into your tests instead of the real one. Alternatively you could "monkey patch" the real "DataContext" in strategic places that turn it into a fake.
When faking the DataContext, I find it useful to leverage Breeze's ability to confine queries to the local cache. The local cache serves as an in-memory surrogate for data that would otherwise have been retrieved from the server.
This can be as simple as setting the FetchStrategy of the manager's default QueryOptions ... perhaps like this
var queryOptions = new QueryOptions({
    mergeStrategy: MergeStrategy.PreserveChanges,
    fetchStrategy: FetchStrategy.FromCache
});

var entityManager = new EntityManager({
    serviceName: "yourEndpoint",
    queryOptions: queryOptions
});
Now your queries will all be directed to the cache (unless they have an explicit QueryStrategy of their own).
Now make it useful by populating the cache with test data. There are numerous examples of faked entities in DocCode. Here's an example of a faked Customer:
var testCustomer = manager.createEntity('Customer', {
    // test values
    CustomerID: testCustomerID,
    CompanyName: testCustomerName,
    ...
}, breeze.EntityState.Unchanged); // as if fetched from the database
If I need the same test data repeatedly, I write a "Data Mother" that populates an EntityManager with test data for me.
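Such a "Data Mother" is nothing more than a function that stuffs a manager with well-known entities, reusing the createEntity pattern above; a tiny hypothetical sketch:

// Hypothetical data mother: fills a manager with known test customers.
function primeCustomerCache(manager) {
    ['Alpha Co', 'Beta Ltd'].forEach(function (name, i) {
        manager.createEntity('Customer', {
            CustomerID: 'test-id-' + i,      // placeholder values
            CompanyName: name
        }, breeze.EntityState.Unchanged);    // as if fetched from the database
    });
    return manager;
}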
I can do a lot of testing this way without ever hitting the server. The whole time I'm working with the Breeze entities in JavaScript ... much as I do in production code. I don't have to learn a mocking library or inject another tool.
Another approach - harder, lower level, but more powerful - is to replace the Breeze default AJAX adapter with a fake that returns test JSON values as if they had come from the server. You can use a tool like Fiddler to make fake JSON from actual payload snapshots. You can also use this trick for simulating server-side save behavior.
Update 3 May 2013
The DocCode Sample includes a new TestAjaxAdapter for simulating server responses as I just described. Check out the testAjaxAdapter.js and see how to use it in testAjaxAdapterTests.js. This particular version of DocCode is only on GitHub right now but it will be published officially in the release immediately after v.1.3.2.
... end of update; back to original post ...
Does faking the JSON stream within your fake AJAX adapter seem like too much of a PITA?
Break out your mad Breeze skills and write a custom JsonResultsAdapter to create those fakes. Have your fake AJAX adapter return an empty array for every query request. Then you can do anything you want in the extractData and visitNode methods of your JsonResultsAdapter.
I trust it is obvious that you can fake your server-side controller too. Of course your tests will still make trips "over the wire" to reach that controller.
Hope these clues are sufficient to get you headed in a satisfactory direction.
Update 30 April 2013
Breeze will need metadata to do its thing. Your metadata comes from the server. Calling the server for metadata would seem to defeat the purpose of running tests entirely disconnected.
So true. Now I'm not a stickler on this point. I don't mind hitting the server for metadata at the top of a test module ... exactly once ... and then running the rest of my tests without going to the server for metadata. But if you're a purist or you just don't want to do that, you can write the server-side metadata to a JavaScript file on the server and load that script statically on your test runner's HTML page just like any other script.
For an example of this technique, look at App_Data/WriteMetadataScriptFiles.cs which generates a JavaScript file for the Northwind model in the forthcoming (later this week) v.1.3.2 DocCode sample. The DocCode tests load JavaScript files dynamically with require.js. The metadataTests.js test file shows how to load the generated northwindMetadata.js with require. You don't have to be that clever if you're not using require.js.
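Once that generated script is loaded (statically or via require), importing it into your manager's MetadataStore is a one-liner. A hedged sketch, assuming the generated file exposes the metadata as a global named northwindMetadata:

// northwindMetadata.js (generated on the server) is assumed to define
// a global `northwindMetadata` holding the exported metadata.
var entityManager = new breeze.EntityManager('yourEndpoint');
entityManager.metadataStore.importMetadata(northwindMetadata);
// From here on, no metadata request to the server is needed.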
Note to self: write some samples illustrating these techniques.
