I am trying to refactor my code. I know that if I have several expectations, they should be isolated in separate 'it' blocks. I am trying to understand how I can rewrite this:
describe('my scenario should make', function () {
    var config = browser.params;
    var url = config.listOfReferencesUrl,
        grid,
        numberField;

    it('test1', function () {
        browser.get(url);
        browser.executeScript("icms.go('WEB_INQ_PROC', 'InquiryList', null, 0)");
        grid = psGrid(by.css("table[class='n-grid']"));
        numberField = grid.getQuickFilter(1);
        numberField.click().sendKeys("Hello!");
        since('fail1').expect(numberField.getInputText()).toEqual("");
    });

    it('test2', function () {
        since('fail2').expect(numberField.getInputText()).toEqual("Hello!");
    });
});
Something like this:
describe('my scenario should make', function () {
    var config = browser.params;
    var url = config.listOfReferencesUrl,
        grid,
        numberField;

    // ********* Run this part of the code ONCE before all tests in the spec *********
    browser.get(url);
    browser.executeScript("icms.go('WEB_INQ_PROC', 'InquiryList', null, 0)");
    grid = psGrid(by.css("table[class='n-grid']"));
    numberField = grid.getQuickFilter(1);
    numberField.click().sendKeys("Hello!");
    // *******************************************************************************

    it('test1', function () {
        since('fail1').expect(numberField.getInputText()).toEqual("");
    });

    it('test2', function () {
        since('fail2').expect(numberField.getInputText()).toEqual("Hello!");
    });
});
Maybe somebody has an idea how I can do this?
To answer your question: if you want to run code once before all tests, use the beforeAll() function available in Jasmine 2. Here's a sample:
beforeAll(function () {
    // Write the code here that you need to run once before all specs
});
You can use the beforeEach() function available in Jasmine to run code each time before a test spec. Here's a sample:
beforeEach(function () {
    // Write the code here that you need to run every time before each spec
});
If you are facing issues getting these functions to work, update your plugins to the latest version and try again. Also make sure framework: 'jasmine2' is set in your conf.js file.
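Applied to the spec from your question, that would look roughly like this (a sketch assuming Jasmine 2 and the same psGrid/since helpers from your code):

describe('my scenario should make', function () {
    var config = browser.params;
    var url = config.listOfReferencesUrl,
        grid,
        numberField;

    // Runs once before both specs below
    beforeAll(function () {
        browser.get(url);
        browser.executeScript("icms.go('WEB_INQ_PROC', 'InquiryList', null, 0)");
        grid = psGrid(by.css("table[class='n-grid']"));
        numberField = grid.getQuickFilter(1);
        numberField.click().sendKeys("Hello!");
    });

    it('test1', function () {
        since('fail1').expect(numberField.getInputText()).toEqual("");
    });

    it('test2', function () {
        since('fail2').expect(numberField.getInputText()).toEqual("Hello!");
    });
});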
Hope this helps.
I am writing UI tests and I'm struggling to break my test up into smaller tests instead of having one long test run. I use faker to generate all the data needed, so that I don't have to add fixtures.
My current working test looks like this:
/// <reference types="cypress" />
import "cypress-audit/commands";

describe("Full system test", () => {
  before(function () {
    this.User = require("../../faker/user");
  });

  it("Registers and creates profile", function () {
    cy.visit(Cypress.env("home_page"));
    cy.Register({});
    cy.UpdateProfile({});
    cy.AddCompanyDetails({});
    cy.AddTeamMember({});
    cy.CreateVacancy({});
    cy.AddCandidate({});
    cy.AddAdditionalDocs({});
  });
});
What I would like to do is have the tests like this:
/// <reference types="cypress" />
import "cypress-audit/commands";

describe("Full system test", () => {
  before(function () {
    this.User = require("../../faker/user");
  });

  it("Registers and creates profile", function () {
    cy.visit(Cypress.env("home_page"));
    cy.Register({});
    cy.contains("Vacancies").click();
    cy.UpdateProfile({});
    cy.contains("Vacancies").click();
    cy.AddCompanyDetails({});
    cy.contains("Vacancies").click();
    cy.AddTeamMember({});
  });

  it("Creates a vacancy and adds candidates", function () {
    cy.CreateVacancy({});
    cy.AddCandidate({});
    cy.AddAdditionalDocs({});
  });
});
The issue I'm having is that faker will generate new data if I break the test up like in my second example. Also, I would then need to sign back in every time I add a new test. Is there a way for me to continue where the last test ended?
The reason I want to do this is that I want to see the tests broken up in the testing tool, so it's easier to see exactly what's failing instead of having to work it out every time. Is there maybe an easier way to do this?
I would like it to look like this in the testing tool:
You can generate the data you want at the top of the spec file to make sure the same data is used throughout the test.
/// <reference types="cypress" />
import "cypress-audit/commands";

var faker = require('faker');
var randomName = faker.name.findName();
var randomEmail = faker.internet.email();

describe("Full system test", () => {
  before(function () {
    this.User = require("../../faker/user");
  });

  it("Registers and creates profile", function () {
    cy.visit(Cypress.env("home_page"));
    cy.Register({});
    cy.contains("Vacancies").click();
    cy.UpdateProfile({});
    cy.contains("Vacancies").click();
    cy.AddCompanyDetails({});
    cy.contains("Vacancies").click();
    cy.AddTeamMember({});
  });

  it("Creates a vacancy and adds candidates", function () {
    cy.CreateVacancy({});
    cy.AddCandidate({});
    cy.AddAdditionalDocs({});
  });
});
Another option could be to move the before() hook with the faker.js code to cypress/support/index.js, but that only helps for "Run all specs" in cypress open, where the index file is loaded once. With cypress run, index.js is loaded once before every spec file, so the data would change between specs.
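Since the custom commands aren't shown, this is only a sketch with a hypothetical options shape, but the idea is that both it() blocks read the same module-level variables, so faker runs once per spec file and the data stays consistent:

// Hypothetical: assumes cy.Register / cy.CreateVacancy accept an options object.
// Both blocks reference the same top-level randomName / randomEmail values.
it("Registers and creates profile", function () {
  cy.visit(Cypress.env("home_page"));
  cy.Register({ name: randomName, email: randomEmail });
});

it("Creates a vacancy and adds candidates", function () {
  cy.CreateVacancy({ createdBy: randomEmail });
});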
I'm building a command-line application in NodeJS and I want to thoroughly test it using Jasmine.
I've implemented a promptUser() method which uses Node's readline.createInterface method to pose a question and pipe the response into a callback. I want to test that, given a user response of 'q', my module's quit() function is called.
However, I'm struggling to test this. I don't really want to test the readline method directly, since I didn't write that code, but I reasoned that if I can attach a listener to process.stdout.write, then when 'enter command: ' is printed to the screen I can respond with process.stdin.write("q\n") and trigger the if/else logic.
I've simplified the code, but should explain what I'm trying to do:
Module source code:
var Cli = function() {
  var rl = require('readline');
  var self = this;

  Cli.prototype.promptUser = function() {
    var inputHandler = rl.createInterface(process.stdin, process.stdout);
    inputHandler.question('enter command: ', function(answer) {
      if (answer === 'q') {
        self.quit();
      }
    });
  };

  Cli.prototype.quit = function() {
    // doSomething
  };
};

module.exports = Cli;
Jasmine test:
var Cli = require('Cli');

describe('My application.', function() {
  var cli;

  beforeEach(function() {
    cli = new Cli();
    spyOn(cli, 'quit');
  });

  describe('Cli #promptUser', function() {
    it('input of lower-case q calls cli.quit()', function() {
      process.stdout.once('write', function() {
        process.stdin.write("q\n");
      });
      cli.promptUser();
      expect(cli.quit).toHaveBeenCalled();
    });
  });
});
I'm looking to either make this approach work or find a better way to test my code. I suspect there is probably a superior/more direct approach.
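One more direct approach (a sketch, not the only option) is to stub readline.createInterface with a Jasmine spy so the question callback can be invoked synchronously, avoiding stdin/stdout entirely. This assumes Jasmine 2 spy syntax (.and.returnValue) and relies on Node's module cache handing the same readline object to the Cli module:

var readline = require('readline');
var Cli = require('Cli');

describe('Cli #promptUser', function() {
  var cli;

  beforeEach(function() {
    cli = new Cli();
    spyOn(cli, 'quit');
    // Replace createInterface with a fake whose question() immediately
    // answers 'q', simulating the user typing q and pressing enter.
    spyOn(readline, 'createInterface').and.returnValue({
      question: function(prompt, callback) {
        callback('q');
      }
    });
  });

  it('input of lower-case q calls cli.quit()', function() {
    cli.promptUser();
    expect(cli.quit).toHaveBeenCalled();
  });
});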
I'm new to unit testing, so please forgive me if my question is silly. I wrote a unit test using Mocha with PhantomJS and Chai as the assertion library. The code that I want to test is the following function:
function speakingNotification(audioStream) {
  var options = {};
  var speechEvents = hark(audioStream, options);

  speechEvents.on('speaking', function() {
    return 'speaking';
  });

  speechEvents.on('stopped_speaking', function() {
    return 'stopped_speaking';
  });
}
As you can see, it takes an audioStream parameter as input and then uses a library called hark.js (https://github.com/otalk/hark) to detect speaking events. The function should report whether the user is speaking or not.
So I wrote the following unit test:
describe('Testing speaking notification', function () {
  describe('Sender', function () {
    var audio = document.createElement('audio');
    audio.src = 'data:audio/mp3;base64,//OkVA...'; // audio file with sound

    var noAudio = document.createElement('audio');
    noAudio.src = 'data:audio/mp3;base64,...'; // audio file with no sound

    it('should have a function named "speakingNotification"', function () {
      expect(speakingNotification).to.be.a('function');
    });

    it('speaking event', function () {
      var a = speakingNotification(audio);
      this.timeout(10000);
      expect(a).to.equal('speaking');
    });

    it('stoppedSpeaking event', function () {
      var a = speakingNotification(noAudio);
      this.timeout(10000);
      expect(a).to.equal('stopped_speaking');
    });
  });
});
The test fails and shows:
AssertionError: expected undefined to equal 'speaking'
AssertionError: expected undefined to equal 'stopped_speaking'
I also tried to use done() instead of the timeout, but the test fails and shows:
ReferenceError: Can't find variable: done
I searched for tutorials, but I can only find simple examples that don't help.
How can I write a correct test?
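The expectations receive undefined because the return statements inside the 'speaking'/'stopped_speaking' handlers are lost: the outer function has already returned by the time hark fires the events. One way to make this testable (a sketch that keeps your hark usage but adds a hypothetical onEvent callback parameter) is to report events through a callback and let Mocha's done signal when the async assertion has run:

function speakingNotification(audioStream, onEvent) {
  var speechEvents = hark(audioStream, {});

  speechEvents.on('speaking', function () {
    onEvent('speaking');
  });

  speechEvents.on('stopped_speaking', function () {
    onEvent('stopped_speaking');
  });
}

it('speaking event', function (done) {
  this.timeout(10000);
  // done is a parameter of the test function, not a global --
  // declaring it here is what makes Mocha wait for the callback.
  speakingNotification(audio, function (event) {
    expect(event).to.equal('speaking');
    done();
  });
});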
Given the following code, I have managed to write a test using QUnit for the first part, but was unable to test finder.doRoutefinding. How can I 'mock' the function finder.doRoutefinding? (Mockjax cannot be used here since no AJAX calls are involved.)
finder.doSelectDestination = function (address) {
    finder.destination = address;
    finder.doRoutefinding(
        finder.departure,
        finder.destination,
        finder.whenRouteLoaded,
        finder.showRoute);
};
test('Destination Selector', function () {
    var address = "London";
    finder.doSelectDestination(address);
    equal(finder.destination, address, "Successful destination selection");
});
There are caveats, but you could simply replace the function with your mock:
var originalDoRoutefinding = finder.doRoutefinding;
finder.doRoutefinding = function () {
    // Mock code here.
};

// Test code here.

finder.doRoutefinding = originalDoRoutefinding;
If that kind of thing works for you, you might consider using a library like Sinon.JS.
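For reference, roughly the same test written with Sinon.JS might look like this (a sketch assuming sinon is loaded alongside QUnit); the stub records its calls so the test can assert on them, and restore() puts the original function back:

test('Destination Selector calls doRoutefinding', function () {
    var stub = sinon.stub(finder, 'doRoutefinding');

    finder.doSelectDestination('London');

    equal(finder.destination, 'London', 'destination stored');
    ok(stub.calledOnce, 'doRoutefinding called exactly once');
    ok(stub.calledWith(finder.departure, 'London'), 'called with departure and destination');

    stub.restore();
});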
I'm trying to combine the three tasks I need for debugging into a single one. Of course, since gulp is asynchronous by nature, I have problems with that. So I searched and found a solution: use the run-sequence module. I tried the following code, but it doesn't seem to be working as intended; the tasks are not running in sequence.
Here's what I tried. Any thoughts? I don't want to have to run all three commands separately to complete the tasks. How can I do that?
var gulp = require('gulp'),
    useref = require('gulp-useref'),
    gulpif = require('gulp-if'),
    debug = require('gulp-debug'),
    rename = require("gulp-rename"),
    replace = require('gulp-replace'),
    clean = require('gulp-clean'), // needed for the clean() call below
    runSequence = require('run-sequence'),
    path = '../dotNet/VolleyManagement.UI';

gulp.task('debug', function () {
    gulp.src('client/*.html')
        .pipe(debug())
        .pipe(gulp.dest(path + '/Areas/WebAPI/Views/Shared'));
});

gulp.task('rename', function () {
    gulp.src(path + '/Areas/WebAPI/Views/Shared/index.html')
        .pipe(rename('/Areas/WebAPI/Views/Shared/_Layout.cshtml'))
        .pipe(gulp.dest(path));

    gulp.src(path + '/Areas/WebAPI/Views/Shared/index.html', { read: false })
        .pipe(clean({ force: true }));
});

gulp.task('final', function () {
    gulp.src([path + '/Areas/WebAPI/Views/Shared/_Layout.cshtml'])
        .pipe(replace('href="', 'href="~/Content'))
        .pipe(replace('src="', 'src="~/Scripts'))
        .pipe(gulp.dest(path + '/Areas/WebAPI/Views/Shared/'));
});

gulp.task('debugAll', runSequence('debug', 'rename', 'final'));
In gulp you can actually set dependent tasks. Try this:
gulp.task('debug', function () {
    // run debug task
});

gulp.task('rename', ['debug'], function () {
    // run rename once debug is done
});
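Extending that idea to all three tasks (a sketch; note that with dependency-based ordering each task still needs to return its stream, or take a callback, so gulp knows when it has finished):

gulp.task('final', ['rename'], function () {
    // run final once rename is done
});

// 'debugAll' just depends on 'final', which pulls in the whole chain
gulp.task('debugAll', ['final']);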
I think you are not defining the 'debugAll' task correctly. Try it like this:
gulp.task('debugAll', function () {
    runSequence('debug', 'rename', 'final');
});
You also need to return the stream from those tasks; just add 'return' in front of gulp.src in each of them: debug, rename, final. Here is the example for the 'debug' task:
gulp.task('debug', function () {
    return gulp.src('client/*.html')
        .pipe(debug())
        .pipe(gulp.dest(path + '/Areas/WebAPI/Views/Shared'));
});
Both items are mentioned in the docs: https://www.npmjs.com/package/run-sequence
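Putting both points together, run-sequence also accepts a completion callback, so 'debugAll' can signal gulp when the whole sequence has finished (a sketch; the sub-tasks still need to return their streams as shown above):

gulp.task('debugAll', function (callback) {
    runSequence('debug', 'rename', 'final', callback);
});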