Protractor JS testing workflow - javascript

I've recently started getting into Angular (with Node.js), and many of the tutorials suggest using Protractor, which looks amazing. One thing has me confused though.
I'm used to tests where test data is built before the test, the test is run, and the data is destroyed.
With Protractor it seems you start your server and have your tests run against that server. In the tutorials I've seen, this server is usually the dev environment (populated by seed data, I assume). In my experience, the dev database changes as you play around and tweak your app. Furthermore, a Protractor test might delete an object, meaning that for the test to be re-run the object would have to be built again.
When using Protractor, what is the standard practice for creating a test environment, with before/after hooks to populate and clean up data? Bonus points if you can point me to some good resources that answer my question.

Depends on how PRO you want to go. Are you interested in testing in dev only? Do you have other environments? How often do you want to test? I test in different environments. One of them has no data because the database is created before running the tests. Other environments have a lot of data.
I gave a talk in the Angular meetup in NY a few months back:
https://github.com/andresdominguez/protractor-meetup
Take a look at slide 35 of the presentation (the link is in the readme file).
I call the REST API directly to generate data for my tests. You can also run a script before you run your tests to make sure that some objects are present.
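For example, here is a minimal sketch of seeding data from Protractor's onPrepare hook before the suite runs. The /test-api/seed endpoint and its payload are hypothetical stand-ins for whatever your REST API exposes:

protractor.conf.js
exports.config = {
  specs: ['specs/*.spec.js'],
  onPrepare: function () {
    var http = require('http');
    // Returning a promise makes Protractor wait until seeding is done.
    return new Promise(function (resolve, reject) {
      var req = http.request({
        method: 'POST',
        host: 'localhost',
        port: 3000,
        path: '/test-api/seed',   // hypothetical seeding endpoint
        headers: { 'Content-Type': 'application/json' }
      }, function (res) {
        res.statusCode === 200 ? resolve() : reject(new Error('Seeding failed'));
      });
      req.on('error', reject);
      req.end(JSON.stringify({ users: [{ name: 'test-user' }] }));
    });
  }
};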

How to unit test Firefox 57 WebExtensions?

The older Firefox "Add-ons" API had a built-in unit-test layer, sdk/test, that allowed testing. This doesn't seem to be available any more.
Additionally, the use of "package/require" allowed code to be separated into JS-code-only packages that were testable using Node.js. The new, highly structured JavaScript doesn't share this.
My priorities are (highest to lowest):
Algorithms, "business logic", e.g. parsing input data - no APIs needed - just JavaScript
Internal logic - e.g. background scripts interacting with settings, etc.
UI interactions - I can live without this, but would be nice to test
So how do people test their WebExtensions?
Check out webextension-geckodriver for a worked example of functional testing.
If you want to test interaction with the webextension API you can either do it live (have a test page for your extension and get geckodriver to visit it, for example) or use a fake like sinon-webextension through webextension-jsdom.
To unit test algorithms, just import the functions using Jest, Mocha, or whatever Node unit-testing framework you prefer, or add them to a test page that you can visit in the browser.
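For instance, a minimal Mocha sketch; parseInput and its module path are hypothetical placeholders for your extension's pure functions:

test/parser.test.js
// Run with `mocha`. Uses only Node's built-in assert module.
const assert = require('assert');
const { parseInput } = require('../src/parser'); // hypothetical module

describe('parseInput', function () {
  it('splits a comma-separated line into fields', function () {
    assert.deepStrictEqual(parseInput('a,b,c'), ['a', 'b', 'c']);
  });
});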
A complete, but old, worked example of webext testing is here: example-webextension.
An example of tests in a real webextension using another fake: vim-vixen
It is still possible to run unit tests in Node.js.
To illustrate the idea, let us take a look at the Cliqz extension, whose source code is open (GitHub link: cliqz-oss/browser-core). Compared to other extensions that I have seen so far, the code base is quite large.
In other words, it is not a toy example, but a realistic use case. The drawback, of course, is that due to the complexity it is harder to understand how the test setup works (how mocking works, the integration into the build system, etc.).
To get an idea of what the tests look like, here is one example:
source: query-sanitizer.es
test: query-sanitizer-test.es
Explaining the details of how the mocking works is hard, because you would need to dive deep into the build system. From a high-level perspective, you will notice that the test uses a function called describeModule, which does all the mocking of dependencies.
In the implementation of describeModule, you can see that it uses SystemJS to dynamically load ES modules. That trick makes it possible to run the unit tests with Node.js.
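From memory, the shape of such a test is roughly the following. Treat it as an illustration of the pattern, not the exact describeModule API, which lives in the Cliqz build system:

// A rough sketch (not the exact Cliqz API): the second argument returns
// a map of module paths to mocked exports, and the loader (SystemJS)
// wires the mocks in before loading the module under test.
export default describeModule('core/query-sanitizer',
  () => ({
    // hypothetical dependency, replaced by a stub for the test
    'core/logger': { default: { log() {} } },
  }),
  () => {
    describe('#sanitize', function () {
      it('strips trailing whitespace', function () {
        const sanitize = this.module().default; // accessor name is an assumption
        chai.expect(sanitize('query  ')).to.equal('query');
      });
    });
  }
);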
My priorities are (highest to lowest):
Algorithms, "business logic", e.g. parsing input data - no APIs needed - just JavaScript
For these kinds of tests, the unit-test infrastructure described above is the preferred way. For local development, Node.js is used to run the tests.
Internal logic - e.g. background scripts interacting with settings, etc.
This is not so different. The idea is still that you can mock dependencies and then do classical unit tests.
It might require some work to allow dependencies to be replaced. As mentioned, because the code base in this example has to run in different environments (e.g., Firefox, Chrome, Edge, React Native), platform APIs have to be abstracted (that also includes browser APIs).
UI interactions - I can live without this, but would be nice to test
For testing the UI, there are additional integration tests. I do not want to go into details, but there are examples in the code.
What is important is that the integration tests are not executed with Node.js; they require a real browser environment (e.g., Firefox, Chrome). In addition, a local HTTP server is started which can be used to mock API calls.
As a side note, linters are extremely useful and comparatively easy to set up. Also consider using a typed language (TypeScript), especially when the project becomes bigger and more people are working on it.
You will still need tests, as static analysis will not be able to find logical bugs. However, it helps to eliminate certain types of simple bugs like typos and the overhead (fixing linter errors or adding type annotations) is not very high.

How do I automate Ember testing with Dalek (setup/teardown of specific Ember components)?

My TL;DR version of the question is: "Is there a way I can integrate with qunit such that Dalek can get the correct context when it needs it, or conversely, can I get Dalek to run setup/teardown of asset-pipeline-compiled Ember JavaScript to build a context for it to run tests on?"
First up, Dalek looks awesome! All my tests are currently written in qunit. I'm having some problems automating tests around a component I'm building in Ember. The component is a kind of WYSIWYG textarea.
(BTW, my qunit tests are being driven from a route within a rails application.)
To automate testing, my qunit scripts have a setup and tearDown that create a pristine textarea each time. Each test creates some content in the textarea, then interacts with it somehow doing some assertions on it.
That's all well and good, except that I require much better browser simulation than qunit can provide me with (and I'm really running out of patience for writing my own range-related browser-response simulation code).
The things I need to do mostly are:
1. Move the caret around using arrow keys, and type characters.
2. Click at specific points in the textarea (not x,y coordinates, but rather specific points in the text).
It struck me that Dalek could totally help with this, but given my workflow, I think I'd either need Dalek to be remote-controllable from my qunit tests, or else I'd have to rewrite my tests in Dalek. To do that, I'd need Dalek to use jQuery and Ember to create the component and data context for the setup/teardown, and I'm not even sure Dalek supports that.
What I really need is part-integration, part-unit testing, and there doesn't seem to be a great answer in the JS/Rails/Ember testing space that will handle this set of conditions.
I fear DalekJS is not the tool you need right now, even if it is "my little tool". I suggest using Karma (formerly called Testacular), which was originally developed to test AngularJS applications: http://karma-runner.github.io/0.12/index.html
You could use it together with Protractor https://github.com/angular/protractor
It depends on Selenium, but is fairly easy (compared to some other tools) to set up.
There is also a manual on how to use it in combination with Ember: http://karma-runner.github.io/0.10/plus/emberjs.html

How to do Integration Testing for (AngularJS) Web Apps

I'm developing a web app.
It consists of 2 parts: a Node REST server and an AngularJS client.
The app is structured this way: REST Server <--> API Module <--> Angular App
The server is currently well tested.
I have Unit Tests and Integration Tests.
The Integration Tests are accessing a real database and calling the rest api over http.
I think this is as high level as it can get for the server testing.
The integration tests run fast, too.
I'm pretty confident that the way I tested the server is sufficient for my use case and I'm happy with the results.
However I'm struggling how to test the angularjs app.
I have unit tests for the relevant directives and modules. Writing these wasn't an issue.
I would like to write integration tests that cover user scenarios.
Something like a signup scenario: The user visits the website, goes to the signup form, and submits the form with the data.
The AngularJS team is moving from ng-scenario to Protractor.
Protractor uses Selenium to run the tests.
Therefore there are two scopes: The app scope and the test scope.
Now I can think of three different abstractions I could use.
And I'm not sure which one suits me best.
Mock the API Module
Mock the REST Server
Use the full server
Mock the API Module
In this case I would not need to set up a server. All interactions run in the browser.
Advantage:
No server is needed
Disadvantage:
The API is in the browser scope and I have to tamper with it.
I really like this solution, but I find it difficult to mock the API.
The API needs to be modified in the browser's scope.
Therefore I need to send the modification from the test to the browser.
This can be done; however, I don't see how I could run assertions like mockedApi.method.wasCalledOnce() in the test scope.
Mock the REST Server
Advantage:
Client would be unchanged
Only one scope to deal with
Disadvantage:
One has to set up the REST routes
I could create a complete mock REST server in Node.js.
Protractor tests are written in Node.js, so the server can be controlled from within the test.
Before I run the test I can tell the server how to respond.
Something like this: server.onRequest({method: 'GET', url: '/'}).respondWith('hello world')
Then I could do assertions like wasCalledOnce
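A minimal sketch of such a mock server, assuming Express; the onRequest/respondWith names simply mirror the pseudo-API above, and the call recording enables wasCalledOnce-style assertions:

mock-server.js
const express = require('express');

function createMockServer(port) {
  const stubs = [];
  const calls = [];
  const app = express();

  // Record every request, then answer from the configured stubs.
  app.use(function (req, res) {
    calls.push({ method: req.method, url: req.url });
    const stub = stubs.find(s => s.method === req.method && s.url === req.url);
    if (stub) {
      res.send(stub.body);
    } else {
      res.status(404).end();
    }
  });

  const server = app.listen(port);
  return {
    onRequest: function (matcher) {
      return {
        respondWith: function (body) {
          stubs.push({ method: matcher.method, url: matcher.url, body: body });
        }
      };
    },
    // e.g. expect(server.countCalls('GET', '/')).toBe(1);
    countCalls: function (method, url) {
      return calls.filter(c => c.method === method && c.url === url).length;
    },
    close: function () { server.close(); }
  };
}

module.exports = { createMockServer: createMockServer };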
Use the full Server with Database
Each test runs against a complete server and can add elements to the database.
After each test, one can check for the expected elements in the database.
Advantage:
Can be pretty sure that if these tests pass, the app is functional in the tested use case
Disadvantage:
I already made fairly intensive integration tests of the REST server. This feels like doing the same thing again.
Setup depends on the full server
Current Conclusion
Mocking the API would separate the server and the client completely.
Using a mock API would be a higher-level test, but would require a fake server.
Doing a full integration test would give the best reliability, but it is also highly dependent on the server code.
What should I pick? What would you do?
I think I answered this same question in the Protractor Google group. I am much of the same mind as you: I want no server, but I also want all of my test code in one place (in Protractor) and not split between Protractor and the browser. To enable this, I took matters into my own hands and developed a proxy for the $httpBackend service which runs within Protractor. It allows one to configure the $httpBackend service as if it were running in Protractor. I have been working on it for a while now and it's reasonably full-featured at this point. It would be great if you could take a look and let me know if I am missing anything important.
https://github.com/kbaltrinic/http-backend-proxy
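From the project's description, usage in a Protractor spec looks roughly like this. The method names mirror Angular's $httpBackend, but check the project's README for the actual API before relying on it:

// A rough sketch: the proxy is configured in Protractor (Node) and
// forwards the configuration to $httpBackend in the browser.
var HttpBackend = require('http-backend-proxy');
var proxy = new HttpBackend(browser);

it('shows the user name from a stubbed backend', function () {
  proxy.whenGET('/api/user').respond(200, { name: 'Bob' }); // assumed signature
  browser.get('/#/profile');
  expect(element(by.binding('user.name')).getText()).toEqual('Bob');
});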
It is an excellent question, which has nothing to do with a particular tool. I had to face the same problem on a big "greenfield" (i.e. started from scratch) project.
There is a problem of vocabulary here: the word "mock" is used everywhere, and what you call "integration tests" are more like "full end-to-end automated functional tests". No offence here; it's just that clear wording will help solve the problem.
You actually suggested the correct answer yourself: #2, stub the REST server. #1 is feasible but will soon become too hard to develop and maintain; #3 is an excellent idea but has nothing to do with UI testing and UI validation.
To achieve high reliability of your front end, independently of your backend, just stub the REST server, i.e. develop a dead-simple REST server that is idempotent, i.e. will ALWAYS give the same answer to a given HTTP request. Keeping the idempotence principle will make development and testing much, much easier than any other option.
Then for each test, you only check what is displayed on the screen (test the top) and what is sent to the server (test the bottom), so that the full UI stack is tested only once.
The full answer to the question deserves an entire blog article, but I hope you can get a feel for what to do from what I suggest.
Best regards
Here is an approach for writing integration tests for your Angular code. The key concept is to structure your code in a way that lets you invoke the various functions very similarly to how they are consumed by the UI. Properly decoupling your code is important to be successful at this, though:
More here: http://www.syntaxsuccess.com/viewarticle/angular-integration-tests
This is a great question. This is how I would do it:
As you already have the Angular unit tests for the relevant directives and modules, this is perfect.
The other thing that is perfect is that your server integration tests are accessing a real database and are also making sure the REST API over HTTP works.
So why not just add some high-level integration tests that exercise Angular and your server at the same time?
If you can avoid mocking, you also save the work of maintaining the extra code.
Also a good read: http://blog.ericbmerritt.com/2014/03/25/mocking-is-evil.html
Mocking the REST server is the best, cleanest option in my opinion. Try Mountebank (http://www.mbtest.org), an amazing service virtualization tool.

Get JavaScript test output into Hudson

I'm writing an automation program for a web application. I am accessing the web application through a JavaScript API and have wrapper functions with custom assertions that currently just write output to a table in an HTML page.
Now I need to get the data output into my hudson (https://hudson.dev.java.net/) automation, where I have a lot of flexibility when it comes to arranging, sharing and presenting the results.
When I wrote NUnit tests, the Hudson integration was impeccable. I saw there was a thing called JSUnit, but it is no longer actively maintained(?), so maybe I shouldn't spend too much time learning it?
I have seen that tools like Firebug can output JavaScript results to a console, though I don't know where to go from there. The console output seems to stay in Firefox and go no further.
Any help or tips are most welcome.
Thanks!
/ Jakob
If I understand correctly, you want your Hudson build to run a test of your Web Application which is set up and running somewhere else. (This gets a little harder if you're also building your Web Application and want to set it up for a test run all inside Hudson.)
The easy option: As one of your build steps, retrieve the HTML page with your output and tell Hudson that the page is a build artifact. That way you can look at the test output manually.
Somewhat harder: change your test output (or pass a parameter to specify the format) to match the XML format used by NUnit -- see example XML output. This is a direct link to an XML file and may not display well in your browser; try viewing source or saving as text.
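For reference, NUnit 2.x result XML has roughly this shape (simplified from memory; compare against real NUnit output before relying on it):

results.xml
<test-results name="webapp-tests" total="2" failures="1" not-run="0">
  <test-suite name="AssertionWrappers" success="False">
    <results>
      <test-case name="AssertionWrappers.loginFormValidates" executed="True" success="True"/>
      <test-case name="AssertionWrappers.menuRenders" executed="True" success="False">
        <failure>
          <message>Expected 5 menu items but found 4</message>
        </failure>
      </test-case>
    </results>
  </test-suite>
</test-results>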
Update: On re-reading your question, it wasn't clear to me whether you were interested solely in Hudson integration (which my original answer assumed), or in other possibilities for testing frameworks.
Depending on what you want to test:
you might look at testing your Web Application with Selenium. I know there's a Hudson plugin for Selenium, but I've also noticed several questions here recently describing problems with Selenium+Hudson. I don't have any experience with the combination myself.
there are lots of javascript testing frameworks with different capabilities.

Developing UI in JavaScript using TDD Principles

I've had a lot of trouble trying to come up with the best way to properly follow TDD principles while developing UI in JavaScript. What's the best way to go about this?
Is it best to separate the visual from the functional? Do you develop the visual elements first, and then write tests and then code for functionality?
I've done some TDD with JavaScript in the past, and what I had to do was make the distinction between unit and integration tests. Selenium will test your overall site, with the output from the server, its postbacks, AJAX calls, all of that. But for unit testing, none of that is important.
What you want is just the UI you are going to be interacting with, and your script. The tool you'll use for this is basically JsUnit, which takes an HTML document with some JavaScript functions on the page and executes them in the context of the page. So what you'll be doing is including the stubbed-out HTML on the page with your functions. From there, you can test the interaction of your script with the UI components in the isolated unit of the mocked HTML, your script, and your tests.
That may be a bit confusing, so let's see if we can do a little test. Let's do some TDD, assuming that after a component is loaded, a list of elements is colored based on the content of each LI.
tests.html
<html>
  <head>
    <script src="jsunit.js"></script>
    <script src="mootools.js"></script>
    <script src="yourcontrol.js"></script>
  </head>
  <body>
    <ul id="mockList">
      <li>red</li>
      <li>green</li>
    </ul>
    <script>
      function testListColor() {
        // Before the control runs, the first LI should not be colored yet.
        assertNotEquals( "red", $$("#mockList li")[0].getStyle("background-color") );
        var colorInst = new ColorCtrl( "mockList" );
        // After the control runs, each LI is colored from its own text.
        assertEquals( "red", $$("#mockList li")[0].getStyle("background-color") );
      }
    </script>
  </body>
</html>
Obviously TDD is a multi-step process, so for our control, we'll need multiple examples.
yourcontrol.js (step1)
function ColorCtrl( id ) {
  /* Fail! */
}
yourcontrol.js (step2)
function ColorCtrl( id ) {
  // Color each LI based on its own text content.
  $$("#" + id + " li").forEach(function(item, index) {
    item.setStyle("background-color", item.getText());
  });
  /* Success! */
}
You can probably see the pain point here: you have to keep the mock HTML on the page in sync with the structure your server controls will produce. But it does get you a nice system for TDDing with JavaScript.
I've never successfully TDDed UI code. The closest we came was indeed to separate UI code as much as possible from the application logic. This is one reason why the model-view-controller pattern is useful - the model and controller can be TDDed without much trouble and without getting too complicated.
In my experience, the view was always left for our user-acceptance tests (we wrote web applications and our UATs used Java's HttpUnit). However, at this level it's really an integration test, without the test-in-isolation property we desire with TDD. Due to this setup, we had to write our controller/model tests/code first, then the UI and corresponding UAT. However, in the Swing GUI code I've been writing lately, I've been writing the GUI code first with stubs to explore my design of the front end, before adding to the controller/model/API. YMMV here though.
So to reiterate, the only advice I can give is what you already seem to suspect - separate your UI code from your logic as much as possible and TDD them.
See also: JavaScript unit test tools for TDD
I've found the MVP architecture to be very suitable for writing testable UIs. Your Presenter and Model classes can simply be 100% unit tested. You only have to worry about the View (which should be a dumb, thin layer that only fires events to the Presenter) for UI testing (with Selenium etc.).
Note that I'm talking about using MVP entirely in the UI context, without necessarily crossing to the server side. Your UI can have its own Presenter and Model that live entirely on the client side. The Presenter drives the UI interaction/validation logic, while the Model keeps state information and provides a portal to the backend (where you can have a separate Model).
You should also take a look at the Presenter First TDD technique.
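To make that concrete, here is a minimal, hypothetical sketch of a client-side Presenter with the kind of dumb View that makes it unit-testable; all names are made up:

presenter.js (sketch)
// The View only forwards events and exposes setters, so the Presenter
// can be unit tested against a hand-rolled fake view.
function LoginPresenter(view, model) {
  view.onSubmit(function (username) {
    if (!username) {
      view.showError("Username is required");
      return;
    }
    model.login(username);
  });
}

// In a unit test, pass in fakes and assert against them:
var shownErrors = [];
var fakeView = {
  onSubmit: function (handler) { this.submit = handler; },
  showError: function (msg) { shownErrors.push(msg); }
};
var fakeModel = { login: function () {} };

LoginPresenter(fakeView, fakeModel);
fakeView.submit("");  // simulate the UI event
// shownErrors is now ["Username is required"]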
This is the primary reason I switched to the Google Web Toolkit ... I develop and test in Java and have a reasonable expectation that the compiled JavaScript will function properly on a variety of browsers. Since TDD is primarily a unit testing function, most of the project can be developed and tested before compilation and deployment.
Integration and Functional test suites verify that the resulting code is functioning as expected after it's deployed to a test server.
I'm just about to start doing JavaScript TDD on a new project I am working on. My current plan is to use qunit to do the unit testing. While developing, the tests can be run by simply refreshing the test page in a browser (a minimal example page is sketched after the tool list below).
For continuous integration (and ensuring the tests run in all browsers), I will use Selenium to automatically load the test harness in each browser, and read the result. These tests will be run on every checkin to source control.
I am also going to use JSCoverage to get code coverage analysis of the tests. This will also be automated with Selenium.
I'm currently in the middle of setting this up. I'll update this answer with more exact details once I have the setup hammered out.
Testing tools:
qunit
JSCoverage
Selenium
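For illustration, a minimal test page in the style of the old global QUnit API; yourapp.js and the add function are hypothetical stand-ins for the code under test:

test-page.html
<html>
  <head>
    <link rel="stylesheet" href="qunit.css">
    <script src="qunit.js"></script>
    <script src="yourapp.js"></script>
  </head>
  <body>
    <div id="qunit"></div>
    <div id="qunit-fixture"></div>
    <script>
      // Old-style global API; newer QUnit versions use QUnit.test(...).
      test("add sums two numbers", function () {
        equal(add(2, 3), 5, "2 + 3 should be 5");
      });
    </script>
  </body>
</html>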
What I do is poke the DOM to see if I'm getting what I expect. A great side effect of this is that in making your tests fast, you also make your app fast.
I just released an open-source toolkit which will help with JavaScript TDD immensely. It is a composition of many open-source tools which gives you a working requirejs Backbone app out of the box.
It provides single commands to run: a dev web server, a Jasmine single-browser test runner, a Jasmine js-test-driver multi-browser test runner, and concatenation/minification for JavaScript and CSS. It also outputs an unminified version of your app for production debugging, precompiles your Handlebars templates, and supports internationalization.
No setup is required. It just works.
http://github.com/davidjnelson/agilejs
