There is a web application that needs to be tested. The application uses AJAX and jQuery, and tests have to be written for all possible interactions with the browser and the client side. There are some tools for this, for example Selenium IDE, but I wonder whether it is possible to use a headless browser.
So, the requirements for the testing system are:
Query pages from the remote server and simulate browser behavior (basically, we give the headless browser a URL, it fetches the page, and the tests are run against it);
Inject the JavaScript under test, or test JavaScript that is already loaded on the remote page;
Use any testing framework that can be integrated with CI software (Jasmine, Mocha, etc.).
It is possible to use mocking techniques when dealing with AJAX requests, for example, but I'm trying to test a real-life application. I hope this question will be useful to somebody.
As far as I have investigated this topic, there is no ready-made way of doing this so far.
In my case I have a server-side PHP application that talks to the outside world through a REST interface. My JavaScript code talks to the server and performs interface manipulations depending on the responses. So, my goal is to test the JavaScript code, but it relies heavily on the server side. That leaves me with two ways of testing the JavaScript:
Using mocks; consider looking up this article. You basically simulate your server-side API. The problem with this method is that whenever you change your API, you have to make the corresponding changes in your mocks so that the test suite stays up to date. A sketch of this approach is shown after this list.
Calling JavaScript testing utilities directly from PHPUnit tests (or whatever server-side testing framework is used). Unfortunately, there is no solution for this yet, but this method would save a lot of developer time (no need to rewrite mocks for 100-200 example queries), and we could also steer the server's behavior on the fly.
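For the first option, a minimal sketch using Sinon's fake XHR server together with Jasmine might look like this; the /api/users endpoint, the response body, and the loadUserList function under test are all made up for illustration:

```javascript
describe('user list widget', function () {
  var server;

  beforeEach(function () {
    // Intercept all XHR requests made by jQuery during the test.
    server = sinon.fakeServer.create();
  });

  afterEach(function () {
    server.restore();
  });

  it('renders users returned by the API', function () {
    server.respondWith('GET', '/api/users', [
      200,
      { 'Content-Type': 'application/json' },
      JSON.stringify([{ id: 1, name: 'Alice' }])
    ]);

    loadUserList();   // hypothetical code under test that issues the AJAX request
    server.respond(); // flush the queued fake response

    expect($('#user-list li').length).toBe(1);
  });
});
```

The downside mentioned above is visible here: every canned response has to be kept in sync with the real API by hand.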
Please give feedback on the second approach. If it is really needed, I guess it makes sense to implement it.
I'm developing a web app.
It consists of two parts: a Node REST server and an AngularJS client.
The app is structured this way: Rest Server <--> Api Module <--> Angular App
The server is currently well tested.
I have Unit Tests and Integration Tests.
The integration tests access a real database and call the REST API over HTTP.
I think this is as high level as it can get for the server testing.
The integration tests run fast, too.
I'm pretty confident that the way I tested the server is sufficient for my use case and I'm happy with the results.
However, I'm struggling with how to test the AngularJS app.
I have unit tests for the relevant directives and modules. Writing these wasn't an issue.
I would like to write integration tests that cover user scenarios.
Something like a signup scenario: The user visits the website, goes to the signup form, and submits the form with the data.
The AngularJS team is moving from ng-scenario to Protractor.
Protractor uses Selenium to run the tests.
Therefore there are two scopes: The app scope and the test scope.
Now I can think of three different abstractions I could use.
And I'm not sure which one suits me best.
Mock the Api Module
Mock the Rest Server
Use the full server
Mock the Api Module
In this case I would not need to set up a server. All interactions run in the browser.
Advantage:
No server is needed
Disadvantage:
The API is in the browser scope and I have to tamper with it.
I really like this solution, but I find it difficult to mock the API.
The API needs to be modified in the browser's scope.
Therefore I need to send the modification from the test to the browser.
This can be done; however, I don't see how I could run assertions like mockedApi.method.wasCalledOnce() in the test scope.
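For reference, one common way of pushing a mock into the browser's scope is Protractor's browser.addMockModule; the module and service names below are hypothetical, and, as noted above, this still gives you no way to assert call counts back in the test scope:

```javascript
// Register an extra Angular module that is loaded after the app module,
// so its 'Api' factory overrides the real one (names are hypothetical).
// Must be called before browser.get() so the module is loaded with the app.
browser.addMockModule('apiMock', function () {
  angular.module('apiMock', []).factory('Api', function ($q) {
    return {
      signup: function () { return $q.when({ id: 1 }); }
    };
  });
});
```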
Mock the Rest Server
Advantage:
Client would be unchanged
Only one scope to deal with
Disadvantage:
One has to set up the REST routes
I could create a complete mock REST server in Node.js.
Protractor tests are written in Node.js, so the server can be controlled from within the test.
Before I run the test I can tell the server how to respond.
Something like this: server.onRequest({method: 'GET', url: '/'}).respondWith('hello world')
Then I could do assertions like wasCalledOnce
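A rough sketch of such a configurable mock REST server, built on Express; the onRequest/respondWith/callCount API is invented here to match the idea above, not an existing library:

```javascript
var express = require('express');

function createMockServer(port) {
  var app = express();
  var stubs = [];
  var calls = [];

  // Record every incoming request and answer it from the configured stubs.
  app.use(function (req, res) {
    calls.push({ method: req.method, url: req.url });
    var stub = stubs.filter(function (s) {
      return s.method === req.method && s.url === req.url;
    })[0];
    if (stub) {
      res.send(stub.body);
    } else {
      res.status(404).end();
    }
  });

  var listener = app.listen(port);

  return {
    onRequest: function (spec) {
      return {
        respondWith: function (body) {
          stubs.push({ method: spec.method, url: spec.url, body: body });
        }
      };
    },
    callCount: function (spec) {
      return calls.filter(function (c) {
        return c.method === spec.method && c.url === spec.url;
      }).length;
    },
    close: function () { listener.close(); }
  };
}
```

In a Protractor spec you could then call server.onRequest({ method: 'GET', url: '/' }).respondWith('hello world') in a beforeEach and afterwards assert something like server.callCount({ method: 'GET', url: '/' }) === 1.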
Use the full Server with Database
Each test runs against a complete server and can add elements to the database.
After each test, one can check for the expected elements in the database.
Advantage:
One can be pretty sure that if these tests pass, the app is functional in the tested use case
Disadvantage:
I already have fairly intensive integration tests for the REST server. This feels like doing the same thing again.
Setup depends on the full server
Current Conclusion
Mocking the Api would separate the server and the client completely.
Using a mock REST server would be a higher-level test, but would require a fake server.
Doing a full integration test would give the best reliability, but it is also highly dependent on the server code.
What should I pick? What would you do?
I think I answered this same question in the Protractor Google group. I am much of the same mind as you: I want no server, but I also want all of my test code in one place (in Protractor) and not split between Protractor and the browser. To enable this, I took matters into my own hands and developed a proxy for the $httpBackend service which runs within Protractor. It allows you to configure the $httpBackend service as if it were running in Protractor. I have been working on it for a while now and it's reasonably full-featured at this point. It would be great if you could take a look and let me know if I am missing anything important.
https://github.com/kbaltrinic/http-backend-proxy
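To give an idea of the kind of configuration involved, this is the plain Angular ngMockE2E $httpBackend setup that such a proxy lets you drive from the Protractor side; the module name, routes, and payloads are illustrative, and this is not necessarily the proxy's exact API:

```javascript
angular.module('appMocked', ['app', 'ngMockE2E'])
  .run(function ($httpBackend) {
    // Canned responses for the app's REST calls.
    $httpBackend.whenGET('/api/users').respond(200, [{ id: 1, name: 'Alice' }]);
    $httpBackend.whenPOST('/api/signup').respond(201, { id: 2 });
    // Let template requests reach the real server.
    $httpBackend.whenGET(/\.html$/).passThrough();
  });
```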
This is an excellent question, and it has nothing to do with a particular tool. I had to face the same problem on a big "greenfield" (i.e. started-from-scratch) project.
There is a vocabulary problem here: the word "mock" is used everywhere, and what you call "integration tests" are really "full end-to-end automated functional tests". No offence meant; it's just that clear wording will help to solve the problem.
You actually suggested the correct answer yourself: #2, stub the REST server. #1 is feasible but will soon become too hard to develop and maintain; #3 is an excellent idea but has nothing to do with UI testing and UI validation.
To achieve high reliability for your front end, independently of your back end, just stub the REST server, i.e. develop a dead-simple REST server that is idempotent, i.e. ALWAYS answers the same thing to a given HTTP request. Sticking to this idempotence principle will make development and testing much, much easier than any other option. A minimal stub along these lines is sketched below.
Then, for each test, you only check what is displayed on the screen (test the top) and what is sent to the server (test the bottom), so that the full UI stack is tested only once.
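As an illustration, a minimal idempotent stub in Express could be as small as this; routes and payloads are made up, and every request to a given route always gets the same canned answer:

```javascript
var express = require('express');
var app = express();

// Always the same answer, regardless of state or request history.
app.get('/api/users', function (req, res) {
  res.json([{ id: 1, name: 'Alice' }, { id: 2, name: 'Bob' }]);
});

app.post('/api/signup', function (req, res) {
  res.status(201).json({ id: 3, name: 'New user' });
});

app.listen(3001);
```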
The full answer to the question should deserve an entire blog article, but I hope you can feel what to do from what I suggest.
Best regards
Here is an approach for writing integration tests for your Angular code. The key concept is to structure your code in a way that lets you invoke the various functions very much as they are consumed by the UI. Properly decoupling your code is important to succeed at this, though:
More here: http://www.syntaxsuccess.com/viewarticle/angular-integration-tests
This is a great question. This is how I would do it:
You already have the Angular unit tests for the relevant directives and modules, which is perfect.
The other thing that is perfect is that your server integration tests access a real database and also make sure the REST API works over HTTP.
So why not just add some high-level integration tests that exercise Angular and your server at the same time?
If you can avoid mocking, why not save yourself the work of maintaining the extra code, if possible?
Also a good read: http://blog.ericbmerritt.com/2014/03/25/mocking-is-evil.html
Mocking the REST server is the best and cleanest option in my opinion. Try Mountebank (http://www.mbtest.org), an amazing service virtualization tool.
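For a flavour of it, an imposter definition posted to Mountebank's admin API (by default at http://localhost:2525/imposters) looks roughly like this; the port, path, and payload below are only examples:

```javascript
// Example imposter definition for Mountebank (illustrative values only).
var imposter = {
  port: 4545,
  protocol: 'http',
  stubs: [{
    predicates: [{ equals: { method: 'GET', path: '/api/users' } }],
    responses: [{ is: { statusCode: 200, body: [{ id: 1, name: 'Alice' }] } }]
  }]
};
```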
I'm trying to understand how Node.js is used for web applications.
Are there basically two major use cases, i.e.:
The entire system is written in Node, so you have functions for login, logout, password recovery, and whatever else the web app does, all of it written in JavaScript?
You use Node.js only for pushing updates to the client, to get a real-time effect in the app, while the rest of the application is written in, e.g., Rails or Django?
Please tell me if I understand this correctly:
In terms of other technologies used with Node.js, you tend to see people using Node.js as the backend server, Socket.IO on the client side as a cross-browser library for long-running (AJAX-style) connections, and then perhaps Backbone.js for the client-side MVC pattern.
Is this right?
Basically speaking, it is just a tool for running JavaScript code server-side. What you do with it is up to you. Many are using it as a complementary system since it's relatively new, but it's perfectly possible to run a standalone app with Node.js.
It's said to be particularly good at handling concurrent connections, which is why it is often recommended for the real-time parts of an app, but there is no "obligation", so to speak, to use it for that specific use case; it's just one thing you can do.
As with everything, the best way to understand it is to use it, so don't be afraid to play around with it.
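If it helps, a completely standalone Node.js app can be as small as this (nothing here is specific to any framework):

```javascript
var http = require('http');

// A single-file HTTP server: every request gets the same plain-text reply.
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node.js\n');
}).listen(3000, function () {
  console.log('Listening on http://localhost:3000');
});
```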
One use case for Node.js, as we are using it in our application: Skype-like voice and video chat in the Chrome browser, built with Node.js.
This is more of a curiosity really, to see if someone has done anything similar, or whether it is possible at all.
I'm working on a project that will receive notifications from external sources. Now, I could do this by having the notifications come to my server and setting up Comet between my client and server.
BUT
I was wondering whether I could write server logic into my client and listen for notifications from external sources. One issue I see immediately is that external sources would need a callback URL, etc., which I don't know whether you could provide from the client side (unless one could use the IP address in that way).
As you can see, this is more about ideas and discussion of whether such a thing is possible; it is somewhat inspired by P2P models, whereby you wouldn't be mediating everything through your central server.
Thanks in advance!
GWT compiles (nearly) Java source into JavaScript, so compiled GWT apps can't do anything that traditional JavaScript running in the browser cannot do. The major advantage of bringing Java into the picture isn't automatic access to any/all JVM classes, but the ability to maintain Java sources, which tend to be easier to refactor and test as well as to keep consistent with the server, and to compile that statically defined code into JavaScript, performing all kinds of optimizations at compile time that aren't possible for normal JavaScript.
So no, while you can have some code shared by the client (in a browser) and the server (running in a JVM), you can't run Tomcat/Jetty/etc. in the browser just by using GWT to compile the Java code into JS.
As you point out, even if this was possible, it would be difficult to get different clients to talk back and forth, without also requiring that the browsers can see and connect at will to one another. BitTorrent and Skype have different ways for facilitating this, and currently browsers do not allow anything like this - they are designed to make connections to other servers, not to allow connections to be made to them.
Push notifications from the web server to the browser are probably the best way forward, either through wrapping comet or the like, or through an existing GWT library like Atmosphere (see https://github.com/Atmosphere/atmosphere/tree/master/samples/gwt-demo for a demo).
I'm starting a new Facebook canvas application so I can pick the technology I'm going to use. I've always been a fan of the .NET platform so I'm strongly considering it for this app. I think the work done in:
facebooksdk.codeplex.com
looks very promising. But my question is the following:
It's my understanding that when using an app framework like this (or PHP for that matter) with Facebook, whenever we have a call into the API to do some action (say post to the stream), the flow would be the following:
- User initiates a request, which is directed to the ASP.NET server
- The ASP.NET server makes the Facebook API call
so a total of three machines are involved.
Why wouldn't one use the Javascript SDK instead?
http://developers.facebook.com/docs/reference/javascript/FB.api
"Server-side calls are available via the JavaScript SDK that allow you to build rich applications that can make API calls against the Facebook servers directly from the user's browser. This can improve performance in many scenarios, as compared to making all calls from your server. It can also help reduce, or eliminate the need to proxy the requests thru your own servers, freeing them to do other things."
So as I see it, I'd be taking my ASP.NET server out of the equation, reducing the number of machines involved from three to two. My server is under less load and the user (likely) gets faster performance.
Am I correct that using the Facebook C# SDK, we have this three machine scenario instead of the two machine scenario of the JS API?
Now I do understand that a web server framework like ASP.NET offers great benefits like great development tools, infrastructure for postbacks, etc, but do I have an incomplete picture here? Would it make sense to use the C# framework but still rely on the javascript sdk for most of the FB api calls? When should one use each?
Best,
-Ben
You should absolutely use the Javascript SDK when you can. You are going to get a lot better performance and your app will be more scalable. However, performance isn't always the only consideration. Some things are just easier on the server. Also, a lot of apps do offline (or delayed processing) of user data that doesn't involve direct interaction.
I don't think that there is a right or wrong place to use each SDK, but they definitely both have their place in a well-built Facebook app. My advice would just be to use whichever is easier for each task. As your app grows you will learn where the bottlenecks are and where that extra bit of performance needs to be squeezed out, either by moving work to the client (JavaScript SDK) or by moving work to be processed in the background (Facebook C# SDK).
Generally, we use the JavaScript SDK for some authentication stuff and for most of the work with the user interface. The one exception to the UI stuff is when we are really concerned about handling errors. It is a lot easier to handle errors on the server than with the JavaScript SDK. The errors I am talking about are things like errors from Facebook or just general Facebook downtime.
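For instance, a typical UI-side call with the JavaScript SDK looks like this; the message text is illustrative, and posting to the feed requires the appropriate publish permission:

```javascript
FB.api('/me/feed', 'post', { message: 'Hello from my canvas app' }, function (response) {
  if (!response || response.error) {
    // This is exactly the kind of error handling that is often easier to do server-side.
    console.error('Facebook API error', response && response.error);
  } else {
    console.log('Posted, id: ' + response.id);
  }
});
```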
Like I said, in the beginning just use both and do whatever is easier for each task.
I have a website that uses AJAX heavily to communicate with the server. Now I want to do performance and stress testing using automated scripts. Do you have any recommendations?
The functionality might be: given a URL, hook into the page-ready callback; in the callback I can emulate a "click" on some button using the button's id property.
Thanks.
There are a bunch of tools you can use to automate the UX of your site to make sure that things work fine. I'll break them down arbitrarily.
The ones that come to mind are Sahi and Selenium. These allow you to automate clicking, submitting, etc., similar to what GUI testing tools do, and test your application.
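Selenium's WebDriver bindings for Node.js can script exactly the "load a page, then click a button by id" flow you describe; a rough sketch, with the URL and element ids made up:

```javascript
const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('http://localhost:8080/page-under-test');
    // Wait until the AJAX-driven page has rendered the button, then click it.
    const button = await driver.wait(until.elementLocated(By.id('submit-btn')), 5000);
    await button.click();
    // Wait for the AJAX response to update the page before asserting anything.
    await driver.wait(until.elementLocated(By.css('.result')), 5000);
  } finally {
    await driver.quit();
  }
})();
```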
Mechanize (the Perl version (the original), the Ruby version, and the Python version) is used to write scripts that interact with your website to simulate a user. These are not "GUI"-based, so they don't rely on a browser, which might affect what you can do with JavaScript. Another similar tool (although I don't have personal experience with it) is Watir.
If you want to hammer your website (i.e. performance testing), the only thing I've come across is ApacheBench (ab). It can generate reports on how much raw traffic your site can take before it comes crashing down. Assuming your callbacks are not stateful, you can use it to hammer them.
use Selenium...