Context
We're running some functional unit tests using Intern (theintern.io). Unfortunately, third-party network calls randomly cause the page to time out, which causes all of our unit tests to fail.
At a certain point, I'd like to cut the cord on all network calls to prevent the browser from hanging / tests from failing. I've tried window.stop(), but this causes the test to hang.
Question
How can I stop all network calls without also stopping javascript execution of my functional tests?
Firstly, window.stop won't stop JavaScript execution; in your case it might look like it's doing so if the tests are being loaded dynamically.
window.stop is the same as clicking the stop button in your browser's address bar (source: https://developer.mozilla.org/en-US/docs/Web/API/Window/stop).
Now, coming to your question: there are multiple ways to handle this. My suggestion would be to mock all the network calls (assuming the third-party calls you mentioned are all AJAX). This is also good practice when writing unit tests. You can use Sinon for this (source: http://sinonjs.org/).
You can give an expected response, or just return a 404 or anything else, and respond immediately to the requests. That way, your tests won't time out. Hope it helps
// Assumes sinon is available in the test module, e.g. var sinon = require('sinon');
var server;

{
    beforeEach() {
        // Intercept all XMLHttpRequests with a fake server
        server = sinon.fakeServer.create();
        server.respondImmediately = true;
    },

    afterEach() {
        server.restore();
    },

    tests: {
        testA() {
            // Any GET matching the pattern gets an immediate, empty 200 response
            server.respondWith("GET", /<regexPattern>/, [200, { "Content-Type": "text/plain" }, ""]);
            // ...exercise the code under test that performs the AJAX call...
        }
    }
}
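For reference, inside testA() something like the following would then resolve synchronously against the fake server. This is just a sketch: jQuery.ajax stands in for whatever XHR-based client the page under test uses, and assert is whatever assertion helper your Intern setup provides.
jQuery.ajax({
    url: "/some/third-party/endpoint",
    success: function (body) {
        // Runs synchronously because respondImmediately is true
        assert.strictEqual(body, "");
    }
});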
Related
I am testing a web app with cypress.
I sign in in my beforeEach function. Then, in my different tests, I begin with cy.visit('mysite.com/url').
My problem is that after signing in, the website redirects to a specific page. This redirection happens after the cy.visit of my tests, so my tests run on the redirection page and fail.
The redirection does not seem to be linked to any request I could wait for.
I ended up with cy.wait(3000), but it is not very satisfying. My tests sometimes fail because the redirection can take more than 3 seconds, and I do not want to increase that value because it would make my tests too slow.
Is there a way to do something like:
while (!cy.url().eq('mysite.com/redirection-url')) {
cy.wait(300);
}
Cypress retries assertions automatically. You can resolve the waiting issue for the redirection URL with the change below:
cy.url().should('contain', '/redirection-url')
OR
cy.url().should('eq', 'mysite.com/redirection-url')
Here the should assertion will wait up to 4 seconds by default and keep retrying cy.url() until it passes.
You can change the default timeout by updating the defaultCommandTimeout parameter in the cypress.json file:
{
"defaultCommandTimeout": 30000
}
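Alternatively, instead of raising the global timeout, you can pass a per-command timeout to just this check so the rest of the suite stays fast (a sketch reusing the URL from the question):
cy.url({ timeout: 30000 }).should('contain', '/redirection-url')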
Hope this will solve your issue.
If the redirect is to a different server, simply using cy.url does not work.
We solved the problem with the following code:
cy.origin('https://localhost:7256/', () => {
    cy.url().should('contains', 'account/login');
});
where https://localhost:7256/ is the base URL of the server to which we are redirected.
Here is the code I am using:
expect(element(by.id('title')).getAttribute('value')).toMatch('Sample Title')
On my local machine it works perfectly fine, but on the server it fails with the following error:
Failed: Timed out waiting for asynchronous Angular tasks to finish after 11 seconds. This may be because the current page is not an Angular application. Please see the FAQ for more details: https://github.com/angular/protractor/blob/master/docs/timeouts.md#waiting-for-angular
While waiting for element with locator - Locator: By(css selector, *[id="title"])
Surprisingly, these tests sometimes pass on the server when I execute them alone.
To add to the question: I observed that Protractor is able to find only one element in the tests, and all the remaining ones are ignored with the error above.
What could be the solution for this?
That could be an application issue. Sometimes Angular never reports to Protractor that all tasks are done, so you get the timeout error you are seeing.
http://www.protractortest.org/#/timeouts
AngularJS: If your AngularJS application continuously polls $timeout or $http, Protractor will wait indefinitely and time out. You should use $interval for anything that polls continuously (introduced in Angular 1.2rc3).
Angular: For Angular apps, Protractor will wait until the Angular Zone stabilizes. This means long-running async operations will block your test from continuing. To work around this, run these tasks outside the Angular zone. For example:
this.ngZone.runOutsideAngular(() => {
  setTimeout(() => {
    // Changes here will not propagate into your view.
    this.ngZone.run(() => {
      // Run inside the ngZone to trigger change detection.
    });
  }, REALLY_LONG_DELAY);
});
As an alternative to either of these options, you could disable waiting for Angular, see below.
As the error says, it seems your app is not an Angular application, is it?
If so, you need to disable waiting for Angular:
browser.waitForAngularEnabled(false);
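With waiting for Angular disabled, Protractor no longer synchronizes with the app, so you have to wait for elements yourself. A minimal sketch using Protractor's ExpectedConditions (the 5000 ms timeout is an arbitrary choice):
browser.waitForAngularEnabled(false);

// Wait explicitly for the element instead of relying on Angular stability
var titleField = element(by.id('title'));
browser.wait(protractor.ExpectedConditions.presenceOf(titleField), 5000);
expect(titleField.getAttribute('value')).toMatch('Sample Title');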
In a Node.js 6.10.2 / Sails.js 0.12.13 based JavaScript application, I have been experiencing strange error behavior for several months.
In a Sails controller, I try to retrieve a property of a literal object:
console.log(someObject.someProperty);
console.log("I am still here!");
However, in my case someObject is undefined. So I'd expect to get an error like 'Cannot read property someProperty of undefined.' - and then either Node.js stopping completely or the code carrying on (with the next console.log).
Instead, the code simply stops executing at that point and I get a strange warning: "(node:4822) Warning: Possible EventEmitter memory leak detected. 11 close listeners added. Use emitter.setMaxListeners() to increase limit." How often this warning occurs is unpredictable, however. Sometimes it appears only once, sometimes about 20 times right after each other.
What I found out so far is that it is somehow connected to whether a response has already been sent. Consider the following:
mySailsControllerFunction: function(req, res) {
console.log(someObject.someProperty);
console.log("I am still here!");
res.json({"foo":"dahoo"});
}
This will result in Sending 500 ("Server Error") response: ReferenceError: someObject is not defined - exactly what I expect.
However, if I now first send some response and then try to access my non-existing property, turning the code into:
mySailsControllerFunction: function(req, res) {
res.json({"foo":"dahoo"});
setTimeout(function () {
console.log("Yeah!");
console.log(someObject.someProperty);
console.log("I am still here!");
},1000);
}
then I often simply get nothing: 'Yeah!' is displayed, but nothing comes afterwards. The event listener warning is sometimes there, sometimes not. Very strange.
Additionally, and strangely enough, the problem seems to be somehow connected to the time passed since Sails started. I put the code you see above inside a Sails controller function which is called immediately after the clients reconnect. I then played around with the timeout values, restarting the Sails server several times. Outcome: if I set the timeout to 1 s, in 4 of 5 tests I get the correct error behavior. For 10 seconds it is about 50%; for 30 s the error is always ignored without any console output.
However, if I put my test code outside of the Sails controller, I always get the correct error behavior from Node. So I'm quite sure this is wrong behavior in Sails, not in Node.
Disclaimer: I don't know Sails. So it may or may not be related, but my answer may offer a clue.
From the Sails documentation:
http://sailsjs.com/documentation/reference/response-res/res-json
This method is terminal, meaning it is generally the last line of code your app should run for a given request (hence the advisory usage of return throughout these docs).
Thus, when you use res.json({"foo":"dahoo"});, Sails probably sends the response back to the client and closes the call sequence, which, if it uses Promises or some other async mechanism, may kind of "swallow" further code, as also suggested in a comment above. This is probably internal coding in Sails, so it's not immediately obvious from the outside why your second code block specifically doesn't work.
So you should stick to the first pattern: access your property first, and put res.json() at the end of the controller function.
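A sketch of that pattern, reusing the controller from the question: do the work that can throw first, then send the terminal response last and return it, as the Sails docs advise.
mySailsControllerFunction: function (req, res) {
    // A ReferenceError here is still caught by Sails and turned into a 500 response
    console.log(someObject.someProperty);
    console.log("I am still here!");
    return res.json({ "foo": "dahoo" });
}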
For reference: I finally solved that issue.
There were, somehow hidden in the code, process exit handlers defined:
process.on('exit', myErrorFunction.bind());
process.on('SIGINT', myErrorFunction.bind());
process.on('uncaughtException', myErrorFunction.bind());
The problem was: the function these lines were in was bound to a cron job, so each time the cron job executed, new handlers were registered. My assumption above (before vs. after response) was therefore wrong: in fact, everything worked until the cron job executed for the first time; from then on, it didn't. And eventually, the warning was fired (correctly!).
I would have never found out without this answer: Make node show stack trace after EventEmitter warning
You have to add one line of code to get the stack trace:
process.on('warning', e => console.warn(e.stack));
Additionally, speaking of stack traces: in the Sails serverError response (api/responses/serverError.js), it is convenient to access the stack like this:
module.exports = function serverError (data, options) {
console.log(data.stack);
/* ... */
};
I created a Node.js API.
When this API gets called I return to the caller fairly quickly. Which is good.
But now I also want the API to call or launch a different API or function or something that will go off and run on its own. Kind of like calling a child process with child.unref(). In fact, I would use child.spawn(), but I don't see how to have spawn() call another API. Maybe that alone would be my answer?
As for this other process, I don't care whether it crashes or finishes without error.
So it doesn't need to be attached to anything. But if it does remain attached to the Node.js console, then that's icing on the cake.
I'm still thinking about how to identify, and what to do, if the spawn somehow gets caught up running for a really long time. But I'm not ready to cross that part yet.
Your thoughts on what I might be able to do?
I guess I could child.spawn('node', [somescript])
What do you think?
I would have to explore if my cloud host will permit this too.
You need to specify exactly what the spawned thing is supposed to do. If it is calling an HTTP API, you should not launch a new process for that in Node.js: Node is built to run HTTP requests asynchronously.
The normal pattern, if you really need some work to happen in a different process, is to use something like a message queue, the cluster module, or some other messaging/queue between processes that a worker monitors; the worker is usually set up to handle a particular task or set of tasks this way. It is pretty unusual to spawn another process after receiving an HTTP request, since launching new processes is heavyweight and can use up all of your server resources if you aren't careful, and thanks to Node's async capabilities it usually isn't necessary, especially for work that is mainly IO.
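As a sketch of that message-passing approach, using the IPC channel that Node's built-in child_process.fork() sets up (worker.js is a hypothetical script that listens for jobs and reports back when done):
var cp = require('child_process');

// Start a long-lived worker once, not per request
var worker = cp.fork('worker.js');

// Hand a task to the worker without blocking the request handler
worker.send({ task: 'callOtherApi', payload: { id: 42 } });

// The worker reports back via process.send() when it is done
worker.on('message', function (result) {
    console.log('worker finished:', result);
});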
This is from a test API I built some time ago. Note I'm even passing a value into the script as a parameter.
router.put('/test', function (req, res, next) {
    var u = req.body.u;
    var cp = require('child_process');
    // Note: the spawn option is named `detached`, not `detach`
    var c = cp.spawn('node', ['yourtest.js', '"' + u + '"'], { detached: true });
    c.unref();
    res.sendStatus(200);
});
The yourtest.js script can be just about anything you want it to be. But I found I learned more by first treating the script as a Node.js console app: FIRST get your yourtest.js script to run without error by running/testing it manually from your console's command line (node yourtest.js yourparametervalue), THEN integrate it into the child.spawn().
var u = process.argv[2];
console.log('f2u', u);
function f1() {
console.log('f1-hello');
}
function f2() {
console.log('f2-hello');
}
setTimeout(f2, 3000); // wait 3 seconds before executing f2(). I do this just for troubleshooting: you can watch node.exe open and then close in Task Manager if it runs long enough.
f1();
I recently discovered Node.js, and I read in various articles that Node.js is fast and can handle more requests than a Java server even though Node.js uses a single thread.
I understood that Node is based on an event loop and that each call to a remote API or a database is done with an async call, so the main thread is never blocked and the server can continue to handle other client requests.
If I understood correctly, each portion of code that can take time should be executed with an async call, otherwise the server will be blocked and it won't be able to handle other requests?
var server = http.createServer(function (request, response) {
//CALL A METHOD WHICH CAN TAKE LONG TIME TO EXECUTE
slowSyncMethod();
//THE SERVER WILL STILL BE ABLE TO HANDLER OTHERS REQUESTS ??
response.writeHead(200, {"Content-Type":"text/plain"});
response.end("");
});
So if my understanding is correct, the above code is bad because the synchronous call to the slow method will block the Node.js main thread? Is Node.js only fast on the condition that all code that can take time is executed in an async manner?
Node.js is as fast as your hardware (or VM) and the V8 engine running it. That being said, any heavy-duty task, such as processing media files (music, images, video, etc.), will definitely lock up your application, and so will computation over large collections. That is why the async model is leveraged through events and deferred invocations. Nothing stops you from spawning child processes to delegate the heavy work to and asynchronously getting back the result, but if you find yourself needing to do this for many tasks, maybe you should revisit your architecture.
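For the snippet in the question, the non-blocking variant would hand the slow work a callback (slowAsyncMethod is a hypothetical asynchronous version of slowSyncMethod), so the event loop stays free to accept other requests while it runs:
var http = require('http');

var server = http.createServer(function (request, response) {
    // The handler returns immediately; the response is sent when the slow work finishes
    slowAsyncMethod(function (err, result) {
        response.writeHead(200, { "Content-Type": "text/plain" });
        response.end("");
    });
});

server.listen(8080);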
I hope this helps.