I'm facing timing issues with Protractor: sometimes my test cases fail due to network or performance issues. So far I have solved the existing issues with browser.sleep(), but later I came to know about browser.wait().
What is the difference between them, and which one is better for solving network or performance issues?
When it comes to dealing with timing issues, it is tempting and easy to put in a "quick" browser.sleep() and move on.
The problem is that it will some day fail. There is no golden/generic rule for what sleep timeout to set, so at some point, due to network, performance, or other issues, it might take more time for a page to load or for an element to become visible. Plus, most of the time you end up waiting longer than you actually need to.
browser.wait() on the other hand works differently. You provide an Expected Condition function for Protractor/WebDriverJS to execute and wait for the result of the function to evaluate to true. Protractor would continuously execute the function and stop once the result of the function evaluates to true or a configurable timeout has been reached.
There are multiple built-in Expected Conditions, but you can also create and use a custom one (sample here).
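For illustration, a minimal sketch of the Expected Conditions approach (the #save selector and the 10-second timeout are made up for the example):
var EC = protractor.ExpectedConditions;
var saveButton = element(by.css('#save')); // hypothetical element on the page under test

// Poll until the button is visible, or fail after 10 seconds with the given message.
browser.wait(EC.visibilityOf(saveButton), 10000, 'Save button never became visible');
saveButton.click();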
sleep: Schedules a command to make the driver sleep for the given amount of time
wait: Schedules a command to wait for a condition to hold or promise to be resolved.
Reference for details: http://www.protractortest.org/#/api?view=webdriver.WebDriver.prototype.sleep
browser.sleep()
Schedules a command to make the driver sleep for the given amount of time.
browser.wait()
Schedules a command to wait for a condition to hold or promise to be resolved.
This function blocks WebDriver's control flow, not the JavaScript runtime. It will only delay future WebDriver commands from being executed (e.g. it will cause Protractor to wait before sending future commands to the Selenium server), and only when the WebDriver control flow is enabled.
Documentation link http://www.protractortest.org/#/api
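To illustrate the control-flow point above, a small sketch (the selector is just a placeholder):
browser.sleep(5000);                 // queued on the control flow: pauses WebDriver commands for 5 s
console.log('logged immediately');   // plain JavaScript, not queued, so it runs right away
element(by.css('#submit')).click();  // queued: only sent to the Selenium server after the sleep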
I'm running an Angular app, and when testing a click() with Protractor I don't know when I should resolve the promise with a then().
I found this on Protractor API:
A promise that will be resolved when the click command has completed.
So, should I use click().then() in every click?
So, should I use click().then() in every click?
Definitely not.
It's not needed because Protractor/WebDriverJS has this mechanism called "Control Flow" which is basically a queue of promises that need to be resolved:
WebDriverJS maintains a queue of pending promises, called the control flow, to keep execution organized.
and Protractor waits for Angular naturally and out-of-the-box:
You no longer need to add waits and sleeps to your test. Protractor can automatically execute the next step in your test the moment the webpage finishes pending tasks, so you don't have to worry about waiting for your test and webpage to sync.
Which leads to quite straightforward testing code:
var elementToBePresent = element(by.css(".anotherelementclass"));
expect(elementToBePresent.isPresent()).toBe(false);
element(by.css("#mybutton")).click();
expect(elementToBePresent.isPresent()).toBe(true);
Sometimes though, if you experience synchronization/timing issues, or your app under test is non-Angular, you may solve it by resolving the click() explicitly with then() and continuing inside the click callback:
expect(elementToBePresent.isPresent()).toBe(false);
element(by.css("#mybutton")).click().then(function () {
    expect(elementToBePresent.isPresent()).toBe(true);
});
There are also Explicit Waits to the rescue in these cases, but they're not relevant here.
Yes, you should.
Maybe right now it's not necessary, but it may be in future versions.
So, if click() returns a promise, you should use it.
http://www.protractortest.org/#/api?view=webdriver.WebElement.prototype.click
I need to schedule many setTimeout calls of 60 seconds each. Basically, I'm creating a database record, and 60 seconds from now I need to check whether the database record was changed.
I don't want to implement a "job queue" since it's such a simple thing, and I definitely need to check it around the 60 second mark.
Is it reliable, or will it cause issues?
When you use setTimeout or setInterval, the only guarantee you get is that the code will not be executed before the programmed time.
It can, however, start somewhat later, because other code may still be executing when the timer fires (in other words, code that is already handling an event will not be interrupted in the middle to process a timeout or interval event).
If you don't have long blocking processing in your code, timed events will be reasonably accurate. If you are instead using long blocking calls, then Node is probably not the correct tool (it's designed around the idea of avoiding blocking synchronous calls).
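A small Node.js sketch of that guarantee: the callback never fires early, but blocking work pushes it later.
var scheduled = Date.now();

setTimeout(function () {
  // Never before 1000 ms; typically a few ms late, and much later if the loop below is still running.
  console.log('fired after', Date.now() - scheduled, 'ms');
}, 1000);

// Simulate long blocking work: the timer callback cannot run until this loop finishes (~3 s).
var end = Date.now() + 3000;
while (Date.now() < end) { /* busy wait */ }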
You should try WorkerTimer.js; it is better suited for handling background processing and is more accurate than the traditional setInterval or setTimeout.
It is available as an npm package for Node.js.
https://www.npmjs.com/package/worker-timer
I'm testing an asynchronous piece of code with something that looks like this:
randomService.doSomething().then(function() {
    console.log('I completed the operation!');
});
Surprisingly (to me), I've found that it only succeeds (i.e. the console.log output is shown) when wrapped inside Jasmine's runs function, like so:
var isDone = false;
runs(function() {
    randomService.doSomething().then(function(data) {
        console.log('I completed the operation!');
        isDone = true;
    });
});
waitsFor(function() {
    return isDone;
}, 'Operation should be completed', 1000);
As I understood it, waitsFor was only there to delay the code; in other words, I would use it if I had more code that had to wait until the asynchronous call completed. Since nothing comes after this bit of code, I would have thought there was no reason to use runs and waitsFor at all. That's the impression I got from reading this question: What do jasmine runs and waitsFor actually do? But obviously I've gotten myself mixed up at some point.
Does anyone have any thoughts on this?
EDIT:
Here is a Plunker with far more detail of the problem:
http://plnkr.co/edit/3qnuj5N9Thb2UdgoxYaD?p=preview
Note how the first test always passes, and the second test fails as it should.
Also, I'm sure I should have mentioned this before, but this is using AngularJS and Jasmine 1.3.
I think I found the issue. Here's the article: http://blogs.lessthandot.com/index.php/webdev/uidevelopment/javascript/testing-asynchronous-javascript-w-jasmine/
Essentially it's necessary because Jasmine doesn't wait for the asynchronous calls to finish before it completes a test. According to the article, if a call takes long enough and there are more tests later, an expect statement in an asynchronous callback in a previous test could finally execute in a different test entirely, after the original test completed.
Using runs and waitsFor solves the problem because they force Jasmine to wait for the waitsFor to finish before proceeding to the next test. This is somewhat of a moot point, however, because Jasmine 2.0 addresses asynchronous testing in a better way than 1.3, making runs and waitsFor obsolete.
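For comparison, a rough sketch of the Jasmine 2.x style, where the spec receives a done callback instead of using runs/waitsFor (randomService is the same hypothetical service as above):
it('completes the operation', function (done) {
  randomService.doSomething().then(function (data) {
    expect(data).toBeDefined();
    done(); // tells Jasmine 2.x the async spec has finished
  });
});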
That's just how Jasmine works. The question you linked has an answer with a decent explanation:
Essentially, the runs() and waitFor() functions stuff an array with their provided functions. The array is then processed by Jasmine, wherein the functions are invoked sequentially. Those functions registered by runs() are expected to perform actual work, while those registered by waitFor() are expected to be 'latch' functions and will be polled (invoked) every 10ms until they return true or the optional registered timeout period expires. If the timeout period expires, an error is reported using the optional registered error message; otherwise, the process continues with the next function in the array.
To sum it up, each waitsFor call must have a corresponding runs call. They work together. Calling waitsFor without a runs call somewhere before it does not make sense.
My revised plunker (see comments on this answer): http://plnkr.co/edit/9eL9d9uERre4Q17lWQmw
As you can see, I added $rootScope.$apply(); to the timeout function you were testing with. This makes the console.log inside the promise callback run. HOWEVER, it only runs if you ignore the other test with an xit, AND the expect after the console.log does not seem to be recognized by Jasmine as part of the test (though it certainly must run, because the console.log does).
Very weird - I don't really understand why this is happening, but I think it has something to do with how Jasmine works behind the scenes, how it registers tests and whatnot. My understanding at this point is if you have an expect inside an async callback, Jasmine won't "recognize" it as a part of the test suite unless the initial async call was made inside a runs.
As for why this is, I don't know. I don't think it's worth trying to understand - I would just use runs and waitsFor and not worry about it, but that's just me. You can always dig through the source if you're feeling masochistic. Sorry I couldn't be of more help.
It seems that the QUnit functions stop() and start() allow to wait for asynchronous tests, but during that waiting period the whole test suite hangs. Is there a way to run asynchronous tests in a non-blocking fashion using QUnit?
Looking at the docs for asyncTest and stop, there are two reasons I can see that it's set up like that.
So that you aren't accidentally running two tests at a time which might conflict with each other (e.g., modifying the DOM and thereby changing each other's test results).
So that QUnit knows when the tests have finished. If it comes to the end of all the synchronous tests, then it'll write up the results, which you don't really want it to do if there are still async tests happening in the background.
So these are a good thing, and you probably don't actually want the async tests to run without blocking. You could probably do it by calling start immediately after the start of your async tests, but remember that JavaScript is actually single-threaded (even though it sometimes gives the appearance of multi-threading), so this might cause unexpected results, as you can't guarantee when your async test will continue running... it might not (and probably won't) be until after the other tests have finished and the results have been published.
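For reference, a minimal QUnit 1.x sketch of the pattern being discussed, using asyncTest and start (the setTimeout just stands in for real asynchronous work):
asyncTest('async example', function () {
  expect(1); // declare how many assertions this test should produce
  setTimeout(function () {
    ok(true, 'async work finished');
    start(); // let QUnit resume the queued tests
  }, 100);
});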
I know I can get Selenium 2's WebDriver to run JavaScript and get return values, but so much asynchronous stuff is happening that I would like JavaScript to talk to Selenium instead of the other way around. I have done some searching and haven't found anything like this. Do people just generally use implicitly_wait? That seems likely to fail, since it's not possible to time everything. A perfect example would be letting Selenium know when an XHR has completed, or when an asynchronous animation with undetermined execution time has finished.
Is this possible? We're using Selenium 2 with Python on Saucelabs.
You should look into the execute_async_script() method (JavascriptExecutor.executeAsyncScript in Java, IJavaScriptExecutor.ExecuteAsyncScript() in .NET), which allows you to wait for a callback function. The callback function is automatically appended to the arguments array in your JavaScript function. So, assuming you have a JavaScript function already on the page that waits until the condition you want, you could do something like the following (Java code below, C# and Python code should be similar):
String script = "var callback = arguments[arguments.length - 1];"
+ "callback(myJavaScriptFunctionThatWaitsUntilReady());";
driver.manage().timeouts().setScriptTimeout(15, TimeUnit.SECONDS);
((JavascriptExecutor)driver).executeAsyncScript(script);
It might be possible to be even more clever and pass the callback function directly to an event that returns the proper data. You can find more information on the executeAsyncScript() function in the project JavaDocs, and can find sample code for this in the project source tree. There's a great example of waiting for an XHR to complete in the tests in this file.
If this isn't yet available in the version of the Python bindings available for use with SauceLabs, I would expect it to be available before long. Admittedly, in a sense, this is pushing the "poll for desired state" from your test case into JavaScript, but it would make your test more readable.
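As a sketch of the XHR case, the script handed to executeAsyncScript could poll jQuery's active request counter and invoke the callback once it reaches zero (this assumes the page under test actually uses jQuery):
var callback = arguments[arguments.length - 1];
(function poll() {
  if (window.jQuery && jQuery.active === 0) {
    callback(true);         // every XHR issued through jQuery has completed
  } else {
    setTimeout(poll, 100);  // not done yet, check again in 100 ms
  }
})();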
Theoretically it is possible, but I would advise against it.
The solution would probably have some jQuery running on the site that sets a variable to true when the JavaScript processing has finished.
Set selenium up to loop through a getEval until this variable becomes true and then do something in Selenium.
It would meet your requirements, but it's a really bad idea. If for some reason your jQuery doesn't set the trigger variable to true (or whatever state you expect), Selenium will sit there indefinitely. You could put a really long timeout on it, but then what would be the difference from just having Selenium do a getEval and wait for a specific element to appear?
It sounds like you are trying to overengineer your solution, and it will cause you more pain in the future with very few additional benefits.
Not to be overly blunt, but if you want your App to talk to your Test Runner, then you're doing it wrong.
If you need to wait for an XHR to finish, you could try displaying a spinner and then test that the spinner has disappeared to indicate a successful request.
In regards to the animation, when the animation has completed, maybe its callback could add a class indicating that the animation has finished and then you could test for the existence of that class.
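As a rough sketch of both ideas in Protractor/WebDriverJS terms (the question uses Python, and the .spinner selector and animation-done marker class are invented for the example):
var EC = protractor.ExpectedConditions;

// XHR finished: the spinner shown while the request is in flight goes away.
browser.wait(EC.invisibilityOf(element(by.css('.spinner'))), 5000);

// Animation finished: its completion callback added a marker class to the element.
browser.wait(EC.presenceOf(element(by.css('.widget.animation-done'))), 5000);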
Testing animation with selenium is opening a can of worms. The tests can be quite brittle and cause many false positives.
The problem is that the calls are asynchronous, which makes it difficult to track the behaviour and changes in the state of the page.
In my experience the asynchronous call can be so quick that the spinner is never displayed, and the page may skip a state entirely (at least as far as Selenium can detect).
Waiting for the state of the page to transition can make the tests less brittle, however the false positives cannot be removed entirely.
I recommend manual testing for animation.