Cypress: How to handle errors - JavaScript

I'm testing a lot of things, but some of them are not too important (like a caption text failure). I want to add an optional parameter (if it's wrong, that's okay, continue testing).
I used to work with Katalon Studio, which has failure handling options (stop, fail, continue). Can I do the same with Cypress for my test cases?

As Mikkel mentioned already, Cypress discourages conditional testing. There is a way to do it using an if-statement, as explained in this question: In Cypress, is there a way to avoid a failure depending on a daily message?
But doing that for every check you optionally want to run can bloat your code considerably. So if you don't care whether a check succeeds or fails, simply don't test it.
Another way to be more resilient is to split the tests up further. But make sure the scenarios don't rely on each other, otherwise they will still fail.
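If you do want "continue on failure" behaviour for low-priority checks, one option is a small soft-assertion helper that records failures instead of throwing and only reports them at the end. This is a minimal, framework-agnostic sketch (not a Cypress API; all names here are made up):

```javascript
// Soft-assertion helper: record failures instead of throwing, so
// low-priority checks (like caption text) never stop the run.
function createSoftAssert() {
  const failures = [];
  return {
    check(condition, message) {
      if (!condition) failures.push(message);
      return condition;
    },
    failures() {
      return failures.slice();
    },
  };
}

const soft = createSoftAssert();
soft.check(2 + 2 === 4, 'math still works');          // passes, records nothing
soft.check('Captoin' === 'Caption', 'caption text');  // fails, but only records
console.log(soft.failures()); // [ 'caption text' ]
```

At the end of a spec you can decide whether recorded failures should merely be logged or should fail the test after all the important assertions have run.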

Related

TDD basics - do I add or replace tests?

I'm completely new to TDD and am working my way through this article.
It's all very clear except for a basic thing that probably seems too obvious to mention:
Having run the first test (module exists), what do I do with my code before running the next one? Do I keep it so the next test includes results from the first one? Do I delete the original code? Or do I comment it out and only leave the current test uncommented?
Put another way, does my spec file end up as long list of tests which are run every time, or should it just contain the current test?
Quoting the same article linked to in the question.
Since I don’t have a failing test though, I won’t write any module
code. The rule is: No module code until there’s a failing test. So
what do I do? I write another test—which means thinking again.
The spec file will end up as a list of tests that are all run every time, to check for regressions with each additional feature. If adding a new feature breaks something that was added before, the earlier tests will indicate this by failing.

Getting intermittent failures of tests in Chrome

Update 2:
After forgetting about this for a week (and being sick), I am still out of my depth here. The only news is that I reran the tests in Safari and Firefox, and now Safari always fails on these tests, and Firefox always times out. I assume I've changed something somewhere, but I have no idea where.
I'm also more and more certain there's a timing issue somewhere. Possibly simply code going async where it shouldn't, but more likely it's something being interrupted.
Update:
I'm less interested in finding the actual bug, and way more interested in why it's intermittent. If I can find out why that is, I can probably find the bug, or at least rewrite the code so it's avoided.
TL;DR:
I'm using Karma (with Webpack and Babel) to run tests in Chrome, and most of them are fine, but for some reason 7 tests get intermittent failures.
Details:
So! To work!
The first six tests MOSTLY succeed when I run them in the debug tab, but MIGHT fail. The failure percentage seems higher when running them normally, though. These six tests are related: they all fail after running a specific method that functions as a safe delete() for some Backbone models. Basically it's meant to check and clear() all linked models in the model to be deleted, and return false if it's not able to do that.
Had the failures been 100%, I'm sure I would have found the error and weeded it out, but the only thing I know is that it has to do with trying to access or change a model that has already been deleted, which seems like a timing thing...? Something being run asynchronously that shouldn't be, perhaps...? I have no idea how to fix it...
The seventh test is a little easier. It uses Jasmine-jQuery to check whether a DOM element (which starts out empty) gets another div inside it after I change something. It's meant to test whether Bootstrap's alert system is implemented correctly, but it has been simplified heavily in order to find out why it fails. This test always fails if I run it as a gulp task, but always succeeds if I open the debug tab and rerun the test manually. So my hypothesis is that Chrome doesn't render the DOM correctly the first time, but fixes it if I rerun it in the debug tab...?
TMI:
When I say I open the debug tab and rerun the test manually, I am still inside the same 'gulp test' task, of course. I also use a 'gulp testonce', but the only change there is that it has singleRun enabled and the HTML reporter enabled. It shows the exact same pattern, though I can't check the debug page there, since the browser exits after the tests.
Output from one of the first 6 tests using the html reporter.
Chrome 47.0.2526 (Mac OS X 10.11.2) model library: sentences: no longer has any elements after deleting the sentence and both elements FAILED
Error: No such element
at Controller._delete (/Users/tom/dev/Designer/test/model.spec.js:1344:16 <- webpack:///src/lib/controller.js:107:12)
at Object.<anonymous> (/Users/tom/dev/Designer/test/model.spec.js:143:32 <- webpack:///test/model.spec.js:89:31)
Output from test 7 using the html reporter.
Website tests » Messaging system
Expected ({ 0: HTMLNode, length: 1, context: HTMLNode, selector: '#messagefield' }) not to be empty.
at Object.<anonymous> (/Users/tom/dev/Designer/test/website.spec.js:163:39 <- webpack:///test/website.spec.js:109:37)
Now, the first thing you should know is that I have of course tried other browsers: Safari shows the exact same pattern as Chrome, and Firefox gives me the same errors, but its error messages end up taking 80 MB of disk space in my HTML reporter and SO MUCH TIME to finish, if it even finishes. Most of the time it just disconnects - which ends up being faster.
So I ended up just using Chrome to try to find this specific bug, which has haunted my dreams now for a week.
Source
Tests:
https://dl.dropboxusercontent.com/u/117580/model.spec.js.html
https://dl.dropboxusercontent.com/u/117580/website.spec.js.html
Test output (Since the errors are intermittent, this is really just an example): https://dl.dropboxusercontent.com/u/117580/output.html
OK, all tests now succeed. I THINK this was the answer:
Some tests called controller, and some tests called window.controller. This included some reset() and remove() calls.
After one rewrite I still had failures, so I did another. As part of that rewrite I decided to make all calls through window.*, and after that all tests succeeded.
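A plausible explanation for why the two names diverged (an assumption, sketched here with a plain object standing in for the browser's window) is that a local binding and the global property end up pointing at different objects once one of them is reassigned, e.g. by a reset():

```javascript
// `win` stands in for the browser's window object.
const win = {};
win.controller = { models: ['a', 'b'] };

// A test grabs a bare reference; both names point at the same object.
let controller = win.controller;

// A reset() elsewhere replaces the global with a fresh object...
win.controller = { models: [] };

// ...so tests using `controller` and tests using `win.controller`
// now operate on different objects - hence intermittent "No such element".
console.log(controller === win.controller); // false
```

Routing every call through window.* means every test always sees the current object, which matches the fix described above.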

Passing a location.search parameter to a Jasmine (Karma) test

I have a JavaScript file that uses location.search for some logic. I want to test it using Karma. When I simply set the location (window.location.search = 'param=value') in the test, Karma complains that I'm doing a full page reload. How do I pass a search parameter to my test?
Without seeing some code it's a little tricky to know exactly what you want, but it sounds like you need some sort of fixture/mock capability in your tests. If you check out this other answer to a very similar problem, you will see that it tells you to keep the test as a "unit".
Similar post with answer
What this means is that we're not really concerned with testing the window object; we can assume the Chrome or Firefox vendors do that just fine for us. In your test you can set up a mock object and inspect it according to your logic.
In live code - as shown - the final step of actually handing over the location is dealt with by the browser.
In other words, you are only checking your location-setting logic and no other functionality. I hope this works for you.
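One common way to apply this (a sketch; the function name is made up) is to have the code under test accept the search string as a parameter, defaulting to the real window.location.search in the browser, so a Karma/Jasmine spec can pass a fixture string without ever triggering a navigation:

```javascript
// Parse one query parameter out of a search string. In the browser the
// caller can omit `search` and the real location is used; in a spec you
// pass a fixture string instead.
function getParam(name, search) {
  search = search !== undefined
    ? search
    : (typeof window !== 'undefined' ? window.location.search : '');
  const match = new RegExp('[?&]' + name + '=([^&]*)').exec(search);
  return match ? decodeURIComponent(match[1]) : null;
}

// In a Jasmine spec you would call it with a fixture string:
console.log(getParam('param', '?param=value')); // 'value'
```

The spec then asserts on the return value, and no test ever touches window.location.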

jasmine leaves dashes without sign of why

I'm having a problem with Jasmine leaving tests unexecuted. They don't appear in the list of text descriptions of the tests; there are just unhelpful dash icons signifying that a test should run. I'm running the entire test suite and have 12 out of some 2000 tests not running for no apparent reason.
Is there a way to associate the actual name of a test with its icon? I would like to know where they are coming from, and there isn't any indication of it currently.
OKAY! FOUND IT!
Well, I started by isolating the test, and I quickly realized I was setting the test suite's this.id = 123, which was breaking lots of stuff. So if you have to keep track of an id for a test, make sure you namespace it better.
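To illustrate the failure mode (a sketch with plain objects; this is not Jasmine's actual internals): the framework stores its own id on the same object your setup code sees, so overwriting it breaks the mapping from spec to reported result, while a namespaced property is harmless:

```javascript
// Simplified stand-in for the framework's record of a spec.
const spec = { id: 'spec42', description: 'renders the widget' };

// User setup code overwrites the framework-owned id...
spec.id = 123;
console.log(spec.id); // 123 - the reporter can no longer match this spec

// ...whereas a namespaced property leaves the framework's id intact.
const spec2 = { id: 'spec43', description: 'renders the caption' };
spec2.myWidgetId = 123;
console.log(spec2.id); // 'spec43'
```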

Capybara integration testing with asynchronous JavaScript

I have a Rails integration test that's failing, and I can't figure out why. I'm using Capybara with Selenium as the driver.
The test checks that page content has been removed after an AJAX call takes place. The relevant action is that a button is clicked, and that button click causes a section of the page to be removed via a jQuery remove() call. Here's an approximation of the integration testing code:
click_button("Remove stuff")
assert has_no_link?("This should be removed")
The assertion fails, implying that the link still exists.
I've been reading up on Capybara, and I know that you can extend the default wait time. I've extended it to a ridiculous value (20 seconds), and still the assertion fails.
When I follow the test process myself manually, the source of the page still shows the content, but the DOM does not (by viewing Firefox's DOM Inspector and looking for the element). Is this the issue? I've even tried inspecting the DOM while the tests are running in Firefox to check if the content was there, and it doesn't appear to be.
I have no idea how Capybara is still finding this link that no longer exists in the DOM. Is Capybara examining the source instead of the DOM and finding the link there? If so, I have no idea how to fix this test to make sure that the test passes. Refreshing the page would fix the issue, but that's not exactly what a user would do, so I hesitate to change the page just to make the test pass...
Would love any recommendations on how to approach this problem.
Thanks!
Thoughtbot has a great blog post on waiting for AJAX, which you can read here, though it is based on RSpec and it looks like you are using Test::Unit.
It works great for situations where Capybara doesn't quite wait long enough, but it doesn't add unnecessarily long timeouts. I work mostly in RSpec now, but I think you can adapt it like this:
# Create this file in test/support/wait_for_ajax.rb
module WaitForAjax
  def wait_for_ajax
    Timeout.timeout(Capybara.default_max_wait_time) do
      loop until finished_all_ajax_requests?
    end
  end

  def finished_all_ajax_requests?
    page.evaluate_script('jQuery.active').zero?
  end
end
You can either include it when needed in the individual test file, or use one of the strategies provided in this SO post for automatically including it every time.
Then, whenever you have a test that is not properly waiting for AJAX to finish, just insert the line wait_for_ajax. Using your code as an example:
click_button("Remove stuff")
wait_for_ajax
assert has_no_link?("This should be removed")
There was a method called wait_until, but it was deprecated recently and replaced with the synchronize method.
http://www.elabs.se/blog/53-why-wait_until-was-removed-from-capybara
https://github.com/jnicklas/capybara/blob/master/lib/capybara/node/base.rb#L44
For now I don't know exactly how to use it, but I'm waiting for an answer to my question from the author, so I hope to resolve this problem soon.
There's a neat way to check that AJAX requests are done, which I learned from this article. Instead of waiting for some fixed amount of time, you can use jQuery's $.active property (which is not in the documented API but is exposed so you can use it). $.active tells you the number of active connections to the server, so when it drops to zero you know the AJAX requests are done:
wait_until do
  page.evaluate_script('$.active') == 0
end
If that doesn't work, then the issue is somewhere else (which, judging from what you wrote, seems likely). If the change is only happening in the DOM, then you have to make sure that JavaScript is enabled for your test/spec. In RSpec, for example, you set :js => true to do that; in Cucumber you add a @javascript tag above the scenario. I don't use Rails' default tests, but there must be a setting that does the same.
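The idea behind that loop can be sketched outside Capybara too (a hypothetical helper, not Capybara's implementation; `getActive` stands in for reading $.active): poll until the in-flight counter reads zero or a deadline passes.

```javascript
// Poll until the in-flight request counter reads zero, or time out.
// `getActive` stands in for page.evaluate_script('$.active').
async function waitForAjax(getActive, timeoutMs = 2000, intervalMs = 50) {
  const deadline = Date.now() + timeoutMs;
  while (getActive() !== 0) {
    if (Date.now() > deadline) throw new Error('timed out waiting for AJAX');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Usage: one request in flight, finishing after 100 ms.
let active = 1;
setTimeout(() => { active = 0; }, 100);
waitForAjax(() => active).then(() => console.log('ajax finished'));
```

The timeout matters: without it, a request that never completes would hang the suite instead of failing it.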
Are you first testing something else with the link, and then testing that it is removed?
In other words, is your test something like:
has_no_link = $('#id_for_link')
//.... some test
click_button("Remove stuff")
assert has_no_link?("This should be removed")
If that's the case, then has_no_link will still point to the link. remove() takes the element out of the DOM, but your variable still points to the object in memory.
You should query the DOM for the link again to see whether you get a result.
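The stale-reference problem can be illustrated without a browser (an analogy in plain JavaScript, with "the DOM" modelled as a list of nodes):

```javascript
// "The DOM" as a list: removing a node from the list does not destroy
// the object a variable already points at.
const dom = [{ id: 'link', text: 'This should be removed' }];

const cached = dom[0];      // grabbed before the click, like $('#id_for_link')
dom.splice(0, 1);           // remove() takes it out of the document

console.log(cached.text);   // still readable via the old reference
console.log(dom.length);    // 0 - a fresh query finds nothing
```

This is why re-querying after the click is the reliable check: only a fresh lookup reflects the current document.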
I had to rewrite wait_until for a test that involved waiting for a callback triggered by streaming a YouTube video to a certain point. Here's what I used:
# in spec_helper.rb
require "timeout"

def wait_until(time = 0.1)
  Timeout.timeout(Capybara.default_wait_time) do
    sleep(time) until value = yield
    value
  end
end
Couple of approaches I thought of:
1: Check your Capybara version and look for any known bugs in that version
2: Maybe try doing a find on the link after the button is clicked
click_button("Remove stuff")
find(:xpath, "//a[text()='Link should be removed']").should be_false
3: Use has_link? instead of has_no_link?
click_button("Remove stuff")
page.has_link?("Link should be removed").should be_false
You can always do this manually. Using #shioyama's idea:
def wait_for_ajax
  timer_end = Time.now + 5.seconds
  while page.evaluate_script('$.active') != 0
    if Time.now > timer_end
      fail "Page took more than 5 seconds to load via ajax"
    end
    sleep 0.1
  end
end
