I'm having a problem with Jasmine leaving tests unexecuted. They don't appear in the list of text descriptions of the tests; there are just unhelpful dash icons that signify a test should have run. I'm running the entire test suite and have 12 of my 2000-some tests not running for no apparent reason.
Is there a way to associate the actual name of a test with its icon? I would like to know where these tests are coming from, and there is currently no indication of it.
OKAY! FOUND IT!
Well, I started by isolating the test, and I quickly realized I was setting the test suite's this.id = 123, which was breaking lots of stuff. So if you have to keep track of an id for a test, make sure you namespace it better.
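For illustration, a minimal sketch of the fix; the property name myAppId is made up, the point is simply to avoid colliding with Jasmine's own id bookkeeping:

describe('my suite', function () {
  beforeEach(function () {
    // Don't do this: in some Jasmine setups, `this` refers to the suite/spec,
    // which already carries an internal `id`, so overwriting it breaks reporting.
    // this.id = 123;

    // Do this instead: keep your own bookkeeping under a namespaced property.
    this.myAppId = 123;
  });

  it('still runs and shows up in the reporter', function () {
    expect(this.myAppId).toBe(123);
  });
});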
I'm testing a lot of things, but some of them are not too important (like a caption text failure). I want to add an optional parameter so that if one of those checks is wrong, that's okay and testing continues.
I used to work with Katalon Studio, which has failure handling options (stop, fail, continue). Can I do the same with Cypress for my test cases?
As Mikkel already mentioned, Cypress doesn't really support optional testing. There is a way to approximate it with an if-statement, as explained in this question: In Cypress, is there a way to avoid a failure depending on a daily message?
But doing that for every test you optionally want to run can quickly pile up in your code. So if you don't care whether something succeeds or fails, just don't test it.
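For illustration, a rough sketch of that if-statement approach (the page URL and the .caption selector are made up):

it('checks the caption text only when a caption exists', () => {
  cy.visit('/some-page');
  cy.get('body').then(($body) => {
    // Only assert on the caption if the element is actually there,
    // so a missing or wrong caption doesn't fail the rest of the spec.
    if ($body.find('.caption').length > 0) {
      cy.get('.caption').should('contain', 'Expected text');
    }
  });
});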
Another way to be more resilient is to cut the tests up further. But you have to make sure the scenarios don't rely on each other, otherwise they will still fail.
I'm completely new to TDD and am working my way through this article.
It's all very clear except for a basic thing that probably seems too obvious to mention:
Having run the first test (module exists), what do I do with my code before running the next one? Do I keep it so the next test includes results from the first one? Do I delete the original code? Or do I comment it out and only leave the current test uncommented?
Put another way, does my spec file end up as a long list of tests which are run every time, or should it only contain the current test?
Quoting the same article linked to in the question:
Since I don’t have a failing test though, I won’t write any module code. The rule is: No module code until there’s a failing test. So what do I do? I write another test—which means thinking again.
The spec file will end up as a list of tests which are run every time, so that each additional feature is checked against regressions. If adding a new feature breaks something that was added before, the earlier tests will indicate it by failing.
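As a small illustration (the calculator module and its functions are made up), the spec file just keeps growing:

// calculator.spec.js - every test ever written stays in the file
describe('calculator', function () {
  it('exists as a module', function () {
    expect(calculator).toBeDefined();          // the very first test, kept forever
  });

  it('adds two numbers', function () {
    expect(calculator.add(1, 2)).toBe(3);      // added later for the next feature
  });

  // ...each new feature appends another test; the whole list runs every time,
  // so a change that breaks old behaviour shows up as a failing older test.
});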
Update 2:
After forgetting about this for a week (and being sick), I am still out of my depth here. The only news is that I reran the tests in Safari and Firefox, and now Safari always fails on these tests, and Firefox always times out. I assume I've changed something somewhere, but I have no idea where.
I'm also more and more certain there's a timing issue somewhere. Possibly simply code going async where it shouldn't, but more likely it's something being interrupted.
Update:
I'm less interested in finding the actual bug, and way more interested in why it's intermittent. If I can find out why that is, I can probably find the bug, or at least rewrite the code so it's avoided.
TL;DR:
I'm using Karma (with Webpack and Babel) to run tests in Chrome, and most of them are fine, but for some reason 7 tests get intermittent failures.
Details:
So! To work!
The first six tests MOSTLY succeed when I run them in the debug tab, but MIGHT fail. The failure percentage seems higher when running them normally, though. These six tests are related: they all fail after running a specific method that acts as a safe delete() for some Backbone models. Basically it's meant to check and clear() all models linked to the model being deleted, and to return false if it's not able to do that.
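This is not the asker's code, but a rough sketch of what such a safe delete might look like, just to make the failure mode concrete (model and attribute names are invented):

// Hypothetical Backbone model with a "safe delete"
var Sentence = Backbone.Model.extend({
  safeDelete: function () {
    var elements = this.get('elements') || [];
    // If any linked model has already been removed elsewhere,
    // bail out instead of touching it again.
    var allPresent = elements.every(function (el) {
      return el && el.collection;
    });
    if (!allPresent) {
      return false;
    }
    elements.forEach(function (el) {
      el.clear();
    });
    this.destroy();
    return true;
  }
});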
Had the failures been 100%, I'm sure I would have found the error and weeded it out, but the only thing I know is that it has to do with trying to access or change a model that has already been deleted, which seems like a timing thing...? Something being run asynchronously that shouldn't be, perhaps...? I have no idea how to fix it...
The seventh test is a little easier. It uses Jasmine-jQuery to check whether a DOM element (which starts out empty) gets another div inside it after I change something. It's meant to test whether Bootstrap's alert system is implemented correctly, but it has been simplified heavily in order to figure out why it fails. This test always fails if I run it as a gulp task, but always succeeds if I open the debug tab and rerun the test manually. So my hypothesis is that Chrome doesn't render the DOM correctly the first time, but fixes it when I rerun it in the debug tab...?
TMI:
When I say I open the debug tab and rerun the test manually, I am of course still inside the same 'gulp test' task. I also have a 'gulp testonce' task, but the only difference there is that it has singleRun enabled and the HTML reporter enabled. It shows the exact same pattern, though I can't check the debug page there, since the browser exits after the tests.
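For reference, the difference between the two tasks boils down to something like the following sketch (not the actual config; the reporter setup assumes karma-html-reporter):

// karma.conf.js variant used by 'gulp testonce' (sketch)
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    browsers: ['Chrome'],
    singleRun: true,                   // exit after one run instead of watching
    reporters: ['progress', 'html'],   // 'gulp test' would drop 'html' and set singleRun: false
    htmlReporter: {
      outputDir: 'test-results'
    }
  });
};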
Output from one of the first 6 tests using the html reporter.
Chrome 47.0.2526 (Mac OS X 10.11.2) model library: sentences: no longer has any elements after deleting the sentence and both elements FAILED
Error: No such element
at Controller._delete (/Users/tom/dev/Designer/test/model.spec.js:1344:16 <- webpack:///src/lib/controller.js:107:12)
at Object.<anonymous> (/Users/tom/dev/Designer/test/model.spec.js:143:32 <- webpack:///test/model.spec.js:89:31)
Output from test 7 using the html reporter.
Website tests » Messaging system
Expected ({ 0: HTMLNode, length: 1, context: HTMLNode, selector: '#messagefield' }) not to be empty.
at Object.<anonymous> (/Users/tom/dev/Designer/test/website.spec.js:163:39 <- webpack:///test/website.spec.js:109:37)
Now, the first thing you should know is that I have of course tried other browsers. Safari shows the exact same pattern as Chrome, and Firefox gives me the same errors, but its error messages end up taking 80 MB of disk space in my HTML reporter and SO MUCH TIME to finish, if it even finishes. Most of the time it just disconnects, which ends up being faster.
So I ended up just using Chrome to try to find this specific bug, which has haunted my dreams now for a week.
Source
Tests:
https://dl.dropboxusercontent.com/u/117580/model.spec.js.html
https://dl.dropboxusercontent.com/u/117580/website.spec.js.html
Test output (Since the errors are intermittent, this is really just an example): https://dl.dropboxusercontent.com/u/117580/output.html
OK, all tests now succeed. I THINK this was the answer:
Some tests called controller, and some tests called window.controller. This included some reset() and remove() commands.
After a rewrite I still had failures, so I did another rewrite. As part of that second rewrite, I decided to make all calls through window.*, and after that all tests succeeded.
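A rough illustration of the kind of mismatch described above (the call names are invented):

// Before: the same object referenced two different ways across specs,
// which can end up pointing at different things if a test reassigns one of them.
controller.reset();
window.controller.remove(model);

// After: every call goes through window.*, so all specs hit the same reference.
window.controller.reset();
window.controller.remove(model);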
I have a JavaScript file that uses location.search for some logic. I want to test it using Karma. When I simply set the location in the test (window.location.search = 'param=value'), Karma complains that I'm doing a full page reload. How do I pass a search parameter to my test?
Without seeing some code it's a little tricky to know exactly what you want; however, it sounds like you want some sort of fixture/mock capability added to your tests. If you check out this other answer to a very similar problem, you will see that it tells you to keep the test as a "unit".
Similar post with answer
What this means is that we're not really concerned with testing the window object; we'll assume the Chrome or Firefox manufacturers do that just fine for us. In your test you can set up a mock object and inspect it according to your logic.
When running as live code, the final step of actually handing over the location is dealt with by the browser.
In other words, you are just checking your location-setting logic and no other functionality. I hope this works for you.
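A minimal sketch of that idea (the module and function names are hypothetical): instead of reading window.location.search directly everywhere, read it through a small seam you can control in tests.

// query.js (hypothetical module under test)
function getParam(name, search) {
  // Default to the real location, but allow the search string to be injected.
  search = search !== undefined ? search : window.location.search;
  var match = new RegExp('[?&]' + name + '=([^&]*)').exec(search);
  return match ? decodeURIComponent(match[1]) : null;
}

// query.spec.js (Karma/Jasmine) - no page reload needed
describe('getParam', function () {
  it('reads a parameter from an injected search string', function () {
    expect(getParam('param', '?param=value')).toBe('value');
  });
});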
I am trying to use the accessibility plugin that comes with Protractor. From what I can see, it only checks a11y on the last page I'm on.
Is there a way to have two test scripts executed one after another and produce separate reports, or put everything in one report but kept separate?
Example:
access.js
access1.js
Output file:
resultJsonOutputFile: 'result/result.json'
I tried it this way in conf.js:
specs: ['../test/access.js', '../test/access1.js'],
or
specs: ['../test/access*.js'],
but I still only get results for the last script executed.
I also tried creating suites:
suites: {
homepage: '../test/homepage/access.js',
catalogpage: '../test/catalogpage/access1.js'
},
but when I check the JSON file, if both scripts are executed, the first one shows no issues and errors are only reported for the second script. However, if I run the first script on its own, Protractor does report errors for it.
I also tried putting everything in one js file as different scenarios, but I still have the same issue.
With the current implementation, the accessibility plugin is set to run exactly once per invocation of the Protractor runner, on the last page. So unfortunately, no modification of the suites or test files will make it run more than once.
You can create separate configuration files for each set of test files you'd like to run, or use shardTestFiles to make sure that each file runs in its own process. See the referenceConf for more details on sharding.
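A rough sketch of the sharding approach in a Protractor config (the values are illustrative):

// protractor.conf.js (sketch)
exports.config = {
  specs: ['../test/access*.js'],
  capabilities: {
    browserName: 'chrome',
    shardTestFiles: true,   // run each spec file in its own browser instance/process
    maxInstances: 2         // how many of those run in parallel
  }
};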
Alternatively, you could use aXe to do your accessibility testing. In order to use it with e2e tests in Protractor and WebDriver, do the following:
npm install --save-dev axe-webdriverjs
Then in your e2e test files, you do:
var AxeBuilder = require('path_to_the/axe-webdriverjs');
to get hold of the AxeBuilder, and then wherever you need to run a test:
AxeBuilder(browser.driver)
.analyze(function (results) {
expect(results.violations.length).toBe(0);
});
The above example is using Jasmine but you can extrapolate for any other assertion library.
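For context, a slightly fuller sketch of how that snippet can sit inside a Protractor spec (the describe/it wrapper, the URL, and the bare package require are assumptions; the analyze callback mirrors the style shown above):

var AxeBuilder = require('axe-webdriverjs');

describe('home page accessibility', function () {
  it('has no aXe violations', function (done) {
    browser.get('/');
    AxeBuilder(browser.driver)
      .analyze(function (results) {
        expect(results.violations.length).toBe(0);
        done();
      });
  });
});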
Also: there is a sample project you can clone and run here https://github.com/dylanb/UITestingFramework
Disclaimer: I am associated with the aXe project and therefore not neutral
I ran into that problem too. As another poster states, the plugin isn't really designed to operate that way.
I wrote a derivative of that plugin which does what you're looking for - protractor-axe-report-plugin.
You make a call to runAxeTest (or runAxeTestWithSelector) whenever you have a page open in the browser that you want to test, and it generates reports using the aXe engine.
Continuum can be used for your use case where it seems the accessibility plugin that comes with Protractor cannot. Here's some documentation on a Protractor-based sample project that uses Continuum; it can be downloaded from webaccessibility.com under 'Continuum for Protractor'. If you look at the source code of the sample project, it basically boils down to this:
const continuum = require('../js/Continuum.js').Continuum;
continuum.setUp(driver, "../js/AccessEngine.community.js");
continuum.runAllTests().then(() => {
const accessibilityConcerns = continuum.getAccessibilityConcerns();
// accessibilityConcerns.length will be 0 if no accessibility concerns are found
});
(For more information on the above, you can check out the API documentation.)
You can execute that continuum.runAllTests bit wherever you like in your tests, including multiple times within the same test, which, if I understand correctly, is ultimately what you're after.
Of course, no automated accessibility testing tool is a replacement for manual accessibility testing. It seems like you're just looking to get a baseline level of compliance right now though, so Continuum seems appropriate for your use case to tackle the low-hanging fruit.