Log to terminal when QUnit test suite completes? - javascript

When my test suite completes, I need to output some stats, i.e. meta info about the tests collected during test execution.
I'm trying this:
QUnit.done(() => console.log("some meta info here"))
This works when I run tests in the browser.
But when I run tests in the terminal, the console.log output is not displayed.
There's probably some debug flag, but it will enable all console.log messages and pollute the output greatly.
Instead, I need to output one specific message to the terminal, so that it's logged to CI.
PS console.log messages sent during test execution seem to make it into the terminal successfully.
PPS Using QUnit in an Ember CLI app, against Chrome headless.

This was a tricky one, as I've never had a need to interact with QUnit like this, but here are my findings each step of the way:
Attempt 1:
That's a weird error, I thought I was passing a callback function :-\
Attempt 2:
After looking up the documentation for QUnit.log, I could see I was using it wrong. Switching to console.log shows the beginning message -- but not the ending message.
Attempt 3:
moduleDone will print something at the end -- but it also prints every time you use the word module (after everything inside finishes running). So, I guess as a hack if QUnit.done never ends up working, you could keep track of the number of modules started and modules done, make sure every started module completes, and if that count is 0 at the end, your test suite is done?
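The counting hack above can be sketched like this. QUnit itself isn't loaded here, so a tiny stand-in provides the moduleStart/moduleDone hooks; in a real suite you would register the same two callbacks on QUnit directly.

```javascript
// Stand-in for QUnit's hook registration so the sketch runs on its own
const hooks = { start: [], done: [] };
const FakeQUnit = {
  moduleStart: (cb) => hooks.start.push(cb),
  moduleDone: (cb) => hooks.done.push(cb),
};

let openModules = 0;
const logged = [];

FakeQUnit.moduleStart(() => { openModules += 1; });
FakeQUnit.moduleDone(() => {
  openModules -= 1;
  if (openModules === 0) {
    logged.push('suite done: some meta info here');
  }
});

// Simulate an outer module containing a nested module
hooks.start.forEach((cb) => cb()); // outer starts -> openModules = 1
hooks.start.forEach((cb) => cb()); // inner starts -> openModules = 2
hooks.done.forEach((cb) => cb());  // inner done   -> openModules = 1
hooks.done.forEach((cb) => cb());  // outer done   -> openModules = 0, log fires
```

The message fires exactly once, when the last open module closes, which is the "count reaches 0" condition described above.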
Attempt 4
Turns out, this is only actually helpful if you want to know when the outermost module is done, because it seems like multiple tests don't run in parallel (which is probably better anyway for test stability).
Attempt 5
https://github.com/qunitjs/qunit/issues/1308
It looks like an issue with the testem adapter :(

Related

Get tests running time with Jest

Is there any way to know how long my tests take without doing it programmatically with Jest?
To be clear, I know that if I add a variable to get the current time before each test and then log it when the test completes, I'll get this information; but I want it automatically, maybe with some Jest configuration.
You shouldn't need any configuration to get the running time for your tests
PASS src/containers/Dashboard/Dashboard.test.tsx (12.902s)
That 12.902s in the brackets is the total time from when the test command was run.
If you want to see the running time per test you can run jest with the --verbose flag and it will show you the time for each test as well as the whole suite.
Dashboard Container
✓ render without crashing (1090ms)
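If you'd rather not pass the flag on every run, the same behavior can be switched on permanently in the Jest configuration. A minimal sketch, assuming your project uses a jest.config.js file (it may configure Jest via package.json instead):

```javascript
// jest.config.js — equivalent to always running jest with --verbose
module.exports = {
  verbose: true,
};
```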

Protractor non angular tests do not obey waits and sleeps until there is an error in the code

I am using Protractor and cucumber for automation tests on some non angular pages. I have set browser.ignoreSynchronization to true.
When I run a scenario only the first line which is browser.get(...) is executed correctly. I can see the URL loads fine. All following steps are not executed (as I don't see them run in browser) but I see all green and all passed in the results. None of the waits and sleeps in the code have any effect on execution.
However, if there is an error somewhere in the code, let's say in the last step of a scenario/stepdef I have wrong code like browser.blah.something(); then I can see all sleeps and waits being obeyed.
I don't understand what is going on! Why does this erroneous code cause Protractor to obey timeouts? Why this weird behavior? Any idea? Also, I'm wondering why browser.blah.something() doesn't cause a compile-time error (an error before the tests start)?
Those errors are most likely things like syntax or type errors, things that are parsed prior to execution and not failures in your tests.
There are a lot of reasons why your following lines are not working; we can't say which unless you show us the code.
My guess is that the lines following your first one are promise-based. In fact, wait (http://www.protractortest.org/#/api?view=webdriver.WebDriver.prototype.wait) itself returns a promise.
Promises run asynchronously and not synchronously which is what you might have been expecting.
Here's a short example of what might be happening:
-> App accesses the url
-> App waits for 5 seconds (let's say this is a promise)
-> close the app
You might expect the app to access the url and then wait for 5 seconds then close but what will actually happen is the app will access the url then immediately close.
Why? Because the wait for 5 seconds was scheduled asynchronously and the main flow never waited for it to finish (JavaScript is single-threaded; the callback is deferred via the event loop rather than run on another thread).
To counter this, you can chain them (https://javascript.info/promise-chaining) or use async/await, depending on the es version you are following.
I won't delve into promises since that doesn't seem to be the target question here but in case promises are the reason, here's a great article to get started on it
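Here is a minimal, runnable sketch of the pitfall and the fix, using plain Promises with no Protractor involved. `pause` stands in for any promise-returning step such as browser.sleep() or browser.wait():

```javascript
// `pause` resolves later (on a microtask) and records when it actually ran
const pause = (log, label) => Promise.resolve().then(() => log.push(label));

// Broken: nothing waits for the promise, so 'close' runs first
const broken = [];
broken.push('visit url');
pause(broken, 'wait done');      // scheduled, but never awaited
broken.push('close');            // broken is ['visit url', 'close'] at this point

// Fixed with async/await: each step waits for the previous one to resolve
async function fixed() {
  const ordered = [];
  ordered.push('visit url');
  await pause(ordered, 'wait done');
  ordered.push('close');
  return ordered;                // resolves to ['visit url', 'wait done', 'close']
}
```

Chaining with .then() instead of async/await gives the same guarantee; the key point is that the next step must be placed inside the promise's continuation.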
And to answer why browser.something() is not giving an error: browser is actually ProtractorBrowser.prototype. I won't delve into it since it'll be a long answer, but again, here's a great article.
Try doing the following
console.log(browser)
browser.something = "abc"
console.log(browser)
the second log should show a new property, 'something' with a value of "abc"
It is better now to use
browser.waitForAngularEnabled(false)
instead of
browser.ignoreSynchronization = true
http://www.protractortest.org/#/api?view=ProtractorBrowser.prototype.waitForAngularEnabled
Also, try to put this into beforeEach()
describe('my suite', () => {
  beforeEach(() => {
    browser.waitForAngularEnabled(false)
  })
  it('my test', () => {
    ...
  })
})
I can suggest putting this into the onPrepare section of your config file, or into a beforeEach block, so it is set before all the tests run.
// Your protractor configuration file
let conf = {
  // Some other options ...
  onPrepare: () => {
    browser.waitForAngularEnabled(false)
  }
}

Getting intermittent failures of tests in Chrome

Update 2:
After forgetting about this for a week (and being sick), I am still out of my depth here. The only news is that I reran the tests in Safari and Firefox, and now Safari always fails on these tests, and Firefox always times out. I assume I've changed something somewhere, but I have no idea where.
I'm also more and more certain there's a timing issue somewhere. Possibly simply code going async where it shouldn't, but more likely it's something being interrupted.
Update:
I'm less interested in finding the actual bug, and way more interested in why it's intermittent. If I can find out why that is, I can probably find the bug, or at least rewrite the code so it's avoided.
TL;DR:
I'm using Karma (with Webpack and Babel) to run tests in Chrome, and most of them are fine, but for some reason 7 tests get intermittent failures.
Details:
So! To work!
The first six tests MOSTLY succeed when I run them in the debug tab, and MIGHT fail. The failure percentage seems higher when running them normally, though. These six tests are related, as they all fail after running a specific method which functions as a safe delete() for some Backbone models. Basically it's meant to check and clear() all linked models in the model to be deleted, and return false if it's not able to do that.
And had the failures been 100%, I am sure I would have found the error and worked it out, but the only thing I know is that it has to do with trying to access or change a model that has already been deleted, which seems like a timing thing...? Something being run asynchronously that shouldn't be, perhaps...? I have no idea how to fix it...
The seventh test is a little easier. It's using Jasmine-Jquery to check if a dom element (which starts out empty) gets another div inside after I change something. It's meant to test if Bootstrap's Alert-system is implemented correctly, but has been simplified heavily in order to try to find out why it fails. This test always fails if I run it as a gulp task, but always succeeds if I open the debug tab and rerun the test manually. So my hypothesis is that Chrome doesn't render the DOM correctly the first time, but fixes it if I rerun it in the debug tab...?
TMI:
When I say I open the debug tab and rerun the test manually, I am still inside the same 'gulp test' task, of course. I also use a 'gulp testonce', but the only change there is that it has singleRun enabled and the HTML reporter enabled. It shows the exact same pattern, though I can't check the debug page there, since the browser exits after the tests.
Output from one of the first 6 tests using the html reporter.
Chrome 47.0.2526 (Mac OS X 10.11.2) model library: sentences: no longer has any elements after deleting the sentence and both elements FAILED
Error: No such element
at Controller._delete (/Users/tom/dev/Designer/test/model.spec.js:1344:16 <- webpack:///src/lib/controller.js:107:12)
at Object.<anonymous> (/Users/tom/dev/Designer/test/model.spec.js:143:32 <- webpack:///test/model.spec.js:89:31)
Output from test 7 using the html reporter.
Website tests » Messaging system
Expected ({ 0: HTMLNode, length: 1, context: HTMLNode, selector: '#messagefield' }) not to be empty.
at Object.<anonymous> (/Users/tom/dev/Designer/test/website.spec.js:163:39 <- webpack:///test/website.spec.js:109:37)
Now, the first thing you should know is that I have of course tried other browsers, but Safari has the exact same pattern as Chrome, and Firefox gives me the same errors, but the error messages end up taking 80MB of diskspace in my html reporter and SO MUCH TIME to finish, if it even finishes. Most of the time it just disconnects - which ends up being faster.
So I ended up just using Chrome to try to find this specific bug, which has haunted my dreams now for a week.
Source
Tests:
https://dl.dropboxusercontent.com/u/117580/model.spec.js.html
https://dl.dropboxusercontent.com/u/117580/website.spec.js.html
Test output (Since the errors are intermittent, this is really just an example): https://dl.dropboxusercontent.com/u/117580/output.html
OK, all tests now succeed. I THINK this was the answer:
Some tests called controller, and some tests called window.controller. This included some reset() and remove() commands.
After doing a rewrite, I still had failures, so I did another rewrite. As part of that rewrite, I decided to make all calls through window.*, and after that rewrite all tests succeeded.
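A hypothetical illustration (not the asker's actual code) of how a bare `controller` and `window.controller` can end up pointing at different objects: a stale local binding survives after the global is replaced, so resets on one object never reach the other.

```javascript
// Stand-in for the browser global; names here are illustrative only
const window = { controller: { items: ['a', 'b'], reset() { this.items = []; } } };

let controller = window.controller;   // same object, for now

// e.g. the app re-creates the controller between tests
window.controller = { items: ['c'], reset() { this.items = []; } };

controller.reset();                   // clears the OLD controller object
// window.controller.items is still ['c'] — the reset silently missed it
```

Routing every reference through window.* (as the rewrite did) guarantees all tests see the current object, which would explain why the intermittent failures disappeared.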

Mocha 'before each hook' message in red. How do I know what specifically is wrong?

I have the following message, just before a failing test:
1) "before each" hook
That is the entire message. It is in red, which makes me think there is something wrong with the before each hook, but I'm unsure of what the error is. It could be:
A failed timeout
A failed assertion
An Error being thrown
How do I know what the error is?
This particular beforeEach() normally executes perfectly fine.
I ran into this problem when in the beforeEach I accidentally called done() twice (I called it once at the end of the beforeEach, but also called it again via an async function called in the beforeEach).
When I ran the tests in watch mode I got the error message you described without any additional information; when I ran the tests normally I did not get any errors. I reported this on a related ticket.
How do I know what the error is?
Debug it just like you would any normal code. If you are making assertions inside a beforeEach callback, you are abusing the framework. Assertions belong in the it callbacks, so refactor that.
It's also probably not just forgetting to call done because mocha has a clear error message when that happens.
Thus your code is probably throwing an uncaught exception and you can use your favorite flavor of debugging to track it down. I like running mocha with --debug-brk and debugging with node-inspector, but some console.log statements should also suffice. Note passing only the relevant test file to mocha and using the describe.only or it.only techniques can keep the test suite small and focused while you track down the root cause.
This is happening because the time limit was exceeded. In Mocha, 2000ms is the default timeout for an async process. So to make your test pass, you need to increase the time limit.
It is usually done by using the code:
this.timeout(4000)
Note: you can't use an arrow function here, because `this` is not bound inside arrow functions.
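A runnable sketch (plain Node, no Mocha) of why arrow functions break this.timeout(): Mocha invokes hooks and tests with `this` bound to its context object, and only regular functions pick that binding up. The context object below is a stand-in, not Mocha's real internal type.

```javascript
const mochaContext = {
  limit: 2000,                      // stand-in for Mocha's default timeout
  timeout(ms) { this.limit = ms; }, // same shape as this.timeout() in a test
};

// Regular function: `this` is whatever the caller binds via .call()
function regularHook() { this.timeout(4000); }
regularHook.call(mochaContext);     // mochaContext.limit is now 4000

// An arrow hook like  () => this.timeout(4000)  would capture the enclosing
// lexical `this` instead, so the .call(context) binding is silently ignored.
```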
For each of your tests, when you want to append the end call,
don't use:
.end(done())
use:
.end(done)
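A sketch of why .end(done) works and .end(done()) doesn't: .end() expects a callback reference that it can invoke once the request finishes. `end` below is a simplified stand-in for a library method like supertest's .end(), not its actual implementation.

```javascript
// Stand-in for a request-finishing method that accepts a completion callback
function end(callback) {
  // ... the request would run here, and then:
  if (typeof callback === 'function') callback();
}

let finished = false;
const done = () => { finished = true; };

end(done);      // correct: `done` is invoked by end() when the work completes
// end(done()); // wrong: done() runs immediately, and end() receives undefined
```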

casperjs exit()/die() doesn't return to current directory

I'm fairly new to casperjs (running on phantomjs) - I'm sure I'm probably missing a basic programming element here; looking to see if anyone has some insight. At the end of my script I call casper.exit(), which does exit the script and seemingly steps back into the current directory, however the current directory is not displayed in the command window.
I don't think it's related to the script itself and I can replicate with even the most basic scripts. Below is a screenshot of the outcome:
Where the yellow circle is after the .exit() call, and I would be expecting to see the cd (underlined in red)
I've tried using casper.die() with similar results.
Although it's not a big deal, it might be confusing to someone less familiar with casper/phantom and the script itself. I guess I'm left with a few questions:
Is this expected behavior from how phantomjs/casperjs is built?
If not, is this a 'bad' thing? (affecting memory, stack, etc.)
Is there a way to return to the current directory using casper/phantom or some other method in the script itself?
Bonus question: is there a difference between using casper.die() and casper.exit()? I see that .die() logs a status message, but other than that, is there a preferred method to stop script execution, or is one just an alias of the other, as in PHP?
It is the normal behavior of the casperjs executable on Windows. This likely has something to do with the Python part of the executable, since phantomjs does not have this behavior.
Another indicator is that when casperjs is run through phantomjs like described here, there is no such behavior and I get a normal prompt after exit.
I would say this is a cosmetic quirk that can throw you off when you first encounter it, but it isn't really a problem.
Regarding the bonus question: die can be seen as a fancier exit, since it calls exit itself, but it is a more controlled way to exit casper. There is an optional message that is written to stdout in red, and an additional die event handler. die also sets the execution time of the script.
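The die/exit relationship described above can be sketched as follows. This is a simplified stand-in, not CasperJS source: die logs its message in red, fires the 'die' event, then delegates to exit.

```javascript
// Minimal model of the described behavior; trace records what happens in order
function makeCasper() {
  const trace = [];
  return {
    trace,
    exit(status = 0) { trace.push(`exit(${status})`); },
    die(message, status = 1) {
      if (message) trace.push(`stdout(red): ${message}`);
      trace.push("emit: 'die'");
      this.exit(status);        // die ends by calling exit itself
    },
  };
}

const casper = makeCasper();
casper.die('fatal problem');
// casper.trace: ["stdout(red): fatal problem", "emit: 'die'", "exit(1)"]
```

So functionally either stops the script; die just adds the error message and event hook on the way out.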
