Debugging with mocha, load globals - javascript

My goal is to debug one of my tests. I'm using Mocha as a base, and SinonJS for spies, stubs and mocks. For some unknown reason my stub of the ajax method has stopped working. It worked a week ago, now the requests are sent and the stub does not track the calls.
I have these lines inside the outermost describe
let sandbox = sinon.sandbox.create();
let ajaxStub = undefined;
and then this:
beforeEach(function () {
ajaxStub = sandbox.stub($, 'ajax');
});
afterEach(function () {
sandbox.restore();
});
Anyway, my question is not what's wrong with this, I'm probably doing something extremely stupid elsewhere, and some debugging could probably solve it. My problem is with the debugging itself.
mocha --debug-brk --inspect ./test/mytest.js
This is what I run in command line to get the debugging session going.
My problem is that to run the tests I'm currently using Gulp, which loads all my framework dependencies and all my globals - the libraries added this way include jQuery and sinon.
And of course, if I debug my tests using that command line, NodeJS does not load the required files in the environment, and at the first reference to sinon I get an exception.
I could create an html page in which I load required files and tests and run the test - then debug it manually with the browser inspector - but that's something that I would like to avoid. Is there anything more automated?
I'm not a NodeJS expert, I just roughly understand what it is and how it works, so I'm pretty confident there could be something I missed that can be of help :)
What I'm thinking about right now is a batch script to find the required files, but that's all I have.
Just an additional note: code base is really old and big, and I do not really have permission to refactor existing code into es6 modules.

I found a solution: I'm going to create a testDebugLoader.js script in which I will write which test I want to debug, plus an array of paths to the scripts I need to load.
Then I loop through the array, load each needed file, and call eval on the retrieved text.

Firefox extension. How to access "browser" namespace from console?

I'm trying to access the "browser" environment from the console, e.g. browser.cookies.getAll, but this is not defined anywhere except in the extension environment.
If I make a simple Firefox addon (extension) with one .js file containing a browser API request:
browser.cookies.getAll({}).then(console.log);
and execute it from the extension, I get an array with an interactive preview. If I execute the same command in the console, browser is not defined.
How do I access the "browser" namespace from the console?
This is not possible. browser.* and chrome.* are not available in the developer console because they need an extension's context to run, and the developer console runs commands in the context of the current page.
The following approach requires some knowledge of unit testing and integration testing in JavaScript and node.js; the example provided is over-simplified and by no means production-ready code.
A better approach for testing your extensions and debugging it is to write tests for it.
Choose a testing framework (Jest, Mocha + chai, etc) and set it up according to your needs
Install sinon-chrome package which provides you with stubs for browser.* methods/apis by running npm install --save-dev sinon-chrome
(Optional) Install webextensions-api-fake which provides you with mocks for browser.* methods/apis by running npm install --save-dev webextensions-api-fake
(Optional) Install webextensions-jsdom which helps you to write tests for your browser_action default_popup, sidebar_action default_panel or background page/scripts
Start writing tests by following the example below
In order to debug your extension, set a breakpoint in your IDE/editor of choice and run the tests. Execution will stop at the breakpoint, and you will have access to the state of objects and variables at that point in the run. This helps you see what is executing and what happens to the data you pass around in functions. There is no need to write console.log statements everywhere to check your output or variables; a debugger can do that for you.
(Optional) webextensions-toolbox is another great tool for writing cross-browser extensions (Your extension will work on chrome, firefox, opera, edge) with the same code base. This also comes with hot-reloading of your extension page, so you don't have to hit refresh every time you make any changes.
By following this approach, it will improve your development workflow and will reduce the number of times you have to hit refresh on your browser.
Example usage of sinon-chrome stubs using jest testing framework.
Let's say you have written your code in yourModule.js. To test/verify that it works, in yourModule.test.js you write:
import browser from 'sinon-chrome';
import yourModule from './lib/yourModule';
describe('moduleName', () => {
beforeAll(() => {
// To make sure yourModule uses the stubbed version
global.browser = browser;
});
it('does something', async () => {
await yourModule();
// Lets assume your module creates two tabs
expect(browser.tabs.create.calledTwice).toBe(true);
// If you want to test how those browser methods were called
expect(browser.tabs.create.firstCall.calledWithExactly({
url: 'https://google.com',
})).toBe(true);
// Notice the usage of `.firstCall` here: this checks that the first
// call to `browser.tabs.create` was made with exactly the given args.
});
});
When you run this test using jest, yourModule expects a global variable browser to exist with the APIs it uses, which is only possible in a real browser; but since we faked it using the sinon-chrome package, your module executes in the node.js environment as expected.
You don't need to run it in the browser to see changes. You just write tests, write code to pass those tests, and when all tests pass, check your extension by running it in the browser; at that point your extension will run as you'd expect it to. If you add another feature to yourModule and your tests fail, you know exactly what went wrong.
However, the above example only checks how the browser.* methods/APIs were called. To test the behavior of yourModule you'd need to mock those methods/APIs, which is where the webextensions-api-fake package comes in. You can find an example in its repo on github.
Examples for testing your browser_action default_popup, sidebar_action default_panel or background page/scripts are also provided in the webextensions-jsdom repo on github.

MongoDB map-reduce (via nodejs): How to include complex modules (with dependencies) in scopeObj?

I'm working on a complicated map-reduce process for a mongodb database. I've split some of the more complex code off into modules, which I then make available to my map/reduce/finalize functions by including it in my scopeObj like so:
const scopeObj = {
  userCalculations: require('../lib/userCalculations')
};

function myMapFn() {
  let userScore = userCalculations.overallScoreForUser(this);
  emit({
    'Key': this.userGroup
  }, {
    'UserCount': 1,
    'Score': userScore
  });
}

function myReduceFn(key, objArr) { /*...*/ }

db.collection('userdocs').mapReduce(
  myMapFn,
  myReduceFn,
  {
    scope: scopeObj,
    query: {},
    out: {
      merge: 'userstats'
    }
  },
  function (err, stats) {
    return cb(err, stats);
  }
);
...This all works fine. I had until recently thought it wasn't possible to include module code into a map-reduce scopeObj, but it turns out that was just because the modules I was trying to include all had dependencies on other modules. Completely standalone modules appear to work just fine.
Which brings me (finally) to my question. How can I -- or, for that matter, should I -- incorporate more complex modules, including things I've pulled from npm, into my map-reduce code? One thought I had was using Browserify or something similar to pull all my dependencies into a single file, then include it somehow... but I'm not sure what the right way to do that would be. And I'm also not sure of the extent to which I'm risking severely bloating my map-reduce code, which (for obvious reasons) has got to be efficient.
Does anyone have experience doing something like this? How did it work out, if at all? Am I going down a bad path here?
UPDATE: A clarification of what the issue is I'm trying to overcome:
In the above code, require('../lib/userCalculations') is executed by Node -- it reads in the file ../lib/userCalculations.js and assigns the contents of that file's module.exports object to scopeObj.userCalculations. But let's say there's a call to require(...) somewhere within userCalculations.js. That call isn't actually executed yet. So, when I try to call userCalculations.overallScoreForUser() within the Map function, MongoDB attempts to execute the require function. And require isn't defined on mongo.
Browserify, for example, deals with this by compiling all the code from all the required modules into a single javascript file with no require calls, so it can be run in the browser. But that doesn't exactly work here, because I need the resulting code to itself be a module that I can use like I use userCalculations in the code sample. Maybe there's a weird way to run browserify that I'm not aware of? Or some other tool that just "flattens" a whole hierarchy of modules into a single module?
Hopefully that clarifies a bit.
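As an aside on the browserify point above: browserify's --standalone flag does produce a single UMD file with no runtime require() calls, which is the closest thing to the "flattening" described. Whether MongoDB's scope serializer accepts the resulting module is untested here; treat this as a direction, not a verified fix.

```shell
# Hypothetical invocation: bundle userCalculations.js and everything it
# requires into one self-contained UMD module with no runtime require().
browserify ../lib/userCalculations.js --standalone userCalculations \
  -o ../lib/userCalculations.bundle.js
```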
As a generic response, the answer to your question (How can I -- or, for that matter, should I -- incorporate more complex modules, including things I've pulled from npm, into my map-reduce code?) is no, you cannot safely include complex modules in node code you plan to send to MongoDB for mapReduce jobs.
You mentioned the problem yourself - nested require statements. require itself is synchronous, but when the calls are nested inside functions, they are not executed until call time, and the MongoDB VM will throw at that point.
Consider the following example of three files: data.json, dep.js and main.js.
// data.json - just something we require "lazily"
true
// dep.js -- equivalent of your userCalculations
module.exports = {
  isValueTrue() {
    // The problem: nested require
    return require('./data.json');
  }
};
// main.js - from here you send your mapReduce to MongoDB.
// require dependency instantly
const calc = require('./dep.js');
// require is synchronous; the effect is the same if you do:
// const calc = (function () {return require('./dep.js')})();
console.log('Calc is loaded.');
// Let's mess with unwary devs
require('fs').writeFileSync('./data.json', 'false');
// Is calc.isValueTrue() true or false here?
console.log(calc.isValueTrue());
As a general solution, this is not feasible. While the vast majority of modules will likely not have nested require statements, HTTP calls, internal service calls, global variables and the like, there are those that do. You cannot guarantee that this would work.
Now, as your local implementation: if, for example, you require exactly specific versions of npm modules that you have tested well with this technique and know will work, or that you published yourself, it is somewhat feasible.
However, even in this case, if this is a team effort, there's bound to be a developer down the line who will not know where or how your dependency is used, who will use globals (not on purpose, but by omission, e.g. they mishandle this), or who simply won't know the implications of what they are doing. If you have a strong integration testing suite, you could guard against this, but the thing is, it's unpredictable. Personally I think that when you can choose between unpredictable and predictable, you should almost always choose predictable.
Now, if you have an explicitly stated purpose for a certain library to be used in MongoDB mapReduce, this could work. You would have to guard well against omissions and problems, and have strong testing infra, but I would make certain the purpose is explicit before feeling safe enough to do this. And of course, if you're using something so complex that you need several npm packages for it, maybe you can have those functions directly on the MongoDB server, or do your map-reducing in something better suited for the purpose.
To conclude: for a purposefully built library with an explicit mission statement that it is to be used with node and MongoDB mapReduce, I would make sure my tests cover all mission-critical and important functionality, and then import such an npm package. Otherwise I would not use nor recommend this approach.

Protractor Accessibility reporting

I am trying to use the Accessibility plugin that comes with Protractor. From what I see, it only checks the a11y of the last page I end up on.
Is there a way to have two test scripts executed one after another and produce separate reports, or put everything in one report but kept separate?
Example:
access.js
access1.js
Output file:
resultJsonOutputFile: 'result/result.json'
I tried it this way in conf.js:
specs: ['../test/access.js', '../test/access1.js'],
or
specs: ['../test/access*.js'],
but I still get results only for the last script executed.
I tried also creating suites:
suites: {
homepage: '../test/homepage/access.js',
catalogpage: '../test/catalogpage/access1.js'
},
but when I check the JSON file, if the 2 scripts are executed, the 1st one shows no issues and errors are reported only for the 2nd script. However, if I run the 1st script alone, Protractor does report its errors.
I also tried putting everything in one js file as different scenarios, but I hit the same issue.
With the current implementation, the accessibility plugin is set to run exactly once per invocation of the Protractor runner, on the last page. So unfortunately, no modification of the suites or test files will make it run more than once.
You can create separate configuration files for each set of test files you'd like to run, or use shardTestFiles to make sure that each file is run in its own process. See the referenceConf for more details on sharding.
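A minimal conf.js sketch of that sharding suggestion, using the spec pattern from the question (these are standard Protractor options, but treat the values as placeholders):

```javascript
// conf.js - shardTestFiles runs every matching spec file in its own
// browser process, so a once-per-run plugin fires once per file
// instead of once for the whole run.
exports.config = {
  specs: ['../test/access*.js'],
  capabilities: {
    browserName: 'chrome',
    shardTestFiles: true, // one process per spec file
    maxInstances: 2,      // how many processes run in parallel
  },
};
```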
Alternatively, you could use aXe to do your accessibility testing. In order to use it with e2e tests in protractor and Webdriver, do the following:
npm install --save-dev axe-webdriverjs
Then in your e2e test files, you do:
var AxeBuilder = require('path_to_the/axe-webdriverjs');
to get hold of the AxeBuilder and then wherever you need to run a test, you:
AxeBuilder(browser.driver)
.analyze(function (results) {
expect(results.violations.length).toBe(0);
});
The above example is using Jasmine but you can extrapolate for any other assertion library.
Also: there is a sample project you can clone and run here https://github.com/dylanb/UITestingFramework
Disclaimer: I am associated with the aXe project and therefore not neutral
I ran into that problem too - as another poster says, the plugin isn't really designed to operate that way.
I wrote a derivative of that plugin which does what you're looking for - protractor-axe-report-plugin.
You make a call to runAxeTest (or runAxeTestWithSelector) whenever you have a page open in the browser that you want to test, and it generates reports using the aXe engine.
Continuum can be used for your use case where it seems the accessibility plugin that comes with Protractor cannot. Here's some documentation on a Protractor-based sample project that uses Continuum. It can be downloaded from webaccessibility.com under 'Continuum for Protractor'. If you look at the source code of the sample project, it basically just boils down to this:
const continuum = require('../js/Continuum.js').Continuum;
continuum.setUp(driver, "../js/AccessEngine.community.js");
continuum.runAllTests().then(() => {
const accessibilityConcerns = continuum.getAccessibilityConcerns();
// accessibilityConcerns.length will be 0 if no accessibility concerns are found
});
(For more information on the above, you can check out the API documentation.)
You can execute that continuum.runAllTests bit wherever in your tests that you like. That includes multiple times from within the same test too, if desired, which if I understand correctly is ultimately what you're after.
Of course, no automated accessibility testing tool is a replacement for manual accessibility testing. It seems like you're just looking to get a baseline level of compliance right now though, so Continuum seems appropriate for your use case to tackle the low-hanging fruit.

How to develop a javascript library from an already existing npm module (codius)

I've never done this before.
I'm using https://github.com/codius/codius-host. Codius development has been abandoned, but I want to salvage part of it for my own project. I really need to be able to run codius commands from the browser, so I need to develop a library, or whatever you'd call it.
var codius = require('codius');
codius.upload({host: 'http://contract.host'});
codius-host comes packed with command-line integration,
$ CODIUS_HOST=https://codius.host codius upload
How do I make a .js script do what the command-line command does?
Also posted on https://stackoverflow.com/questions/31126511/if-i-have-a-npm-tool-that-uses-comman-line-commands-how-can-i-create-a-javascri
I'm having a hard time asking this question since I don't know where to start. Help.
Assuming that you have access to the codius-host source code, find the piece of code which handles the command-line arguments. Most likely it handles the command and its arguments in an entry module/function and then delegates the real work to another module/function. What you need to do is call that delegated function directly from your script, passing the same parameters it would receive from the command-line handler.
In addition, there are some nodejs libraries that can imitate a command-line call from within a program. One of those I know is shelljs:
https://www.npmjs.com/package/shelljs
You might want to check this out as well. With it, you might be able to imitate the command-line behaviour without touching the source code.

How to access mocha options from within a test file?

I am running mocha tests using gruntjs and grunt-simple-mocha:
https://github.com/yaymukund/grunt-simple-mocha
How can I access the options defined in my grunt.js file within each mocha test?
What I would like to accomplish, is to have some common configuration in my gruntfile, and use that in my tests.
One way I found already is using global values, which is not very good, but works.
inside grunt.js config
global.hell = 'hey you';
inside test
console.log(global.hell);
I'm inspecting one more way now; maybe it will be better.
--EDIT
No, it seems this is the one I will settle on, unless I want to end up with some black magic like in mocha-as-promised, which I don't have the skills to write.
--EDIT
Also you can take a look at this - https://github.com/visionmedia/mocha/wiki/Shared-Behaviours
You can share some objects between tests, but I'm not sure it will help with grunt.
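The global-variable approach above can also be done through process.env. Since grunt-simple-mocha runs mocha inside the grunt process, anything set there is visible to the tests, and environment variables additionally survive moving the tests to a child process later. A small sketch (APP_CONFIG is a made-up key, not from either project):

```javascript
// In grunt.js, before the mocha task runs: serialize the shared config.
process.env.APP_CONFIG = JSON.stringify({ baseUrl: 'http://localhost:3000' });

// In a test file: deserialize it back into an object.
const config = JSON.parse(process.env.APP_CONFIG);
console.log(config.baseUrl);
```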
As far as I'm aware there is no way to push objects into your mocha suite. The only other interpretation I can think of for your question is that you would like to load a common set of configs among your test files. Other than loading a common config file at the very top of your test files so that it's available to your test methods, I don't believe you can.
