I'm trying to access the "browser" environment from the console, e.g. browser.cookies.getAll, but it is not defined anywhere except the extension environment.
If I make a simple Firefox add-on (extension) with one .js file containing this browser API call:
browser.cookies.getAll({}).then(console.log);
and execute it from the extension, I get an array with an interactive preview. If I execute the same command in the console, browser is not defined.
How can I access the "browser" namespace from the console?
This is not possible. browser.* and chrome.* are not available in the developer console because they need an extension's context to run, and the developer console runs commands in the context of the current page.
The following approach requires some knowledge of unit testing and integration testing in JavaScript and Node.js. The example provided is over-simplified and is by no means production-ready code.
A better approach to testing and debugging your extension is to write tests for it.
Choose a testing framework (Jest, Mocha + Chai, etc.) and set it up according to your needs
Install the sinon-chrome package, which provides stubs for the browser.* methods/APIs, by running npm install --save-dev sinon-chrome
(Optional) Install the webextensions-api-fake package, which provides mocks for the browser.* methods/APIs, by running npm install --save-dev webextensions-api-fake
(Optional) Install webextensions-jsdom, which helps you write tests for your browser_action default_popup, sidebar_action default_panel or background page/scripts
Start writing tests by following the example below
To debug your extension, set a breakpoint in your IDE/editor of choice and run the tests (see the debug command sketch after this list). Execution will stop at the breakpoint and you will have access to the state of objects and variables at that point in the run. This helps you see what exactly is executing and what happens to the data you pass around between functions; there is no need to write console.log statements everywhere to check your output or variables, a debugger can do that for you.
(Optional) webextensions-toolbox is another great tool for writing cross-browser extensions (Your extension will work on chrome, firefox, opera, edge) with the same code base. This also comes with hot-reloading of your extension page, so you don't have to hit refresh every time you make any changes.
Following this approach will improve your development workflow and reduce the number of times you have to hit refresh in the browser.
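For example, if you picked Jest in step 1, one common way to attach a debugger is to start Jest through Node's inspector and connect your IDE or chrome://inspect to it (the test file name here is just a placeholder):
node --inspect-brk node_modules/.bin/jest --runInBand yourModule.test.js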
Example usage of sinon-chrome stubs with the Jest testing framework.
Let's say you have written your code in yourModule.js. To test/verify that it works, in yourModule.test.js you write:
import browser from 'sinon-chrome';
import yourModule from './lib/yourModule';

describe('moduleName', () => {
  beforeAll(() => {
    // To make sure yourModule uses the stubbed version
    global.browser = browser;
  });

  it('does something', async () => {
    await yourModule();

    // Let's assume your module creates two tabs
    expect(browser.tabs.create.calledTwice).toBe(true);

    // If you want to test how those browser methods were called
    expect(browser.tabs.create.firstCall.calledWithExactly({
      url: 'https://google.com',
    })).toBe(true);
    // Notice the usage of `.firstCall` here; it checks that the first call to
    // `browser.tabs.create` was made with exactly the given args.
  });
});
When you run this test using Jest, yourModule expects a global variable browser with the APIs it uses, which normally exists only in a real browser; since we faked it with the sinon-chrome package, your module executes in the Node.js environment as expected.
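For reference, a hypothetical yourModule.js consistent with the assertions above might look like this (just a sketch; the URLs and the tab-creation logic are assumptions made for the example):
// lib/yourModule.js - hypothetical module used by the test above.
// It relies on the global `browser` object, which the test replaces with sinon-chrome.
export default async function yourModule() {
  await browser.tabs.create({ url: 'https://google.com' });
  await browser.tabs.create({ url: 'https://example.com' });
}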
You don't need to run it in the browser to see changes. You just write tests, write code to pass those tests, and when all tests pass, check your extension by running it in the browser; at that point your extension will run as you'd expect it to. If you add another feature to yourModule and your tests fail, you know exactly what went wrong.
However, the above example only verifies how the browser.* methods/APIs were called; to test the behavior of yourModule you'd need to mock those methods/APIs, which is where the webextensions-api-fake package comes in. You can find examples in its repo on GitHub.
Examples for testing your browser_action default_popup, sidebar_action default_panel or background page/scripts are also provided in the webextensions-jsdom repo on GitHub.
I have built a web app and I am trying to build out an automated unit testing suite so that I can more easily refactor the code.
I have javascript code that functions correctly on my website. Simple hypothetical code:
function timesTwo(x){x*=2;return x}
If I were writing my code for NodeJS I would add the tests to confirm that the code is working correctly. Below is an example using Wish and Mocha:
describe('timesTwo()', function() {
  it('multiplies by two', function() {
    var result = timesTwo(5);
    wish(result === 10);
  });
});
This code (including tests) works fine if I run it in Node.js, but now it throws errors in my browser:
require is not defined
describe is not defined
wish is not defined
How can I create an automated test suite for my code in a way that doesn't throw errors in the browser?
If I'm not mistaken you're also loading the test code into your browser.
The thing is that require isn't something that's available in your browser. This means that functions like describe and wish are simply not available, since the require didn't include any code.
What you should do is keep the tests separate from your application code and only load the application code into your website. How you do this depends on your build system, template engine, etc.
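A minimal sketch of that separation (file names are just examples): the application file exports the function for Node.js without breaking the browser, and the test file is only ever run by Mocha, never loaded by the page.
// timesTwo.js - application code, the only file loaded by the website
function timesTwo(x) {
  return x * 2;
}

// Export for Node.js/Mocha; in the browser `module` is undefined, so this is skipped
if (typeof module !== 'undefined' && module.exports) {
  module.exports = timesTwo;
}

// test/timesTwo.test.js - only run by Mocha on the command line (e.g. `mocha test/`)
var wish = require('wish');
var timesTwo = require('../timesTwo');

describe('timesTwo()', function() {
  it('multiplies by two', function() {
    wish(timesTwo(5) === 10);
  });
});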
I noticed that the error stack shown by console.error is able to render URIs as hyperlinks. Such formatting takes place both in the browser DevTools and in the IDE I use (WebStorm with an integrated Git Bash terminal).
Example screenshot (this comes from the Jest testing framework, but such hyperlinks also appear for ordinary console.error calls).
As Node.js uses Chrome's V8 JS engine under the hood, I have tried to debug the internal implementation of console.error, but I hit a wall at this line of code: it uses a stream to output the error message, which beforehand is just a string without any special formatting (except new lines) and afterwards becomes formatted output in the console (including hyperlinks). I don't know what happens below that.
The Console.prototype.error method seems to be created by assigning the Console.prototype.warn method to it, which is confusing. Also, the DevTools console's inspect function won't navigate to the implementation of console methods while debugging Node.js, even though Node's code is more accessible than the browser's; I mean it's possible to debug Node's internals to some level, which is not the case in the browser.
My question: is it possible to output hyperlinked content (my intention is to hyperlink the path to a module) to the console in a controllable way?
The question arose because I am writing a Node.js wrapper around the console methods I use in a project, and since I had noticed the hyperlinks to files in call stacks, I wanted to use that feature inside my wrapper. I know there are libraries that wrap the console and offer a ton of features, but I want a tiny wrapper with just one or two features. If somebody knows a library that offers such controllable hyperlinking in console output, I'd appreciate a link, so I could look up its implementation.
IDE
As others have already mentioned in comments: links are a feature of the IDE/text editor. You can provide a file path, but it may not be clickable, depending on your IDE/text editor. E.g., console.error(`${__filename}:${line}:${column}`)
Browser (Chromium Based)
Yes, this is possible to some extent. Just prefix the file path with file:///.
// index.js
const line = 2;
const column = 9;
console.error('ERROR: ', `file:///${__filename}:${line}:${column}`.replace(/\\/g, '/'));
console.log('done'); // set a breakpoint here inside of devtools
Run node --inspect-brk index.js
Open chrome://inspect
Select Open dedicated DevTools for Node (this will make sources visible in Devtools > Sources > Node) or inspect under the node process.
Allow the code to run to your breakpoint. Check the console and click the link
NOTES
The .replace(/\\/g, '/') call makes sure the path is a valid file:// URL (at least on Windows)
This is one of the rare places that file:/// links work in the browser.
You could also generate sourcemaps and link to the sourceMappingURL
https://developer.mozilla.org/en-US/docs/Tools/Debugger/How_to/Use_a_source_map
My goal is to debug one of my tests. I'm using Mocha as a base, and SinonJS for spies, stubs and mocks. For some unknown reason my stub of the ajax method has stopped working: it worked a week ago, but now the requests are actually sent and the stub does not track the calls.
I have these lines inside the outermost describe
let sandbox = sinon.sandbox.create();
let ajaxStub = undefined;
and then this:
beforeEach(function () {
  ajaxStub = sandbox.stub($, 'ajax');
});

afterEach(function () {
  sandbox.restore();
});
Anyway, my question is not what's wrong with this, I'm probably doing something extremely stupid elsewhere, and some debugging could probably solve it. My problem is with the debugging itself.
mocha --debug-brk --inspect ./test/mytest.js
This is what I run in command line to get the debugging session going.
My problem is that to run the tests I'm currently using Gulp, which loads all my framework dependencies and all my globals; the libraries added this way also include jQuery and Sinon.
And of course, if I debug my tests using that command line, Node.js does not load the required files into the environment, and at the first reference to sinon I get an exception.
I could create an html page in which I load required files and tests and run the test - then debug it manually with the browser inspector - but that's something that I would like to avoid. Is there anything more automated?
I'm not a NodeJS expert, I just roughly understand what it is and how it works, so I'm pretty confident there could be something I missed that can be of help :)
What I'm thinking about right now is a batch script to find the required files, but that's all I have.
Just an additional note: code base is really old and big, and I do not really have permission to refactor existing code into es6 modules.
I found a solution: I'm going to create a testDebugLoader.js script in which I will write which test I want to debug, and an array of paths to the scripts I need to load.
Then I loop through the array, load each needed file, and call eval on the retrieved text.
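A rough sketch of what such a loader might look like (the paths and file names are placeholders; indirect eval is used so each script's globals stay visible to the scripts loaded after it):
// testDebugLoader.js - sketch of the loader described above (paths are placeholders)
var fs = require('fs');

var scriptsToLoad = [
  './vendor/jquery.js',
  './vendor/sinon.js',
  './src/moduleUnderTest.js',
  './test/mytest.js' // the test to debug goes last
];

scriptsToLoad.forEach(function (path) {
  var source = fs.readFileSync(path, 'utf8');
  // indirect eval runs the code in the global scope,
  // so globals like $ and sinon are visible to later scripts
  (0, eval)(source);
});
You can then point the debugger at this file instead of the test itself, with something like mocha --debug-brk --inspect testDebugLoader.js.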
I am trying to use the accessibility plugin that comes with Protractor. From what I see, it only checks a11y for the last page I end up on.
Is there a way to have 2 test scripts executed one after another and get different reports, or put everything in one report but separated?
Example:
access.js
access1.js
Output file:
resultJsonOutputFile: 'result/result.json'
I tried it this way in conf.js:
specs: ['../test/access.js', '../test/access1.js'],
or
specs: ['../test/access*.js'],
but I still get results only for the last script executed
I tried also creating suites:
suites: {
  homepage: '../test/homepage/access.js',
  catalogpage: '../test/catalogpage/access1.js'
},
but when I check the JSON file, if the 2 scripts are executed, the 1st one is reported OK with no issues and errors are provided only for the 2nd script. However, if I run the 1st script alone, Protractor does report errors for it.
I also tried putting the checks into one js file as different scenarios, but I still have the same issue.
With the current implementation, the accessibility plugin is set to run exactly once per invocation of the Protractor runner, on the last page. So unfortunately, no modification of the suites or test files will make it run more than once.
You can create separate configuration files for each set of test files you'd like to run, or use shardTestFiles to make sure that each file is run in its own process (see the sketch below). See the referenceConf for more details on sharding.
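A minimal conf.js sketch using shardTestFiles (the spec paths are taken from the question; maxInstances is up to you):
// conf.js - sketch: run each spec file in its own browser process
exports.config = {
  specs: ['../test/access.js', '../test/access1.js'],
  capabilities: {
    shardTestFiles: true, // each spec file gets its own process
    maxInstances: 1       // raise this to run the shards in parallel
  }
};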
Alternatively, you could use aXe to do your accessibility testing. In order to use it with e2e tests in protractor and Webdriver, do the following:
npm install --save-dev axe-webdriverjs
Then in your e2e test files, you do:
var AxeBuilder = require('path_to_the/axe-webdriverjs');
to get hold of the AxeBuilder and then wherever you need to run a test, you:
AxeBuilder(browser.driver)
  .analyze(function (results) {
    expect(results.violations.length).toBe(0);
  });
The above example is using Jasmine but you can extrapolate for any other assertion library.
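For the multi-page case from the question, a hedged sketch might look like this (the page URLs and spec layout are just assumptions; it requires the package by name and follows the callback style of axe-webdriverjs shown above):
var AxeBuilder = require('axe-webdriverjs');

describe('accessibility', function () {
  it('has no violations on the homepage', function (done) {
    browser.get('/');
    AxeBuilder(browser.driver).analyze(function (results) {
      expect(results.violations.length).toBe(0);
      done();
    });
  });

  it('has no violations on the catalog page', function (done) {
    browser.get('/catalog');
    AxeBuilder(browser.driver).analyze(function (results) {
      expect(results.violations.length).toBe(0);
      done();
    });
  });
});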
Also: there is a sample project you can clone and run here https://github.com/dylanb/UITestingFramework
Disclaimer: I am associated with the aXe project and therefore not neutral
I ran into that problem too - as another poster says, the plugin isn't really designed to operate that way.
I wrote a derivative of that plugin which does what you're looking for - protractor-axe-report-plugin.
You make a call to runAxeTest (or runAxeTestWithSelector) whenever you have a page open in the browser that you want to test, and it generates reports using the aXe engine.
Continuum can be used for your use case where it seems the accessibility plugin that comes with Protractor cannot. Here's some documentation on a Protractor-based sample project that uses Continuum. It can be downloaded from webaccessibility.com under 'Continuum for Protractor'. If you look at the source code of the sample project, it basically just boils down to this:
const continuum = require('../js/Continuum.js').Continuum;

continuum.setUp(driver, "../js/AccessEngine.community.js");

continuum.runAllTests().then(() => {
  const accessibilityConcerns = continuum.getAccessibilityConcerns();
  // accessibilityConcerns.length will be 0 if no accessibility concerns are found
});
(For more information on the above, you can check out the API documentation.)
You can execute that continuum.runAllTests bit wherever in your tests that you like. That includes multiple times from within the same test too, if desired, which if I understand correctly is ultimately what you're after.
Of course, no automated accessibility testing tool is a replacement for manual accessibility testing. It seems like you're just looking to get a baseline level of compliance right now though, so Continuum seems appropriate for your use case to tackle the low-hanging fruit.
I have set up Jasmine tests for my app. In my app I have JavaScript that shouldn't work in IE8:
var foo = Object.create(Array.prototype);
When I run mvn jasmine:bdd and I open the test page in Internet Explorer (with browserMode set to Internet Explorer 8), it fails as expected:
TypeError: Object doesn't support property or method 'create'
However, when I run:
mvn jasmine:test -DbrowserVersion=INTERNET_EXPLORER_8
All of my tests are successful. The logs specify that the browserVersion is set to INTERNET_EXPLORER_8.
I expected both to give me the same result. This is causing an issue with our CI testing, since it has let through JS errors that I wanted to catch.
Should this be working as I'm expecting and if not, what should I change?
Also, this is the best way I know how to test multiple browsers. Is there a better way that I'm missing?
EDIT
A coworker has tried to dash my hopes, suggesting that browserVersion wouldn't even catch such an error and that it is only meant to change the header so that the tests can also cover browser-specific JavaScript (blocks of code that only run when the browser is a specific version). Is this accurate?
It seems very accurate: the Jasmine Maven plugin's browserVersion property does not change the internals of how the JS is run, meaning that the quirks of each browser will not be tested. I have successfully transitioned to using js-test-driver for my tests, which can run Jasmine tests in actual browsers. This looks to be the right way to do this, although if you're starting down this path now, I'd look at Karma instead (previously named Testacular, and built as an improvement on js-test-driver).
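For reference, a minimal karma.conf.js sketch for running Jasmine specs in real browsers (the file patterns and browser list are just examples; it needs karma, karma-jasmine and the matching launcher plugins installed):
// karma.conf.js - minimal sketch; adjust file patterns and browsers to your project
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: ['src/**/*.js', 'test/**/*Spec.js'],
    browsers: ['Chrome', 'IE'], // real browsers, so engine quirks actually surface
    singleRun: true
  });
};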