Please correct my understanding of the below:
I've installed protractor-flake.
From the website we have two sets of code.
My assumption
I'm pretty sure the B part needs to go into the configuration.js file of my Protractor project, but where exactly should the A part be written? Should I write it as a separate file and then require it in the spec file I'm running? I need exact steps to achieve the above.
A: the usage section, which starts with
**var protractorFlake = require('protractor-flake')
// OR using es6 modules/typescript
import protractorFlake = require('protractor-flake')**
and ends with **process.exit(status)**
B: the parsers section, which starts with **module.exports = {** and runs till **return [...failedSpecs]**
As per the documentation,
Add dependency
npm i protractor-flake
# or globally for easier cli usage
npm i -g protractor-flake
Running tests
Option 1: Via the CLI:
# protractor-flake <protractor-flake-options> -- <options to be passed to protractor>
protractor-flake --parser standard --max-attempts=3 -- path/to/protractor.conf.js
This assumes that your conf.js file is in the root directory.
Available command line options.
color?: string | boolean
choose a color from here or set it to false to disable coloring
Usage : protractor-flake --parser standard --color=magenta --max-attempts=3 -- conf.js
protractorArgs?: string[]
protractorPath?: string: the protractor location, e.g. 'node_modules/.bin/protractor'
Usage : protractor-flake --parser standard --protractorPath=node_modules/.bin/protractor --max-attempts=3 -- conf.js
parser?: string: the name of one of the included parsers
Usage : protractor-flake --parser standard --color=magenta --max-attempts=3 -- conf.js
You can refer to the other options from here.
Option 2: Programmatically
Create a file named flake in your root directory and copy the snippet below.
flake is a node script that uses protractor-flake to re-run failed tests. Note
that it reruns tests at the file level, so if one test fails, it will rerun all
the tests in that file.
Thanks to Brian Ray for this repository.
#!/usr/bin/env node
/**
 *
 * usage:
 * `./flake conf.js [other protractor args]`
 */
const protractorFlake = require('protractor-flake');

// skip first two passed args (node and self)
let protractorArgs = process.argv.splice(2);
console.log(protractorArgs);

protractorFlake({
    protractorPath: 'node_modules/.bin/protractor',
    maxAttempts: 3,
    parser: 'standard',
    nodeBin: 'node',
    protractorArgs: protractorArgs
}, (status, output) => {
    process.exit(status);
});
After creating this file, to avoid permission errors just run chmod +x ./flake
To run your test cases
./flake conf.js
If you are keeping specs in a test suite, just pass the suite name after conf.js.
./flake conf.js --suite smoke_test
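Optionally (this is not from the documentation, just a common convenience; the script name here is an assumption), you can wire the script into your package.json scripts so the retry run is available through npm:

"scripts": {
    "test:flake": "node ./flake conf.js"
}

Then npm run test:flake runs the same command.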
Before running, check these Caveats.
Related
I am not too good with front-end technologies... so if I have wrong expectations, please correct me or my code. I have created a repository with code that allows reproducing the issue. Here is the link:
https://github.com/ffatheranderson/webpack-issue-reproduction
As described in the readme.md of the project:
========================================
What do I expect? After I execute the npm run watch command, I expect the generated
result/bundle.js file to contain lines such as:
...
var _environment = 'development';
var _ANOTHER_VARIABLE = "another variable value";
...
What is the actual result? After I execute the npm run watch command, the generated
result/bundle.js file contains lines such as:
...
var _environment = undefined;
var _ANOTHER_VARIABLE = "another variable value";
...
Why do I have such expectations? Because of these lines:
...
plugins: [
    new webpack.DefinePlugin({
        ENVIRONMENT: JSON.stringify(process.env.NODE_ENV),
        ANOTHER_VARIABLE: JSON.stringify("another variable value"),
    })
]
...
in webpack.config.js file.
As you can see, the variable _environment is not initialized with the development value as promised
here: https://webpack.js.org/configuration/mode/
========================================
_environment is undefined because the environment variable NODE_ENV is undefined. You can solve this in one of three ways:
Invoking npm run watch --node-env=development: https://webpack.js.org/api/cli/#node-env
Exporting NODE_ENV in your current shell session:
$ export NODE_ENV=production; npm run watch
Updating your configuration to specify the value from some other source (e.g. an --env argument, a file on disk, hard-coding it, etc.), as sketched below.
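For the third option, here is a minimal sketch of the relevant part of webpack.config.js; the 'development' fallback value is an assumption for illustration, not something DefinePlugin provides on its own:

const webpack = require('webpack');

module.exports = {
    // ...
    plugins: [
        new webpack.DefinePlugin({
            // fall back to 'development' when NODE_ENV is not set in the environment
            ENVIRONMENT: JSON.stringify(process.env.NODE_ENV || 'development'),
            ANOTHER_VARIABLE: JSON.stringify("another variable value"),
        })
    ]
};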
I wrote Mocha tests in my previous project. The nice thing about it is the Istanbul code coverage tool. It is very useful and cool.
Now I am using pytest for my current project. Some services are Node.js apps. Now my question is: is there a way I can get code coverage for a Node.js app when I am using pytest?
Combining the coverage reports of the istanbul and coverage tools has two major problems to solve:
Naturally, coverage does not know anything about javascript files, so we have to register them somehow in a coverage run - this will be done by implementing a custom plugin.
We have to convert istanbul's coverage report to a format coverage understands.
Setup
Because the code snippet would be too large to put in the answer directly, I have prepared a git repository you can reproduce the test run with (of course you can reuse the code however you want):
$ git clone https://github.com/hoefling/stackoverflow-52124836
$ cd stackoverflow-52124836/
$ yarn install
Generate istanbul coverage report first:
$ yarn test
yarn run v1.9.4
warning package.json: No license field
$ istanbul cover _mocha js
  Array
    #length
      ✓ should be 0 when the array is empty
      ✓ should be 1 when the array has one element
      ✓ should be 2 when the array has two elements

  Array
    #indexOf()
      ✓ should return -1 when the value is not present

  4 passing (5ms)
=============================================================================
Writing coverage object [/private/tmp/stackoverflow-52124836/coverage/coverage.json]
Writing coverage reports at [/private/tmp/stackoverflow-52124836/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements : 100% ( 14/14 )
Branches : 100% ( 0/0 )
Functions : 100% ( 8/8 )
Lines : 100% ( 14/14 )
================================================================================
Now run python tests with pytest:
$ python -m pytest -sv --cov=py --cov=js --cov-report=term-missing
=================================== test session starts ===================================
platform darwin -- Python 3.6.4, pytest-3.7.3, py-1.5.4, pluggy-0.7.1 --
/Users/hoefling/.virtualenvs/stackoverflow/bin/python
cachedir: .pytest_cache
rootdir: /private/tmp/stackoverflow-52124836, inifile:
plugins: cov-2.5.1
collected 1 item
py/test_spam.py::test_spam PASSED
---------- coverage: platform darwin, python 3.6.4-final-0 -----------
Name                      Stmts   Miss  Cover   Missing
-------------------------------------------------------
js/array.length.spec.js      14      0   100%
js/array.spec.js              8      0   100%
py/test_spam.py               2      0   100%
-------------------------------------------------------
TOTAL                        24      0   100%
================================ 1 passed in 0.56 seconds =================================
js/py dirs
These are just some example test files to play with. To simplify the setup, istanbul collects the coverage over the test code.
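For reference, the js specs are plain Mocha tests; a minimal sketch of what one of them might look like (the actual files in the repository may differ) is:

// js/array.spec.js
const assert = require('assert');

describe('Array', function () {
    describe('#indexOf()', function () {
        it('should return -1 when the value is not present', function () {
            assert.strictEqual([1, 2, 3].indexOf(4), -1);
        });
    });
});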
mycov
Contains the plugin for coverage. For in-depth info on how to write plugins for coverage, refer to Plug-in classes; here I just explain the relevant spots:
class IstanbulPlugin(coverage.plugin.CoveragePlugin, coverage.plugin.FileTracer):

    def file_reporter(self, filename):
        return FileReporter(filename)

    def file_tracer(self, filename):
        return None

    def find_executable_files(self, src_dir):
        yield from (str(p) for p in pathlib.Path(src_dir).rglob('*.js')
                    if not any(d in p.parts for d in ('node_modules', 'coverage',)))
The plugin class does nothing besides searching for javascript files and registering them as executed ones in the coverage run (the find_executable_files method). It does not record code coverage at all! It also registers a simple file reporter implementation for javascript files:
class FileReporter(coverage.plugin.FileReporter):

    def source(self):
        with open(self.filename) as fp:
            js = fp.read()
        return js

    def lines(self):
        return {i + 1 for i, line in enumerate(self.source().split(os.linesep))
                if line.strip()}
The reporter returns the source code of the javascript files as-is; executable lines are all code lines that are not empty.
Note 1:
This impl is not sufficient! For example, line and block comment lines will be counted as executable. You will need to adapt the lines method; the best approach is to use some javascript code parser that extracts the info about the executable lines.
Note 2:
Both istanbul and coverage count lines from 1 and not from 0, hence the i + 1 shift in line numbers.
Now you need to register the plugin via coverage_init:
# mycov/__init__.py

def coverage_init(reg, options):
    reg.add_file_tracer(IstanbulPlugin())
and add the custom plugin in .coveragerc:
[run]
plugins = mycov
Append istanbul report to coverage
Now that coverage knows the javascript files are to be considered, we will merge the js coverage with the python one. The fixture append_istanbul_coverage in conftest.py is responsible for that.
# conftest.py
import io
import json

import coverage.data
import pytest


@pytest.fixture(autouse=True)
def append_istanbul_coverage(cov):
    yield
    with open('coverage/coverage.json') as fp:
        data = json.load(fp)
    # line_numbers() converts an istanbul file entry to its covered line numbers (defined in the repository)
    converted = {'lines': {item['path']: line_numbers(item) for item in data.values()}}
    text = "!coverage.py: This is a private format, don't read it directly!" + json.dumps(converted)
    istanbul_cov = coverage.data.CoverageData()
    with io.StringIO(text) as fp:
        istanbul_cov.read_fileobj(fp)
    cov.data.update(istanbul_cov)
The fixture will be automatically executed once per test run; the code after yield runs when all tests are done. The cov fixture is provided by pytest-cov; we use it to access the current coverage data object. First, we read the istanbul coverage; then we convert it to a string that coverage can understand - this is just plain json with a special message prepended. After that, all that is left is to update the current coverage data and we're done!
I'm using Browserify to bundle up my JS before pushing to my Bitbucket repo, and then using Codeship to test the build and push to Heroku.
I'm using Node/Express to serve my app, and in my index.jade I have a <script /> pointing to /dist/index.js.
A couple of times, I've mistakenly pushed my latest code with broken Browserify output, i.e. the contents of /dist/index.js will be:
console.error('cannot find module XYZ')
And I've deployed this to my live app. UH OH.
I've put in a very rudimentary test which gets run on Codeship, which I'm hoping should avoid this in the future:
var exit = function() {
    process.exit(1)
}

var success = function() {
    process.exit(0)
}

var fs = require('fs')

var index
try {
    index = fs.readFileSync(__dirname + '/../public/dist/index.js', 'utf-8')
} catch (e) {
    exit()
}

if (!index) {
    exit()
}

var invalid = index.length < 1000
if (invalid) {
    return exit()
}

success()
I'm just checking that the file exists and that the contents of the file are over 1000 characters.
Not sure if there's a specific answer to this, but what would be a reasonable approach to making sure broken Browserify output never gets committed/deployed?
I haven't used Codeship before, but I have used other similar services. You haven't described how you push - I'm going to assume you're using git.
With git, this becomes easy: write a pre-push hook that will abort the push if something fails. Here's an example from a project I'm working on:
#!/bin/bash

# the protected branches
#
protected_branches='develop master'

# Check if we actually have commits to push
#
commits=`git log @{u}..`
if [ -z "$commits" ]; then
    exit 0
fi

current_branch=$(git symbolic-ref HEAD | sed -e 's,.*/\(.*\),\1,')

# is the current branch in the list of protected branches? if so, then run the
# tests
#
if grep -q "$current_branch" <<< "$protected_branches"; then
    # move into the dir containing the tests
    #
    pushd $(git rev-parse --show-toplevel)/contract >/dev/null

    gulp test
    RESULT=$?

    # back to whatever dir we were in before
    #
    popd >/dev/null

    if [ $RESULT -ne 0 ]; then
        echo "-------- Failed Tests"
        exit 1
    fi
fi

exit 0
This is a modified version of a script I found in this blog post.
Basically, this script checks to see if I'm pushing one of the protected branches and, if so, runs my tests. If those tests fail, then the push is aborted.
You could, of course, change the conditions under which the push is aborted. For example, write some code to check whether your browserify bundle is correct and fail if it's not. You mention checking the length of your bundle - maybe something like length=$(ls -l | cut -c 30-34) and then check the value of length (sorry, I'm not a real bash guru); a Node-based sketch follows below.
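A minimal sketch of such a check in Node (the bundle path and the 1000-character threshold come from your question; everything else is an assumption), which the hook could run instead of, or in addition to, gulp test:

// check-bundle.js - exits non-zero when the Browserify output looks broken
var fs = require('fs')

var bundlePath = __dirname + '/public/dist/index.js' // adjust to your layout

var bundle
try {
    bundle = fs.readFileSync(bundlePath, 'utf-8')
} catch (e) {
    console.error('bundle is missing: ' + e.message)
    process.exit(1)
}

// Browserify's broken output is tiny and contains the "cannot find module" stub
if (bundle.length < 1000 || /cannot find module/i.test(bundle)) {
    console.error('bundle looks broken')
    process.exit(1)
}

console.log('bundle looks fine')

In the hook you would then run node check-bundle.js in place of (or alongside) gulp test and check its exit code the same way.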
The benefit of this approach is that the messed up code never leaves your local machine - you run the test locally and if it fails, the code doesn't get pushed. This is likely to be faster than running in on Codeship's service.
Following the lead of this question, I tried (naively) to do this:
protractor test/features/protractor-conf.js --params.test_set=dev_test
and
protractor-conf.js:
exports.config = {
    // ...
    specs: [browser.params.test_set + '/*.feature'],
... but of course it doesn't work, because browser is not defined at the time the conf file is parsed.
So how could I achieve this effect: passing a parameter to protractor that determines the specs?
Use the --specs command-line argument:
--specs Comma-separated list of files to test
protractor test/features/protractor-conf.js --specs=dev_test/*.feature
Note that dev_test/*.feature would be passed into the protractor's command-line-interface which would resolve the paths based on the current working directory (source code).
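If you would still rather drive the spec selection with a single named parameter, one workaround (a sketch, not part of Protractor's documented params mechanism; the TEST_SET variable name is an assumption) is to read an environment variable in the config file, since process.env is available when the file is parsed, unlike browser.params:

// protractor-conf.js
exports.config = {
    // ...
    // default to dev_test when TEST_SET is not set
    specs: [(process.env.TEST_SET || 'dev_test') + '/*.feature']
};

Then run it as TEST_SET=dev_test protractor test/features/protractor-conf.js.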
Is there a way to execute some code (in a file or from a string, doesn't really matter) before dropping into interactive mode in node.js?
For example, if I create a script __preamble__.js which contains:
console.log("preamble executed! poor guy!");
and a user types node __preamble__.js, they get this output:
preamble executed! poor guy!
> [interactive mode]
Really old question but...
I was looking for something similar, I believe, and found this.
You can open the REPL (by typing node in your terminal) and then load a file.
Like this: .load ./script.js.
Press enter and the file content will be executed. Now everything created (objects, variables, functions) in your script will be available.
For example:
// script.js
var y = {
    name: 'obj',
    status: true
};

var x = setInterval(function () {
    console.log('As time goes by...');
}, 5000);
On the REPL:
//REPL
.load ./script.js
Now you can type in the REPL and interact with the "living code".
You can console.log(y) or clearInterval(x).
It will be a bit odd, because "As time goes by..." keeps showing up every five seconds (or so).
But it will work!
You can start a new repl in your Node software pretty easily:
var repl = require("repl");
var r = repl.start("node> ");
r.context.pause = pauseHTTP;
r.context.resume = resumeHTTP;
From within the REPL you can then call pause() or resume() and execute the functions pauseHTTP() and resumeHTTP() directly. Just assign whatever you want to expose to the REPL's context member.
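pauseHTTP() and resumeHTTP() stand in for whatever functions you want to expose; a minimal self-contained sketch (the function bodies are placeholders) would be:

// repl-server.js
var repl = require("repl");

// placeholder implementations - swap in your real pause/resume logic
function pauseHTTP() { console.log('HTTP paused'); }
function resumeHTTP() { console.log('HTTP resumed'); }

var r = repl.start("node> ");
r.context.pause = pauseHTTP;
r.context.resume = resumeHTTP;

Running node repl-server.js then lets you type pause() or resume() at the node> prompt.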
This can be achieved with the current version of NodeJS (5.9.1):
$ node -i -e "console.log('A message')"
The -e flag evaluates the string and the -i flag begins the interactive mode.
You can read more in the referenced pull request
node -r allows you to require a module when REPL starts up. NODE_PATH sets the module search path. So you can run something like this on your command line:
NODE_PATH=. node -r myscript.js
This should put you in a REPL with your script loaded.
I've recently started a project to create an advanced interactive shell for Node and associated languages like CoffeeScript. One of the features is loading a file or string in the context of the interpreter at startup which takes into account the loaded language.
http://danielgtaylor.github.com/nesh/
Examples:
# Load a string (Javascript)
nesh -e 'var hello = function (name) { return "Hello, " + name; };'
# Load a string (CoffeeScript)
nesh -c -e 'hello = (name) -> "Hello, #{name}"'
# Load a file (Javascript)
nesh -e hello.js
# Load a file (CoffeeScript)
nesh -c -e hello.coffee
Then in the interpreter you can access the hello function.
Edit: Ignore this. @jaywalking101's answer is much better. Do that instead.
If you're running from inside a Bash shell (Linux, OS X, Cygwin), then
cat __preamble__.js - | node -i
will work. This also spews lots of noise from evaluating each line of __preamble__.js, but afterwards you land in an interactive shell in the context you want.
(The '-' to 'cat' just specifies "use standard input".)
Similar answer to @slacktracer, but if you are fine using global in your script, you can simply require it instead of (learning and) using .load.
Example lib.js:
global.x = 123;
Example node session:
$ node
> require('./lib')
{}
> x
123
As a nice side-effect, you don't even have to do the var x = require('x'); 0 dance, as module.exports remains an empty object and thus the require result will not fill up your screen with the module's content.
Vorpal.js was built to do just this. It provides an API for building an interactive CLI in the context of your application.
It includes plugins, and one of these is Vorpal-REPL. This lets you type repl and this will drop you into a REPL within the context of your application.
Example to implement:
var vorpal = require('vorpal')();
var repl = require('vorpal-repl');
vorpal.use(repl).show();
// Now you do your custom code...
// If you want to automatically jump
// into REPl mode, just do this:
vorpal.exec('repl');
That's all!
Disclaimer: I wrote Vorpal.
There isn't a way to do this natively. You can either enter the node interactive shell with node, or run a script you have with node myScript.js. @sarnold is right: if you want that for your app, you will need to build it yourself, and the repl toolkit is helpful for that kind of thing.
nit-tool lets you load a node module into the interactive REPL and have access to the inner module environment (joined context) for development purposes.
npm install nit-tool -g
First I tried
$ node --interactive foo.js
but it just runs foo.js, with no REPL.
If you're using export and import in your js, run npm init -y, then tell node that you're using modules with the "type": "module" line -
{
    "name": "neomem",
    "version": "1.0.0",
    "description": "",
    "type": "module",
    "main": "home.js",
    "keywords": [],
    "author": "",
    "license": "ISC"
}
Then you can run node and import a file with dynamic import -
$ node
Welcome to Node.js v18.1.0.
Type ".help" for more information.
> home = await import('./home.js')
[Module: null prototype] {
  get: [AsyncFunction: get],
  start: [AsyncFunction: start]
}
> home.get('hello')
Kind of a roundabout way of doing it - having a command line switch would be nice...