Outputting Logs in the Reporter, Cypress - javascript

I am writing a test suite in Cypress for my web application. One thing I want to be able to do is display certain data about the state of the application appended to each test, so I know the particulars of that user's configuration: for example, the URL of the test, whether the user has certain permissions, etc. I am using mochawesome for the reporting but am not particularly attached to it. Is there some way to add data that comes from cy.get, cy.find... commands to the reporting?

I figured it out. Add this to support/commands.js
import addContext from 'mochawesome/addContext';

Cypress.Commands.add('addContext', (context) => {
  // Attach the given context to the test's report entry once the test finishes
  cy.once('test:after:run', (test) => addContext({ test }, context));
});
Then call the command with cy.addContext("whatever") and the value will be added to that test's entry in the mochawesome report.
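For the original question's use case (values coming out of cy.get and friends), the custom command can be called inside a .then. A small sketch; the '.permissions-badge' selector is a made-up stand-in for whatever element holds the data:

it('records config details in the report', () => {
  cy.visit('/dashboard');
  // Record the page URL in the report
  cy.url().then((url) => cy.addContext('URL: ' + url));
  // Record some on-page state; '.permissions-badge' is a hypothetical selector
  cy.get('.permissions-badge')
    .invoke('text')
    .then((text) => cy.addContext('Permissions: ' + text));
});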

Related

Resetting the database using Cypress (postgresql)

When I populate the database, I don't do it all at once (for certain tests I choose to populate only when I get to them). At the end, I would like to reset the database, so that the next populate (on the next test run) starts from a clean state and no complications appear. How can I reset this database after running all tests (all specs)?
What I tried was using the "after" hook with the "npx prisma migrate reset --force" command, but that resets the database after each spec file, not after all the specs have run.
One option would be to make a single file that imports all the test files and resets the database at the end. The downside of this approach is that every time I add a new test I have to include it in the imports in that file.
What would be the best practice?
If you have something that works in the after() hook, try moving it to the After Run API, which is a kind of hook but, as the name says, executes after the whole run.
Here's the basic shell from the docs:
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  // setupNodeEvents can be defined in either
  // the e2e or component configuration
  e2e: {
    setupNodeEvents(on, config) {
      on('after:run', (results) => {
        /* ... */
      })
    }
  }
})
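For the reset in the question, the callback can shell out to the same command. A minimal sketch, assuming the Prisma command from the question and Node's built-in child_process:

const { defineConfig } = require('cypress')
const { execSync } = require('child_process')

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      on('after:run', (results) => {
        // Reset the database once the whole run has finished
        execSync('npx prisma migrate reset --force', { stdio: 'inherit' })
      })
    }
  }
})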
One thing you should do after implementing this is deliberately fail a test and check that the database is still cleared. Some after()-type hooks don't run in a fail scenario.
If you find that to be the case, instead add a before() at the top of the cypress/support/e2e.js file. Hooks defined there apply to every spec, so the database gets cleared at the start of each spec file instead.
Also, importing your setup functions there can save you importing them into each spec.
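A minimal sketch of that support file, reusing the reset command from the question via cy.exec:

// cypress/support/e2e.js
before(() => {
  // Runs at the start of every spec file; shells out to reset the database
  cy.exec('npx prisma migrate reset --force')
})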

Conditional E2E tests with Cypress for a payment processor

We are using GoPay.com for payments in our app. It runs in two modes: a full redirect to their site, or an iframe-based solution that shows the payment form inline. The conditions for displaying one or the other are not exactly clear and might vary per browser and who knows what else.
I need to interact with the payment form in tests to go through with it, but I am struggling with how to do that. There is a sandbox environment, so it's OK to make test (free) payments.
Basically, I tried the following, but Cypress does not wait for that page to load and fails right away.
cy.window().then(win => {
  if (win.location.host.includes('gopay.com')) {
    return win.document.querySelector('.main-body')
  } else {
    return // find form in iframe somehow
  }
})
Also, I am unsure how to tackle finding that form in the iframe.
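For the iframe case, a commonly used Cypress pattern is to wait for the iframe's document to load and then wrap its body so normal commands work inside it. A sketch only; the iframe selector and form class here are assumptions, not GoPay's actual markup:

// 'iframe.gopay' and '.payment-form' are hypothetical selectors
cy.get('iframe.gopay')
  .its('0.contentDocument.body')
  .should('not.be.empty') // retries until the iframe document has loaded
  .then(cy.wrap)          // wrap the body so further cy commands work on it
  .find('.payment-form')
  .should('be.visible')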

jest testing pass variable to another test

I'm trying to assign a variable from one test to be accessed within another test. For example:
let user;

test('create new user', async () => {
  const response = await createUser()
  user = response
})

test('delete user', async () => {
  const response = await deleteUser(user.id)
})
I understand that Jest has a --runInBand option; however, user is still undefined in "delete user". Any ideas how to accomplish this with Jest? Thank you
Each test runs independently, and for good reason. Tests should be confirming isolated conditions, methods, and logic. All --runInBand does is run the tests serially; they still won't necessarily be able to share data objects the way you seem to be expecting.
Also, assuming these methods defer to a backend service of some kind, you're not going to easily be able to fully test the behavior of that system. It sounds like you want an end-to-end or integration testing framework, as opposed to a unit testing framework like Jest.
Keeping with Jest, you're likely going to need to mock whatever backend service is being called in createUser and deleteUser. Jest mocks let you replace external functions with new ones that create the conditions you want to test.
Alternatively, or in addition, you might stub your user object using beforeAll or beforeEach, creating sample data that lets you test how deleteUser behaves when it's passed a particular object (bypassing the backend persistence with the aforementioned mock).
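A minimal sketch of both ideas, assuming createUser and deleteUser live in a hypothetical ./api module:

// ./api is a hypothetical module exposing createUser/deleteUser
jest.mock('./api');
const { deleteUser } = require('./api');

let user;

beforeEach(() => {
  // Stub a sample user instead of relying on state left behind by another test
  user = { id: 42, name: 'Test User' };
  deleteUser.mockResolvedValue({ ok: true });
});

test('delete user calls the backend with the right id', async () => {
  await deleteUser(user.id);
  expect(deleteUser).toHaveBeenCalledWith(42);
});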

Hide/Disable firebase functions for client

I am new to Firebase and wondering how to stop specific functions from being callable by the client, e.g. typed manually into the browser's console.
Example:
function createRoomDB(roomID, name, mode, start, length, aname, opcount, secrettoken) {
  firebase.database().ref('rooms/' + roomID).set({
    name: name,
    mode: mode,
    start: start,
    length: length,
    aname: aname,
    opcount: opcount,
    secrettoken: secrettoken
  });
}
(The names have nothing to do with my question.)
Long story short: I don't want users to be able to simply run this command to create new data. I know that you can't hide code on the front end, but what are the easiest and most efficient ways to close this hell of a backdoor?
I am planning to host this application on GitHub pages.
Since your code can access the database, there is no way to prevent other code running in the same environment from also accessing the database.
This means you have two options:
Make sure all code (no matter who wrote it) can only perform authorized operations on the database.
Run the code in a different environment.
For the first option, you'll want to look into Firebase security rules, which are automatically enforced server-side and can express most requirements.
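For the rooms branch in the question, the rules might look something like this. A sketch only; the exact conditions depend on your data model:

{
  "rules": {
    "rooms": {
      "$roomID": {
        ".read": true,
        ".write": "auth != null"
      }
    }
  }
}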
For the second option, you could for example run the code in Cloud Functions for Firebase and call that from your app. This allows you to keep any secret values and code in a trusted environment, but does mean that you'll need to ensure only authorized users can call that Cloud Function.
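A minimal sketch of that as a callable function, using the firebase-functions v1 API (field validation kept to a stub):

// functions/index.js
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.createRoom = functions.https.onCall((data, context) => {
  // Reject unauthenticated callers
  if (!context.auth) {
    throw new functions.https.HttpsError('unauthenticated', 'Sign in required.');
  }
  // The trusted environment decides what gets written;
  // secrettoken never has to exist in client code at all
  return admin.database().ref('rooms/' + data.roomID).set({
    name: data.name,
    mode: data.mode
  });
});

The client would then call it with firebase.functions().httpsCallable('createRoom')({ roomID, name, mode }).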

How to log JS errors from a client into Kibana?

I have a web application with a Node.js back end and logstash/elasticsearch/kibana handling system logs (access_error.log, messages.log, etc.).
Right now I need to record all JavaScript client-side errors into Kibana as well. What is the best way to do this?
EDIT: I have to add additional information to this question. As @Jackie Xu provided a partial solution to my problem, and as follows from my comment:
I'm most interested in the server-side error handling. I don't think it's efficient to write each error into a file. I'm looking for best practices to make it more performant.
I need to handle JS error records on the server side more efficiently than just writing them to a file. Can you suggest some scenarios for how I could improve server-side logging performance?
When you say client, I'm assuming here that you mean a logging client and not a web client.
First, make it a habit to log your errors in a common format. Logstash likes consistency, so if you're putting text and JSON in the same output log, you will run into issues. Hint: log in JSON. It's awesome and incredibly flexible.
The overall process will go like this:
Error occurs in your app
Log the error to file, socket, or over a network
Tell logstash how to get (input) that error (i.e. from file, listen over network, etc)
Tell logstash to send (output) the error to Elasticsearch (which can be running on the same machine)
In your app, try using the bunyan logger for node. https://github.com/trentm/node-bunyan
Node app (index.js):
var bunyan = require('bunyan');
var log = bunyan.createLogger({
  name: 'myapp',
  streams: [{
    level: 'info',
    stream: process.stdout // log INFO and above to stdout
  }, {
    level: 'error',
    path: '/var/log/myapp-error.log' // log ERROR and above to a file
  }]
});

// Log stuff like this
log.info({status: 'started'}, 'foo bar message');

// Also, in express you can catch all errors like this
app.use(function(err, req, res, next) {
  log.error(err);
  res.status(500).send('An error occurred');
});
Then you need to configure logstash to read those JSON log files and send them to Elasticsearch/Kibana. Make a file called myapp.conf and try the following:
Logstash config (myapp.conf):
# Input can read from many places, but here we're just reading the app error log
input {
  file {
    type => "my-app"
    path => [ "/var/log/myapp/*.log" ]
    codec => "json"
  }
}

# Output can go many places; here we send to elasticsearch (pick one option below)
output {
  elasticsearch {
    # Use this if elasticsearch is running somewhere else
    host => "your.elasticsearch.hostname"
    # Use this instead if elasticsearch is running on the same machine
    # host => "localhost"
    # Or this if you want to run an embedded elasticsearch inside logstash
    # embedded => true
  }
}
Then start/restart logstash as such: bin/logstash agent -f myapp.conf web
Go to elasticsearch on http://your-elasticsearch-host:9292 to see the logs coming in.
If I understand correctly, the problem you have is not about sending your logs back to the server (or if it was, @Jackie-xu provided some hints), but rather about how to send them to Elasticsearch most efficiently.
Actually the vast majority of users of the classic Logstash/Elasticsearch/Kibana stack are used to having an application that logs into a file, then using Logstash's file input plugin to parse that file and send the result to Elasticsearch. Since @methai gave a good explanation of it, I won't go any further this way.
But what I would like to bring up is this:
You are not forced to use Logstash.
Logstash's main role is to collect the logs, parse them to identify their structure and recurrent fields, and finally output them in a JSON format so that they can be sent to Elasticsearch. But since you are already manipulating JavaScript on the client side, you could easily talk directly to the Elasticsearch server.
For example, once you have caught a JavaScript exception, you could do the following:
var xhr = new XMLHttpRequest();
xhr.open("PUT", "http://your-elasticsearch-host:9292", true);
var data = {
  lineNumber: lineNumber,
  message: message,
  url: url
}
xhr.send(JSON.stringify(data));
By doing this, you are talking directly from the client to the Elasticsearch server. I can't imagine a simpler or faster way to do it (but note that this is just theory, I never tried it myself, so reality could be more complex, especially if you want special fields like timestamps to be generated ;)). In a production context you will probably have security concerns, and probably a proxy server between the client and the ES server, but the principle is there.
If you absolutely want to use Logstash, you are not forced to use a file input.
If, for the purpose of harmonizing, doing the same as everyone else, or using advanced Logstash parsing configuration, you want to stick to Logstash, you should take a look at all the alternative inputs to the basic file input. For example, I used to use a pipe myself, with a process in charge of collecting the logs and writing them to standard output. There is also the possibility to read on an open TCP socket, and a lot more; you can even add your own.
You would have to catch all client-side errors first (and send them to your server):
window.onerror = function (message, url, lineNumber) {
  // Send error to server for storage
  yourAjaxImplementation('http://domain.com/error-logger/', {
    lineNumber: lineNumber,
    message: message,
    url: url
  })
  // Allow default error handling, set to true to disable
  return false
}
Afterwards you can use Node.js to write these error messages to a log. Logstash can collect them, and then you can use Kibana to visualise them.
Note that according to Mozilla, window.onerror doesn't appear to work for every error. You might want to switch to something like Sentry (if you don't want to pay, you can get the source directly from GitHub).
Logging errors through the default built-in file logging allows your errors to be preserved, and it also lets the kernel optimize the writes for you.
If you really think that it is not fast enough (do you get that many errors?), you could just put them into Redis.
Logstash has a Redis pub/sub input, so you can store the errors in Redis and Logstash will pull them out and store them, in your case, in Elasticsearch.
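A sketch of both sides; ioredis on the Node side and the "errors" key are illustrative choices, and the Redis input is shown in its list mode (which queues entries if Logstash is briefly down) rather than strict pub/sub:
Node side:
const Redis = require('ioredis');
const redis = new Redis(); // connects to localhost:6379 by default

function logError(err) {
  // Push each error onto a Redis list as JSON
  redis.lpush('errors', JSON.stringify({
    message: err.message,
    stack: err.stack,
    time: new Date().toISOString()
  }));
}
Logstash side:
input {
  redis {
    host => "localhost"
    data_type => "list"    # consume a list rather than a pub/sub channel
    key => "errors"
    codec => "json"
  }
}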
I'm presuming Logstash/ES are on another server; otherwise there really is no point in doing this, since ES has to store the data on disk as well and is not nearly as efficient as writing a log file.
Whatever solution you go with, you'll want to persist the data, e.g. by writing it to disk. Appending to a single (log) file is highly efficient, and when preserving data the only way to handle more is to shard it across multiple disks/nodes.
