I found this example of a smoke test with ReactJS and Jest, but I can't understand it. Can someone explain it to me, please?
Well, first, I can't see the difference between 'smoke tests' and 'unit tests'. I see people say smoke tests are superficial and obvious, but isn't every unit test? I mean, isn't every one of them made to check that things work the way they should? When is a test not obvious, so that it should be understood not as a "smoke test" but as a "unit test"? Second, I'm starting with unit tests and I can't understand Jest's mechanism. In this case, it creates a div through document.createElement('div') and then compares it with my project's div, MyCard?
Thanks in advance.
// MyCard.js
import React from "react";

const MyCard = () => {
  const [counter, setCounter] = React.useState(0);

  const handleClick = () => {
    // setCounter(counter++) was a bug: it passes the old value and
    // tries to mutate a constant. Pass the new value instead.
    setCounter(counter + 1);
  };

  return (
    <div>
      <p>Counter: {counter}</p>
      <button onClick={handleClick}>Increment</button>
    </div>
  );
};

export default MyCard;
//MyCard.test.js
import React from "react";
import ReactDOM from "react-dom";
import MyCard from "./MyCard";

it("renders without crashing", () => {
  const div = document.createElement("div");
  ReactDOM.render(<MyCard />, div);
});
I tried the example, it worked. But I can't understand why.
Why wouldn't it work? It's testing just that the component does something; quoting Wikipedia on smoke tests:
For example, a smoke test may address basic questions like "does the program run?", "does the user interface open?", or "does clicking the main button do anything?"
In this case, the test addresses the question "does it render at all".
So based on your comment there are two parts to this question: what makes a smoke test different from a unit test, and what is this smoke test doing, and why?
A unit test has a very limited scope. A proper unit test will cover a single unit of a larger system. Compare that to, for example, an integration test or an end-to-end test which have a larger scope. Good, thorough unit tests will also cover all relevant aspects of the module they are testing, which can mean testing every line and every conditional. Writing and running a full suite of unit tests can verify a lot about the behaviour of your code, but it's possible to have a broken program in which every unit test passes, due to some of the integrations being broken, for example.
A smoke test, instead, is a low-effort sanity check to make sure that something works. It doesn't guarantee that it works well, or that there are no bugs, or that the behaviour or appearance are correct, just that that aspect of the system hasn't been completely broken. Smoke tests are less powerful than a full suite of tests but they tend to be faster to write, easier to update if the code is refactored or changed, and much better than no automated testing at all.
It can be useful to look at what would cause a smoke test to fail, in order to understand why it's useful. In this case the test creates an empty element, then renders our custom component into it. The test passes as long as we can do this without ReactDOM.render() throwing an error. So when would it fail? Simply put, it fails whenever the component can't render at all. Say we accidentally delete a closing tag in the JSX: the file no longer compiles and the test fails. Or say the render body throws at runtime: ReactDOM.render() throws, the smoke test fails, and we know the component needs looking at.
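For example, here is a hypothetical breakage (MyBrokenCard and its user variable are made up for illustration) that this kind of smoke test would catch:

// A hypothetical runtime breakage the smoke test would catch
const MyBrokenCard = () => {
  const user = undefined; // imagine a refactor left this undefined
  // Reading user.name throws during render, so ReactDOM.render()
  // throws and "renders without crashing" fails.
  return <p>Hello, {user.name}</p>;
};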
Is it as thorough as a proper set of unit tests where we check that the counter works as expected? No. At the same time, though, this is a very fast test to write, and even if we have other unit tests it can be good for debugging to know whether this test passes or fails. If we add another unit test that checks whether the counter is working, and that test fails but this one passes, we know the counter is broken. However, if this test fails too, we know that the counter isn't the problem; it's the whole component.
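For comparison, a fuller unit test for the counter might look like this minimal sketch, using act and Simulate from react-dom/test-utils (assuming the MyCard example above):

import React from "react";
import ReactDOM from "react-dom";
import { act, Simulate } from "react-dom/test-utils";
import MyCard from "./MyCard";

it("increments the counter on click", () => {
  const div = document.createElement("div");
  act(() => {
    ReactDOM.render(<MyCard />, div);
  });
  // Simulate a user click on the Increment button
  act(() => {
    Simulate.click(div.querySelector("button"));
  });
  expect(div.querySelector("p").textContent).toBe("Counter: 1");
});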
I'm currently using Jest to unit test my React components and TypeScript functions. So far, it's been a big success at reducing regressions, and I'm happy with the VS Code Jest plugin that lets me step through the unit test code to see variable values and the steps before a failure.
However, the plugin/Jest (not sure which) doesn't let me step through the imported functions/components I'm trying to test; the debugging step-through only works in the unit test file.
For example:
// file.tsx (file with functionality I want to test)
export function addOne(x: number) {
  return x + 1; // can't step through this
}

// file.test.tsx (the test file)
import { addOne } from './file'; // named import, since addOne is not a default export

// ... test setup ...

it('addOne adds to 3 to get 5', () => { // obviously a failing test
  const start = 3; // can only step through this file
  const result = addOne(start); // step into doesn't work here, will skip over to the expect
  expect(result).toBe(5);
});
This is obviously a contrived example, but I can only step through the code in file.test.tsx; I can't step into the addOne function and see what is happening inside it. I can only see what it returns. To get around this, I had to place console.logs for all the variables in the imported functionality files (file.tsx), and it has been a pain.
Is there a way to configure Jest (or another unit testing library for react/typescript) to be able to step through these imported functions?
Thanks
Update:
I should clarify: when I say I can't step into functions, I also mean that putting breakpoints in file.tsx doesn't work. The debugger does stop at breakpoints in file.test.tsx, however, so I find it strange that I can't simply do the former.
You can use the debugger; statements supported by Node. A debugger statement stops code execution (like a breakpoint) and opens a gdb-like prompt you can debug in from the terminal. You can find more information on this page; it also explains how to enable debugging in VSCode. The WebStorm IDE also supports running Jest tests from the IDE and using IDE-defined breakpoints.
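As a minimal sketch (reusing the file.tsx example from the question), drop a debugger; statement into the function you can't step into:

// file.tsx
export function addOne(x: number) {
  debugger; // execution pauses here once an inspector is attached
  return x + 1;
}

Then run Jest under the Node inspector, as described in the Jest troubleshooting docs:

node --inspect-brk node_modules/.bin/jest --runInBand

and attach VS Code or chrome://inspect to the paused process. Once the inspector is attached, breakpoints set directly in file.tsx should be hit as well.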
I have never written test cases, so I am totally unsure how to achieve this (and I have never worked with Travis either).
I have react-native-formly, an npm library written entirely in JavaScript.
Every time the dependency bot creates a pull request, before merging, I want it to run the test cases and only merge if the app loads and does not crash. I have seen other open source repos with a travis.yml they use to achieve this, but it is still vague to me how I could do it.
Can someone guide me on how to achieve this? What kind of test cases/library should I use? I know there are libraries like Jest for snapshot testing, but I don't care much if the UI renders differently.
Added a PR for your repository. https://github.com/irohitb/rn-formly/pull/14
This will be quite a long answer, as the question lacks focus: it asks about React Native Jest setup, CI/CD, and TDD, each of which could take up multiple Q&As.
Every time the dependency bot creates a pull request, before merging, I want it to run the test cases and only merge if the app loads and does not crash. I have seen other open source repos with a travis.yml they use to achieve this, but it is still vague to me how I could do it.
for Travis CI - you can follow the tutorial
for Circle CI - getting-started
for Cypress - e2e cypress
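For Travis specifically, a minimal .travis.yml for a JavaScript library could look like the sketch below; the Node version and scripts are assumptions, so match them to your package.json:

# .travis.yml - minimal sketch for a Node/JavaScript library
language: node_js
node_js:
  - "12"            # assumed Node version
install:
  - yarn install    # or npm install
script:
  - yarn test       # runs the "test" script from package.json

With GitHub branch protection requiring this check to pass, the dependency bot's pull requests can only be merged once the tests are green.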
Can someone guide me on how to achieve this? What kind of test cases/library should I use? I know there are libraries like Jest for snapshot testing, but I don't care much if the UI renders differently.
In this PR we have introduced jest and react-native-testing-library.
jest is the standard React testing suite, similar to mocha/chai/assert in the Node ecosystem.
react-native-testing-library allows us to query components and assert on the value/text the component should render. You can find more info in the docs.
In the PR we have included one test to get you set up; you should be able to continue with the other components.
import React from "react";
import { render } from "react-native-testing-library";
import { InputText } from "../formComponent/text";

describe("render app components", () => {
  it("should render text", () => {
    const props = {
      upsideEmit: () => {},
      textInputStyle: [],
      value: "Hello World",
    };
    const component = render(<InputText {...props} />);
    expect(component.toJSON()).toMatchSnapshot();
    expect(component.getByDisplayValue("Hello World")).toBeDefined();
  });
});
We're able to:
1. assert that component.toJSON() matches the snapshot:
expect(component.toJSON()).toMatchSnapshot();
2. given props with the value "Hello World", assert that the display value is rendered:
const props = {
  upsideEmit: () => {},
  textInputStyle: [],
  value: "Hello World",
};
const component = render(<InputText {...props} />);
expect(component.getByDisplayValue("Hello World")).toBeDefined();
In relation to how you test your components, you should be able to:
1. check that the component renders.
2. determine, based on the props, what the component renders:
2.1. success case.
2.2. error case.
2.3. normal case.
You may also want to add integration tests for a user flow (render component -> interact with component -> check the above test cases). More details in terms of libraries can be found in this previous Stack Overflow question.
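For example, an interaction test could look like the sketch below. It assumes InputText calls its upsideEmit prop when the text changes; verify that against the actual component API before relying on it:

import React from "react";
import { render, fireEvent } from "react-native-testing-library";
import { InputText } from "../formComponent/text";

it("emits the new value when the user types", () => {
  const upsideEmit = jest.fn(); // mock callback so we can assert on calls
  const component = render(
    <InputText upsideEmit={upsideEmit} textInputStyle={[]} value="Hello" />
  );
  // Simulate the user editing the input
  fireEvent.changeText(component.getByDisplayValue("Hello"), "Hello World");
  expect(upsideEmit).toHaveBeenCalled();
});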
You need some kind of end-to-end test if you want to run the app, and ensure it doesn't crash.
Have a look at Cypress (http://cypress.io) or webdriverio (http://webdriver.io)
Cypress has docs on setting up with CI/CD https://docs.cypress.io/guides/guides/continuous-integration.html#Setting-up-CI
Edit... sorry, I missed the react-native part. The above would work for web apps; for React Native, have a look at something like Detox (https://github.com/wix/Detox).
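A minimal Detox smoke test matching the "app loads and does not crash" requirement might look like this sketch; the "welcome" testID is hypothetical, and Detox needs the per-platform setup described in its docs:

describe("App", () => {
  it("launches without crashing", async () => {
    await device.launchApp(); // Detox global: boots the app in the simulator
    // Assumes some component in the app has testID="welcome"
    await expect(element(by.id("welcome"))).toBeVisible();
  });
});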
I'm testing a lot of things, but some of them are not too important (like a caption text failure). I want to add an optional parameter (if it's wrong, that's okay, continue testing).
I used to work with Katalon Studio, which has "Change failure" options (stop, fail, continue). Can I do the same with Cypress for my test cases?
As Mikkel mentioned already, Cypress doesn't like optional testing. There is a way you could do it, using an if-statement, as explained in this question: In Cypress, is there a way to avoid a failure depending on a daily message?
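A minimal sketch of that pattern (the .caption selector and /page URL are hypothetical): guard the non-critical assertion behind a DOM check, so a missing caption doesn't fail the test:

it("checks the page, tolerating a missing caption", () => {
  cy.visit("/page");
  cy.get("body").then(($body) => {
    if ($body.find(".caption").length > 0) {
      // Only assert on the caption when it actually exists
      cy.get(".caption").should("contain.text", "Expected caption");
    } else {
      cy.log("Caption not found - skipping the optional check");
    }
  });
  // Critical assertions still run unconditionally
  cy.get("h1").should("be.visible");
});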
But doing that for every test you optionally want to run can clutter your code. So if you don't care whether it succeeds or fails, just don't test it.
Another way you can try to be more resilient is by splitting the tests up further. But you have to make sure that the scenarios don't rely on each other, otherwise they will still fail.
Recently I've become familiar with the Jest library and unit testing concepts, and thanks to Jest's documentation everything is working in my code.
But I need to know the difference between the mocking and unmocking concepts in Jest and other unit testing libraries.
Thanks
To mock means to replace an instance with another one. In Jest it's used to replace the implementation of imported modules with your own:
jest.mock('yourModule', () => ({ test: () => 'test' }));
The main idea behind it is to isolate your code in a unit test, so that you only test one module without the influence of other parts of your application or external code. This has a bunch of advantages. First of all, if the code in one module breaks, only the tests for that module fail, not all the tests for parts that merely import it. Second, you can simplify the test itself, as you don't need to start up a server that returns specific data, which would also slow down your tests.
The unmock feature exists because of the automock feature, which was the default in the past. Automocking replaces all imported modules with default mocks. Since this makes sense for some modules but is not wanted for others (lodash, for example), you can then unmock those specific modules. So unmock is mostly needed with automock switched on, to get the original implementation back where you need it.
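A minimal sketch of that, assuming automock is enabled ("automock": true in the Jest config):

// With automock on, every import is auto-mocked; opt lodash back out:
jest.unmock("lodash");

const _ = require("lodash");

test("lodash keeps its real implementation", () => {
  // Thanks to jest.unmock, this is the real chunk(), not an automock
  expect(_.chunk([1, 2, 3, 4], 2)).toEqual([[1, 2], [3, 4]]);
});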
I'm having a problem with Jasmine leaving tests unexecuted. They don't appear in the list of text descriptions of the tests; there are just unhelpful images of dashes that signify a test should run. I'm running the entire test suite and have 12 of some 2000 tests not running for no apparent reason.
Is there a way to associate the actual name of the test with the icon? I would like to know where they are coming from, and there isn't any indication of it currently.
OKAY! FOUND IT!
Well, I started by isolating the test, and I quickly realized I was setting the test suite's this.id = 123, which was breaking lots of stuff. So if you have to keep track of an ID for a test, make sure you namespace it better.
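For illustration, a hedged sketch of what was likely going on; in Jasmine the describe callback runs with the Suite object as this, and reporters use its internal id to match results to names, so overwriting it can make tests show up as anonymous dashes (this is my reading of the issue, not the exact original code):

describe("my feature", function () {
  // BAD: inside a describe callback, `this` is Jasmine's Suite object,
  // so this.id = 123 clobbers the internal suite id reporters rely on.

  // Better: namespace your own data on the per-spec user context instead.
  beforeEach(function () {
    this.myRecordId = 123; // shared with `it` blocks, safe to use
  });

  it("keeps track of the id safely", function () {
    expect(this.myRecordId).toBe(123);
  });
});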