I use ES6 extends frequently in my projects. Sometimes I modify a parent class but forget to check its child classes (as the number of developers grows, this gets worse). Someone might not even know that a class has been inherited somewhere.
Are there any tools or techniques that could help me check the inheritance relationships between classes?
Unit Tests
Unit test your code! If a superclass changes behavior, it will likely break a unit test, so you know you messed up and you know to correct the offending subclasses.
If it broke functionality in your application, but not one of your unit tests, then your unit test coverage is not good enough, or you've missed some scenarios.
That's the number one thing you should do before any kind of refactoring! Unit test, unit test, and again unit test!
Text Search
If you're using any fancy IDE, you could search through your JavaScript files for something like "extends MyChangedSuperClass", assuming your colleagues don't use an arbitrary number of spaces between the keyword and the class name.
If you're not using a fancy IDE try to find a file manager that offers text search functions.
(Can't comment yet, so posted as an answer.)
instanceof is as close as you're going to get.
Also, bug-a-lot isn't wrong. Unit testing would go a long way in preventing such regressions.
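To illustrate, a minimal sketch of what those runtime checks look like (the class names are made up for the example):

```javascript
// instanceof walks the prototype chain, so it reports
// inheritance at any depth.
class Animal {}
class Dog extends Animal {}
class Puppy extends Dog {}

const p = new Puppy();
console.log(p instanceof Animal); // true

// Without an instance, you can also test the classes themselves,
// since `extends` links the constructors' prototype chains:
console.log(Animal.isPrototypeOf(Dog));             // true (direct child)
console.log(Animal.isPrototypeOf(Puppy));           // true (grandchild)
console.log(Object.getPrototypeOf(Dog) === Animal); // true
```

Note these are runtime checks, so they only tell you about classes that are actually loaded; they won't statically find every subclass in the codebase.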
Related
I come from the Java world, i.e. a type-safe world, and am currently doing a couple of things that require client-side execution in JavaScript.
I keep running into errors that are hard to detect, due to JS not being typed, and I wonder whether there is any way to prevent them beforehand, e.g. by setting something like "use typification;" or via some tool that performs these checks before execution the way a compiler does.
For example, the last time was when I was creating a face in three.js. There, depending on the order of the vertices, a face is front-facing or not. I had mixed that up, and then, while copy-pasting parameters, I also copied one bracket too many, so it ended up in the wrong place and the method was called with one vertex instead of three, which of course resulted in an error. However, the error surfaced at line 2107 of the three.js code, and it took a while to trace it back to this little copy-paste issue. In Java, the compiler would have complained right away that I was calling the method with 1 instead of 3 parameters...
I hope there is something like that. Or do you have any tips on how to spot such things faster?
Cheers
Tom
There are various linting tools which you can use to scan your JavaScript files before you actually use them. The popular ones in the industry are JSLint, JSHint, JSCS and ESLint.
They come with various built-in rule sets which you can configure, and you can also add your own rules.
You can compare these to Checkstyle and PMD from the Java world.
You have a number of answers already. But first, a clarification is needed: Java is not type-safe (see: NullPointerException, history of).
But to get closer to type-safety in any dynamic language, you have the option of peppering your code with asserts. This can to some degree be automated, though it may cause performance issues. This is the route I usually take, but I certainly wouldn't do it with three.js.
For JavaScript specifically, you have two additional options: TypeScript and Flow.
TypeScript is a dialect of JavaScript with type annotations that gets compiled down to plain JS. Flow is a static analyzer written in OCaml that tries to infer types in your JS code and check them.
According to node.js assert library documentation:
The module is intended for internal use by Node.js, but can be used in application code via require('assert'). However, assert is not a testing framework, and is not intended to be used as a general purpose assertion library.
I was looking at Chai as an alternative assert library (no BDD API, only the assert API), and in the end I see that the assert functionality is very similar.
Why is Chai's assert library the better assert library? It does everything that Node.js's does (besides being richer in the assertions available, but that's just syntactic sugar). Even simple things like the total count of asserts executed are not available in either.
Am I missing something?
UPDATE (April 2017): Node.js no longer warns people away from using assert so the answer below is now outdated. Leaving it for historical interest, though.
Here's the answer I posted to a very similar question on the Node.js issue tracker.
https://github.com/nodejs/node/issues/4532 and other issues allude to the reason the documentation recommends against using assert for unit testing: There are edge case bugs (or at least certainly surprises) and missing features.
A little more context: Knowing what we now know, if we were designing/building Node.js core all over again, the assert module would either not exist in Node.js or else consist of far fewer functions--quite possibly just assert() (which is currently an alias for assert.ok()).
The reasons for this, at least from my perspective, are:
all the stuff being done in assert could easily be done in userland
core efforts are better spent elsewhere than perfecting a unit testing module that can be done in userland
There's additional context that others may choose to add here or not (such as why, all things being equal, we would favor keeping core small and doing things in userland). But that's the so-called 30,000 foot view.
Since assert has been in Node.js for a long time and a lot of the ecosystem depends on it, we are unlikely (at least as best as I can tell at the current time) to ever remove assert.throws() and friends. It would break too much stuff. But we can discourage people from using assert and encourage them to use userland modules that are maintained by people who care deeply about them and who aggressively fix edge-case bugs and who add cool new features when it makes sense. So that's what that's all about.
True, if you're doing straightforward assertions with simple cases, assert probably will meet your needs. But if you ever outgrow assert, you'll be better off with chai or whatever. So we encourage people to start there. It's better for them (usually) and better for us (usually).
I hope this is helpful and answers your question.
I guess since nobody gave me any good feedback, I'll try to shed some light on my original question after some time of working with both Node.js's assert and Chai's assert.
The answer, in the end, is that functionality-wise they are the same. The only reason Chai's assert exists is that reading the code gives you a better understanding of the tests, but that's about it.
For example, testing for a null value with Node.js:
assert(foo === null);
And using chai:
assert.isNull(foo);
They are perfectly equivalent, and sticking to node.js assert limits your dependency list.
Disclaimer: I am the author of the assertthat module that I will refer to in this answer.
Basically, with Node's very own assert module you can achieve all the things that the other modules out there offer, such as Should.js, expect.js or assertthat. Their main difference is the way you express your intent.
Semantically speaking, the following lines of code are all equivalent to each other:
assert.equal(foo, bar);
foo.should.be.equal(bar);
expect(foo).to.be(bar);
assert.that(foo).is.equalTo(bar);
Syntactically, there are two major differences:
First, the should syntax only works if foo is not null or undefined, hence it's inferior to the others. Second, there is a difference in readability: while assert.that(...) reads like natural language, the others don't.
After all, Chai is only a wrapper around a few assertion modules to make things easier for you.
So, to cut a long story short: no, there is no technical reason to prefer one over the other, but readability and null compatibility may be reasons.
I hope this helps :-)
PS: Of course, internally they may be implemented differently, so there may be subtle differences, e.g. in how equality is checked. As said in the disclaimer, I'm the author of assertthat, so I may be biased, but in the last few years I have from time to time seen situations where assertthat was more reliable than the others.
Since no one mentioned it, I thought I would point to rockstar programmer Guillermo Rauch's article (link is to a Web Archive backup) on why you should avoid expect-style frameworks in favor of plainer assert-style testing.
Mind you, he is the author of expect.js, so he has once thought otherwise. So have I.
He makes an elaborate argument, but basically it's about reducing the mental burden of API overload. I can never remember which dialect of should and expect I am writing. Was it .includes(foo).to.be.true() or was it .includes(foo).to.be.true or was it ...
TJ Holowaychuk wrote a nice assert library called better-assert to get better output, which you might check out.
The Google Closure Compiler is a powerful compiler and minifier for JS, which offers a lot of optimizations such as renaming variables, removing dead code, collapsing variable declarations, rewriting control flow structures, etc.
What I want is to separately apply one or some of these optimizations to an input JS program. For example, I may want to rename variables to short names but not remove dead code. How can I achieve this kind of fine-grained control over compilation passes? Does the source code of Closure Compiler expose specific interfaces for this customization, or should I write my own pass (if so, how am I supposed to start?)?
The command-line interface does offer several options for controlling the compilation, but they are insufficient for what I described above. Since the source code is rather complicated and little detailed design documentation can be found, I am truly stuck here. Any insights would be appreciated, thanks :)
Take a look at DefaultPassConfig. That class lists all the passes that are run during the compilation, based on what options are set in the CompilerOptions. Some of the CompilerOptions can be controlled from the command line, but we generally try to keep the compiler relatively simple and easy to use, and not ask users to make decisions about a bunch of different compiler flags. Plus, there are some passes that actually increase the code size, but they do it in such a way that it makes it easier for some later pass to decrease it afterwards.
Of course if you're just experimenting with the compiler or trying to understand how it works, you can turn on and off whichever passes you want, either by adding new flags, or just modifying DefaultPassConfig directly.
Recently I have been trying to apply strict OO and testing to my application, and there are a few things I would like to ask:
As JavaScript is not type-restricted, for parameter input, do you need to write unit tests to check whether it is null, undefined, or a different type?
Part of the last question: if yes, is it necessary to write unit tests for constructor input parameter validation?
Should I apply strict OO to JavaScript OO? For example, should all classes encapsulate their instance properties (using var) and have getters/setters in a closure instead?
From how it feels, trying to cover all of the above will result in distraction and inefficient testing.
What is your suggestion?
Many thanks,
As JavaScript is not type-restricted, for parameter input, do you need to write unit tests to check whether it is null, undefined, or a different type?
It's up to you how defensive you want your code to be. If a requirement of your code is to gracefully handle nonsense arguments, then you should write a test for that graceful handling. If you have code that handles it, you should have a test that covers it. If you don't have code to handle nonsense input, there isn't much reason to test it.
Part of the last question: if yes, is it necessary to write unit tests for constructor input parameter validation?
Really, it's the same answer. If you have code that validates your inputs, you should have tests that cover that code.
Should I apply strict OO to JavaScript OO? For example, should all classes encapsulate their instance properties (using var) and have getters/setters in a closure instead?
JavaScript is not a traditional OO language. Using only strictly OO patterns would be doing yourself a disservice, since JavaScript's most expressive features are on the functional side. Having truly "private" member variables "using var", for instance, fundamentally changes the way you have to structure your code around those variables. This may not be a compromise worth making.
It's probably wise to architect a large codebase using a primarily class-based approach. But don't be afraid to use functional features where appropriate, as long as you have the test coverage either way, of course. But this is primarily opinion at this point.
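A brief sketch of the trade-off mentioned above, contrasting closure-based privacy with a plain class (the counter is a made-up example):

```javascript
// Closure-based "strict" encapsulation: truly private state,
// but each instance carries its own copy of the methods.
function makeCounter() {
  let count = 0; // unreachable from outside
  return {
    increment() { count += 1; },
    get() { return count; }
  };
}

// Idiomatic class style: methods are shared on the prototype,
// at the cost of the field being publicly readable/writable.
class Counter {
  constructor() { this.count = 0; }
  increment() { this.count += 1; }
  get() { return this.count; }
}

const c = makeCounter();
c.increment();
console.log(c.get());  // 1
console.log(c.count);  // undefined -- the state really is private
```

Whether the extra memory and restructuring are worth the hard privacy guarantee is exactly the compromise the answer above is pointing at.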
I have function in controller of my directive:
$scope.getPossibleWin = function() {
  return betslipService.getPossibleWin();
};
betslipService is injected into the controller.
Now I'm thinking how to test $scope.getPossibleWin:
Test that $scope.getPossibleWin calls betslipService.getPossibleWin
Test that $scope.getPossibleWin returns the correct value (but this is already tested in betslipService!)
Test that $scope.getPossibleWin simply exist
What are the best practices for testing wrappers?
Option 2 is the best; option 1 I am not very convinced about. I don't have experience with JavaScript, so I'm not sure why you would have to verify that a function exists (option 3).
You can find more information on it here, but the reason that you should indeed add a test for this method is to protect yourself from breaking anything in the future. If you only rely on that one method five layers deep in your application, it could be that one day you add code in a higher layer which changes the result without being tested. Or at some level some code has a side effect which disturbs the result that came from the bowels of your codebase.
Therefore I would suggest you write a test for each (relevant) level. The question of what exactly to test is probably a little preference-oriented, but I would argue that at the very least you should test whether it returns the correct value, as laid out above.
Should you test that it calls that specific inner method? You could, but that isn't really something you do in unit testing, because then you would be testing against the unit's internal workings. You don't care how it works inside; you just care that the function gives the response you expected. By coupling the two in your unit test, you'll end up with a broken test for non-broken code when you decide to refactor the internals.