Node.js assert library vs. other assert libraries - javascript

According to node.js assert library documentation:
The module is intended for internal use by Node.js, but can be used in
application code via require('assert'). However, assert is not a
testing framework, and is not intended to be used as a general purpose
assertion library.
I was looking at Chai as an alternative assert library (no BDD API, only the assert API), and in the end I see that the assert functionality is very similar.
Why is Chai's assert library a better assert library? It does everything Node.js's does (besides being richer in terms of the assertions available, but that's just syntactic sugar). Even simple things like a total count of the asserts executed are not available in either.
Am I missing something?

UPDATE (April 2017): Node.js no longer warns people away from using assert so the answer below is now outdated. Leaving it for historical interest, though.
Here's the answer I posted to a very similar question on the Node.js issue tracker.
https://github.com/nodejs/node/issues/4532 and other issues allude to the reason the documentation recommends against using assert for unit testing: There are edge case bugs (or at least certainly surprises) and missing features.
A little more context: Knowing what we now know, if we were designing/building Node.js core all over again, the assert module would either not exist in Node.js or else consist of far fewer functions--quite possibly just assert() (which is currently an alias for assert.ok()).
The reasons for this, at least from my perspective, are:
all the stuff being done in assert could easily be done in userland
core efforts are better spent elsewhere than perfecting a unit testing module that can be done in userland
There's additional context that others may choose to add here or not (such as why, all things being equal, we would favor keeping core small and doing things in userland). But that's the so-called 30,000 foot view.
Since assert has been in Node.js for a long time and a lot of the ecosystem depends on it, we are unlikely (at least as best as I can tell at the current time) to ever remove assert.throws() and friends. It would break too much stuff. But we can discourage people from using assert and encourage them to use userland modules that are maintained by people who care deeply about them and who aggressively fix edge-case bugs and who add cool new features when it makes sense. So that's what that's all about.
True, if you're doing straightforward assertions with simple cases, assert probably will meet your needs. But if you ever outgrow assert, you'll be better off with chai or whatever. So we encourage people to start there. It's better for them (usually) and better for us (usually).
I hope this is helpful and answers your question.

I guess since nobody gave me any good feedback, I'll try to shed some light on my original question after some time working with both Node.js's assert and Chai's assert.
The answer, at the very end, is that functionality-wise they are the same. The only reason Chai's assert exists is so that when you read the code you can get a better understanding of the tests, but that's about it.
For example, testing for a null value with Node.js:
assert(foo === null);
And using chai:
assert.isNull(foo);
They are perfectly equivalent, and sticking to node.js assert limits your dependency list.
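To make the comparison a bit more concrete, here is a minimal side-by-side sketch (assuming chai is installed; foo and obj are placeholder values, not from the original question):
const assert = require('assert');
const chai = require('chai');

const foo = null;
const obj = { a: 1 };

// Node's built-in assert
assert.strictEqual(foo, null);          // same check as assert(foo === null)
assert.deepStrictEqual(obj, { a: 1 });  // deep comparison

// Chai's assert interface: the same checks, just with more descriptive names
chai.assert.isNull(foo);
chai.assert.deepEqual(obj, { a: 1 });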

Disclaimer: I am the author of the assertthat module that I will refer to in this answer.
Basically, you can achieve all the things with Node's very own assert module that you can do with all the other modules out there, such as Should.js, expect.js or assertthat. Their main difference is how you express your intent.
Semantically speaking, the following lines of code are all equivalent to each other:
assert.equal(foo, bar);
foo.should.be.equal(bar);
expect(foo).to.be(bar);
assert.that(foo).is.equalTo(bar);
Syntactically, there are two major differences:
First, the should syntax only works if foo is not equal to null or undefined, hence it's inferior to the other ones. Second, there is a difference in readability: While assert.that(...) reads like natural language, all the others don't.
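A minimal sketch of that null limitation, using chai's bundled styles as an example (the same applies to Should.js, since the should getter lives on Object.prototype):
const chai = require('chai');
chai.should(); // installs a .should getter on Object.prototype

const foo = null;

// foo.should.equal(null);   // TypeError: property access on null
// The should style needs a real object to hang the getter on.

// The argument-taking styles handle null without any problem:
chai.expect(foo).to.equal(null);
chai.assert.isNull(foo);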
After all, Chai is only a wrapper around a few assertion modules to make things easier for you.
So, to cut a long story short: no, there is no technical reason to prefer one over the other, but readability and null compatibility may be reasons.
I hope this helps :-)
PS: Of course, internally they may be implemented differently, so there may be subtle differences, e.g. in how equality is checked. As said in the disclaimer, I'm the author of assertthat, so I may be biased, but over the last few years I have run into situations from time to time where assertthat was more reliable than the others. But, as said, I may be biased.
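One concrete example of such a subtle difference (a sketch of Node's documented legacy behavior, not of assertthat): assert.deepEqual compares primitives with loose equality, which can hide type mismatches.
const assert = require('assert');

// Passes, because deepEqual uses == for primitive values:
assert.deepEqual([1, 2, 3], ['1', '2', '3']);

// The strict variant uses === and would throw an AssertionError here:
// assert.deepStrictEqual([1, 2, 3], ['1', '2', '3']);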

Since no one mentioned it, I thought I would mention rockstar programmer Guillermo Rauch's article (link is to a Web Archive backup) on why you should avoid expect-style frameworks in favor of plainer assert-style testing.
Mind you, he is the author of expect.js, so he once thought otherwise. So have I.
He makes an elaborate argument, but basically it's about reducing the mental burden of API overload. I can never remember which dialect of should and expect I am writing. Was it .includes(foo).to.be.true() or was it .includes(foo).to.be.true or was it ...
TJ Holowaychuk wrote a nice assert library called better-assert that gives better output, which you might check out.
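For reference, a minimal sketch of better-assert's documented usage (assuming the module is installed; the value is made up): it keeps the plain assert(expression) style but reports the failed expression's source text in the error.
var assert = require('better-assert');

var user = { name: 'tobi' };

// Passes silently. If it failed, the error message would contain the
// asserted expression itself, e.g. "AssertionError: user.name === 'tobi'".
assert(user.name === 'tobi');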


Started playing with CoffeeScript - a couple of basic questions

1 - Method Chaining
I really love the way you can call functions without polluting your code with brackets, but the following inconsistency really bothers me:
$(this).attr("id").data "foo"
Method chaining like this pretty much requires me to use brackets up till the last method in the chain. This seems pretty inconsistent and makes my OCD sense tingle like crazy... am I misunderstanding something here? Is there a more consistent but clean approach (i.e. aside from reverting to using brackets everywhere)?
2 - Compiler config?
I use coffee --watch to have it automatically compile the files; however, --help shows very few arguments I can give to change its behaviour. For one thing, I'd like to change the tab size of the resulting JavaScript. Is there any way to do this?
1. Chaining
No, it really isn't much cleaner than JavaScript, as far as syntax goes. And lots of people are complaining about it. I think you just have to bite the bullet and accept that you have to know JavaScript to use CoffeeScript, and that not all the warts of JavaScript are solved (yet, anyway). Personally I prefer the d3 or jQuery solution of judicious indenting:
$(this)
.attr('id')
.data('foo')
2. Compiler config
There aren't any configs apart from the '--bare' option that I'm aware of. But it's a compiler, not a formatter. You can send your compiled code through JS Beautify (or Uglify for that matter). If you plan on doing this, I highly recommend using a Cakefile. Check out this link for how you can work with the coffee compiler.
No, you need the parentheses if you want to do chaining. I wish it weren't so, but it is.
Not that I know of. What you see in --help is what you get.
But CoffeeScript is open source, so you can always hack around with it.
Another solution to your OCD consistency tingling is to always include parentheses for method/function arguments. Chaining isn't the only situation where you need to include them. My personal preference would be for the optional omission of parentheses to be removed from the language, but that's probably too extreme for most CoffeeScript users. Instead, I'm choosing to ignore this one "feature" of CS and encouraging my collaborators to do the same. I make the case for it here.

Is "monkey patching" really that bad? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
Some languages like Ruby and JavaScript have open classes which allow you to modify interfaces of even core classes like numbers, strings, arrays, etc. Obviously doing so could confuse others who are familiar with the API but is there a good reason to avoid it otherwise, assuming that you are adding to the interface and not changing existing behavior?
For example, it might be nice to add an Array.map implementation to web browsers which don't implement ECMAScript 5th edition (and if you don't need all of jQuery). Or your Ruby arrays might benefit from a "sum" convenience method which uses "inject". As long as the changes are isolated to your systems (e.g. not part of a software package you release for distribution) is there a good reason not to take advantage of this language feature?
Monkey-patching, like many tools in the programming toolbox, can be used both for good and for evil. The question is where, on balance, such tools tend to be most used. In my experience with Ruby the balance weighs heavily on the "evil" side.
So what's an "evil" use of monkey-patching? Well, monkey-patching in general leaves you wide open to major, potentially undiagnosable clashes. I have a class A. I have some kind of monkey-patching module MB that patches A to include method1, method2 and method3. I have another monkey-patching module MC that also patches A to include a method2, method3 and method4. Now I'm in a bind. I call instance_of_A.method2: whose method gets called? The answer to that can depend on a lot of factors:
In which order did I bring in the patching modules?
Are the patches applied right off or in some kind of conditional circumstance?
AAAAAAARGH! THE SPIDERS ARE EATING MY EYEBALLS OUT FROM THE INSIDE!
OK, so #3 is perhaps a tad over-melodramatic....
Anyway, that's the problem with monkey-patching: horrible clashing problems. Given the highly-dynamic nature of the languages that typically support it you're already faced with a lot of potential "spooky action at a distance" problems; monkey-patching just adds to these.
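The same clash is easy to reproduce in JavaScript, where whichever patch loads last silently wins (a contrived sketch; the two "libraries" are hypothetical):
// "Library A" patches Array with its idea of sum: numeric addition.
Array.prototype.sum = function () {
  return this.reduce(function (a, b) { return a + b; }, 0);
};

// "Library B", loaded later, also defines sum, but it concatenates.
Array.prototype.sum = function () {
  return this.join('');
};

// Which behavior you get now depends entirely on load order:
console.log([1, 2, 3].sum()); // "123", not 6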
Having monkey-patching available is nice if you're a responsible developer. Unfortunately, IME, what tends to happen is that someone sees monkey-patching and says, "Sweet! I'll just monkey-patch this in instead of checking to see if other mechanisms might not be more appropriate." This is a situation roughly analogous to Lisp code bases created by people who reach for macros before they think of just doing it as a function.
Wikipedia has a short summary of the pitfalls of monkey-patching:
http://en.wikipedia.org/wiki/Monkey_patch#Pitfalls
There's a time and place for everything, also for monkey-patching. Experienced developers have many techniques up their sleeves and learn when to use them. It's seldom a technique per se that's "evil", just inconsiderate use of it.
With regards to Javascript:
is there a good reason to avoid it otherwise, assuming that you are adding to the interface and not changing existing behavior?
Yes. Worst-case, even if you don't alter existing behavior, you could damage the future syntax of the language.
This is exactly what happened with Array.prototype.flatten and Array.prototype.contains. In short, the specification was written up for those methods, their proposals got to stage 3, and then browsers started shipping them. But, in both cases, it was found that there were ancient libraries which patched the built-in Array object with their own methods of the same name and different behavior; as a result, websites broke, the browsers had to back out their implementations of the new methods, and the specification had to be edited. (The methods were renamed, to Array.prototype.flat and Array.prototype.includes respectively.)
If you mutate a built-in object like Array on your own browser, on your own computer, that's fine. (This is a very useful technique for userscripts.) If you mutate a built-in object on your public-facing site, that's less fine - it may eventually result in problems like the above. If you happen to control a big site (like stackoverflow.com) and you mutate a built-in object, you can almost guarantee that browsers will refuse to implement new features/methods which break your site (because then users of that browser will not be able to use your site, and they will be more likely to migrate to a different browser). (see here for an explanation of these sorts of interactions between the specification writers and browser makers)
All that said, with regards to the specific example in your question:
For example, it might be nice to add an Array.map implementation to web browsers which don't implement ECMAScript 5th edition
This is a very common and trusted technique, called a polyfill.
A polyfill is code that implements a feature on web browsers that do not support the feature. Most often, it refers to a JavaScript library that implements an HTML5 web standard, either an established standard (supported by some browsers) on older browsers, or a proposed standard (not supported by any browsers) on existing browsers.
For example, you might write a polyfill for Array.prototype.map (or, to take a newer example, for Array.prototype.flatMap) which is perfectly in line with the official Stage 4 specification, and then run code that defines Array.prototype.flatMap only on browsers which don't have it already:
if (!Array.prototype.flatMap) {
  Array.prototype.flatMap = function (callback, thisArg) {
    // Simplified sketch: map each element, then flatten one level.
    return Array.prototype.concat.apply([], this.map(callback, thisArg));
  };
}
If your implementation is correct, this is perfectly fine, and is very commonly done all over the web so that obsolete browsers can understand newer methods. polyfill.io is a common service for this sort of thing.
As long as the changes are isolated to your systems (e.g. not part of a software package you release for distribution) is there a good reason not to take advantage of this language feature?
As a lone developer on an isolated problem there are no issues with extending or altering native objects. On larger projects, though, this is a choice the team should make together.
Personally I dislike having native objects in JavaScript altered, but it's a common practice and a valid choice to make. If you're going to write a library or code that is meant to be used by others, I would strongly avoid it.
It is, however, a valid design choice to allow the user to set a config flag which says "please overwrite native objects with your convenience methods", because they are so convenient.
To illustrate a JavaScript-specific pitfall:
Array.prototype.map = function map() { /* ... */ };
var a = [2];
for (var k in a) {
  console.log(a[k]);
}
// 2, function map() { /* ... */ }
This issue can be avoided by using ES5, which allows you to inject non-enumerable properties into an object.
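A sketch of that ES5 fix: defining the method with Object.defineProperty makes it non-enumerable, so the for-in loop above no longer picks it up.
Object.defineProperty(Array.prototype, 'map', {
  value: function map(callback) { /* ... */ },
  enumerable: false,   // the key point: for-in skips non-enumerable properties
  writable: true,
  configurable: true
});

var a = [2];
for (var k in a) {
  console.log(a[k]); // only 2 this time
}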
This is mainly a high level design choice and everyone needs to be aware / agreeing on this.
It's perfectly reasonable to use "monkey patching" to correct a specific, known problem where the alternative would be to wait for a patch to fix it. That means temporarily taking on responsibility for fixing something until there's a "proper", formally released fix that you can deploy.
A considered opinion by Gilad Bracha on Monkey Patching: http://gbracha.blogspot.com/2008/03/monkey-patching.html
The conditions you describe -- adding (not changing) existing behavior, and not releasing your code to the outside world -- seem relatively safe. Problems could come up, however, if the next version of Ruby or JavaScript or Rails changes their API. For example, what if some future version of jQuery checks to see if Array.map is already defined, and assumes it's the ECMAScript 5 version of map when in actuality it's your monkey-patch?
Similarly, what if you define "sum" in Ruby, and one day you decide you want to use that Ruby code in Rails or add the Active Support gem to your project? Active Support also defines a sum method (on Enumerable), so there's a clash.

Does JavaScript (ECMAScript5) Strict Mode offer significant performance advantages to merit widespread use?

I'm reading up a bit on using Strict Mode for JavaScript and it seems that, generally speaking, the idea is to force a more rigid set of rules onto the coder to ensure that the JS engine can optimise the code better. It almost feels like the JavaScript equivalent of "Option Explicit" in Visual Basic.
If this is basically the net effect of applying Strict Mode to my code, would the performance difference be such that it would be worth applying out of habit rather than case-by-case? Are there other advantages besides code stability that might be worth considering?
What are some of the key reasons I would want to apply Strict Mode to my scripts?
Well, strict mode code can certainly perform better, because it removes issues that made optimization harder; for example, off the top of my head:
The with statement was removed (really difficult, if not impossible, to optimize).
No more assignments to undeclared variables, and other prohibitions, e.g. delete varName; is a syntax error (see the sketch after this list).
eval does not introduce variable/function declarations into the local scope.
arguments.callee was removed (difficult to optimize, e.g. for function inlining).
The arguments object's indexed properties are no longer dynamically mapped to the named formal parameters.
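A small sketch of the undeclared-assignment and arguments-mapping points (run as a whole script with the directive on the first line):
'use strict';

// Assigning to an undeclared variable throws instead of creating a global.
try {
  undeclaredVariable = 1;
} catch (e) {
  console.log(e instanceof ReferenceError); // true
}

// The arguments object is no longer live-mapped to named parameters.
function f(a) {
  a = 2;
  return arguments[0]; // 1 in strict mode (would be 2 in sloppy mode)
}
console.log(f(1)); // 1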
I think the reasons to use it were spelled out well by John Resig, http://ejohn.org/blog/ecmascript-5-strict-mode-json-and-more/, and it appears Firefox will be supporting it, http://whereswalden.com/2010/09/08/new-es5-strict-mode-support-now-with-poison-pills/, so it may be useful to look at, at least for libraries.
But basically it is there to help prevent some common programming errors. For some people losing eval may be a reason not to use it, and for me not having unnamed anonymous functions will be difficult, but anything that can help reduce errors may be worthwhile.
I don't know if the performance gain would be worth it; I guess your results may vary, and I suppose it depends on your script. But that isn't meant to be the main point; the main point is reducing the time you spend maintaining your code. So anything that saves you time (and money) maintaining your code, and makes it faster, is golden.
I have been corrected, and, sadly, it doesn't include strong typing. Researchers spent many years on enforcing typing to detect errors at compile time, and now we have to trust that our code is good, or verify it by hand or with unit testing. IMHO, the time spent on unit testing is usually scarce in many places, and it should not be spent on things that could be done by the compiler.

Squeezing performance out of v8

Are there any good tutorials on how to write fast, efficient code for V8 (specifically, for Node.js)?
What structures should I avoid using? What are the idioms that V8 optimises well?
From my experience:
It does inlining
Function call overhead is minimal (inlining)
What is expensive is passing huge strings to functions, since those need to be copied, and from my experience V8 isn't always as smart as it could be in this case
Scope lookup is expensive (surprise)
Don't do tricks. E.g. I have a binary encoder for JS objects that cranked out some extra performance with bit shifting (instead of Math.floor), and the latest Crankshaft (yes, alpha, but still) runs that code 30% slower (see the sketch after this list)
Don't use magic: eval, arguments.callee etc. Those pretty much kill any optimization, since the code can no longer be inlined
Some of the new ES5 stuff, e.g. .bind(), is really slow in V8 at the moment
Somehow new Object() and new Array() are a bit faster currently (a MICROoptimization; unless you're writing some crazy encoder, stick with {} and [])
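For context, the bit-shifting point refers to the classic trick of replacing Math.floor with a bitwise operation; a sketch (this only matches Math.floor for non-negative values that fit in 32 bits, which is why it counts as a "trick"):
var x = 12.7;

// Straightforward version.
var a = Math.floor(x);   // 12

// "Clever" version: bitwise OR truncates to a 32-bit integer.
var b = x | 0;           // 12, but wrong for negatives and large values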
My rules:
Write good code
Write working code
Write code that works in strict mode (support still has to land, but when it does further optimization can be applied by V8)
If you're a JS expert and you're already applying all the good practices to your code, there's hardly anything you can do to improve performance.
If you encounter performance issues:
Verify them
Change the code / algorithm
And as a last resort: Write a C++ extension (and watch every commit to ry/node on GitHub since nobody cares whether some internal changes break your build)
The docs give a great answer: http://code.google.com/apis/v8/design.html
Understanding V8 is a set of slides from nodecamp.eu and gives some very interesting tips. In particular, I found the notes on avoiding "dictionary mode" useful, i.e. it helps if you keep the "shape" of objects constant and don't add arbitrary properties to them.
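A sketch of what keeping the "shape" constant means in practice (the class name is illustrative; the mechanism behind it is V8's hidden classes):
// Objects created with the same properties in the same order
// share a hidden class, so property access in hot code stays fast.
function Point(x, y) {
  this.x = x;
  this.y = y;
}
var p1 = new Point(1, 2);
var p2 = new Point(3, 4);

// Adding ad-hoc properties later gives p2 a different hidden class,
// and enough churn can degrade the object into slow "dictionary mode".
p2.label = 'second point';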
You should also run node with --crankshaft --trace-opt --trace-bailout (--crankshaft is only needed on 64-bit platforms, e.g. OS X) to see whether V8 is "bailing" on JITing certain functions. There are a ton of other trace options, including --trace-gc and various other GC tracing flags, which can be useful for optimisation.
Let me know if you have any specific questions about the slides above as they're a bit concise. :-) They're not mine but I've done some research about the areas they cover.

Speed comparison of Cappuccino's objj_msgSend() vs. a normal JavaScript call available?

As you know, Cappuccino implements the dispatch mechanism of Objective-C / Smalltalk to send messages to objects (~call their methods) via a special function called objj_msgSend.
[someObject someMethodToInvocate: aParameter];
Obviously this introduces some overhead and therefore some speed loss. I'd like to know if somebody can provide a speed comparison between this message sending and the normal way to execute a method in JavaScript…
someObject.someMethodToInvocate(aParameter);
In your comments you say you're wondering 'in general' in the context of Cappuccino applications. In that case the test is easy: run any Cappuccino application, such as GitHub Issues, and judge for yourself whether it's slow or not. Try scrolling in the main table, select a few entries and so on. That'll tell you if Cappuccino is fast or slow 'in general', as objj_msgSend is used extensively in any use case you can think of in an application like this.
If you're actually thinking of something more specific after all, note that nothing about Cappuccino forces you to use message passing. Just like in Objective-C you can always 'drop down to the metal' - pure JavaScript in this case - when you need to do something more performance intensive. If you have a tight loop, and you don't require the additional functionality provided by objj_msgSend, simply call functions directly. Objective-J won't mind.
For my simple tests of pure method calling, objj_msgSend is about 2–2.5 times slower than a direct call.
That is actually quite good, given the advanced features it makes possible.
This is coming two years too late, but this is a slightly invalid question (in no way saying that makes it a bad question). There is really no point questioning the speed of objj_msgSend, not when you are assuming that it is a Smalltalk/Obj-C/Obj-J specific feature.
Javascript has ALWAYS had this ability.
Look up the call() AND apply() methods... (a quick Google search will bring up articles like this -> http://vikasrao.wordpress.com/2011/06/09/javascripts-call-and-apply-methods/ )
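What the call()/apply() point refers to: plain JavaScript can already invoke a function against an arbitrary receiver and argument list, which is the core of what objj_msgSend provides (a simplified sketch, not Cappuccino's actual implementation):
var someObject = {
  name: 'receiver',
  someMethodToInvocate: function (aParameter) {
    return this.name + ' got ' + aParameter;
  }
};

var method = someObject.someMethodToInvocate;

// call: explicit receiver plus individual arguments
console.log(method.call(someObject, 42));    // "receiver got 42"

// apply: explicit receiver plus an array of arguments
console.log(method.apply(someObject, [42])); // "receiver got 42"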
It is the same issue with jQuery/Prototype/etc.: they are all fine and dandy and useful, but they hurt the development community because everyone relies on these frameworks instead of learning the core language features that make any language useful.
Do yourself and the development community a favor and LEARN YOUR LANGUAGES, NOT FRAMEWORKS. If you know the languages you use, the frameworks you use are irrelevant, use them or just build them yourself, because at that point you should be able to.
Hope that came off as helpful and not condescending; that's not my intention. :)
