JavaScript optional type hinting

When a programming language is statically typed, the compiler can be more precise about memory allocation and thus generally perform better (all other things being equal).
I believe ES4 introduced optional type hinting (from what I understand, Adobe had a huge part in contributing to its spec due to actionscript). Does javascript officially support type hinting as a result? Will ES6 support optional type hinting for native variables?
If Javascript does support type hinting, are there any benchmarks that show how it pays off in terms of performance? I have not seen an open source project use this yet.

My understanding, from listening to many Javascript talks on various sites, is that type-hinting won't do as much to help as people think it will.
In short, most Javascript objects tend to have the same "shape", if you will. That is, they will have the same properties created in the same order. This "shape" can be thought of as the "type" of the object.
An example:
function Point(x, y) {
  this.x = x;
  this.y = y;
}
All objects made from "Point" will have the same "shape", and the newer internal Javascript engines can play some fancy games to get faster lookups.
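For instance, a hedged sketch of how creation order affects "shape" (the getX helper is invented for illustration):

const p1 = new Point(1, 2);
const p2 = new Point(3, 4);   // same properties created in the same order: same "shape" as p1
const p3 = { y: 6, x: 5 };    // same properties, different creation order: a different "shape"

function getX(pt) { return pt.x; }
getX(p1); getX(p2);           // lookups stay monomorphic (one shape) and therefore fast
getX(p3);                     // a second shape makes this call site polymorphic, and slower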
In Chrome's V8 (and perhaps other engines), each value carries a tag bit indicating whether the rest of the word is a small integer or a pointer.
With all of these fancy things going on, that just leaves typing for the human coders. I, for one, really like not having to worry about type and wouldn't use that feature.
You are semi-correct, though. Type hinting is a part of ActionScript 3 which is a derivative of ECMAScript -- but hinting has never made it into the standard. AFAIK, outside of wishful thinking, it hasn't been discussed.
This video describes things in far more detail:
http://www.youtube.com/watch?v=FrufJFBSoQY

I'm late, but since no one really answered your questions regarding the standards, I'll jump in.
Yes, type hinting was discussed as part of ECMAScript 4, and it looked like it was going to be the future of JavaScript... until ES4 bit the dust. ECMAScript 4 was abandoned and never finalized. ECMAScript 5 (the current standard) did not contain many of the things that were planned for ECMAScript 4 (including type hinting), and was really just a quickly beefed up version of the ECMAScript 3.1 draft -- to get some helpful features out the door in the wake of ES4's untimely demise.
As you mentioned, now they're working on churning out ECMAScript 6 (which has some totally awesome features!), but don't expect to see type hinting. The Adobe guys have, to a degree, parted ways with the ECMAScript committee, and the ES committee doesn't seem interested in bringing it back (I think for good reason).
If it is something you want, you might want to check out TypeScript. It's a brand new Microsoft project which is basically an attempt to be ES6+types. It's a superset of JavaScript (almost identical except for the inclusion of types), and it compiles to runnable JavaScript.

JavaScript JIT compilers have to do some pretty fancy stuff to determine the types of expressions and variables, since types are crucial to many optimizations. But the JavaScript compiler writers have spent the last five years doing all that work. The compilers are really smart now. Optional static types therefore would not improve the speed of a typical program.
Surprisingly, type annotations in ActionScript sometimes make the compiled code slower by requiring a type check (or implicit conversion) when a value is passed from untyped code to typed code.
There are other reasons you might want static types in a programming language, but the ECMAScript standards committee has no interest in adding them to JS.

ES7 (not coming soon) has a proposed feature called guards, which might be what you are asking about.
The proposed syntax is similar to ES4's and TypeScript's:
all of them use a colon to append the type to the variable.
But the syntax is not confirmed yet.
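For illustration only, the colon annotation style shared by ES4, TypeScript, and the guards proposal looks roughly like this (shown as comments because it is not valid standard JavaScript, and the type names are assumptions):

// let counter: uint32 = 0;
// function add(a: int32, b: int32): int32 {
//   return a + b;
// }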

Javascript is prototype-based, so the 'type' of an object is entirely dynamic and able to change through its lifetime.
Have a look at Ben Firshman's findings on Javascript performance in regards to object types - http://jsconf.eu/2010/speaker/lessons_learnt_pushing_browser.html
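As a quick sketch of that dynamism (object and property names invented for illustration):

const duck = { quack: () => "quack" };
duck.legs = 2;                // a property added after construction
delete duck.quack;            // and one removed again
Object.setPrototypeOf(duck, { bark: () => "woof" }); // even the prototype can be swapped
console.log(duck.bark());     // "woof"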

Related

JavaScript: why are some functions not methods?

I was once asked by a student why we write:
parseInt(something)
something.toLowerCase()
that is, why one has the variable as a parameter, while the other is applied to the variable.
I explained that while toLowerCase is a method of string objects, parseInt wasn’t designed that way. OK, so it’s window.parseInt, but that just makes it a method of a different object.
But it struck me as an inconsistency — why are some string or other functions not methods of their corresponding objects?
The question is why? Is there a technical reason why parseInt and other functions are not methods, or is that just a historical quirk?
In general, Javascript was designed in a hurry, so questioning each individual design decision isn't always a productive use of your time.
Having said that, for parseInt in particular, the reason is simple to explain: it accepts pretty much any arbitrary type, like:
parseInt(undefined) // NaN
Since you cannot implement undefined.parseInt(), the only way to do it is to implement it as a static function.
As of ECMAScript 2015, parseInt has been mirrored in Number.parseInt, where it arguably makes more sense than on window. For backwards compatibility window.parseInt continues to exist though.
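A quick sketch of the behaviour described above:

parseInt(undefined);   // NaN
parseInt("42px");      // 42 - parses the leading digits, ignores the rest
parseInt([10, 20]);    // 10 - the array is coerced to the string "10,20" first
console.log(Number.parseInt === parseInt); // true - the ES2015 mirror is the same function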
In this specific case it makes sense with respect to encapsulation.
Consider parseInt() - it is taking a value of an unknown type from an unknown location and extracting an integer value from it. Which object are you going to have it a method of? All of them?
String.toUpperCase() should only take a string as input (or something which may be cast as a string) and will return a string. This is well encapsulated within a small subset of cases, and since values are not strongly typed it seems logical to not have it as a global function.
As for the rest of JavaScript I have no idea nor do I have insight into the real reason it was done this way, but for these specific examples it appears to me to be a reasonable design decision.
The JavaScript language has been developing quite fast in recent years. With that in mind, a lot of things are still in the API due to backward compatibility - historical reasons, as you said. Although I can't say that's the only reason.
In JavaScript, you can approach a problem not just with the object-oriented paradigm (where methods of objects usually share a common state). Another, functional approach can be applied quite easily without getting into too much trouble with the JavaScript language.
JavaScript gives great power to its users with many possibilities of approaching a problem. There is a saying: "With Great Power Comes Great Responsibility".

Naming conventions for JSON-serializable variables

What naming conventions are people using for variables that hold JSON-serializable objects? We'd like the name of the variable to remind us to only store information in the object that can be serialized into JSON without losing information. Applications include HTTP sessions, non-searchable database columns, data logging, and serializing restorable application state.
The obvious contenders, at least from the perspective of someone programming in Javascript for node.js, seem to fall short:
prefixJSON - JSON is actually a serialization syntax, so this would properly be a string in JSON format, not an object
prefixInfo - info is often used in node.js for any kind of map, including ones that take functions and instances of ES6 classes.
prefixMap - Same issue as the info suffix
prefixData - Doesn't really suggest a constraint on the type
The best I can do is prefixJSONInfo, prefixJSONData, or prefixJSONObject, but I was hoping for something more succinct and readable. The prefix may be lengthy and descriptive too.
This question mainly applies to programming languages that support variant-type variables, such as Javascript. These variables are meant to hold a mishmash, but the programmer needs to be reminded to limit the types of values that are thrown into the mishmash.
In this case, it seems that a specific acronym might serve a good purpose.
JSONSerializable
written:
JSS or Jss
Examples:
prefixJss
customerDataJss
sessionInfoJss
Or maybe even:
JSONSerializableObject
prefixJSO
prefixJso
My $0.02. This might be more opinion based, as I don't think there are any globally accepted standards for these things. There are best practices, such as you mentioned (like short but descriptive). It's up to the developer or team to determine which short and descriptive versions they like (team preference).
Trust me though, I understand how difficult naming things is sometimes. :)

Javascript used to be a lisp? [duplicate]

A friend of mine drew my attention to the welcome message of the 4th European Lisp Symposium:
... implementation and application of any of the Lisp dialects, including Common Lisp, Scheme, Emacs Lisp, AutoLisp, ISLISP, Dylan, Clojure, ACL2, ECMAScript, ...
and then asked if ECMAScript is really a dialect of Lisp. Can it really be considered so? Why?
Is there a well defined and clear-cut set of criteria to help us detect whether a language is a dialect of Lisp? Or is being a dialect taken in a very loose sense (and in that case can we add Python, Perl, Haskell, etc. to the list of Lisp dialects?)
Brendan Eich wanted to do a Scheme-like language for Netscape, but reality intervened and he ended up having to make do with something that looked vaguely like C and Java for "normal" people, but which worked like a functional language.
Personally I think it's an unnecessary stretch to call ECMAScript "Lisp", but to each his own. The key characteristic of a real Lisp seems to be that data structure notation and code notation are the same, and that's not true of ECMAScript (or Ruby or Python or any other dynamic functional language that's not Lisp).
Caveat: I have no Lisp credentials :-)
It's not. It's got a lot of functional roots, but so do plenty of other non-lisp languages nowadays, as you pointed out.
Lisps have one remaining characteristic that makes them lisps, which is that lisp code is written in terms of lisp data structures (homoiconicity). This is what enables lisps' powerful macro system, and why it looks so bizarre to non-lispers. A function call is just a list, where the first element in the list is the name of the function.
Since lisp code is just lisp data, it's possible to do some extremely powerful stuff with metaprogramming, that just can't be done in other languages. Many lisps, even modern ones like clojure, are largely implemented in themselves as a set of macros.
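To make the "code is data" point concrete in JavaScript terms, here is a hedged sketch that models s-expressions as nested arrays and evaluates them (all names invented for illustration):

// The Lisp expression (+ 1 (* 2 3)) modelled as nested arrays:
const expr = ["+", 1, ["*", 2, 3]];

const ops = { "+": (a, b) => a + b, "*": (a, b) => a * b };
function evaluate(e) {
  if (!Array.isArray(e)) return e;         // numbers evaluate to themselves
  const [op, ...args] = e;                 // the first element names the operation
  return ops[op](...args.map(evaluate));   // recursively evaluate the arguments
}
console.log(evaluate(expr)); // 7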
Even though I wouldn't call JavaScript a Lisp, it is, in my humble opinion, more akin to the Lisp way of doing things than most mainstream languages (even functional ones).
For one, just like Lisp, it's, in essence, a simple, imperative language based on the untyped lambda calculus that is fit to be driven by a REPL.
Second, it's easy to embed literal data (including code in the form of lambda expressions) in JavaScript, since a subset of it is equivalent to JSON. This is a common Lisp pattern.
Third, its model of values and types is very lispy. It's object-oriented in a broad sense of the word in that all values have a concept of identity, but it's not particularly object-oriented in most narrower senses of the word. Just as in Lisp, objects are typed and very dynamic. Code is usually split into units of functions, not classes.
In fact, there are a couple of (more or less) recent developments in the JavaScript world that make the language feel pretty lispy at times. Take jQuery, for example. Embedding CSS selectors as a sublanguage is a pretty Lisp-like approach, in my opinion. Or consider ECMAScript Harmony's metaobject protocol: It really looks like a direct port of Common Lisp's (much more so than either Python's or Ruby's metaobject systems!). The list goes on.
JavaScript does lack macros and a sensible implementation of a REPL with editor integration, which is unfortunate. Certainly, influences from other languages are very much visible as well (and not necessarily in a bad way). Still, there is a significant amount of cultural compatibility between the Lisp and JavaScript camps. Some of it may be coincidental (like the recent rise of JavaScript JIT compilation), some systematic, but it's definitely there.
If you call ECMAScript Lisp, you're basically asserting that any dynamic language is Lisp. Since we already have "dynamic language", you're reducing "Lisp" to a useless synonym for it instead of allowing it to have a more specific meaning.
Lisp should properly refer to a language with certain attributes.
A language is Lisp if:
Its source code is tree-structured data, which has a straightforward printed notation as nested lists. Every possible tree structure has a rendering in the corresponding notation and is susceptible to being given a meaning as a construct; the notation itself doesn't have to be extended to extend the language.
The tree-structured data is a principal data structure in the language itself, which makes programs susceptible to manipulation by programs.
The language has symbol data type. Symbols have a printed representation which is interned: when two or more instances of the same printed notation for a symbol appear in the notation, they all denote the same object.
A symbol object's principal virtue is that it is different from all other symbols. Symbols are paired with various other entities in various ways in the semantics of Lisp programs, and thereby serve as names for those entities.
For instance, dialects of Lisp typically have variables, just like other languages. In Lisp, variables are denoted by symbols (the objects in memory) rather than textual names. When part of a Lisp program defines some variable a, the syntax for that a is a symbol object and not the character string "a", which is just that symbol's name for the purposes of printing. A reference to the variable, the expression written as a elsewhere in the program, is also an object. Because of the way symbols work, it is the same object; this object sameness then connects the reference to the definition. Object sameness might be implemented as pointer equality at the machine level. We know that two symbol values are the same because they are pointers to the same memory location in the heap (an object of symbol type).
Case in point: the NewLisp dialect which has a non-traditional memory management for most data types, including nested lists, makes an exception for symbols by making them behave in the above way. Without this, it wouldn't be Lisp. Quote: "Objects in newLISP (excluding symbols and contexts) are passed by value copy to other user-defined functions. As a result, each newLISP object only requires one reference." [emphasis mine]. Passing symbols too, as by value copy, would destroy their identity: a function receiving a symbol wouldn't be getting the original one, and therefore not correctly receiving its identity.
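Incidentally, JavaScript's global symbol registry offers a loose analogy to this interning behaviour (a sketch, not a claim that this makes JavaScript a Lisp):

const s1 = Symbol.for("counter");
const s2 = Symbol.for("counter");
console.log(s1 === s2); // true - same printed name, same object

console.log(Symbol("counter") === Symbol("counter")); // false - plain Symbols are never interned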
Compound expressions in a Lisp language—those which are not simple primaries like numbers or strings—consist of a simple list, whose first element is a symbol indicating the operation. The remaining elements, if any, are argument expressions. The Lisp dialect applies some sort of evaluation strategy to reduce the expression to a value, and evoke any side effects it may have.
I would tentatively argue that lists being made of binary cells that hold pairs of values, terminated by a special empty list object, probably should be considered part of the definition of Lisp: the whole business of being able to make a new list out of an existing one by "consing" a new item to the front, and the easy recursion on the "first" and "rest" of a list, and so on.
And then I would stop right there. Some people believe that Lisp systems have to be interactive: provide an environment with a listener, in which everything is mutable, and can be redefined at any time and so on. Some believe that Lisps have to have first-class functions: that there has to be a lambda operator and so on. Staunch traditionalists might even insist that there have to be car and cdr functions, the dotted pair notation supporting improper lists, and that lists have to be made up of cells, and terminated by specifically the symbol nil denoting the empty list, and also a Boolean false.
The more we shovel into the definition of "Lisp dialect", though, the more it becomes political; people get upset that their favorite dialect (perhaps which they created themselves) is being excluded on some technicality. Insisting on car and cdr allows Scheme to be a Lisp, but nil being the list terminator and false rules it out. What, Scheme not a Lisp?
So, based on the above, ECMAScript isn't a dialect of Lisp. However, an ECMAScript implementation contains functionality which can be exposed as a Lisp dialect, and numerous such dialects have been developed. Someone who wants ECMAScript to be considered a Lisp for emotional reasons should perhaps be content with that: the semantics to support Lisp is there, and just needs a suitable interface to that semantics, which can be developed in ECMAScript and which can interoperate with ECMAScript code.
No it's not.
In order to be considered a Lisp, one has to be homoiconic, which ECMAScript is not.
Not a 'dialect'. I learned LISP in the 70's and haven't used it since, but when I learned JavaScript recently I found myself thinking it was LISP-like. I think that's due to 2 factors: (1) JSON is a list-like associative structure and (2) it seems as though JS 'objects' are essentially JSON. So even though you don't write JS programs in JSON as you would write LISP in lists, you kind of almost do.
So my answer is that there are enough similarities that programmers familiar with LISP will be reminded of it when they use JavaScript. Statements like JS = LISP in a Java suit are only expressing that feeling. I believe that's all there is to it.
Yes, it is. Quoting Crockford:
"JavaScript has much in common with Scheme. It is a dynamic language. It has a flexible datatype (arrays) that can easily simulate s-expressions. And most importantly, functions are lambdas.
Because of this deep similarity, all of the functions in [recursive programming primer] 'The Little Schemer' can be written in JavaScript."
http://www.crockford.com/javascript/little.html
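For a flavour of that transcription, here is a sketch (not taken from the linked page) in which arrays stand in for s-expressions and arrow functions for lambdas:

const isAtom = x => !Array.isArray(x);
const length = list => (list.length === 0 ? 0 : 1 + length(list.slice(1)));

console.log(length(["a", ["b", "c"], "d"])); // 3
console.log(isAtom("a"));                    // true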
On the subject of homoiconicity, I would recommend searching that word along with JavaScript. Saying that it is "not homoiconic" is true but not the end of the story.
I think that ECMAScript is a dialect of LISP in the same sense that English is a dialect of French. There are commonalities, but you'll have trouble with assignments in one armed only with knowledge of the other :)
I find it interesting that only one of the three keynote presentations highlighted for the 4th European Lisp Symposium directly concerns Lisp (the other two being about x86/JVM/Python and Scala).
"dialect" is definitely stretching it too far. Still, as someone who has learned and used Python, Javascript, and Scheme, Javascript clearly has a far Lisp-ier feel to it (and Coffeescript probably even more so) than Python.
As for why the European Lisp Symposium would want to portray Javascript as a Lisp, obviously they want to piggyback on the popularity of the Javascript for which the programmer population is many, many times larger than all the rest of the Lisp dialects in their list combined.

Can regular JavaScript be converted to asm.js, or is it only to speed up statically-typed low-level languages?

I have read the question How to test and develop with asm.js?, and the accepted answer gives a link to http://kripken.github.com/mloc_emscripten_talk/#/.
The conclusion of that slide show is that "Statically-typed languages and especially C/C++ can be compiled effectively to JavaScript", so we can "expect the speed of compiled C/C++ to get to just 2X slower than native code, or better, later this year".
But what about non-statically-typed languages, such as regular JavaScript itself? Can it be compiled to asm.js?
Can JavaScript itself be compiled to asm.js?
Not really, because of its dynamic nature. It's the same problem as when trying to compile it to C or even to native code - you actually would need to ship a VM with it to take care of those non-static aspects. At least, such a VM is possible:
js.js is a JavaScript interpreter in JavaScript. Instead of trying to create an interpreter from scratch, SpiderMonkey is compiled into LLVM and then emscripten translates the output into JavaScript.
But if asmjs code runs faster than regular JS, then it makes sense to compile JS to asmjs, no?
No. asm.js is a quite restricted subset of JS that can be easily translated to bytecode. Yet you would first need to break down all the advanced features of JS to that subset to get this advantage - quite a complicated task, imo. But JavaScript engines are designed and optimized to translate all those advanced features directly into bytecode - so why bother with an intermediate step like asm.js? Js.js claims to be around 200 times slower than "native" JS.
And what about non-statically-typed languages in general?
The slideshow talks about that from …Just C/C++? onwards. Specifically:
Dynamic Languages
Entire C/C++ runtimes can be compiled and the original language interpreted with proper semantics, but this is not lightweight
Source-to-source compilers from such languages to JavaScript ignore semantic differences (for example, numeric types)
Actually, these languages depend on special VMs to be efficient
Source-to-source compilers for them lose out on the optimizations done in those VMs
In response to the general question "is it possible?" then the answer is that sure, both JavaScript and the asm.js subset are Turing complete so a translation exists.
Whether one should do this and expect a performance benefit is a different question. The short answer is "no, you shouldn't." I liken this to trying to compress a compressed file; yes, it is possible to run the compression algorithm, but in general you should not expect the resulting file to be smaller.
The short answer: The performance cost of dynamically-typed languages comes from the meaning of the code; a statically-typed program with an equivalent meaning would carry the same costs.
To understand this, it is important to understand why asm.js offers a performance benefit at all; or, more generally, why statically-typed languages perform better than dynamically-typed ones. The short answer is "run-time type checking takes time," and a longer answer would include the improved feasibility of optimizing statically-typed code. For example:
function a(x) { return x + 1; }
function b(x) { return x - 1; }
function c(x, y) { return a(x) + b(y); }
If x and y are both known to be integers, I can optimize function c to a couple of machine code instructions. If they could be integers or strings, the optimization problem becomes much harder; I have to treat these as string appends in some cases, and addition in other cases. In particular, there are four possible interpretations of the addition operation that occurs in c; it could be addition, or string append, or two different variants of coerce-to-string-and-append. As you add more possible types, the number of possible permutations grows; in the worst case for a dynamically-typed language, you have k^n possible interpretations of an expression involving n terms which could each have any of k types. In a statically typed language, k=1, so there is always 1 interpretation of any given expression. Because of this, optimizers are fundamentally more efficient at optimizing statically-typed code than dynamically-typed code: there are fewer permutations to consider when searching for opportunities to optimize.
The point here is that when converting from dynamically-typed code to statically-typed code (as you'd be doing when going from JavaScript to asm.js), you have to account for the semantics of the original code. Meaning the type-checking still occurs (it's just now been spelled out in statically-typed code), and all those permutations are still present to stifle the compiler.
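As a rough sketch of what "spelling out the checks" means (invented names, not what any real compiler emits), the dynamic function a from above would have to become something like:

function aChecked(x) {
  if (typeof x === "number") return x + 1;   // numeric addition
  if (typeof x === "string") return x + "1"; // string append
  // simplified: real JS coercion has more cases (e.g. null + 1 === 1)
  return String(x) + "1";
}
console.log(aChecked(2));    // 3
console.log(aChecked("2"));  // "21"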
A few facts about asm.js, which hopefully make the concept clear:
Yes, you can write the asm.js dialect by hand (see the sketch after this list).
If you look at the examples for asm.js, they are very far from being user friendly. Obviously Javascript is not the front-end language for creating this code.
Translating vanilla Javascript to asm.js dialect is not possible.
Think about it - if you could already translate standard Javascript in a fully static manner, why would there be a need for asm.js? The sole existence of asm.js means that the Javascript JIT people at some point gave up on their promise that Javascript will get faster without any effort from the developer.
There are several reasons for this, but let's just say it would be really hard for the JIT to understand a dynamic language as well as a static compiler does. And then probably for the developers to fully understand the JIT.
In the end it boils down to using the right tool for the task. If you want static, very performant code, use C / C++ ( / Java ) - if you want a dynamic language, use Javascript, Python, ...
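On the first point, here is a minimal hand-written asm.js module as a sketch (module and function names are invented); it is also plain JavaScript, so it runs anywhere, and the "use asm" pragma plus the |0 coercions are what mark it as the subset:

function AsmModule(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;            // parameter type annotation: a is an int
    b = b | 0;            // parameter type annotation: b is an int
    return (a + b) | 0;   // return type annotation: the result is an int
  }
  return { add: add };
}
console.log(AsmModule(globalThis).add(2, 3)); // 5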
asm.js was created out of the need for a small subset of JavaScript which can be easily optimized. If there were a way to convert JavaScript to asm.js, asm.js would not be needed anymore - that method could be inserted into JS interpreters directly.
In theory, it is possible to convert / compile / transpile any JavaScript operation to asm.js if it can be expressed with the limited subset of the language present in asm.js. In practice, however, there is no tool capable of converting ordinary JavaScript to asm.js at the moment (June, 2017).
Either way, it would make more sense to convert a language with static typing to asm.js, because static typing is a requirement of asm.js, and the lack thereof is one of the features of ordinary JavaScript that makes it exceptionally hard to compile to asm.js.
Back in 2013, when asm.js was hot, there was an attempt to compile a statically typed language similar to JavaScript, but both the language and the compiler seem to have been abandoned.
Today, in 2017, the JavaScript subsets TypeScript and Flow would be suitable candidates for conversion to asm.js, but neither language's core dev team is interested in such a conversion. LLJS had a fork that could compile to asm.js, but that project is pretty much dead. ThinScript is a much more recent attempt and is based on TypeScript, but it doesn't appear to be active either.
So, the best and easiest way to produce asm.js code is still to write your code in C/C++ and convert / compile / transpile it. However, it remains to be seen whether we'll even want to do this in the foreseeable future. Web Assembly may soon replace asm.js altogether, and TypeScript-like languages such as TurboScript and AssemblyScript that compile to Web Assembly are already popping up. In fact, TurboScript was originally based on ThinScript and used to compile to asm.js, but they appear to have abandoned this feature.
It may be possible to convert regular JavaScript to asm.js by first compiling it to C or C++, and then compiling the generated code to asm.js using Emscripten. I'm not sure if this would be practical, but it's an interesting concept nonetheless.
There is also a compiler called NectarJS that compiles JavaScript to WebAssembly and ASM.js.
Check this: http://badassjs.com/post/43420901994/asm-js-a-low-level-highly-optimizable-subset-of
Basically, you need to check that your code would be asm.js compatible (no coercion or type casting, you need to manage the memory, etc.). The idea behind this is to write your code in JavaScript, detect the bottleneck, and change your code to use asm.js and AOT compilation instead of JIT and dynamic compilation. It's a bit of a PITA, but you can still use JavaScript or other languages like C++ or, better, in the near future, LLJS.

Is it acceptable style for Node.js libraries to rely on object key order?

Enumerating the keys of JavaScript objects yields the keys in insertion order:
> for (key in {'z':1,'a':1,'b':1}) { console.log(key); }
z
a
b
This is not part of the standard, but is widely implemented (as discussed here):
ECMA-262 does not specify enumeration order. The de facto standard is to match insertion order, which V8 also does, but with one exception:
V8 gives no guarantees on the enumeration order for array indices (i.e., a property name that can be parsed as a 32-bit unsigned integer).
Is it acceptable practice to rely on this behavior when constructing Node.js libraries?
Absolutely not! It's not a matter of style so much as a matter of correctness.
If you depend on this "de facto" standard, your code might fail on an ECMA-262 5th Ed. compliant interpreter, because that spec does not specify the enumeration order. Moreover, the V8 engine might change its behavior in the future, say in the interest of performance.
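A quick demonstration of the array-index exception mentioned in the question (output observed on V8; a sketch, not a guarantee):

const obj = {};
obj["b"] = 1;
obj["2"] = 2;
obj["a"] = 3;
obj["1"] = 4;
console.log(Object.keys(obj)); // ["1", "2", "b", "a"] - integer-like keys come first, in ascending order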
Definitely do not rely on the order of the keys. If the standard doesn't specify an order, then implementations are free to do as they please. Hash tables often underlie objects like these, and you have no way of knowing when one might be used. Javascript has many implementations, and they are all competing to be the fastest. Key order will vary between implementations, if not now, then in the future.
No. Rely on the ECMAScript standard, or you'll have to argue with the developers about whether a "de facto standard" exists like the people on that bug.
It's not advised to rely on it naively.
You should also do your best to stick to the spec/standard.
However, there are often cases where the spec or standard limits what you can do. I'm not sure I've encountered many implementations in programming that don't deviate from or extend the specification, often for reasons such as the specification not catering to everything.
Sometimes people using specifics of an implementation might have test cases for that, though it's hard to make a reliable test case for keys being in order: it may succeed by accident, or rather it's difficult behavior to reliably reproduce.
If you do rely on an implementation specific then you must document that. If your project requires portability (code to run on other people's setups out of your control and you want maximum compatibility) then in this case it's not a good idea to rely on an implementation specific such as key order.
Where you do have full control of the implementation being used then it's entirely up to you which implementation specifics you use while keeping in mind you may be forced to cater to portability due to the common need or desire to upgrade implementation.
The best form of documentation for cases like this is inline, in the code itself, often with the intention of at least making it easy to identify areas to be changed should you switch from an implementation guaranteeing order to one not doing so.
You can make up the format you like but it can be something like...
/** #portability: insertion_ordered_keys */
for (let key in object) console.log(key);
You might even wrap such cases up in code:
forEachKeyInOrderOfInsertion(object, console.log)
Again, likely something less overly verbose but enough to identify cases dependent on that.
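A minimal sketch of such a wrapper, using the hypothetical helper name from above:

// One place to change if the assumed ordering guarantee ever goes away:
function forEachKeyInOrderOfInsertion(object, fn) {
  /** #portability: insertion_ordered_keys */
  for (const key of Object.keys(object)) fn(key);
}

forEachKeyInOrderOfInsertion({ z: 1, a: 2 }, console.log); // z, then a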
Where your implementation guarantees key order, you can just translate that to the same as the original for loop.
You can use a JS function for that with platform detection, templating like CPP, transpiling, etc. You might also want to wrap the object creation, and be very careful about things crossing boundaries. If something loses order before reaching you (like a JSON decode of input from a client over the network), then you'll likely not have a solution to that solely within your library; this can arise even just from someone else calling your library.
Though you'll likely not need those; at a minimum, mark the cases where you do something that might break later, and document that the potential exists.
An obvious exception to that is if the implementation guarantees consistency. In that case you will probably be wasting your time decorating everything, if it's not really variable and is already documented via the implementation. The implementation often is a spec or has its own; you can choose to stick to that rather than a more generalised spec.
Ultimately, in each case you'll need to make a judgement call, and you may also choose to take a chance. As long as you're fully aware of the potential problems, including the potential of wasting time avoiding problems you won't necessarily actually have (that is, you know all the stakes and have considered your circumstances), then it's up to you what to do. There's no "should" or "shouldn't"; it's case specific.
If you're making public node.js libraries or libraries to be widely distributed beyond the scope of your control, then I'd say it's not good to rely on implementation specifics. Instead, at least have a disclaimer in the release notes that the library only caters to your stack, and that if people want to use it for other stacks they can fix it and put in a pull request. Otherwise, if not documented, it should be fixed.
