Assigning JavaScript primitives to their named equivalents as "constants" - javascript

I was looking at the source code to qTip 2 and saw the following:
// Munge the primitives - Paul Irish tip
var TRUE = true,
    FALSE = false,
    NULL = null;
I can't come up with a reason you should ever do this, and have a strong feeling that it would just encourage bad coding habits. Say a developer makes a typo in a Yoda condition like if (TRUE = someCondition()), then TRUE could very well end up actually meaning false, or you might end up assigning someObject to NULL.
I guess I'm just wondering if there's some redeeming quality for this practice that I'm missing, or if this is just a plain old Bad Idea™

The goal of this is just to improve compression; Paul Irish himself calls it an "Anti-Pattern".
He describes it as "Good for compression and scope chain traversal" in the following presentation:
jQuery Anti-Patterns for Performance & Compression (slide 46)
As for scope chain traversal, there is no improvement for the literals null, false, and true, since literals are not resolved through the scope chain.
For other identifiers such as undefined or window, the scope chain is indeed traversed, so local aliases can help.
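For illustration (this sketch is not from qTip 2, and someGlobal is a made-up name), the classic shape of that second point is an IIFE that takes window and document as parameters and leaves a slot for undefined, so a minifier can shorten the names and lookups stay local:

// Alias window, document and undefined as IIFE parameters: a minifier can
// rename them to single characters, and resolving them no longer walks the
// scope chain up to the global object.
(function (window, document, undefined) {
    if (window.someGlobal === undefined) { // someGlobal is hypothetical
        document.title = "fallback";
    }
})(window, document); // no third argument, so `undefined` really is undefined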

You could do this for the sake of code compression. For example, YUI Compressor is not going to touch true and false, but it could replace all occurrences of, say, TRUE with a, saving three characters per occurrence. For example, before compression:
if (foo === NULL) {
    bar = TRUE;
}
After compression, assuming the compressor replaces TRUE with a and NULL with c:
if(foo===c){bar=a;}
Versus this, after compression with no "munging of primitives":
if(foo===null){bar=true;}
The bad-coding-habits danger that you quite correctly cite in your question may outweigh the small savings in additional compression. It depends on how desperate you are to save a few dozen or perhaps a few hundred bytes.
Personally, I would (almost) never do this. Too dangerous.
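To make that danger concrete, here is a minimal sketch of the typo scenario from the question (someCondition is a stand-in):

var TRUE = true;

function someCondition() { return false; } // stand-in for a real check

if (TRUE = someCondition()) { // typo: TRUE === someCondition() was intended
    // never reached, because the assignment evaluates to false
}
console.log(TRUE); // false - every later use of TRUE is now silently wrong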

I believe this is recommended for compression.
These shortcut variables will be compressed when munged, resulting in smaller files. However, your noted drawbacks are most certainly valid points!


Performance of Symbols vs number literal?

I've been using object literals as a poor man's enum, something like this:
// note: "enum" is a reserved word in JavaScript, so use another name
let options = {
    option1: Symbol("o1"),
    option2: Symbol("o2"),
    option3: Symbol("o3")
};
let item = options.option2;
if (item === options.option2) { console.log("Item is Option 2!"); }
I use Symbol because I think it makes more semantic sense than using numbers -- in this case I don't really care about which value the "enum" carries, I just want to check equality -- but am slightly worried about performance considerations of doing it that way. Am I putting a bigger strain on the processor if I keep using Symbols in place of integers?
No, symbols are primitive values just like numbers, and comparing them should be just as fast. The only downside might be that you have to use a variable to refer to them instead of a trusted literal, but if your variables are const and never reassigned, an optimising compiler should be able to inline the symbol values as well.
In any case, you should definitely use what makes more sense semantically, and helps you with development performance. Execution speed is secondary, and the difference here will be negligible.
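For illustration, here is a minimal sketch of that const-based variant (the Option name and its labels are made up):

// const bindings that are never reassigned, so an optimising compiler is
// free to treat the symbol values as constants and inline them.
const Option = Object.freeze({
    ONE: Symbol("option1"),
    TWO: Symbol("option2"),
    THREE: Symbol("option3")
});

function describe(item) {
    // comparing symbols is a cheap identity check, like comparing numbers
    return item === Option.TWO ? "Item is Option 2!" : "something else";
}

console.log(describe(Option.TWO)); // "Item is Option 2!"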

Assignment expression in while condition is a bad practice?

This article explains why I get a warning if I use code like this:
var htmlCollection = document.getElementsByClassName("class-name"),
    i = htmlCollection.length,
    htmlElement;

// Because htmlCollection is Live, we use a reverse iteration.
while (htmlElement = htmlCollection[--i]) { // **Warning?! Why?!**
    htmlElement.classList.remove("class-name");
}
But it offers no explanation of « why is it a bad practice to use an assignment expression in a while condition? ».
I also read this Stack Overflow answer that presents the practice as good. So...
Is there a performance problem with while (element = element.parentNode)-like syntax, or is it just a code-style recommendation?
By the way, it seems the « --i » operator is also considered bad practice. I read in this article:
The ++ (increment) and -- (decrement) operators have been known to contribute to bad code by encouraging excessive trickiness.
Is this some sort of joke?
There should be no performance problems with it (arguably, indexing with prefix increment can be slightly slower than postfix increment, due to issues with CPU pipelining, but this is a microoptimization so ridiculously micro that it almost certainly means nothing in the context of JS engine overhead; even in C, the compiler is likely to reorder expressions when it can to ensure it's not stalled waiting on the increment).
Either way, the main argument against assignment in a conditional is basically that most of the time when you do it, it's a mistake (you meant ==, or in JS, ===). Some code checkers (and C# requires this as a language feature to avoid accidents) are satisfied if you wrap the assignment in an additional layer of parens, to say, "Yup, I really meant to assign" (which is also necessary when you're comparing the result of the assignment to some other value; omitting the parens would instead compare first, then assign a boolean, which is even more likely to be wrong).
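Applied to the loop from the question, that convention looks like this (ESLint's default no-cond-assign setting, for example, accepts the extra parens; other linters vary):

// The doubled parentheses say: "yes, I really meant to assign here".
while ((htmlElement = htmlCollection[--i])) {
    htmlElement.classList.remove("class-name");
}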
Some people have a hate on for increment/decrement operators used as part of larger expressions, because remembering order of operations is hard I guess, and because C programmers have been known to write horrible things like ++*++var and the like. I ignore these people; just don't use it for excessively tricky things.
As an orthogonal, and possibly 'cleaner/clearer', approach there is:
// var htmlCollection = document.getElementsByClassName("class-name");
var htmlCollection = document.querySelectorAll('.class-name');
for (let htmlElement of htmlCollection) {
    htmlElement.classList.remove("class-name");
}
as a method of iterating over DOM elements.
UPDATED to include suggestion from ShadowRanger below.

How does Bluebird's util.toFastProperties function make an object's properties "fast"?

In Bluebird's util.js file, it has the following function:
function toFastProperties(obj) {
    /*jshint -W027*/
    function f() {}
    f.prototype = obj;
    ASSERT("%HasFastProperties", true, obj);
    return f;
    eval(obj);
}
For some reason, there's a statement after the return statement, and I'm not sure why it's there.
As well, it seems that it is deliberate, as the author had silenced the JSHint warning about this:
Unreachable 'eval' after 'return'. (W027)
What exactly does this function do? Does util.toFastProperties really make an object's properties "faster"?
I've searched through Bluebird's GitHub repository for any comments in the source code or an explanation in their list of issues, but I couldn't find any.
2017 update: First, for readers coming today - here is a version that works with Node 7 (4+):
function enforceFastProperties(o) {
    function Sub() {}
    Sub.prototype = o;
    var receiver = new Sub(); // create an instance
    function ic() { return typeof receiver.foo; } // perform access
    ic();
    ic();
    return o;
    eval("o" + o); // ensure no dead code elimination
}
Sans one or two small optimizations, everything below is still valid.
Let's first discuss what it does and why that's faster and then why it works.
What it does
The V8 engine uses two object representations:
Dictionary mode - in which objects are stored as key-value hash maps.
Fast mode - in which objects are stored like structs, so no computation is involved in property access.
A simple demo can demonstrate the speed difference: use the delete statement to force objects into slow dictionary mode.
The engine tries to use fast mode whenever possible and generally whenever a lot of property access is performed - however sometimes it gets thrown into dictionary mode. Being in dictionary mode has a big performance penalty so generally it is desirable to put objects in fast mode.
This hack is intended to force the object into fast mode from dictionary mode.
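As a rough sketch of the whole round trip (runnable only with the natives syntax flag, and the exact behaviour depends on the V8 version - older V8s deoptimized on delete much more eagerly):

// Run with: node --allow-natives-syntax demo.js
// toFastProperties is the Bluebird function quoted above (minus its ASSERT line).
var obj = { a: 1, b: 2 };
delete obj.a; // may push obj into dictionary mode (V8-version dependent)
console.log(%HasFastProperties(obj)); // possibly false at this point

toFastProperties(obj); // force it back
console.log(%HasFastProperties(obj)); // true - back in fast mode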
Bluebird's Petka himself talks about it here.
These slides (wayback machine) by Vyacheslav Egorov also mention it.
The question "Pros and cons of dictionary mode" (https://stackoverflow.com/questions/23455678/pros-and-cons-of-dictionary-mode) and its accepted answer are also related.
This slightly outdated article is still a fairly good read that can give you a good idea on how objects are stored in v8.
Why it's faster
In JavaScript prototypes typically store functions shared among many instances and rarely change a lot dynamically. For this reason it is very desirable to have them in fast mode to avoid the extra penalty every time a function is called.
For this - v8 will gladly put objects that are the .prototype property of functions in fast mode since they will be shared by every object created by invoking that function as a constructor. This is generally a clever and desirable optimization.
How it works
Let's first go through the code and figure what each line does:
function toFastProperties(obj) {
    /*jshint -W027*/ // suppress the "unreachable code" warning
    function f() {}  // declare a new function
    f.prototype = obj; // assign obj as its prototype to trigger the optimization
    // assert that the optimization passes, to prevent the code from silently
    // breaking in the future in case this optimization stops working:
    ASSERT("%HasFastProperties", true, obj); // requires the "native syntax" flag
    return f; // return it
    eval(obj); // prevent the function from being optimized through dead code
               // elimination or further optimizations. This code is never
               // reached, but even using eval in unreachable code causes v8
               // to not optimize functions.
}
We don't have to find the code ourselves to assert that v8 does this optimization; we can instead read the v8 unit tests:
// Adding this many properties makes it slow.
assertFalse(%HasFastProperties(proto));
DoProtoMagic(proto, set__proto__);
// Making it a prototype makes it fast again.
assertTrue(%HasFastProperties(proto));
Reading and running this test shows us that this optimization indeed works in v8. However - it would be nice to see how.
If we check objects.cc we can find the following function (L9925):
void JSObject::OptimizeAsPrototype(Handle<JSObject> object) {
  if (object->IsGlobalObject()) return;
  // Make sure prototypes are fast objects and their maps have the bit set
  // so they remain fast.
  if (!object->HasFastProperties()) {
    MigrateSlowToFast(object, 0);
  }
}
Now, JSObject::MigrateSlowToFast just explicitly takes the dictionary and converts it into a fast V8 object. It's a worthwhile read and an interesting insight into v8 object internals - but it's not the subject here. I still warmly recommend reading it, as it's a good way to learn about v8 objects.
If we check out SetPrototype in objects.cc, we can see that OptimizeAsPrototype is called in line 12231:
if (value->IsJSObject()) {
  JSObject::OptimizeAsPrototype(Handle<JSObject>::cast(value));
}
Which in turn is called by FunctionSetPrototype, which is what we get with .prototype =.
Doing __proto__ = or .setPrototypeOf would have also worked, but these are ES6 functions and Bluebird runs on all browsers since Netscape 7, so using them to simplify the code here is out of the question. For example, if we check .setPrototypeOf we can see:
// ES6 section 19.1.2.19.
function ObjectSetPrototypeOf(obj, proto) {
    CHECK_OBJECT_COERCIBLE(obj, "Object.setPrototypeOf");
    if (proto !== null && !IS_SPEC_OBJECT(proto)) {
        throw MakeTypeError("proto_object_or_null", [proto]);
    }
    if (IS_SPEC_OBJECT(obj)) {
        %SetPrototype(obj, proto); // MAKE IT FAST
    }
    return obj;
}
Which is installed directly on Object:
InstallFunctions($Object, DONT_ENUM, $Array(
    ...
    "setPrototypeOf", ObjectSetPrototypeOf,
    ...
));
So - we have walked the path from the code Petka wrote to the bare metal. This was nice.
Disclaimer:
Remember this is all implementation detail. People like Petka are optimization freaks. Always remember that premature optimization is the root of all evil 97% of the time. Bluebird does something very basic very often so it gains a lot from these performance hacks - being as fast as callbacks isn't easy. You rarely have to do something like this in code that doesn't power a library.
V8 developer here. The accepted answer is a great explanation, and I just wanted to highlight one thing: the so-called "fast" and "slow" property modes are unfortunate misnomers; each has its pros and cons. Here is a (slightly simplified) overview of the performance of various operations:
operation                                 struct-like properties   dictionary properties
adding a property to an object            --                       +
deleting a property                       ---                      +
reading/writing a property, first time    -                        +
reading/writing, cached, monomorphic      +++                      +
reading/writing, cached, few shapes       ++                       +
reading/writing, cached, many shapes      --                       +
colloquial name                           "fast"                   "slow"
So as you can see, dictionary properties are actually faster for most of the lines in this table, because they don't care what you do, they just handle everything with solid (though not record-breaking) performance. Struct-like properties are blazing fast for one particular situation (reading/writing the values of existing properties, where every individual place in the code only sees very few distinct object shapes), but the price they pay for that is that all other operations, in particular those that add or remove properties, become much slower.
It just so happens that the special case where struct-like properties have their big advantage (+++) is particularly frequent and really important for many apps' performance, which is why they acquired the "fast" moniker. But it's important to realize that when you delete properties and V8 switches the affected objects to dictionary mode, then it isn't being dumb or trying to be annoying: rather it attempts to give you the best possible performance for what you're doing. We have landed patches in the past that have achieved significant performance improvements by making more objects go to dictionary ("slow") mode sooner when appropriate.
Now, it can happen that your objects would generally benefit from struct-like properties, but something your code does causes V8 to transition them to dictionary properties, and you'd like to undo that; Bluebird had such a case. Still, the name toFastProperties is a bit misleading in its simplicity; a more accurate (though unwieldy) name would be spendTimeOptimizingThisObjectAssumingItsPropertiesWontChange, which would indicate that the operation itself is costly, and it only makes sense in certain limited cases. If someone took away the conclusion "oh, this is great, so I can happily delete properties now, and just call toFastProperties afterwards every time", then that would be a major misunderstanding and cause pretty bad performance degradation.
If you stick with a few simple rules of thumb, you'll never have a reason to even try to force any internal object representation changes:
Use constructors, and initialize all properties in the constructor. (This helps not only your engine, but also understandability and maintainability of your code. Consider that TypeScript doesn't quite force this but strongly encourages it, because it helps engineering productivity.)
Use classes or prototypes to install methods, don't just slap them onto each object instance. (Again, this is a common best practice for many reasons, one of them being that it's faster.)
Avoid delete. When properties come and go, prefer using a Map over the ES5-era "object-as-map" pattern. When an object can toggle into and out of a certain state, prefer boolean (or equivalent) properties (e.g. o.has_state = true; o.has_state = false;) over adding and deleting an indicator property. (See the sketch after this list.)
When it comes to performance, measure, measure, measure. Before you start sinking time into performance improvements, profile your app to see where the hotspots are. When you implement a change that you hope will make things faster, verify with your real app (or something extremely close to it; not just a 10-line microbenchmark!) that it actually helps.
Lastly, if your team lead tells you "I've heard that there are 'fast' and 'slow' properties, please make sure that all of ours are 'fast'", then point them at this post :-)
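Here is the sketch promised above for the delete rule (the names are illustrative):

// Prefer a Map when keys come and go:
const cache = new Map();
cache.set("user:1", { name: "Ada" });
cache.delete("user:1"); // Maps are designed for this kind of churn

// Prefer a boolean that always exists over adding/deleting a marker property:
const o = { has_state: false };
o.has_state = true;  // enter the state
o.has_state = false; // leave it again - the object's shape never changes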
Reality from 2021 (Node.js version 12+):
It seems a huge optimization has landed; objects with deleted fields and sparse arrays don't become slow. Or am I missing something?
// run in Node with the natives syntax flag enabled:
// node --allow-natives-syntax script.js
function Point(x, y) {
    this.x = x;
    this.y = y;
}

var obj1 = new Point(1, 2);
var obj2 = new Point(3, 4);
delete obj2.y;

var arr = [1, 2, 3];
arr[100] = 100;

console.log('obj1 has fast properties:', %HasFastProperties(obj1));
console.log('obj2 has fast properties:', %HasFastProperties(obj2));
console.log('arr has fast properties:', %HasFastProperties(arr));
All three show true:
obj1 has fast properties: true
obj2 has fast properties: true
arr has fast properties: true
// run in Node with the natives syntax flag enabled:
// node --allow-natives-syntax script.js
function Point(x, y) {
    this.x = x;
    this.y = y;
}

var obj2 = new Point(3, 4);
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); // true
delete obj2.y;
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); // true

var obj = {x: 1, y: 2};
console.log('obj has fast properties:', %HasFastProperties(obj)); // true
delete obj.x;
console.log('obj has fast properties:', %HasFastProperties(obj)); // false
So objects created via a constructor function and plain object literals behave differently here.

How reliable are the polyfill/shim implementations on MDN

I have been looking through the polyfill implementations on the Mozilla Developer Network (MDN) as I require a few of these for a library. I know shim.js exists, but I'm not using that.
It seems that the polyfills are not consistent in code styling; it appears that they are written by the community in an almost "wiki" style.
Take for example String.prototype.contains
if (!('contains' in String.prototype)) {
    String.prototype.contains = function(str, startIndex) {
        return -1 !== String.prototype.indexOf.call(this, str, startIndex);
    };
}
It seems more logical to me to implement it like this:
if (!String.prototype.contains) {
    String.prototype.contains = function(str, startIndex) {
        return this.indexOf(str, startIndex) !== -1;
    };
}
Given that JavaScript is a size-critical language (in that everything should be as small as possible for network transmission), my example should be preferable to the one on MDN, as it saves a few bytes.
As the title suggests, I want to know how reliable the code is on MDN, and should I modify this as necessary to provide really clean, tiny implementations where possible?
It seems that your question refers to the article on String.contains().
Yes, MDN is a wiki so the quality of its content (including code examples) can vary. However, the content on general web topics (as opposed to extension development for example) is usually pretty good. Still, you shouldn't forget to use common sense.
The polyfill suggested on MDN and your version differ in three points:
!('contains' in String.prototype) vs. !String.prototype.contains to check whether a property exists: The former is clearly preferable. The in operator merely looks up a property; there are no side effects. !String.prototype.contains, on the other hand, will actually retrieve the value of that property and convert it to a boolean. Not only is this marginally slower; some property values like 0 will be wrongly coerced to false. You probably won't notice the difference with functions, but this might become a real issue when polyfilling other property types (a contrived sketch follows these points).
-1 !== foo vs. foo !== -1 for comparisons: This is a matter of taste but some people prefer the former variant. The advantage of always putting the constant first in comparisons is that you won't unintentionally turn a comparison into an assignment: writing -1 = foo when you meant -1 == foo will cause an error. On the other hand, foo = -1 instead of foo == -1 will succeed and noticing that issue in your code might take a while. Obviously, if you choose to adapt that style you need to use it consistently throughout all your code.
String.prototype.indexOf.call vs. this.indexOf: The former guards against the situation that the indexOf method on the this object is overwritten. As a result, it is closer to the behavior of the native String.contains() function. Consider this example:
var a = "foo";
a.indexOf = function() { something_weird };
alert(a.contains("f"));
The native implementation of String.contains and a polyfill using String.prototype.indexOf.call will work even if this.indexOf is overwritten - a polyfill using this.indexOf however will fail.
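To make the first point concrete, here is the promised contrived sketch (the zero property is purely hypothetical):

String.prototype.zero = 0; // an existing property whose value is falsy

console.log('zero' in String.prototype); // true  -> the in-check correctly sees the property
console.log(!String.prototype.zero);     // true  -> the truthiness check would wrongly treat it as missing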
Altogether, the code provided on MDN has a few more fail-safes. Whether these are required in your individual scenario is another question, of course. However, dropping them to save a few bytes is the wrong approach to optimization ("premature optimization is the root of all evil"). Personally, I prefer good style over efficiency unless the difference in performance is known to be relevant.

Using &&'s short-circuiting as an if statement?

I saw this line in the jQuery.form.js source code:
g && $.event.trigger("ajaxComplete", [xhr, s]);
My first thought was wtf??
My next thought was, I can't decide if that's ugly or elegant.
I'm not a Javascript guru by any means so my question is 2-fold. First I want to confirm I understand it properly. Is the above line equivalent to:
if (g) {
    $.event.trigger("ajaxComplete", [xhr, s]);
}
And secondly is this common / accepted practice in Javascript? On the one hand it's succinct, but on the other it can be a bit cryptic if you haven't seen it before.
Yes, your two examples are equivalent. It works like this in pretty much all languages, but it's become rather idiomatic in Javascript. Personally I think it's good in some situations but can be abused in others. It's definitely shorter though, which can be important to minimize Javascript load times.
Also see Can somebody explain how John Resig's pretty.js JavaScript works?
It's standard, but neither JSLint nor JSHint likes it:
Expected an assignment or function call and instead saw an expression.
You must be careful because this short-circuiting can be bypassed if there is an || in the conditional:
false && true || true
> true
To avoid this, be sure to group the conditionals:
false && (true || true)
> false
Yes, it's equivalent to an if as you wrote. It's certainly not an uncommon practice. Whether it's accepted depends on who is (or isn't) doing the accepting...
Yes, you understand it (in that context); yes, it is standard practice in JavaScript.
By default, it will trigger a jshint warning:
[jshint] Expected an assignment or function call and instead saw an expression. (W030) [W030]
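If the pattern is deliberate, JSHint's expr relaxing option silences that warning (usable as an inline directive or in .jshintrc):

/* jshint expr: true */ // bare expression statements are intentional here
g && $.event.trigger("ajaxComplete", [xhr, s]);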
Personally, however, I prefer the short-circuit version: it looks more declarative and has "less control logic", though that might be a misconception.
