Limiting side effects when programming JavaScript in the browser

Limiting side effects when programming in the browser with JavaScript is quite tricky.
I can avoid doing things like mutating outer variables, as in this silly example:
let number = 0;
const inc = (n) => {
  number = number + n;
  return number;
};
console.log(inc(1)); //=> 1
console.log(inc(1)); //=> 2
But what other things can I do to reduce side effects in my javascript?

Of course you can avoid side effects by careful programming. I assume your question is about how to prevent them. Your ability to do so is severely limited by the nature of the language. Here are some approaches:
Use web workers. See MDN. Web workers run in another global context that is different from the current window.
Isolate certain kinds of logic inside iframes. Use cross-window messaging to communicate with the iframe.
Immutability libraries. See https://github.com/facebook/immutable-js. Also http://bahmutov.calepin.co/avoid-side-effects-with-immutable-data-structures.html.
Lock down your objects with Object.freeze, Object.seal, or Object.preventExtensions. In the same vein, create read-only properties on objects using Object.defineProperty with getters but no setters, or with the writable property set to false (see the sketch after this list).
Use Object.observe to get asynchronous reports on various types of changes to objects and their properties, upon which you could throw an error or take other action. (Note that Object.observe was later withdrawn from the standardization process and removed from engines.)
If available, use Proxy for complete control over access to an object.
For considerations on preventing access to window, see javascript sandbox a module to prevent reference to Window. Also http://dean.edwards.name/weblog/2006/11/sandbox/. Finally, Semi-sandboxing Javascript eval.
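For instance, here is a minimal sketch of the freeze, getter-only, and Proxy approaches (the object and property names are made up for illustration):
// Object.freeze: writes are silently ignored (or throw in strict mode)
const config = { retries: 3 };
Object.freeze(config);
config.retries = 99;
console.log(config.retries); //=> 3

// Read-only property: a getter with no setter
const user = {};
Object.defineProperty(user, 'name', { get: () => 'alice', enumerable: true });
console.log(user.name); //=> 'alice'

// Proxy: intercept and reject all writes
const readOnly = new Proxy({ a: 1 }, {
  set() { throw new TypeError('read-only'); }
});
console.log(readOnly.a); //=> 1
// readOnly.a = 2; // would throw the TypeError above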
It is useful to distinguish between inward side effects and outward side effects. Inward side effects are where some other code intrudes on the state of my component. Outward side effects are where my code intrudes on the state of some other component. Inward side effects can be prevented via the IIFEs mentioned in other answers, or by ES6 modules. But the more serious concern is outward side effects, which are going to require one of the approaches mentioned above.

Just what jumps to my mind thinking about your question:
Don't pollute the global namespace. Use var or let; those keywords limit your variables to the enclosing function or block scope.
"By reducing your global footprint to a single name, you significantly reduce the chance of bad interactions with other applications, widgets, or libraries." - Douglas Crockford
Use semicolons
The comment section of this article provides some good (real life) reasons to always use semicolons.
Don't declare String, Number or Boolean objects (in case you were ever tempted to):
var m = new Number(2);
var n = 2;
m === n; // false: m is an object, n is a primitive number
"use strict"; is your friend. Enabling strict mode is a good idea, but please don't add it to existing code since it might break something and you can not really declare strict only on lexical scopes or individual scripts as stated here
Declare variables first. One common source of surprises is that people are not aware of JavaScript's hoisting: var declarations are moved to the top of the enclosing function scope, no matter where they appear.
function example() {
  var x = 3;
  // variables declared (though still undefined) at this point: x, y, i !
  // some code....
  var y = 1; // the declaration of y is hoisted to the top!
  for (var i = 0; i < 10; i++) { // the declaration of i is hoisted too!
    // some code...
  }
}
This article discusses what hoisting actually means and what kind of 'unexpected behaviour' it can lead to.
Use '===' instead of '=='. There are many good reasons for this; loose comparison is one of the most common sources of 'side effects' or errors in JavaScript.
For more details see this great answer on SO, but let me give you a quick demonstration:
'' == '0' // false
0 == '' // true
// can be avoided by using '==='
'' === '0' // false
0 === '' // false
Make use of IIFEs. An IIFE (Immediately Invoked Function Expression) lets you declare an anonymous function that is invoked immediately. This way you create a new lexical scope and don't have to worry about the global namespace.
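A minimal sketch:
(function () {
  var helper = function (n) { return n * 2; }; // stays inside the IIFE
  console.log(helper(21)); //=> 42
})();
console.log(typeof helper); //=> 'undefined' - nothing leaked out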
Be careful with prototypes. Keep in mind that JavaScript objects created by the same constructor share the same prototype. Changing a prototype means changing the behaviour of all instances of that class, even those used by other scripts/frameworks, as happened here:
Object.prototype.foo = function(){...} // careful!
Those are the 'side effects' that came to my mind. Of course there is far more to take care of (meaningful variable names, consistent code style, etc.), but I don't count those things as 'side effects', since they make your code hard to maintain but won't break it immediately.

My favorite trick is to just use a language that compiles to javascript, instead of using javascript.
However, two important tricks you can do :
Start your file with "use strict";. This turns on stricter validation of your code and prevents the use of undeclared variables. Yes, that's a special string that the browser knows how to deal with.
Use functions to create scope. JavaScript (before ES6) has no block scope, but function scope works fine, so get (function() { })(); into your muscle memory.
Normal CS fundamentals also apply: separate your code into logical pieces, name your variables properly, always explicitly initialize variables when you need them, etc.

Related

AngularJS - Best practice: model properties on view or function calls?

For quite some time I've been wondering about this question: when working with AngularJS, should I use the model object's properties directly in the view, or can I use a function to get that property value?
I've been doing some minor home projects in Angular, and (especially when working with read-only directives or controllers) I tend to create scope functions to access and display scope objects and their property values in the views. But performance-wise, is this a good way to go?
This way seems easier for maintaining the view code, since, if for some reason the object is changed (due to a server implementation or any other particular reason), I only have to change the directive's JS code, instead of the HTML.
Here's an example:
//this goes inside directive's link function
scope.getPropertyX = function() {
  return scope.object.subobject.propX;
};
in the view I could simply do
<span>{{ getPropertyX() }}</span>
instead of
<span>{{ object.subobject.propX }}</span>
which is harder to maintain amidst the HTML clutter that is sometimes involved.
Another case is using scope functions to test property values for evaluation in an ng-if, instead of using that test expression directly:
scope.testCondition = function() {
  return scope.obj.subobj.propX === 1 && scope.obj.subobj.propY === 2 && ...;
};
So, are there any pros/cons to this approach? Could you provide me with some insight on this issue? It's been bothering me lately, wondering how a heavy app might behave when, for example, a directive gets really complex and, on top of that, is used inside an ng-repeat that could generate hundreds or thousands of instances.
Thank you
I don't think creating functions for all of your properties is a good idea. Not only will there be more function calls made every digest cycle to see if each function's return value has changed, but it also seems less readable and maintainable to me. It could add a lot of unnecessary code to your controllers and sort of turns your controller into a view model. Your second case seems perfectly fine; complex operations are exactly what you would want your controller to handle.
As for performance, it does make a difference according to a test I wrote (fiddle; I tried to use jsperf but couldn't get a different setup per test). The results are almost twice as fast: 223,000 digests/sec using properties versus 120,000 digests/sec using getter functions. Watches are created for bindings through Angular's $parse.
One thing to think about is inheritance. If you uncomment the ng-repeat list in the fiddle and inspect the scope of one of the elements you can see what I'm talking about. Each child scope inherits the parent scope's properties. For objects it inherits a reference, so if you have 50 properties on your object it only copies the object reference value to the child scope. If you have 50 manually created functions it copies each of those functions to every child scope. The timings are slower for both methods: 126,000 digests/sec for properties and 80,000 digests/sec with getter functions.
I really don't see how it would make your code easier to maintain; to me it seems more difficult. If you don't want to touch your HTML when the server object changes, it would probably be better to handle that in a JavaScript object instead of putting getter functions directly on your scope, i.e.:
$scope.obj = new MyObject(obj); // MyObject class
In addition, Angular 2.0 will be using Object.observe() which should increase performance even more, but would not improve the performance using getter functions on your scope.
It looks like this code is all executed for each function call. It calls contextGetter(), fnGetter(), and ensureSafeFunction(), as well as ensureSafeObject() for each argument, for the scope itself, and for the return value.
return function $parseFunctionCall(scope, locals) {
  var context = contextGetter ? contextGetter(scope, locals) : scope;
  var fn = fnGetter(scope, locals, context) || noop;
  if (args) {
    var i = argsFn.length;
    while (i--) {
      args[i] = ensureSafeObject(argsFn[i](scope, locals), expressionText);
    }
  }
  ensureSafeObject(context, expressionText);
  ensureSafeFunction(fn, expressionText);
  // IE stupidity! (IE doesn't have apply for some native functions)
  var v = fn.apply
    ? fn.apply(context, args)
    : fn(args[0], args[1], args[2], args[3], args[4]);
  return ensureSafeObject(v, expressionText);
};
By contrast, simple properties are compiled down to something like this:
(function(s, l /**/) {
  if (s == null) return undefined;
  s = ((l && l.hasOwnProperty("obj")) ? l : s).obj;
  if (s == null) return undefined;
  s = s.subobj;
  if (s == null) return undefined;
  s = s.A;
  return s;
})
Performance-wise - it is likely to matter little
Jason Goemaat did a great job providing a benchmarking fiddle, where you can change the last line from:
setTimeout(function() { benchmark(1); }, 500);
to
setTimeout(function() { benchmark(0); }, 500);
to see the difference.
But he also frames the answer as properties being twice as fast as function calls. In fact, on my mid-2014 MacBook Pro, properties are three times faster.
But equally, the difference between calling a function or accessing the property directly is 0.00001 seconds - or 10 microseconds.
This means that if you have 100 getters, they will be slower by 1ms compared to having 100 properties accessed.
Just to put things in context, the time it takes sensory input (a photon hitting the retina) to reach our consciousness is 300ms (yes, conscious reality is 300ms delayed). So you'd need 30,000 getters on a single view to get the same delay.
Code quality wise - it could matter a great deal
In the days of assembler, software was looked at like this:
A collection of executable lines of code.
But nowadays, specifically for software that has even the slightest level of complexity, the view is:
A social interaction between communication objects.
The latter is concerned much more with how behaviour is established via communicating objects than with the actual low-level implementation. In turn, the communication is enabled by an interface, which is typically designed around the query-or-command principle. What matters is the interface of (or the contract between) collaborating objects, not the low-level implementation.
By inspecting properties directly, you are hitting the internals of an object, bypassing its interface, and thus coupling the caller to the callee.
Obviously, with getters, you may rightly ask "what's the point?". Well, consider these changes to the implementation that won't affect the interface:
Checking for edge cases, such as whether the property is defined at all.
Changing getName() to return first + last name rather than just the name property.
Deciding to store the property in a mutable construct.
So even a seemingly simple implementation may change in a way that, with getters, requires only a single change, where without them it will require many changes.
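A hypothetical sketch (the names are made up) of a getter absorbing such changes behind a stable interface:
// the view keeps calling {{ getName() }} while the implementation evolves
scope.getName = function() {
  if (!scope.user) { return ''; }                           // edge case handled later
  return scope.user.firstName + ' ' + scope.user.lastName; // was: scope.user.name
};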
I vote getters
So I argue that unless you have a profiled case for optimisation, you should use getters.
Whatever you put in the {{}} is going to be evaluated A LOT. It has to be evaluated on each digest cycle in order to know whether the value has changed. Thus, one very important rule of Angular is to make sure you don't have expensive operations in any $watches, including those registered through {{}}.
Now, the difference between referencing the property directly and having a function do nothing but return it seems negligible to me. (Please correct me if I'm wrong.)
So, as long as your functions aren't performing expensive operations, I think it's really a matter of personal preference.
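One way to respect that rule is to precompute values when the data changes rather than binding to a function that does heavy work; here is a hedged sketch (the items collection and active flag are made-up names):
// Bad: <span>{{ activeCount() }}</span> would run the filter on every digest.
// Better: recompute only when the collection actually changes and bind to
// a plain property: <span>{{ activeCount }}</span>
$scope.$watchCollection('items', function (items) {
  $scope.activeCount = items.filter(function (i) { return i.active; }).length;
});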

How does Bluebird's util.toFastProperties function make an object's properties "fast"?

In Bluebird's util.js file, it has the following function:
function toFastProperties(obj) {
  /*jshint -W027*/
  function f() {}
  f.prototype = obj;
  ASSERT("%HasFastProperties", true, obj);
  return f;
  eval(obj);
}
For some reason, there's a statement after the return statement, and I'm not sure why it's there.
As well, it seems that it is deliberate, as the author had silenced the JSHint warning about this:
Unreachable 'eval' after 'return'. (W027)
What exactly does this function do? Does util.toFastProperties really make an object's properties "faster"?
I've searched through Bluebird's GitHub repository for any comments in the source code or an explanation in their list of issues, but I couldn't find any.
2017 update: First, for readers coming today - here is a version that works with Node 7 (4+):
function enforceFastProperties(o) {
  function Sub() {}
  Sub.prototype = o;
  var receiver = new Sub(); // create an instance
  function ic() { return typeof receiver.foo; } // perform access
  ic();
  ic();
  return o;
  eval("o" + o); // ensure no dead code elimination
}
Sans one or two small optimizations, all of the below is still valid.
Let's first discuss what it does and why that's faster and then why it works.
What it does
The V8 engine uses two object representations:
Dictionary mode - in which objects are stored as key-value pairs in a hash map.
Fast mode - in which objects are stored like structs and there is no computation involved in property access.
Here is a simple demo of the speed difference, using the delete statement to force the objects into slow dictionary mode.
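The original demo isn't reproduced here, so below is a minimal sketch along the same lines. It assumes an older V8 where delete pushes an object into dictionary mode, and it must be run with node --allow-natives-syntax so that %HasFastProperties is available:
function make() { return { a: 1, b: 2, c: 3 }; }

var fast = make();
var slow = make();
delete slow.b; // forces dictionary mode on older V8 versions

console.log(%HasFastProperties(fast)); //=> true
console.log(%HasFastProperties(slow)); //=> false (on older V8)

function sum(o) {
  var s = 0;
  for (var i = 0; i < 1e7; i++) { s += o.a + o.c; }
  return s;
}
console.time('fast'); sum(fast); console.timeEnd('fast');
console.time('slow'); sum(slow); console.timeEnd('slow');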
The engine tries to use fast mode whenever possible, and generally does so whenever a lot of property access is performed; however, sometimes an object gets thrown into dictionary mode. Being in dictionary mode carries a big performance penalty, so generally it is desirable to keep objects in fast mode.
This hack is intended to force the object into fast mode from dictionary mode.
Bluebird's Petka himself talks about it here.
These slides (wayback machine) by Vyacheslav Egorov also mention it.
The question "*https://stackoverflow.com/questions/23455678/pros-and-cons-of-dictionary-mode*" and its accepted answer are also related.
This slightly outdated article is still a fairly good read that can give you a good idea on how objects are stored in v8.
Why it's faster
In JavaScript, prototypes typically store functions shared among many instances and rarely change dynamically. For this reason it is very desirable to have them in fast mode, to avoid the extra penalty every time a function is called.
For this reason, V8 will gladly put objects that are the .prototype property of a function into fast mode, since they will be shared by every object created by invoking that function as a constructor. This is generally a clever and desirable optimization.
How it works
Let's first go through the code and figure what each line does:
function toFastProperties(obj) {
  /*jshint -W027*/ // suppress the "unreachable code" warning
  function f() {} // declare a new function
  f.prototype = obj; // assign obj as its prototype to trigger the optimization
  // assert the optimization passes, to prevent the code from breaking in the
  // future in case this optimization stops working:
  ASSERT("%HasFastProperties", true, obj); // requires the "native syntax" flag
  return f; // return it
  eval(obj); // prevent the function from being optimized through dead code
             // elimination or further optimizations; this code is never
             // reached, but even using eval in unreachable code causes v8
             // to not optimize functions
}
We don't have to find the code ourselves to assert that v8 does this optimization; we can instead read the v8 unit tests:
// Adding this many properties makes it slow.
assertFalse(%HasFastProperties(proto));
DoProtoMagic(proto, set__proto__);
// Making it a prototype makes it fast again.
assertTrue(%HasFastProperties(proto));
Reading and running this test shows us that this optimization indeed works in v8. However - it would be nice to see how.
If we check objects.cc we can find the following function (L9925):
void JSObject::OptimizeAsPrototype(Handle<JSObject> object) {
  if (object->IsGlobalObject()) return;
  // Make sure prototypes are fast objects and their maps have the bit set
  // so they remain fast.
  if (!object->HasFastProperties()) {
    MigrateSlowToFast(object, 0);
  }
}
Now, JSObject::MigrateSlowToFast explicitly takes the dictionary and converts it into a fast V8 object. It's a worthwhile read and an interesting insight into V8 object internals, but it's not the subject here. I still warmly recommend reading it, as it's a good way to learn about V8 objects.
If we check out SetPrototype in objects.cc, we can see that it is called in line 12231:
if (value->IsJSObject()) {
  JSObject::OptimizeAsPrototype(Handle<JSObject>::cast(value));
}
Which in turn is called by FunctionSetPrototype, which is what we get with .prototype =.
Doing __proto__ = or Object.setPrototypeOf would also have worked, but those are ES6 functions, and Bluebird runs on all browsers since Netscape 7, so that was not an option here. For example, if we check .setPrototypeOf we can see:
// ES6 section 19.1.2.19.
function ObjectSetPrototypeOf(obj, proto) {
  CHECK_OBJECT_COERCIBLE(obj, "Object.setPrototypeOf");
  if (proto !== null && !IS_SPEC_OBJECT(proto)) {
    throw MakeTypeError("proto_object_or_null", [proto]);
  }
  if (IS_SPEC_OBJECT(obj)) {
    %SetPrototype(obj, proto); // MAKE IT FAST
  }
  return obj;
}
Which is installed directly on Object:
InstallFunctions($Object, DONT_ENUM, $Array(
  ...
  "setPrototypeOf", ObjectSetPrototypeOf,
  ...
));
So - we have walked the path from the code Petka wrote to the bare metal. This was nice.
Disclaimer:
Remember this is all implementation detail. People like Petka are optimization freaks. Always remember that premature optimization is the root of all evil 97% of the time. Bluebird does something very basic very often, so it gains a lot from these performance hacks; being as fast as callbacks isn't easy. You rarely have to do something like this in code that doesn't power a library.
V8 developer here. The accepted answer is a great explanation, I just wanted to highlight one thing: the so-called "fast" and "slow" property modes are unfortunate misnomers, they each have their pros and cons. Here is a (slightly simplified) overview of the performance of various operations:
Operation                                struct-like properties   dictionary properties
adding a property to an object           --                       +
deleting a property                      ---                      +
reading/writing a property, first time   -                        +
reading/writing, cached, monomorphic     +++                      +
reading/writing, cached, few shapes      ++                       +
reading/writing, cached, many shapes     --                       +
colloquial name                          "fast"                   "slow"
So as you can see, dictionary properties are actually faster for most of the lines in this table, because they don't care what you do, they just handle everything with solid (though not record-breaking) performance. Struct-like properties are blazing fast for one particular situation (reading/writing the values of existing properties, where every individual place in the code only sees very few distinct object shapes), but the price they pay for that is that all other operations, in particular those that add or remove properties, become much slower.
It just so happens that the special case where struct-like properties have their big advantage (+++) is particularly frequent and really important for many apps' performance, which is why they acquired the "fast" moniker. But it's important to realize that when you delete properties and V8 switches the affected objects to dictionary mode, then it isn't being dumb or trying to be annoying: rather it attempts to give you the best possible performance for what you're doing. We have landed patches in the past that have achieved significant performance improvements by making more objects go to dictionary ("slow") mode sooner when appropriate.
Now, it can happen that your objects would generally benefit from struct-like properties, but something your code does causes V8 to transition them to dictionary properties, and you'd like to undo that; Bluebird had such a case. Still, the name toFastProperties is a bit misleading in its simplicity; a more accurate (though unwieldy) name would be spendTimeOptimizingThisObjectAssumingItsPropertiesWontChange, which would indicate that the operation itself is costly, and it only makes sense in certain limited cases. If someone took away the conclusion "oh, this is great, so I can happily delete properties now, and just call toFastProperties afterwards every time", then that would be a major misunderstanding and cause pretty bad performance degradation.
If you stick with a few simple rules of thumb, you'll never have a reason to even try to force any internal object representation changes:
Use constructors, and initialize all properties in the constructor. (This helps not only your engine, but also understandability and maintainability of your code. Consider that TypeScript doesn't quite force this but strongly encourages it, because it helps engineering productivity.)
Use classes or prototypes to install methods, don't just slap them onto each object instance. (Again, this is a common best practice for many reasons, one of them being that it's faster.)
Avoid delete. When properties come and go, prefer using a Map over the ES5-era "object-as-map" pattern. When an object can toggle into and out of a certain state, prefer boolean (or equivalent) properties (e.g. o.has_state = true; o.has_state = false;) over adding and deleting an indicator property. (See the sketch after this list.)
When it comes to performance, measure, measure, measure. Before you start sinking time into performance improvements, profile your app to see where the hotspots are. When you implement a change that you hope will make things faster, verify with your real app (or something extremely close to it; not just a 10-line microbenchmark!) that it actually helps.
Lastly, if your team lead tells you "I've heard that there are 'fast' and 'slow' properties, please make sure that all of ours are 'fast'", then point them at this post :-)
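A minimal sketch of the delete-avoidance rules above (the names are illustrative):
// ES5-era "object-as-map" pattern: adding and deleting keys churns object shapes
var cache = {};
cache.key1 = 'value1';
delete cache.key1; // pushes the object toward dictionary mode

// Prefer a real Map when keys come and go:
var cache2 = new Map();
cache2.set('key1', 'value1');
cache2.delete('key1'); // no impact on object shapes

// Prefer toggling a boolean over adding/removing an indicator property:
var o = { hasState: false }; // initialize every property in one place
o.hasState = true;           // the shape stays stable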
Reality from 2021 (NodeJS version 12+).
It seems like a huge optimization has been done: objects with deleted fields and sparse arrays don't become slow. Or am I missing something?
// run in Node with the flag enabled
// node --allow-natives-syntax script.js
function Point(x, y) {
  this.x = x;
  this.y = y;
}

var obj1 = new Point(1, 2);
var obj2 = new Point(3, 4);
delete obj2.y;

var arr = [1, 2, 3];
arr[100] = 100;

console.log('obj1 has fast properties:', %HasFastProperties(obj1));
console.log('obj2 has fast properties:', %HasFastProperties(obj2));
console.log('arr has fast properties:', %HasFastProperties(arr));
All three show true:
obj1 has fast properties: true
obj2 has fast properties: true
arr has fast properties: true
// run in Node with the flag enabled
// node --allow-natives-syntax script.js
function Point(x, y) {
  this.x = x;
  this.y = y;
}

var obj2 = new Point(3, 4);
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); // true
delete obj2.y;
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); // true

var obj = { x: 1, y: 2 };
console.log('obj has fast properties:', %HasFastProperties(obj)); // true
delete obj.x;
console.log('obj has fast properties:', %HasFastProperties(obj)); // false
Objects created via a constructor function and plain object literals evidently behave differently here.

Does adding window as a prefix to the global objects speed up accessing global objects? [duplicate]

I'm kind of curious about what the best practice is when referencing the 'global' namespace in JavaScript, which is merely a shortcut to the window object (or vice versa, depending on how you look at it).
I want to know if:
var answer = Math.floor(value);
is better or worse than:
var answer = window.Math.floor(value);
Is one better or worse, even slightly, for performance, resource usage, or compatibility?
Does one have a slighter higher cost? (Something like an extra pointer or something)
Edit note: While I am a readability over performance nazi in most situations, in this case I am ignoring the differences in readability to focus solely on performance.
First of all, never compare things like these for performance reasons. Math.round is obviously easier on the eyes than window.Math.round, and you wouldn't see a noticeable performance increase by using one or the other. So don't obfuscate your code for very slight performance increases.
However, if you're just curious about which one is faster... I'm not sure how the global scope is looked up "under the hood", but I would guess that accessing window is just the same as accessing Math (window and Math live on the same level, as evidenced by window.window.window.Math.round working). Thus, accessing window.Math would be slower.
Also, the way variables are looked up, you would see a performance increase by doing var round = Math.round; and calling round(1.23), since all names are first looked up in the current local scope, then the scope above the current one, and so on, all the way up to the global scope. Every scope level adds a very slight overhead.
But again, don't do these optimizations unless you're sure they will make a noticeable difference. Readable, understandable code is important for it to work the way it should, now and in the future.
Here's a full profiling using Firebug:
<!DOCTYPE html>
<html>
<head>
  <title>Benchmark scope lookup</title>
</head>
<body>
  <script>
    function bench_window_Math_round() {
      for (var i = 0; i < 100000; i++) {
        window.Math.round(1.23);
      }
    }

    function bench_Math_round() {
      for (var i = 0; i < 100000; i++) {
        Math.round(1.23);
      }
    }

    function bench_round() {
      for (var i = 0, round = Math.round; i < 100000; i++) {
        round(1.23);
      }
    }

    console.log('Profiling will begin in 3 seconds...');

    setTimeout(function () {
      console.profile();
      for (var i = 0; i < 10; i++) {
        bench_window_Math_round();
        bench_Math_round();
        bench_round();
      }
      console.profileEnd();
    }, 3000);
  </script>
</body>
</html>
My results:
Time shows total for 100,000 * 10 calls, Avg/Min/Max show time for 100,000 calls.
Function                  Calls   Percent   Own Time    Time        Avg         Min         Max
bench_window_Math_round   10      86.36%    1114.73ms   1114.73ms   111.473ms   110.827ms   114.018ms
bench_Math_round          10      8.21%     106.04ms    106.04ms    10.604ms    10.252ms    13.446ms
bench_round               10      5.43%     70.08ms     70.08ms     7.008ms     6.884ms     7.092ms
As you can see, window.Math is a really bad idea. I guess accessing the global window object adds additional overhead. However, the difference between accessing the Math object from the global scope, and just accessing a local variable with a reference to the Math.round function isn't very great... Keep in mind that this is 100,000 calls, and the difference is only 3.6ms. Even with one million calls you'd only see a 36ms difference.
Things to think about with the above profiling code:
The functions are actually looked up from another scope, which adds overhead (barely noticeable though; I tried importing the functions into the anonymous function).
The actual Math.round function adds overhead (I'm guessing about 6ms in 100,000 calls).
This is an interesting question if you want to know how the scope chain and the identifier resolution process work.
The scope chain is a list of objects that are searched when evaluating an identifier. Those objects are not accessible by code; only their properties (identifiers) can be accessed.
At first, in global code, the scope chain is created and initialised to contain only the global object.
Subsequent objects in the chain are created when you enter a function execution context; the with statement and the catch clause also introduce objects into the chain.
For example:
// global code
var var1 = 1, var2 = 2;
(function () { // one
  var var3 = 3;
  (function () { // two
    var var4 = 4;
    with ({var5: 5}) { // three
      alert(var1);
    }
  })();
})();
In the above code, the scope chain will contain different objects at different levels. For example, at the lowest level, within the with statement, if you use the var1 or var2 variables, the scope chain will contain four objects that need to be inspected in order to resolve that identifier: the one introduced by the with statement, the two function activation objects, and finally the global object.
You also need to know that window is just a property that exists on the global object and points to the global object itself. window is introduced by browsers, and in other environments it often isn't available.
In conclusion, when you use window, since it is just an identifier (not a reserved word or anything like that), it needs to go through the whole resolution process to reach the global object, and window.Math then needs an additional step, made by the dot (.) property accessor.
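A quick sketch of that relationship:
// window is a property of the global object that points back to itself,
// which is why chains like this work:
console.log(window.window.window === window); //=> true
// so window.Math first resolves "window" through the scope chain,
// then performs one extra property access to reach Math:
console.log(window.Math === Math); //=> true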
JS performance differs widely from browser to browser.
My advice: benchmark it. Just put it in a for loop, let it run a few million times, and time it.... see what you get. Be sure to share your results!
(As you've said) Math.floor will probably just be a shortcut for window.Math.floor (as window is the JavaScript global object) in most JavaScript implementations, such as V8.
Spidermonkey and V8 will be so heavily optimised for common usage that it shouldn't be a concern.
For readability my preference would be to use Math.floor; the difference in speed will be so insignificant it's never worth worrying about. If you're doing 100,000 floors it's probably time to move that logic out of the client.
You may want to have a nose around the V8 source; there are some interesting comments there about shaving nanoseconds off functions, such as this parseInt one.
// Some people use parseInt instead of Math.floor. This
// optimization makes parseInt on a Smi 12 times faster (60ns
// vs 800ns). The following optimization makes parseInt on a
// non-Smi number 9 times faster (230ns vs 2070ns). Together
// they make parseInt on a string 1.4% slower (274ns vs 270ns).
As far as I understand JavaScript's lookup logic, every identifier you reference is searched for along the scope chain, ending at the global object. In browser implementations, the window object is the global object. Hence, when you ask for window.Math you first have to resolve what window means, then get its properties and find Math there. If you simply ask for Math, the scope-chain lookup ends at the global object directly, with no extra property access.
So, yes- calling Math.something will be faster than window.Math.something.
D. Crockford talks about it in his lecture http://video.yahoo.com/watch/111593/1710507; as far as I recall, it's in the 3rd part of the video.
If Math.round() is being called in a local/function scope, the interpreter has to check first for a local var, then in the global/window space. So in local scope my guess would be that window.Math.round() would be very slightly faster. This isn't assembly, or C or C++, so I wouldn't worry about which is faster for performance reasons; but if you're curious, sure, benchmark it.

Should I use a var per each require? [duplicate]

Reading Crockford's The Elements of JavaScript Style, I notice he prefers defining variables like this:
var first='foo', second='bar', third='...';
What, if any benefit does that method provide over this:
var first='foo';
var second='bar';
var third='...';
Obviously the latter requires more typing, but aside from aesthetics, I'm wondering if there is a performance benefit gained by defining variables in the former style.
Aside from aesthetics and download footprint, another reason could be that the var statement is subject to hoisting. This means that regardless of where a variable is declared within a function, it is moved to the top of the scope in which it is defined.
E.g:
var outside_scope = "outside scope";
function f1() {
  alert(outside_scope);
  var outside_scope = "inside scope";
}
f1();
Gets interpreted as:
var outside_scope = "outside scope";
function f1() {
  var outside_scope; // undefined at this point
  alert(outside_scope);
  outside_scope = "inside scope";
}
f1();
Because of that, and because JavaScript has only function scope, Crockford recommends declaring all the variables at the top of the function in a single var statement, to resemble what will really happen when the code is actually executed.
Since JavaScript is generally downloaded to the client browser, brevity is actually quite a valuable attribute. The more bytes you have to download, the slower it gets. So yes, there is a reason apart from aesthetics, if not a massive one.
Similarly, you'll see people preferring shorter variable names to longer.
Personally, I don't bother minimising whitespace, as there are minimisers that can do this sort of thing for you (for example in YUI), and a lack of indentation and spacing leads to less maintainable code.
No difference in semantics and no measurable difference in performance. Write whichever is clearest.
For simple examples like:
var first= 'foo', second= 'bar', third= 'bof';
the concise single-statement construct is probably a win for readability. On the other hand you can take this much too far and start writing half your program inside a single var statement. Here's a random example plucked from the jQuery source:
var name = match[1],
    result = Expr.attrHandle[ name ] ?
        Expr.attrHandle[ name ]( elem ) :
        elem[ name ] != null ?
            elem[ name ] :
            elem.getAttribute( name ),
    value = result + "",
    type = match[2],
    check = match[4];
I find this (by no means the worst example) a bit distasteful. Longer examples can get quite hard to read upwards (wait, I was in a var statement?), and you can end up counting the brackets to work out what's a multi-line expression and what's just an extended var block.
I believe that what he is going for is declaring all variables as absolutely the first statement in a function (you'll notice that JSLint complains about this if you use it and don't declare them on the first line). This is because of JavaScript's scope declaration limitations (or quirks). Crockford emphasizes this as good practice for maintainable JavaScript code. The second example declares them at the top, but not in the first statement. Personally, I see no reason to prefer the first over the second, but following the first does enforce that all variables are declared before doing anything else in the function.
David is right that the larger the script, the more time it will take to download, but in this case the difference between the two is minimal and can be handled by using YUI Compressor etc.
It's a personal programming style choice.
On the one hand there is readability, wherein placing each variable declaration on a separate line makes it more obvious what's going on.
On the other hand, there is brevity, wherein you're eliminating transmitting a few extra bytes over the network. It's generally not enough to worry about, unless you're dealing with slow networks or limited memory on the client browser side.
Brevity is also known as laziness on the part of the programmer, which is one reason that many purists avoid it.
It all comes down to personal taste or the set of style guides your development team follows. If you are serving JavaScript yourself, you usually compress or minify your script(s) into one long string in one single file. So the whole you-are-saving-bytes-and-your-scripts-download-faster argument is, well, not an argument :)
I usually declare my variables like this: (a style you didn't mention)
var something,
somethingElse,
evenMoreSomething,
andAnotherThing;
A statement like var is not minified/compressed. Every time you write a var instead of a comma you lose 4 chars, if I count right.

Dynamic vs Static Compiler (JavaScript)

I'm currently writing a JavaScript compiler in ANTLR+Java.
I've read questions here on Stack Overflow on how to proceed with the execution - and the answer is always that it would be way too hard to do a static compilation (without JIT information) of a dynamic language - but why is that exactly? There is of course the obvious 'type resolving' problem, and in JavaScript maybe a problem with the eval function - but are there other reasons? (They don't seem too hard to overcome purely statically, with no JIT.)
I'm excluding JIT-based compilation because I figure it would be too hard for me to implement.
I have some experience in writing static compilers with a byte-code execution.
UPDATE:
All your answers are really helpful for understanding the problem.
To clarify: does this mean that JavaScript is harder to implement than other dynamic languages?
And does this also mean that I'm better off using a tree-based interpreter than e.g. byte-code (if we forget about the fact that JS is always shipped as raw source code, hence adding extra time for generating an IR and afterwards executing it)? Or should they be about equally easy/hard to do?
(I'm new to SO; I don't know if this is the preferred way to update a question?)
There are lots of ways this conversation could go. Here's one direction. In javascript, nearly everything is an object and properties or methods can be added to any object at run-time. As such, you don't know at compile time what methods or properties will or won't be attached to an object. As such, everything has to be looked up at run-time.
For example:
var myObj = {};
function configureObject() {
  if (somethingInTheEnvironment) { // pseudocode condition
    myObj.myfunc = function () { alert("Hi"); };
  } else {
    myObj.myfunc = function () { document.write("Hello"); };
  }
}
Now, sometime later in the code, you call myObj.myfunc(). It is not known at compile time what myfunc is, or whether it's even an attribute of myObj. It has to be a run-time lookup.
In another example, take this line of code:
var c = a + b;
What this means depends entirely upon the types of a and b, and those types are not known at compile time.
If a and b are both numbers, then this is an addition statement and c will be a number.
If either a or b is a string, then the other will be coerced to a string and c will be a string.
You can't precompile this kind of logic into native code. The execution environment has to record that this is a request for the addition operator on these two operands, and it has to (at runtime) examine the types of the two operands and decide what to do.
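A minimal sketch of how the same + operator dispatches on runtime types:
function add(a, b) { return a + b; }
console.log(add(1, 2));   //=> 3 - numeric addition
console.log(add(1, '2')); //=> '12' - string concatenation
console.log(add([], {})); //=> '[object Object]' - both operands coerced to strings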
The challenge with writing a static JavaScript compiler is that it is in general undecidably hard to determine what objects are being referenced at any program point or what functions are being called. I could use the fact that JavaScript is dynamic to decide which function to call based on the output of some Turing machine. For example:
var functionName = RunTuringMachineAndReportOutputOnTape(myTM, myInput);
eval(functionName + "();");
At this point, unless you have advance knowledge about what myTM and myInput are, it is provably impossible to decide what function will be invoked by the call to eval, since it's undecidable to determine what is on a Turing machine's tape if it halts (you can reduce the halting problem to this problem). Consequently, no matter how clever you are, and no matter how good of a static analyzer you build, you will never be able to correctly statically resolve all function calls. You can't even bound the set of functions that might be called here, since the Turing machine's output might define some function that is then executed by the above code.
What you can do is compile code that, whenever a function is called, includes extra logic to resolve the call, and possibly uses techniques like inline caching to speed things up. Additionally, in some cases you might be able to prove that a certain function is being called (or that one of a small number of functions will be called) and can then hardcode in those calls. You could also compile multiple versions of a piece of code, one for each common type (object, numeric, etc.), then emit code to jump to the appropriate compiled trace based on the dynamic type.
V8 does that. See Compile JavaScript to Native Code with V8
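The inline-caching idea mentioned above can be sketched in plain JavaScript. This is purely an illustration of the concept, not V8's actual mechanism; the key-list "shape" is a crude stand-in for hidden classes, and real engines cache a property offset per shape rather than the function value itself:
var cachedShape = null;
var cachedFn = null;

function callMyFunc(obj) {
  var shape = Object.keys(obj).join(','); // stand-in for the object's hidden class
  if (shape !== cachedShape) {            // cache miss: do the full method lookup
    cachedShape = shape;
    cachedFn = obj.myfunc;
  }
  return cachedFn.call(obj);              // cache hit: skip the lookup entirely
}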
With EcmaScript 3, and EcmaScript 5 non-strict, there are a number of wrinkles around scopes that you don't run into in other dynamic languages. You might think it would be easy to do compiler optimizations on local variables, but there are edge cases in the language where it is not, even ignoring eval's scope introspection.
Consider
function f(o, x, y) {
  with (o) { return x + y + z; }
}
when called with
o = {};
o = { z: 3 };
o = { x: 1, z: 2 };
Object.prototype.z = 3, o = {};
and according to EcmaScript 3,
x = (function () { return toString(); })()
should produce quite a different result from
x = toString();
because EcmaScript 3 defines an activation record as an object with a prototype chain.
