JavaScript collection
Sorry for the noob question. Can you please explain the difference between:
1. var a = [];
a['b'] = 1;
2. var a = {};
a['b'] = 1;
I could not find an article about this on the internet, so I'm asking here.
Literals
The [] and {} are called the array and object literals respectively.
var x = [] is short for var x = new Array();
and var y = {} is short for var y = new Object();
Arrays
Arrays are structures with a length property. You can access values via their numeric index.
var x = [] or var x = new Array();
x[0] = 'b';
x[1] = 'c';
And if you want to list all the properties you do:
for(var i = 0; i < x.length; i++)
console.log(x[i]);// numeric index based access.
Performance tricks and gotchas
1. Inner-caching the length property
The standard array iteration:
for (var i = 0; i < arr.length; i++) {
// do stuff
};
Little known fact: In the above scenario, the arr.length property is read at every step of the for loop. Just like any function you call there:
for (var i = 0; i < getStopIndex(); i++) {
// do stuff
};
This decreases performance for no reason. Inner caching to the rescue:
for (var i = 0, len = arr.length; i < len; i++) {
// do stuff
};
Here's proof of the above.
2. Don't specify the Array length in the constructor.
// doing this:
var a = new Array(100);
// is mostly pointless in JS. It results in an array of length 100 containing only empty slots.
// not even this:
var a = new Array();
// is the best way.
var a = [];
// using the array literal is the fastest and easiest way to do things.
Test cases for array definition are available here.
3. Avoid using Array.prototype.push(arr.push)
If you are dealing with large collections, direct assignment is faster than using the Array.prototype.push() method.
myArray[i] = 0; is faster than myArray.push(0);, according to jsPerf.com test cases.
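The shape of the comparison can be sketched as follows (the element count and loop bodies here are illustrative, not the original jsPerf test cases):

```javascript
// Both loops build the same array, but the indexed assignment
// avoids one method call per element.
var n = 1000;

var viaIndex = [];
for (var i = 0; i < n; i++) {
  viaIndex[i] = i; // direct indexed assignment
}

var viaPush = [];
for (var j = 0; j < n; j++) {
  viaPush.push(j); // Array.prototype.push call per element
}
```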
4. It is wrong to use arrays for associative assignments.
The only reason why it works is that Array extends the Object class inside the core of the JS language. You could just as well use a Date or a RegExp object, for instance; it wouldn't make a difference.
x['property'] = someValue MUST always be used with Objects.
Arrays should only have numeric indexes. SEE THIS, the Google JS development guidelines! Avoid for (x in arr) loops or arr['key'] = 5;.
This can be easily backed up, look HERE for an example.
var x = [];
console.log(Object.prototype.toString.call(x));
will output: [object Array]
This reveals the core language's 'class' inheritance pattern.
var x = new String();
console.log(Object.prototype.toString.call(x));
will output [object String].
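The length gotcha makes the point concrete: a string key on an array is stored as a plain object property and is invisible to the array machinery (a small demonstration, not part of the original post):

```javascript
var a = [];
a['b'] = 1;                      // stored as an object property, not an element
console.log(a.length);           // 0: string keys do not affect length
console.log(JSON.stringify(a));  // "[]": serialization ignores them too
console.log(a['b']);             // 1: the value is still reachable as a property
```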
5. Getting the minimum and maximum from an array.
A little known, but really powerful trick:
function arrayMax(arr) {
return Math.max.apply(Math, arr);
};
, respectively:
function arrayMin(arr) {
return Math.min.apply(Math, arr);
};
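Usage is straightforward; note the empty-array edge case (the sample values here are illustrative):

```javascript
function arrayMax(arr) {
  return Math.max.apply(Math, arr); // spreads arr as arguments to Math.max
}
function arrayMin(arr) {
  return Math.min.apply(Math, arr);
}

console.log(arrayMax([3, 1, 4, 1, 5])); // 5
console.log(arrayMin([3, 1, 4, 1, 5])); // 1
console.log(arrayMax([]));              // -Infinity: Math.max() with no arguments
```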
Objects
With an object you can only do:
var y = {} or var y = new Object();
y['first'] = 'firstValue' is the same as y.first = 'firstValue', which you can't do with an array. Objects are designed for associative access with String keys.
And the iteration is something like this:
for (var property in y) {
if (y.hasOwnProperty(property)) {
console.log(y[property]); // y.property would look up the literal key "property"
};
};
Performance tricks and gotchas
1. Checking if an object has a property.
Most people use Object.prototype.hasOwnProperty. Unfortunately it only reports own properties, ignores inherited ones, and can be shadowed by a property literally named hasOwnProperty, which can lead to unexpected bugs.
Here's a good way to do it:
function containsKey(obj, key) {
return typeof obj[key] !== 'undefined';
};
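A quick comparison of the two checks. Note that this version treats a key explicitly set to undefined as missing, which may or may not be what you want (the sample object is illustrative):

```javascript
function containsKey(obj, key) {
  return typeof obj[key] !== 'undefined';
}

var o = { a: 1, b: undefined };
console.log(containsKey(o, 'a'));   // true
console.log(containsKey(o, 'b'));   // false: the value is undefined
console.log(o.hasOwnProperty('b')); // true: the key itself does exist
```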
2. Replacing switch statements.
One of the simple but efficient JS tricks is switch replacement.
switch (someVar) {
case 'a':
doSomething();
break;
case 'b':
doSomethingElse();
break;
default:
doMagic();
break;
};
In most JS engines the above is painfully slow. When you are looking at three possible outcomes, it doesn't make a difference, but what if you had tens or hundreds?
The above can easily be replaced with an object. Don't add the trailing (), this is not executing the functions, but simply storing references to them:
var cases = {
'a': doSomething,
'b': doSomethingElse,
'c': doMagic
};
Instead of the switch:
var x = ???;
if (containsKey(cases, x)) {
cases[x]();
} else {
console.log("I don't know what to do!");
};
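Put together with stand-in handler functions (the handler bodies below are hypothetical), the whole pattern looks like this:

```javascript
// Stand-in handlers; in real code these would do actual work.
function doSomething()     { return 'a handled'; }
function doSomethingElse() { return 'b handled'; }
function doMagic()         { return 'magic'; }

var cases = {
  'a': doSomething,
  'b': doSomethingElse,
  'c': doMagic
};

function dispatch(x) {
  // hasOwnProperty guards against inherited keys such as "toString"
  return cases.hasOwnProperty(x) ? cases[x]() : "I don't know what to do!";
}

console.log(dispatch('a')); // "a handled"
console.log(dispatch('z')); // "I don't know what to do!"
```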
3. Deep-cloning made easy.
function cloneObject(obj) {
var tmp = {};
for (var key in obj) {
tmp[key] = fastDeepClone(obj[key]);
};
return tmp;
}
function cloneArr(arr) {
var tmp = [];
for (var i = 0, len = arr.length; i < len; i++) {
tmp[i] = fastDeepClone(arr[i]);
}
return tmp;
}
function deepClone(obj) {
return JSON.parse(JSON.stringify(obj));
};
function isArray(obj) {
return obj instanceof Array;
}
function isObject(obj) {
var type = typeof obj;
return type === 'object' && obj !== null || type === 'function';
}
function fastDeepClone(obj) {
if (isArray(obj)) {
return cloneArr(obj);
} else if (isObject(obj)) {
return cloneObject(obj);
} else {
return obj;
};
};
HERE is the deep clone function in action.
Auto-boxing
As a dynamically typed language, JavaScript is limited in terms of native object types:
Object
Array
Number
Boolean
Date
RegExp
Error
Null is not on this list; typeof null returns "object", which is a well-known quirk.
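The null quirk is easy to verify:

```javascript
console.log(typeof null);            // "object": a long-standing language quirk
console.log(null instanceof Object); // false: null is not actually an object
console.log(typeof undefined);       // "undefined"
```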
What's the catch? There is a strong distinction between primitive and non-primitive objects.
var s = "str";
var s2 = new String("str");
They do the same thing, you can call all string methods on s and s2.
Yet:
typeof s == "string"; // raw data type
typeof s2 == "object"; // auto-boxed to the non-primitive wrapper type
Object.prototype.toString.call(s2) == "[object String]";
You may hear in JS everything is an object. That's not exactly true, although it's a really easy mistake to make.
In reality there are 2 types, primitives and objects, and when you call s.indexOf("c"), the JS engine will automatically convert s to its non-primitive wrapper type, in this case object String, where all the methods are defined on the String.prototype.
This is called auto-boxing. Calling Object(primitive) is a way to force the cast from primitive to its non-primitive wrapper type, and valueOf() goes back the other way. It's the same behaviour a language like Java has for many of its own primitives, specifically the pairs: int - Integer, double - Double, float - Float, etc.
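A small demonstration of boxing and unboxing:

```javascript
var s = "str";

// Calling a method on a primitive auto-boxes it into a temporary wrapper:
console.log(s.indexOf("t"));   // 1: works even though s is a primitive

// Object() forces the boxed form explicitly; valueOf() recovers the primitive:
var boxed = Object(s);
console.log(typeof s);                              // "string"
console.log(typeof boxed);                          // "object"
console.log(Object.prototype.toString.call(boxed)); // "[object String]"
console.log(boxed.valueOf() === s);                 // true
```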
Why should you care?
Simple:
function isString(obj) {
return typeof obj === "string";
}
isString(s); // true
isString(s2); // false
So if s2 was created with var s2 = new String("test"), you are getting a false negative even for an otherwise simple type check. Wrapper objects also bring a heavy performance penalty with them.
A micro-optimization as some would say, but the results are truly remarkable, even for extremely simple things such as string initialisation. Let's compare the following two in terms of performance:
var s1 = "this_is_a_test"
and
var s2 = new String("this_is_a_test")
You would probably expect matching performance across the board, but rather surprisingly the latter statement using new String is 92% slower than the first one, as proven here.
Functions
1. Default parameters
The || operator is the simplest possible way of defaulting. Why does it work? Because of truthy and falsy values.
When evaluated in a logical condition, undefined and null values will autocast to false.
A simple example(code HERE):
function test(x) {
var param = x || 5;
// do stuff with param
};
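Be aware that || defaults on every falsy value, not just undefined and null; a legitimate 0 or empty string is silently replaced too (worth verifying before using this trick):

```javascript
function test(x) {
  var param = x || 5;
  return param;
}

console.log(test(10));        // 10
console.log(test(undefined)); // 5: undefined is falsy, so the default kicks in
console.log(test(0));         // 5: gotcha, a legitimate 0 is replaced too
console.log(test(''));        // 5: same for the empty string
```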
2. OO JS
The most important thing to understand is that the JavaScript this object is not immutable. It is simply a reference that can be changed with great ease.
In OO JS, we rely on the new keyword to guarantee implicit scope in all members of a JS Class. Even so, you can easily change the scope, via Function.prototype.call and Function.prototype.apply.
Another very important thing is Object.prototype. Non-primitive values nested on an object's prototype are shared between instances, while primitive ones are not.
Code with examples HERE.
A simple class definition:
function Size(width, height) {
this.width = width;
this.height = height;
};
A simple size class, with two members, this.width and this.height.
In a class definition, whatever has this in front of it, will create a new reference for every instance of Size.
Adding methods to classes, and why the "closure" pattern and other fancy-name patterns are pure fiction
This is perhaps where the most malicious JavaScript anti-patterns are found.
We can add a method to our Size class in two ways.
Size.prototype.area = function() {
return this.width * this.height;
};
Or:
function Size2(width, height) {
this.width = width;
this.height = height;
this.area = function() {
return this.width * this.height;
}
}
var s = new Size(5, 10);
var s2 = new Size2(5, 10);
var s3 = new Size2(5, 10);
var s4 = new Size(5, 10);
// Looks identical, but lets use the reference equality operator to test things:
s2.area === s3.area // false
s.area === s4.area // true
The area method of Size2 is created for every instance.
This is completely useless and slow, A LOT slower. 89% to be exact. Look HERE.
The above statement is valid for about 99% of all known "fancy name patterns". Remember the single most important thing in JS: all of those are nothing more than fiction.
There are strong architectural arguments that can be made, mostly revolved around data encapsulation and the usage of closures.
Such things are unfortunately absolutely worthless in JavaScript, the performance loss simply isn't worth it. We are talking about 90% and above, it's anything but negligible.
3. Limitations
Because prototype definitions are shared among all instances of a class, you won't be able to put a non-primitive settings object there.
Size.prototype.settings = {};
Why? size.settings will be the same for every single instance.
So what's with the primitives?
Size.prototype.x = 5; // won't be shared, because it's a primitive.
// see auto-boxing above for primitive vs non-primitive
// if you come from the Java world, it's the same as int and Integer.
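More precisely, reads of a primitive prototype value are shared too; the difference is that assigning through an instance shadows the prototype, while mutating a shared object does not. A sketch using the Size class from above:

```javascript
function Size(width, height) {
  this.width = width;
  this.height = height;
}
Size.prototype.settings = {}; // one object, shared by every instance
Size.prototype.x = 5;         // primitive: reads are shared, writes shadow

var a = new Size(1, 2);
var b = new Size(3, 4);

a.settings.color = 'red';      // mutates the single shared object
console.log(b.settings.color); // "red": visible from every instance

a.x = 10;                      // creates an own property on `a` only
console.log(b.x);              // 5: b still reads the prototype value
```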
The point:
The average JS guy will write JS in the following way:
var x = {
doStuff: function(x) {
},
doMoreStuff: 5,
someConstant: 10
}
Which is fine (fine = poor-quality, hard-to-maintain code), as long as you understand that this is a singleton object, and that those functions should only be used in global scope without referencing this inside them.
But then it gets to absolutely terrible code:
var x = {
width: 10,
height: 5
}
var y = {
width: 15,
height: 10
}
You could have gotten away with: var x = new Size(10, 5); var y = new Size(15, 10);.
The literal version takes longer to type, and you need to repeat the same keys every time. And again, it's A LOT SLOWER. Look HERE.
Poor standards throughout
This can be seen almost anywhere:
function() {
// some computation
var x = 10 / 2;
var y = 5;
return {
width: x,
height: y
}
}
Again with the alternative:
function() {
var x = 10 / 2;
var y = 5;
return new Size(x, y);
};
The point: USE CLASSES WHEREVER APPROPRIATE!!
Why? Example 1 is 93% Slower. Look HERE.
The examples here are trivial, but they illustrate something often ignored in JS: OO.
It's a solid rule of thumb not to employ people who think JS doesn't have classes, and not to take jobs from recruiters talking about "Object Orientated" JS.
Closures
A lot of people prefer them to the above because it gives them a sense of data encapsulation. Besides the drastic 90% performance drop, here's something equally easy to overlook. Memory leaks.
function Thing(someParam) {
this.someFn = function() {
return someParam;
}
}
You've just created a closure for someParam. Why is this bad? First, it forces you to define class methods as instance properties, resulting in the big performance drop.
Second, it eats up memory, because a closure will never get dereferenced. Look here for proof. Sure, you do get some fake data encapsulation, but you use three times the memory with a 90% performance drop.
Or you can add a @private annotation and get away with an underscore-prefixed function name.
Other very common ways of generating closures:
function bindSomething(param) {
someDomElement.addEventListener("click", function() {
if (param) //do something
else // do something else
}, false);
}
param is now a closure! How do you get rid of it? There are various tricks, some found here. The best possible approach, albeit more rigorous, is to avoid anonymous functions altogether, but this would require a way to specify scopes for event callbacks.
Such a mechanism is only available in Google Closure, as far as I know.
The singleton pattern
Ok, so what do I do for singletons? I don't want to store random references. Here's a wonderful idea shamelessly stolen from Google Closure's base.js
/**
* Adds a {@code getInstance} static method that always returns the same instance
* object.
* @param {!Function} ctor The constructor for the class to add the static
* method to.
*/
function addSingletonGetter(ctor) {
ctor.getInstance = function() {
if (ctor.instance_) {
return ctor.instance_;
}
return ctor.instance_ = new ctor;
};
};
It's Java-esque, but it's a simple and powerful trick. You can now do:
project.some.namespace.StateManager = function() {
this.x_ = 5;
};
project.some.namespace.StateManager.prototype.getX = function() { return this.x_; };
addSingletonGetter(project.some.namespace.StateManager);
How is this useful? Simple. In all other files, every time you need to reference project.some.namespace.StateManager, you can write:
project.some.namespace.StateManager.getInstance(). This is more awesome than it looks.
You can have global state with the benefits of a class definition (inheritance, stateful members, etc.) and without polluting the global namespace.
The single instance pattern
You may now be tempted to do this:
function Thing() {
this.someMethod = function() {..}
}
// and then use it like this, without the new keyword:
Thing();
someMethod();
That is another big no-no in JavaScript. Remember, the this object is only guaranteed to be immutable when the new keyword is used. The magic behind the above code is interesting. this is actually the global scope, so without meaning to you are adding methods to the global object. And you guessed it, those things never get garbage collected.
There is nothing telling JavaScript to use something else. A function on its own doesn't have a scope. Be really careful what you do with static properties. To reproduce a quote I once read, the JavaScript global object is like a public toilet: sometimes you have no choice but to go there, yet try to minimise contact with the surfaces as much as possible.
Either stick to the above Singleton pattern or use a settings object nested under a namespace.
Garbage collection in JavaScript
JavaScript is a garbage collected language, but JavaScript GC is often rather poorly understood. The point is again speed. This is perhaps all too familiar.
// This is the top of a JavaScript file.
var a = 5;
var b = 20;
var x = {};//blabla
// more code
function someFn() {..}
That is bad, poorly performing code. The reason is simple: JS will garbage collect a variable and free up the heap memory it holds only when that variable gets de-scoped, i.e. when there are no references to it anywhere in memory.
For example:
function test(someArgs) {
var someMoreStuf = // a very big complex object;
}
test();
Three things:
Function arguments are transformed into local definitions
Inner declarations are hoisted.
All the heap memory allocated for inner variables is freed up when the function finishes execution.
Why?
Because they no longer belong to the "current" scope. They are created, used, and destroyed. There are no closures either, so all the memory you've used is freed up through garbage collection.
For that reason, your JS files should never look like this, as the global scope will just keep polluting memory.
var x = 5;
var y = {..}; //etc;
Alright, now what?
Namespaces.
JS doesn't have namespaces per se, so this isn't exactly a Java equivalent, yet from a codebase-administration perspective you get what you want.
var myProject = {};
myProject.settings = {};
myProject.controllers = {};
myProject.controllers.MainController = function() {
// some class definition here
}
Beautiful. One global variable. Proper project structure.
With a build phase, you can split your project across files, and get a proper dev environment.
There's no limit to what you can achieve from here.
Count your libraries
Having had the pleasure of working on countless codebases, the last and most important argument is to be very mindful of your code dependencies. I've seen programmers casually adding jQuery into the mix of the stack for a simple animation effect and so forth.
Dependency and package management is something the JavaScript world hadn't addressed for the longest time, until the creation of tools like Bower. Browsers are still somewhat slow, and even when they're fast, internet connections are slow.
In the world of Google, for instance, they go to the lengths of writing entire compilers just to save bytes, and that approach is in many ways the right mentality to have in web programming. And I hold Google in very high regard, as their JS library powers apps like Google Maps, which are not only insanely complex but also work everywhere.
Arguably JavaScript has an immense variety of tools available, given its popularity, its accessibility, and, to some extent, the very low quality bar the ecosystem as a whole is willing to accept.
For Hacker News subscribers, a day doesn't go by without a new JS library appearing. Many are certainly useful, but one cannot ignore the fact that many of them re-implement the exact same concerns without any strong notion of novelty or any killer ideas and improvements.
It's a strong rule of thumb to resist the urge to mix in all the new toys before they have had time to prove their novelty and usefulness to the entire ecosystem, and to strongly distinguish between Sunday coding fun and production deployments.
If your <head></head> tag is longer than this post, you're doing it all wrong.
Testing your knowledge of JavaScript
A few "perfectionist" level tests:
http://perfectionkills.com/javascript-quiz/, thanks to Kangax.
http://javascript-puzzlers.herokuapp.com/
A collection of objects? Use this notation (JavaScript arrays):
var collection = [ {name:"object 1"} , {name:"object 2"} , {name:"object 3"} ];
To put a new element into your collection:
collection.push( {name:"object 4"} );
In JavaScript, all objects are associative arrays. In the first case you created an array; in the second case you created an empty object, which works as an associative array too :).
So in JS you can work with any object as with array:
var a = {};
a["temp"] = "test";
And as object:
var a = {};
a.temp = "test";
I would use an array of objects:
collection = [
{ "key":"first key", "value":"first value" },
{ "key":"second key", "value":"second value" }
];
etc
1) is an Array
2) is an Object
With an Array, everything works as in other languages.
With an Object too:
- You can get a value with a.b == 1
- But in JS you can also get the value with the syntax a["b"] == 1
This is useful when the key looks like "some key"; in that case you can't use dot "chaining".
It's also useful when the key is a variable. You can write it like this:
function some(f){
var obj = {name: "Boo", age: "foo"}, key; // renamed from Object to avoid shadowing the global Object
if(f == true){
key = "name";
}else{
key = "age";
}
return obj[key];
}
but I want to use it as a collection; which should I choose?
That depends on what data you want to store.
Related
Change array size by just adding element and no push javascript
I know the universal way of changing an array's size is to use .push(). However, today I saw a piece of code in angularJS that does something like this: var service = { pages: [], doSmth: doSmth }; doSmth(); function doSmth() { service.pages[1] = "abc"; service.pages[5] = "def"; } I ran the debugger on the browser and found that before doSmth() is called, pages[1] is undefined, but after that, pages[1] is assigned the value without any error. How is this possible?
That's just the magic that JavaScript allows. If you come from a language like Java or C, this may seem like a weird idea, but you can set the value of any index in the array at any time, and the array will expand to that size! Consider: var t = []; t.length === 0; t[10000] = 'value'; t.length === 10001; JavaScript just handles this behind the scenes. It's worth mentioning that this behavior is not specific to JavaScript. This is seen a bit in other interpreted languages as well. Ruby, for example, allows you to do the same. Additionally in JavaScript, length is a writeable attribute of arrays, so you can easily truncate or clear an entire array: var t = [1]; t[4] = 0; t === [1, undefined, undefined, undefined, 0]; t.length = 2; t === [1, undefined]; t.length = 0; t === []; Setting the length to 0 is one of the fastest and simplest ways to clear an array. It might not be the most intuitive solution, but it's my go-to.
An array in JavaScript is just an object with some special properties. Essentially, they are just objects with non-negative integer property names. There are some other key differences, but the idea is that you could do this: var obj = { }; obj['1'] = 'abc'; So you can do the same with an array. However! You shouldn't. Modern JavaScript engines usually optimize arrays by backing them with fast, native implementations. Setting indexes that are not currently allocated will de-optimize your code into something more like an object, which is much slower.
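The hole behaviour this answer describes is directly observable (a small demonstration):

```javascript
var arr = [];
arr[2] = 'c';            // indexes 0 and 1 become "holes"
console.log(arr.length); // 3: length follows the highest index
console.log(0 in arr);   // false: a hole, not an element set to undefined
console.log(arr[0]);     // undefined when read, though nothing was stored
console.log(2 in arr);   // true: this index really exists
```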
Pages in function doSmth() is not in the same namespace as pages in service, which is service.pages. Your pages[1] must be a global declared somewhere else. However you are initializing pages as an empty (but existing) array: pages: [], var service = { pages: [], doSmth: doSmth }; doSmth(); function doSmth() { pages[1] = "abc"; pages[5] = "def"; }
Why is bind slower than a closure?
A previous poster asked Function.bind vs Closure in Javascript: how to choose? and received this answer in part, which seems to indicate bind should be faster than a closure: Scope traversal means, when you are reaching to grab a value (variable, object) that exists in a different scope, additional overhead is added (code becomes slower to execute). Using bind, you're calling a function with an existing scope, so that scope traversal does not take place. Two jsperfs suggest that bind is actually much, much slower than a closure. This was posted as a comment to the above, and I decided to write my own jsperf. So why is bind so much slower (70+% on Chromium)? Since it is not faster and closures can serve the same purpose, should bind be avoided?
Chrome 59 update: As I predicted in the answer below bind is no longer slower with the new optimizing compiler. Here's the code with details: https://codereview.chromium.org/2916063002/ Most of the time it does not matter. Unless you're creating an application where .bind is the bottleneck I wouldn't bother. Readability is much more important than sheer performance in most cases. I think that using native .bind usually provides for more readable and maintainable code - which is a big plus. However yes, when it matters - .bind is slower Yes, .bind is considerably slower than a closure - at least in Chrome, at least in the current way it's implemented in v8. I've personally had to switch in Node.JS for performance issues some times (more generally, closures are kind of slow in performance intensive situations). Why? Because the .bind algorithm is a lot more complicated than wrapping a function with another function and using .call or .apply. (Fun fact, it also returns a function with toString set to [native function]). There are two ways to look at this, from the specification point of view, and from the implementation point of view. Let's observe both. First, let's look at the bind algorithm defined in the specification: Let Target be the this value. If IsCallable(Target) is false, throw a TypeError exception. Let A be a new (possibly empty) internal list of all of the argument values provided after thisArg (arg1, arg2 etc), in order. ... (21. Call the [[DefineOwnProperty]] internal method of F with arguments "arguments", PropertyDescriptor {[[Get]]: thrower, [[Set]]: thrower, [[Enumerable]]: false, [[Configurable]]: false}, and false. (22. Return F. Seems pretty complicated, a lot more than just a wrap. Second , let's see how it's implemented in Chrome. Let's check FunctionBind in the v8 (chrome JavaScript engine) source code: function FunctionBind(this_arg) { // Length is 1. 
if (!IS_SPEC_FUNCTION(this)) { throw new $TypeError('Bind must be called on a function'); } var boundFunction = function () { // Poison .arguments and .caller, but is otherwise not detectable. "use strict"; // This function must not use any object literals (Object, Array, RegExp), // since the literals-array is being used to store the bound data. if (%_IsConstructCall()) { return %NewObjectFromBound(boundFunction); } var bindings = %BoundFunctionGetBindings(boundFunction); var argc = %_ArgumentsLength(); if (argc == 0) { return %Apply(bindings[0], bindings[1], bindings, 2, bindings.length - 2); } if (bindings.length === 2) { return %Apply(bindings[0], bindings[1], arguments, 0, argc); } var bound_argc = bindings.length - 2; var argv = new InternalArray(bound_argc + argc); for (var i = 0; i < bound_argc; i++) { argv[i] = bindings[i + 2]; } for (var j = 0; j < argc; j++) { argv[i++] = %_Arguments(j); } return %Apply(bindings[0], bindings[1], argv, 0, bound_argc + argc); }; %FunctionRemovePrototype(boundFunction); var new_length = 0; if (%_ClassOf(this) == "Function") { // Function or FunctionProxy. var old_length = this.length; // FunctionProxies might provide a non-UInt32 value. If so, ignore it. if ((typeof old_length === "number") && ((old_length >>> 0) === old_length)) { var argc = %_ArgumentsLength(); if (argc > 0) argc--; // Don't count the thisArg as parameter. new_length = old_length - argc; if (new_length < 0) new_length = 0; } } // This runtime function finds any remaining arguments on the stack, // so we don't pass the arguments object. var result = %FunctionBindArguments(boundFunction, this, this_arg, new_length); // We already have caller and arguments properties on functions, // which are non-configurable. It therefore makes no sence to // try to redefine these as defined by the spec. The spec says // that bind should make these throw a TypeError if get or set // is called and make them non-enumerable and non-configurable. 
// To be consistent with our normal functions we leave this as it is. // TODO(lrn): Do set these to be thrower. return result; We can see a bunch of expensive things here in the implementation. Namely %_IsConstructCall(). This is of course needed to abide to the specification - but it also makes it slower than a simple wrap in many cases. On another note, calling .bind is also slightly different, the spec notes "Function objects created using Function.prototype.bind do not have a prototype property or the [[Code]], [[FormalParameters]], and [[Scope]] internal properties"
I just want to give a little bit of perspective here: Note that while bind()ing is slow, calling the functions once bound is not! My test code in Firefox 76.0 on Linux: //Set it up. q = function(r, s) { }; r = {}; s = {}; a = []; for (let n = 0; n < 1000000; ++n) { //Tried all 3 of these. //a.push(q); //a.push(q.bind(r)); a.push(q.bind(r, s)); } //Performance-testing. s = performance.now(); for (let x of a) { x(); } e = performance.now(); document.body.innerHTML = (e - s); So while it is true that .bind()ing can be some ~2X slower than not binding (I tested that too), the above code takes the same amount of time for all 3 cases (binding 0, 1, or 2 variables). Personally, I don't care if the .bind()ing is slow in my current use case, I care about the performance of the code being called once those variables are already bound to the functions.
Cloning: what's the fastest alternative to JSON.parse(JSON.stringify(x))?
What's the fastest alternative to JSON.parse(JSON.stringify(x)) There must be a nicer/built-in way to perform a deep clone on objects/arrays, but I haven't found it yet. Any ideas?
No, there is no built-in way to deep clone objects. And deep cloning is a difficult and edge-case-heavy thing to deal with. Let's assume that a method deepClone(a) should return a "deep clone" of a. Now a "deep clone" is an object with the same [[Prototype]] and having all the own properties cloned over. For each property that is cloned over, if it has own properties that can be cloned over then do so, recursively. Of course we're keeping the metadata attached to properties like [[Writable]] and [[Enumerable]] intact. And we will just return the thing if it's not an object. var deepClone = function (obj) { try { var names = Object.getOwnPropertyNames(obj); } catch (e) { if (e.message.indexOf("not an object") > -1) { // is not object return obj; } } var proto = Object.getPrototypeOf(obj); var clone = Object.create(proto); names.forEach(function (name) { var pd = Object.getOwnPropertyDescriptor(obj, name); if (pd.value) { pd.value = deepClone(pd.value); } Object.defineProperty(clone, name, pd); }); return clone; }; This will fail for a lot of edge cases. Live Example As you can see, you can't deep clone objects generally without breaking their special properties (like .length in arrays). To fix that you have to treat Array separately, and then treat every special object separately. What do you expect to happen when you do deepClone(document.getElementById("foobar"))? As an aside, shallow clones are easy. Object.getOwnPropertyDescriptors = function (obj) { var ret = {}; Object.getOwnPropertyNames(obj).forEach(function (name) { ret[name] = Object.getOwnPropertyDescriptor(obj, name); }); return ret; }; var shallowClone = function (obj) { return Object.create( Object.getPrototypeOf(obj), Object.getOwnPropertyDescriptors(obj) ); };
I was actually comparing it against angular.copy. You can run the jsPerf test here: https://jsperf.com/angular-copy-vs-json-parse-string I'm comparing: myCopy = angular.copy(MyObject); vs myCopy = JSON.parse(JSON.stringify(MyObject)); This was the fastest of all the tests I could run on all my computers.
The 2022 solution for this is to use structuredClone See : https://developer.mozilla.org/en-US/docs/Web/API/structuredClone structuredClone(x)
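structuredClone also handles cyclic references, which the JSON round-trip cannot. This requires a modern browser or Node 17+ (the sample object is illustrative):

```javascript
const original = { name: "a", nested: { n: 1 } };
original.self = original; // cyclic reference

// JSON round-tripping throws on cycles:
let jsonFailed = false;
try {
  JSON.parse(JSON.stringify(original));
} catch (e) {
  jsonFailed = true; // TypeError: Converting circular structure to JSON
}

// structuredClone preserves the cycle and deep-copies nested objects:
const copy = structuredClone(original);
console.log(jsonFailed);                      // true
console.log(copy.self === copy);              // true: cycle preserved
console.log(copy.nested !== original.nested); // true: deep copy
console.log(copy.nested.n);                   // 1
```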
Cyclic references are not really an issue. I mean they are, but that's just a matter of proper record keeping. Anyway, quick answer for this one. Check this: https://github.com/greatfoundry/json-fu In my mad scientist lab of crazy javascript hackery I've been putting the basic implementation to use in serializing the entirety of the javascript context including the entire DOM from Chromium, sending it over a websocket to Node and reserializing it successfully. The only cyclic issue that is problematic is navigator.mimeTypes and navigator.plugins referencing one another to infinity, but that is easily solved. (function(mimeTypes, plugins){ delete navigator.mimeTypes; delete navigator.plugins; var theENTIREwindowANDdom = jsonfu.serialize(window); WebsocketForStealingEverything.send(theENTIREwindowANDdom); navigator.mimeTypes = mimeTypes; navigator.plugins = plugins; })(navigator.mimeTypes, navigator.plugins); JSONFu uses the tactic of creating Sigils which represent more complex data types. Like a MoreSigil which says that the item is abbreviated and there's X levels deeper which can be requested. It's important to understand that if you're serializing EVERYTHING then it's obviously more complicated to revive it back to its original state. I've been experimenting with various things to see what's possible, what's reasonable, and ultimately what's ideal. For me the goal is a bit more auspicious than most needs in that I'm trying to get as close to merging two disparate and simultaneous javascript contexts into a reasonable approximation of a single context. Or to determine what the best compromise is in terms of exposing the desired capabilities while not causing performance issues. When you start looking to have revivers for functions then you cross the land from data serialization into remote procedure calling. A neat hacky function I cooked up along the way classifies all the properties on an object you pass to it into specific categories.
The purpose for creating it was to be able to pass a window object in Chrome and have it spit out the properties organized by what's required to serialize and then revive them in a remote context. Also, to accomplish this without any sort of preset cheat-sheet lists: a completely dumb checker that makes its determinations by prodding the passed value with a stick. This was only designed and ever checked in Chrome and is very much not production code, but it's a cool specimen. // categorizeEverything takes any object and will sort its properties into high-level categories // based on its profile in terms of what it can do in JavaScript land. It accomplishes this task with a bafflingly // small amount of actual code by being extraordinarily uncareful, forcing errors, and generally just // throwing caution to the wind. But it does a really good job (in the one browser I made it for, Chrome, // and mostly works in WebKit, and could work in Firefox with a modicum of effort) // // This will work on any object but it's primarily useful for sorting the mess that // is the WebKit global context into something sane. function categorizeEverything(container){ var types = { // DOMPrototypes are functions that get angry when you dare call them, because IDL is dumb. // There's a few DOM protos that actually have useful constructors and there currently is no check. // They all end up under Class, which isn't a bad place for them depending on your goals. // [Audio, Image, Option] are the only actual HTML DOM prototypes that sneak by. DOMPrototypes: {}, // Plain object isn't callable, Object is its [[proto]] PlainObjects: {}, // Classes have a constructor Classes: {}, // Methods don't have a "prototype" property and their [[proto]] is named "Empty" Methods: {}, // Natives also have "Empty" as their [[proto]]. This list has the big boys: // the various Error constructors, Object, Array, Function, Date, Number, String, etc.
Natives: {}, // Primitives are instances of String, Number, and Boolean plus bonus friends null, undefined, NaN, Infinity Primitives: {} }; var str = ({}).toString; function __class__(obj){ return str.call(obj).slice(8,-1); } Object.getOwnPropertyNames(container).forEach(function(prop){ var XX = container[prop], xClass = __class__(XX); // dumping the various references to window up front and also undefineds for laziness if(xClass == "Undefined" || xClass == "global") return; // Easy way to rustle out primitives right off the bat, // forcing errors for fun and profit. try { Object.keys(XX); } catch(e) { if(e.type == "obj_ctor_property_non_object") return types.Primitives[prop] = XX; } // I'm making a LOT of flagrant assumptions here, but process of elimination is key. var isCtor = "prototype" in XX; var proto = Object.getPrototypeOf(XX); // All Natives also fit the Class category, but they have a special place in our hearts. if(isCtor && proto.name == "Empty" || XX.name == "ArrayBuffer" || XX.name == "DataView" || "BYTES_PER_ELEMENT" in XX) { return types.Natives[prop] = XX; } if(xClass == "Function"){ try { // Calling every single function in the global context without a care in the world? // There's no way this can end badly. // TODO: do this nonsense in an iframe or something XX(); } catch(e){ // Magical functions which you can never call. That's useful. if(e.message == "Illegal constructor"){ return types.DOMPrototypes[prop] = XX; } } // By process of elimination, only regular functions can still be hanging out if(!isCtor) { return types.Methods[prop] = XX; } } // Only left with full-fledged objects now. Invokability (constructor) splits this group in half return (isCtor ? types.Classes : types.PlainObjects)[prop] = XX; // JSON, Math, document, and other stuff gets classified as plain objects, // but they all seem correct going by their actual profiles and functionality }); return types; };
Determine how many fields a Javascript object has
I have a Javascript object that I'm trying to use as a "hashmap". The keys are always strings, so I don't think I need anything as sophisticated as what's described in this SO question. (I also don't expect the number of keys to go above about 10, so I'm not particularly concerned with lookups being O(n) vs. O(log n), etc.) The only functionality I want that built-in Javascript objects don't seem to have is a quick way to figure out the number of key/value pairs in the object, like what Java's Map.size returns. Of course, you could just do something like: function getObjectSize(myObject) { var count = 0; for (var key in myObject) count++; return count; } but that seems kind of hacky and roundabout. Is there a "right way" to get the number of fields in the object?
There is an easier way spec'd in ECMAScript 5. Object.keys(..) returns an array of all keys defined on the object, and length can be called on that. Try in Chrome: Object.keys({a: 1, b: 2}).length; // 2 Note that all objects are basically key/value pairs in JavaScript, and they are also very extensible. You could extend Object.prototype with a size method and get the count there. However, a much better solution is to create a HashMap-type interface or use one of the many existing implementations out there, and define size on it. Here's one tiny implementation: function HashMap() {} HashMap.prototype.put = function(key, value) { this[key] = value; }; HashMap.prototype.get = function(key) { if(typeof this[key] == 'undefined') { throw new ReferenceError("key is undefined"); } return this[key]; }; HashMap.prototype.size = function() { var count = 0; for(var prop in this) { // hasOwnProperty check is important because // we don't want to count properties on the prototype chain // such as "get", "put", "size", or others. if(this.hasOwnProperty(prop)) { count++; } } return count; }; Use as (example): var map = new HashMap(); map.put(someKey, someValue); map.size();
A correction: you need to check myObject.hasOwnProperty(key) in each iteration, because there can be inherited attributes. For example, if you do Object.prototype.test = 'test' before the loop, test will also be counted. And talking about your question: you can just define a helper function, if speed doesn't matter. After all, we define helpers for trim and other simple things. A lot of JavaScript is "kind of hacky and roundabout" :) update Failure example, as requested. Object.prototype.test = 'test'; var x = {}; x['a'] = 1; x['b'] = 2; The count returned will be 3.
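A runnable sketch of the guard this correction describes, using the asker's getObjectSize helper extended with the hasOwnProperty check:

```javascript
// Pollute the prototype chain, purely to demonstrate the failure mode.
Object.prototype.test = 'test';
var x = { a: 1, b: 2 };

function getObjectSize(obj) {
  var count = 0;
  for (var key in obj) {
    // Only count the object's own properties, not inherited ones.
    if (obj.hasOwnProperty(key)) count++;
  }
  return count;
}

console.log(getObjectSize(x));      // 2, not 3
console.log(Object.keys(x).length); // 2 -- Object.keys skips inherited keys too
delete Object.prototype.test;       // clean up the demonstration pollution
```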
You could also just do myObject.length (in arrays). Never mind, see this: JavaScript object size
That's all you can do. Clearly, JavaScript objects are not designed for this. And this will only give you the number of enumerable properties. Try getObjectSize(Math).
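To make the Math point concrete: the built-in Math object defines its properties as non-enumerable, so a for-in based count (like the asker's getObjectSize) finds nothing, even though the properties are clearly there:

```javascript
// The asker's for-in counter, unchanged.
function getObjectSize(obj) {
  var count = 0;
  for (var key in obj) count++;
  return count;
}

// Math.PI, Math.sin, etc. exist but are non-enumerable,
// so for-in never visits them.
console.log(getObjectSize(Math)); // 0
console.log(Object.getOwnPropertyNames(Math).length > 0); // true
```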
Are there legitimate uses for JavaScript's "with" statement?
Alan Storm's comments in response to my answer regarding the with statement got me thinking. I've seldom found a reason to use this particular language feature, and had never given much thought to how it might cause trouble. Now, I'm curious as to how I might make effective use of with, while avoiding its pitfalls. Where have you found the with statement useful?
Another use occurred to me today, so I searched the web excitedly and found an existing mention of it: Defining Variables inside Block Scope. Background JavaScript, in spite of its superficial resemblance to C and C++, does not scope variables to the block they are defined in: var name = "Joe"; if ( true ) { var name = "Jack"; } // name now contains "Jack" Declaring a closure in a loop is a common task where this can lead to errors: for (var i=0; i<3; ++i) { var num = i; setTimeout(function() { alert(num); }, 10); } Because the for loop does not introduce a new scope, the same num - with a value of 2 - will be shared by all three functions. A new scope: let and with With the introduction of the let statement in ES6, it becomes easy to introduce a new scope when necessary to avoid these problems: // variables introduced in this statement // are scoped to each iteration of the loop for (let i=0; i<3; ++i) { setTimeout(function() { alert(i); }, 10); } Or even: for (var i=0; i<3; ++i) { // variables introduced in this statement // are scoped to the block containing it. let num = i; setTimeout(function() { alert(num); }, 10); } Until ES6 is universally available, this use remains limited to the newest browsers and developers willing to use transpilers. However, we can easily simulate this behavior using with: for (var i=0; i<3; ++i) { // object members introduced in this statement // are scoped to the block following it. with ({num: i}) { setTimeout(function() { alert(num); }, 10); } } The loop now works as intended, creating three separate variables with values from 0 to 2. Note that variables declared within the block are not scoped to it, unlike the behavior of blocks in C++ (in C, variables must be declared at the start of a block, so in a way it is similar). This behavior is actually quite similar to a let block syntax introduced in earlier versions of Mozilla browsers, but not widely adopted elsewhere.
I have been using the with statement as a simple form of scoped import. Let's say you have a markup builder of some sort. Rather than writing: markupbuilder.div( markupbuilder.p('Hi! I am a paragraph!', markupbuilder.span('I am a span inside a paragraph') ) ) You could instead write: with(markupbuilder){ div( p('Hi! I am a paragraph!', span('I am a span inside a paragraph') ) ) } For this use case, I am not doing any assignment, so I don't have the ambiguity problem associated with that.
As my previous comments indicated, I don't think you can use with safely no matter how tempting it might be in any given situation. Since the issue isn't directly covered here, I'll repeat it. Consider the following code user = {}; someFunctionThatDoesStuffToUser(user); someOtherFunction(user); with(user){ name = 'Bob'; age = 20; } Without carefully investigating those function calls, there's no way to tell what the state of your program will be after this code runs. If user.name was already set, it will now be Bob. If it wasn't set, the global name will be initialized or changed to Bob and the user object will remain without a name property. Bugs happen. If you use with you will eventually do this and increase the chances your program will fail. Worse, you may encounter working code that sets a global in the with block, either deliberately or through the author not knowing about this quirk of the construct. It's a lot like encountering fall-through on a switch: you have no idea if the author intended it, and there's no way to know if "fixing" the code will introduce a regression. Modern programming languages are chock-full of features. Some features, after years of use, are discovered to be bad, and should be avoided. JavaScript's with is one of them.
I actually found the with statement to be incredibly useful recently. This technique never really occurred to me until I started my current project - a command line console written in JavaScript. I was trying to emulate the Firebug/WebKit console APIs where special commands can be entered into the console but they don't override any variables in the global scope. I thought of this when trying to overcome a problem I mentioned in the comments to Shog9's excellent answer. To achieve this effect, I used two with statements to "layer" a scope behind the global scope: with (consoleCommands) { with (window) { eval(expression); } } The great thing about this technique is that, aside from the performance disadvantages, it doesn't suffer the usual fears of the with statement, because we're evaluating in the global scope anyway - there's no danger of variables outside our pseudo-scope from being modified. I was inspired to post this answer when, to my surprise, I managed to find the same technique used elsewhere - the Chromium source code! InjectedScript._evaluateOn = function(evalFunction, object, expression) { InjectedScript._ensureCommandLineAPIInstalled(); // Surround the expression in with statements to inject our command line API so that // the window object properties still take more precedent than our API functions. expression = "with (window._inspectorCommandLineAPI) { with (window) { " + expression + " } }"; return evalFunction.call(object, expression); } EDIT: Just checked the Firebug source, they chain 4 with statements together for even more layers. Crazy! const evalScript = "with (__win__.__scope__.vars) { with (__win__.__scope__.api) { with (__win__.__scope__.userVars) { with (__win__) {" + "try {" + "__win__.__scope__.callback(eval(__win__.__scope__.expr));" + "} catch (exc) {" + "__win__.__scope__.callback(exc, true);" + "}" + "}}}}";
Yes, yes and yes. There is a very legitimate use. Watch: with (document.getElementById("blah").style) { background = "black"; color = "blue"; border = "1px solid green"; } Basically any other DOM or CSS hooks are fantastic uses of with. It's not like "cloneNode" will be undefined and go back to the global scope unless you went out of your way and decided to make it possible. Crockford's speed complaint is that a new context is created by with. Contexts are generally expensive. I agree. But if you just created a div, don't have some framework on hand for setting your CSS, and need to set up 15 or so CSS properties by hand, then creating a context will probably be cheaper than variable creation and 15 dereferences: var element = document.createElement("div"), elementStyle = element.style; elementStyle.fontWeight = "bold"; elementStyle.fontSize = "1.5em"; elementStyle.color = "#55d"; elementStyle.marginLeft = "2px"; etc...
You can define a small helper function to provide the benefits of with without the ambiguity: var with_ = function (obj, func) { func (obj); }; with_ (object_name_here, function (_) { _.a = "foo"; _.b = "bar"; });
Hardly seems worth it since you can do the following: var o = incrediblyLongObjectNameThatNoOneWouldUse; o.name = "Bob"; o.age = "50";
I don't ever use with, don't see a reason to, and don't recommend it. The problem with with is that it prevents numerous lexical optimizations an ECMAScript implementation can perform. Given the rise of fast JIT-based engines, this issue will probably become even more important in the near future. It might look like with allows for cleaner constructs (when, say, introducing a new scope instead of a common anonymous function wrapper, or replacing verbose aliasing), but it's really not worth it. Besides decreased performance, there's always a danger of assigning to a property of the wrong object (when a property is not found on the object in the injected scope) and perhaps erroneously introducing global variables. IIRC, the latter issue is the one that motivated Crockford to recommend avoiding with.
Visual Basic.NET has a similar With statement. One of the more common ways I use it is to quickly set a number of properties. Instead of: someObject.Foo = '' someObject.Bar = '' someObject.Baz = '' , I can write: With someObject .Foo = '' .Bar = '' .Baz = '' End With This isn't just a matter of laziness. It also makes for much more readable code. And unlike JavaScript, it does not suffer from ambiguity, as you have to prefix everything affected by the statement with a . (dot). So, the following two are clearly distinct: With someObject .Foo = '' End With vs. With someObject Foo = '' End With The former is someObject.Foo; the latter is Foo in the scope outside someObject. I find that JavaScript's lack of distinction makes it far less useful than Visual Basic's variant, as the risk of ambiguity is too high. Other than that, with is still a powerful idea that can make for better readability.
I think the obvious use is as a shortcut. If you're e.g. initializing an object you simply save typing a lot of "ObjectName." Kind of like lisp's "with-slots", which lets you write (with-slots (foo bar) objectname "some code that accesses foo and bar") which is the same as writing "some code that accesses (slot-value objectname 'foo) and (slot-value objectname 'bar)". It's more obvious why this is a shortcut than when your language allows "Objectname.foo", but still.
You can use with to introduce the contents of an object as local variables to a block, like it's being done with this small template engine.
Using "with" can make your code more DRY. Consider the following code: var photo = document.getElementById('photo'); photo.style.position = 'absolute'; photo.style.left = '10px'; photo.style.top = '10px'; You can DRY it up to the following: with(document.getElementById('photo').style) { position = 'absolute'; left = '10px'; top = '10px'; } I guess it depends whether you have a preference for legibility or expressiveness. The first example is more legible and probably recommended for most code. But most code is pretty tame anyway. The second one is a bit more obscure but uses the expressive nature of the language to cut down on code size and superfluous variables. I imagine people who like Java or C# would choose the first way (object.member) and those who prefer Ruby or Python would choose the latter.
Having experience with Delphi, I would say that using with should be a last-resort size optimization, possibly performed by some kind of JavaScript minimizer algorithm with access to static code analysis to verify its safety. The scoping problems you can get into with liberal use of the with statement can be a royal pain, and I wouldn't want anyone to endure a debugging session to figure out what is going on in your code, only to find out that it captured an object member or the wrong local variable, instead of the global or outer-scope variable you intended. The VB with statement is better, in that it needs the dots to disambiguate the scoping, but the Delphi with statement is a loaded gun with a hair trigger, and it looks to me as though the JavaScript one is similar enough to warrant the same warning.
Using with is not recommended, and is forbidden in ECMAScript 5 strict mode. The recommended alternative is to assign the object whose properties you want to access to a temporary variable. Source: Mozilla.org
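A minimal sketch of the recommended alternative (the config object and its fields are made-up examples):

```javascript
'use strict';
// In strict mode the with statement is a SyntaxError:
//   with (config) { host = 'localhost'; }

// The recommended alternative: alias the object with a short temporary variable.
var config = { host: '', port: 0 };
var c = config;
c.host = 'localhost';
c.port = 8080;

console.log(config.host + ':' + config.port); // localhost:8080
```

The alias gives the same brevity as with, but every access is explicit, so there is no ambiguity about which object a property lands on.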
The with statement can be used to decrease the code size or for private class members, example: // demo class framework var Class= function(name, o) { var c=function(){}; if( o.hasOwnProperty("constructor") ) { c= o.constructor; } delete o["constructor"]; delete o["prototype"]; c.prototype= {}; for( var k in o ) c.prototype[k]= o[k]; c.scope= Class.scope; c.scope.Class= c; c.Name= name; return c; } Class.newScope= function() { Class.scope= {}; Class.scope.Scope= Class.scope; return Class.scope; } // create a new class with( Class.newScope() ) { window.Foo= Class("Foo",{ test: function() { alert( Class.Name ); } }); } (new Foo()).test(); The with statement is very useful if you want to modify the scope, which is necessary for having your own global scope that you can manipulate at runtime. You can put constants on it or certain helper functions often used, like e.g. "toUpper", "toLower" or "isNumber", "clipNumber", and so on. About the bad performance I read about often: scoping a function won't have any impact on the performance; in fact, in my FF a scoped function runs faster than an unscoped one: var o={x: 5},r, fnRAW= function(a,b){ return a*b; }, fnScoped, s, e, i; with( o ) { fnScoped= function(a,b){ return a*b; }; } s= Date.now(); r= 0; for( i=0; i < 1000000; i++ ) { r+= fnRAW(i,i); } e= Date.now(); console.log( (e-s)+"ms" ); s= Date.now(); r= 0; for( i=0; i < 1000000; i++ ) { r+= fnScoped(i,i); } e= Date.now(); console.log( (e-s)+"ms" ); So used in the above-mentioned way, the with statement has no negative effect on performance, but a good one, as it decreases the code size, which impacts the memory usage on mobile devices.
Using with also makes your code slower in many implementations, as everything now gets wrapped in an extra scope for lookup. There's no legitimate reason for using with in JavaScript.
I think the with statement can come in handy when converting a template language into JavaScript. For example JST in base2, but I've seen it more often. I agree one can program this without the with statement, but because it doesn't cause any problems, it is a legitimate use.
It's good for putting code that runs in a relatively complicated environment into a container: I use it to make a local binding for "window" and such to run code meant for a web browser.
I think the object literal use is interesting, like a drop-in replacement for using a closure for(var i = nodes.length; i--;) { // info is namespaced in a closure the click handler can access! (function(info) { nodes[i].onclick = function(){ showStuff(info) }; })(data[i]); } or the with statement equivalent of a closure for(var i = nodes.length; i--;) { // info is namespaced in a closure the click handler can access! with({info: data[i]}) { nodes[i].onclick = function(){ showStuff(info) }; } } I think the real risk is accidentally manipulating variables that are not part of the with statement, which is why I like the object literal being passed into with: you can see exactly what it will be in the added context in the code.
I created a "merge" function which eliminates some of this ambiguity with the with statement: if (typeof Object.merge !== 'function') { Object.merge = function (o1, o2) { // Function to merge all of the properties from one object into another for(var i in o2) { o1[i] = o2[i]; } return o1; }; } I can use it similarly to with, but I can know it won't affect any scope which I don't intend for it to affect. Usage: var eDiv = document.createElement("div"); var eHeader = Object.merge(eDiv.cloneNode(false), {className: "header", onclick: function(){ alert("Click!"); }}); function NewObj() { Object.merge(this, {size: 4096, initDate: new Date()}); }
For some short code pieces, I would like to use the trigonometric functions like sin, cos etc. in degree mode instead of in radian mode. For this purpose, I use an AngularDegree object: AngularDegree = new function() { this.CONV = Math.PI / 180; this.sin = function(x) { return Math.sin( x * this.CONV ) }; this.cos = function(x) { return Math.cos( x * this.CONV ) }; this.tan = function(x) { return Math.tan( x * this.CONV ) }; this.asin = function(x) { return Math.asin( x ) / this.CONV }; this.acos = function(x) { return Math.acos( x ) / this.CONV }; this.atan = function(x) { return Math.atan( x ) / this.CONV }; this.atan2 = function(x,y) { return Math.atan2(x,y) / this.CONV }; }; Then I can use the trigonometric functions in degree mode without further language noise in a with block: function getAzimut(pol,pos) { ... var d = pos.lon - pol.lon; with(AngularDegree) { var z = atan2( sin(d), cos(pol.lat)*tan(pos.lat) - sin(pol.lat)*cos(d) ); return z; } } This means: I use an object as a collection of functions, which I enable in a limited code region for direct access. I find this useful.
I think that the usefulness of with can be dependent on how well your code is written. For example, if you're writing code that appears like this: var sHeader = object.data.header.toString(); var sContent = object.data.content.toString(); var sFooter = object.data.footer.toString(); then you could argue that with will improve the readability of the code by doing this: var sHeader = null, sContent = null, sFooter = null; with(object.data) { sHeader = header.toString(); sContent = content.toString(); sFooter = footer.toString(); } Conversely, it could be argued that you're violating the Law of Demeter, but, then again, maybe not. I digress =). Above all else, know that Douglas Crockford recommends not using with. I urge you to check out his blog post regarding with and its alternatives here.
I just really don't see how using the with is any more readable than just typing object.member. I don't think it's any less readable, but I don't think it's any more readable either. Like lassevk said, I can definitely see how using with would be more error prone than just using the very explicit "object.member" syntax.
You may have seen the validation of a form in JavaScript at W3Schools http://www.w3schools.com/js/js_form_validation.asp where the form object is "scanned" through to find an input with name 'email'. But I've modified it to get from ANY form all the fields validated as not empty, regardless of the name or quantity of fields in a form. Well, I've tested only text fields. But the with() made things simpler. Here's the code: function validate_required(field) { with (field) { if (value==null||value=="") { alert('All fields are mandatory');return false; } else { return true; } } } function validate_form(thisform) { with (thisform) { for(fiie in elements){ if (validate_required(elements[fiie])==false){ elements[fiie].focus(); elements[fiie].style.border='1px solid red'; return false; } else {elements[fiie].style.border='1px solid #7F9DB9';} } } return true; }
CoffeeScript's Coco fork has a with keyword, but it simply sets this (also writable as # in CoffeeScript/Coco) to the target object within the block. This removes ambiguity and achieves ES5 strict mode compliance: with long.object.reference #a = 'foo' bar = #b
My switch(e.type) { case gapi.drive.realtime.ErrorType.TOKEN_REFRESH_REQUIRED: blah case gapi.drive.realtime.ErrorType.CLIENT_ERROR: blah case gapi.drive.realtime.ErrorType.NOT_FOUND: blah } boils down to with(gapi.drive.realtime.ErrorType) {switch(e.type) { case TOKEN_REFRESH_REQUIRED: blah case CLIENT_ERROR: blah case NOT_FOUND: blah }} Can you trust such low-quality code? No, we see that it was made absolutely unreadable. This example undeniably proves that there is no need for the with statement, if I am taking readability right ;)
Using the "with" statement with proxy objects I recently wanted to write a plugin for Babel that enables macros. I wanted a separate variable namespace that keeps my macro variables, so I can run my macro code in that space. Also, I wanted to detect new variables that are defined in the macro code (because they are new macros). First, I chose the vm module, but I found that global variables in the vm module like Array, Object, etc. are different from the main program, and I can't implement module and require that are fully compatible with those global objects (because I can't reconstruct the core modules). In the end, I found the "with" statement. const runInContext = function(code, context) { context.global = context; const proxyOfContext = new Proxy(context, { has: () => true }); let run = new Function( "proxyOfContext", ` with(proxyOfContext){ with(global){ ${code} } } ` ); return run(proxyOfContext); }; This proxy object traps the lookup of all variables and says: "yes, I have that variable." And if the proxy object doesn't really have that variable, it reports its value as undefined. In this way, if any variable is defined in the macro code with the var statement, I can find it in the context object (like the vm module). But variables that are defined with let or const are only available at that time and will not be saved in the context object (the vm module saves them but doesn't expose them). Performance: performance of this method is better than vm.runInContext. Safety: if you want to run code in a sandbox, this is not safe in any way, and you must use the vm module. It only provides a new namespace.
Here's a good use for with: adding new elements to an Object Literal, based on values stored in that Object. Here's an example that I just used today: I had a set of possible tiles (with openings facing top, bottom, left, or right) that could be used, and I wanted a quick way of adding a list of tiles which would be always placed and locked at the start of the game. I didn't want to keep typing types.tbr for each type in the list, so I just used with. Tile.types = (function(t,l,b,r) { function j(a) { return a.join(' '); } // all possible types var types = { br: j( [b,r]), lbr: j([l,b,r]), lb: j([l,b] ), tbr: j([t,b,r]), tbl: j([t,b,l]), tlr: j([t,l,r]), tr: j([t,r] ), tl: j([t,l] ), locked: [] }; // store starting (base/locked) tiles in types.locked with( types ) { locked = [ br, lbr, lbr, lb, tbr, tbr, lbr, tbl, tbr, tlr, tbl, tbl, tr, tlr, tlr, tl ] } return types; })("top","left","bottom","right");
As Andy E pointed out in the comments of Shog9's answer, this potentially-unexpected behavior occurs when using with with an object literal: for (var i = 0; i < 3; i++) { function toString() { return 'a'; } with ({num: i}) { setTimeout(function() { console.log(num); }, 10); console.log(toString()); // prints "[object Object]" } } Not that unexpected behavior wasn't already a hallmark of with. If you really still want to use this technique, at least use an object with a null prototype. function scope(o) { var ret = Object.create(null); if (typeof o !== 'object') return ret; Object.keys(o).forEach(function (key) { ret[key] = o[key]; }); return ret; } for (var i = 0; i < 3; i++) { function toString() { return 'a'; } with (scope({num: i})) { setTimeout(function() { console.log(num); }, 10); console.log(toString()); // prints "a" } } But this will only work in ES5+. Also don't use with.
I am working on a project that will allow users to upload code in order to modify the behavior of parts of the application. In this scenario, I have been using a with clause to keep their code from modifying anything outside of the scope that I want them to mess around with. The (simplified) portion of code I use to do this is: // this code is only executed once var localScope = { build: undefined, // this is where all of the values I want to hide go; the list is rather long window: undefined, console: undefined, ... }; with(localScope) { build = function(userCode) { eval('var builtFunction = function(options) {' + userCode + '}'); return builtFunction; } } var build = localScope.build; delete localScope.build; // this is how I use the build method var userCode = 'return "Hello, World!";'; var userFunction = build(userCode); This code ensures (somewhat) that the user-defined code neither has access to any globally-scoped objects such as window nor to any of my local variables through a closure. Just as a word to the wise, I still have to perform static code checks on the user-submitted code to ensure they aren't using other sneaky manners to access global scope. For instance, the following user-defined code grabs direct access to window: test = function() { return this.window }; return test();