I often have to loop through data and add new values to old ones, but I can never seem to get it right. Here's what I have now; basically I'm trying to concatenate the previous numbers with the new number, ultimately building a comma-delimited string of numbers:
function showItems(){
    if(prev_numbers == undefined){
        var prev_numbers = '';
    }else{
        prev_numbers = prev_numbers;
    }
    numbers = Math.floor(Math.random()*101);
    values = numbers +','+ prev_numbers;
    // Here is where some code would be that makes use of comma delimited numbers
    alert(values);
    prev_numbers = values;
}
setInterval(showItems, 1000);
Why not use the Array.join method?
var numbers = [1, 2, 3, 4, 5];
values = numbers.join(',');
It's not clear where the numbers are coming from. Perhaps post a little more code.
It seems like you're storing the history as a comma-delimited string, and adding new numbers to the beginning of the string as they come in.
Generally, it would make more sense to store the numbers as an array of numbers, then use join to build the string for display as needed. You can prepend to the array using unshift:
numbers.unshift(Math.random() * 100);
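For illustration, here is a rough sketch of that array-based approach applied to the original function (the variable names are my own, not the asker's):
var numbers = [];

function showItems() {
    numbers.unshift(Math.floor(Math.random() * 101)); // prepend the newest number
    var values = numbers.join(',');                   // build the comma-delimited string on demand
    alert(values);
}

setInterval(showItems, 1000);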
Actually, I would recommend this even if the IDs are strings.
As meder said, you'd need to define prev_numbers outside showItems, or make it global by referencing it as a property of window (replace prev_numbers with window.prev_numbers). That said, harpo's solution is a lot faster if you're splitting the string up again to access the separate numbers. We need to know a bit more about the context, as well as your priority (speed? memory? code length? maintainability?).
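For illustration, a minimal sketch of the first option (declaring the variable outside the function so it survives between calls); the exact variable names are assumptions:
var prev_numbers = '';

function showItems() {
    var number = Math.floor(Math.random() * 101);
    prev_numbers = prev_numbers ? number + ',' + prev_numbers : String(number);
    alert(prev_numbers);
}

setInterval(showItems, 1000);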
By the way, this
if(prev_numbers == undefined){
var prev_numbers = '';
}else{
prev_numbers = prev_numbers;
}
is effectively useless. The var declaration is hoisted to the top of the function regardless of which branch runs, and prev_numbers = prev_numbers is a no-op, so the block does nothing your application cares about. As far as I can tell, this code can be removed.
What you need to do is to implement the memoize technique, i.e. you need a memoizer function.
Here's what Google came up with for a JS memoize implementation.
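The linked implementation isn't reproduced here, but a minimal memoizer sketch (my own, for a single-argument function) looks something like this:
function memoize(fn) {
    var cache = {};
    return function (arg) {
        if (!(arg in cache)) {
            cache[arg] = fn(arg);   // compute once per distinct argument
        }
        return cache[arg];          // subsequent calls hit the cache
    };
}

var slowSquare = function (n) { return n * n; };
var fastSquare = memoize(slowSquare);
fastSquare(4); // computed
fastSquare(4); // returned from the cache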
You're doing it right, except that you're neglecting the fact that functions are first-class objects in JS, so these are completely valid statements:
function a() { };
a.foo = 'bar';
a.hasOwnProperty('foo'); // true
a.foo; // 'bar'
a = function () {}; // function expressions work the same way
a['foo'] = 'bar';
a.foo; // 'bar', because objects are implemented as dictionaries
The only thing that you need to change is to set prev_numbers as a property of showItems:
function showItems() {
    // an undefined property is falsy, so this initializes it on the first call
    if (!showItems.prev_numbers)
        showItems.prev_numbers = '';
    var numbers = Math.floor(Math.random()*101);
    showItems.prev_numbers = numbers + ',' + showItems.prev_numbers;
}
As to your particular problem of always receiving a ReferenceError in your code: I do not know the exact implementation details, but I have observed that accessing undeclared globals raises a ReferenceError instead of simply returning undefined, as you'd expect. This is how to handle it properly:
if (hasOwnProperty('prev_numbers')) { ... }
// equivalent to
if (window.hasOwnProperty('prev_numbers')) { ... }
Take a look at this:
baz; // ReferenceError
hasOwnProperty('baz'); // false
window.hasOwnProperty('baz') //false
baz = 'bar';
hasOwnProperty('baz'); // true
window.hasOwnProperty('baz') // true
An alternative to calling hasOwnProperty is:
foo; // ReferenceError
window.foo // undefined (no ReferenceError raised)
if (!window.foo) 'yay'; // 'yay'
if (window.foo == undefined) 'yay'; // 'yay'
Related
I am following a JavaScript tutorial on W3Schools. On almost every page they advise the reader to "avoid creating objects" and to use primitive data types instead. The reason they give is that "code becomes difficult to understand or execution speed will be decreased if objects are used". Is it true that we should avoid creating objects in JavaScript?
For example:
var value = new Number(1); // Avoid this
var value = 1; // do something like this instead.
The statement "avoid creating objects" on its own is absurd in JavaScript, which has objects everywhere and is one of the most object-oriented languages in existence. But "avoid creating object versions of primitives," which is what the code you quote does, is valid. That is, avoid new String, new Number, and new Boolean.
JavaScript has both primitive and object versions of strings, numbers, and booleans. There's almost never any reason to create the object version of any of them explicitly, and doing so can indeed lead to confusion; see inline comments:
var s1, s2, n1, n2;
// These are false because with ===, an object is never equal to a non-object
s1 = new String("hi");
s2 = "hi";
console.log(s1 === s2); // false
n1 = new Number(42);
n2 = 42;
console.log(n1 === n2); // also false
// These are false because even with ==, two *different* objects are never equal
// (even if they're equivalent)
s1 = new String("what the...");
s2 = new String("what the...");
console.log(s1 == s2); // also false
n1 = new Number(42);
n2 = new Number(42);
console.log(n1 == n2); // also false
The object versions of strings, numbers, and booleans largely exist to enable methods on primitives to be provided using the same mechanism that provides methods to object types. When you do
console.log("foo".toUpperCase()); // "FOO"
a temporary object is created for the primitive string "foo", and then the toUpperCase property is read from that object. Since the object inherits from String.prototype, it has toUpperCase and all is well. Once the operation is done, the temporary object is thrown away (unless something keeps a reference to it, but nothing does and nothing can with toUpperCase; you'd have to add a method to String.prototype that returned the object for it to be kept around).
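A quick illustration of the temporary wrapper (my own sketch, assuming non-strict mode, where the silent discard happens without an error):
var s = "foo";
s.custom = 123;               // written onto a temporary String object...
console.log(s.custom);        // undefined -- ...which has already been thrown away
console.log(s.toUpperCase()); // "FOO" -- methods still work via a fresh temporary wrapper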
It changes the intuitive way the operators behave with numbers, strings and booleans:
the strict comparison (===) breaks when any of the numbers is constructed, so 42 === 42 is true, while 42 === new Number(42) is not,
the abstract comparison (==) breaks when both numbers are objects, so 42 == new Number(42) is true, while new Number(42) == new Number(42) is not,
the typeof operator gives a different result when a number is constructed, so typeof 42 is "number", but typeof new Number(42) is "object",
when converted to a boolean, 0 is false, but new Number(0) is true, so the following two will have different behavior:
var a = 0;
if (a)
console.log("not zero");
else
console.log("zero!"); // "zero!"
var b = new Number(0);
if (b)
console.log("not zero"); // "not zero"
else
console.log("zero!");
So, avoid new Number, new String and new Boolean.
Apart from that, there is the issue of using / not using new with constructors. It stems from several facts:
in JS, a constructor is a regular function, using this.foo syntax to add new properties and methods;
when invoked without the new keyword, this becomes the global object, leading to side effects.
As a result, a tiny mistake can have catastrophic effects:
color = "blue";
var Fruit = function(color) {
this.color = color;
return this;
};
var apple = new Fruit("green");
console.log(apple.color); // "green" -- okay
console.log(color); // "blue" -- okay
var banana = Fruit("yellow");
console.log(banana.color); // "yellow" -- okay
console.log(color); // "yellow" -- wait, what?
console.log(banana.apple); // "{ color: 'green' }" -- what??
console.log(banana.document); // "{ location: [Getter/Setter] }" -- what???
(That's why some people resort to adding explicit checks in the constructor, or using closures instead. But that's for another story.)
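For reference, one common form of that explicit check (a sketch in the same ES5 style as above, not part of the original answer):
var Fruit = function (color) {
    if (!(this instanceof Fruit)) {
        // called without `new` -- recover instead of writing onto the global object
        return new Fruit(color);
    }
    this.color = color;
};

var apple = new Fruit("green");
var banana = Fruit("yellow");         // forgot `new`, but still gets a proper instance
console.log(banana instanceof Fruit); // true
console.log(banana.color);            // "yellow"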
Everybody only says "avoid using it", "it can lead to confusion", "typeof x behaves weirdly", and for the most part they are right.
But nobody is able to give you one single reason as to why you would want to use the constructor instead.
When you have many variables that hold the same value, you allocate much more memory. If you instead construct one instance and reuse it, you only allocate a single item; the rest of your variables are just pointers to the same object.
This could technically increase speed when you use structuredClone (but I haven't benchmarked it).
Something I have tested, at least, is how much disk space you allocate when using IndexedDB.
// small simple kv indexeddb storage
let p,query=(e,...r)=>(p??=new Promise((e=>{const r=indexedDB.open("kv");r.onupgradeneeded=()=>r.result.createObjectStore("kv"),r.onsuccess=()=>{const t=r.result;query=(e,...r)=>{const n=t.transaction("kv","readwrite").objectStore("kv")[e](...r);return new Promise(((e,r)=>{n.onsuccess=()=>e(n.result),n.onerror=()=>r(n.error)}))},e()}}))).then((()=>query(e,...r)));
var kv=(...e)=>query(...e);
var arr = Array(1024).fill('a'.repeat(1024))
await kv('put', arr, 'stuff')
await kv('get', 'stuff')
var est = await navigator.storage.estimate()
est.usageDetails.indexedDB // chrome 105 allocated 1055761 bytes
Now if we do the same thing, but slightly differently, using the String constructor instead:
// Now you are using the same instance instead.
var arr = Array(1024).fill(new String('a'.repeat(1024)))
await kv('put', arr, 'stuff')
await kv('get', 'stuff')
var est = await navigator.storage.estimate()
est.usageDetails.indexedDB // chrome 105 allocated 7353 bytes
Now you saved about 1055761 - 7353 = 1048408 bytes...
If you want to test this out for yourself, always open a new incognito window and await both put/get operations; estimate() can otherwise report a wrong value. Deleting the data may not always clean up properly either, which is why you should create a new incognito window every time you want to compare results.
But in the end: yeah... maybe don't use the constructor after all. It's almost never a good thing.
I just wanted you to know what the "real" differences are when you use objects instead.
...also, if you use Node.js v8.serialize(value), the same example will yield a smaller Buffer when you reuse the same object instances (as the rest will just be pointers).
Another reason for using objects could be if you want to do something with WeakRef or WeakMap, where simple literals aren't accepted.
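A minimal sketch of that WeakMap point (my own example): weak collections only accept objects, so a primitive string won't do, but a String object will.
var weak = new WeakMap();

var key = new String("config");
weak.set(key, { theme: "dark" });          // fine -- the key is an object

try {
    weak.set("config", { theme: "dark" }); // throws -- primitives can't be weakly held
} catch (e) {
    console.log(e instanceof TypeError);   // true
}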
Do not use new when invoking a Number(). Source: JSLint...
The "Do not use {a} as a constructor" error is thrown when JSLint, JSHint or ESLint encounters a call to String, Number, Boolean, Math or JSON preceded by the new operator. (Source: LintErrors.com: Do not use {a} as a constructor
console.log(Number("3") == Number("3"))
console.log(new Number("3") == new Number("3"))
No jQuery please!
The Web says that the native String.concat() and join() functions of JS are to be avoided because of their poor performance, and a simple for() loop of += assignments should work a lot faster.
So I'm trying to create a function in pure JavaScript that will concatenate strings. This is somewhat how I envision it:
I want a main function concatenate() that will concatenate all passed arguments and additionally insert a variable string after each concatenated argument, except for the last one.
If the main function is called by itself and without the chained .using() function, then that variable string should be an empty one, which means no separators in the result.
I want a chained sub-function .using() that will tell the main concatenate() function what certain string other than the default '' empty string to add after each concatenated segment.
In theory, it should work like this:
concatenate('a','b','c'); /* result: 'abc' */
concatenate('a','b','c').using('-'); /* result: 'a-b-c' */
I want to avoid having two separate functions, like concatenate() and concatenateUsing(), because the concatenateUsing() variant would then have to utilize a special constant argument (like arguments[0] or arguments[arguments.length-1]) as the injected separator and that would be terribly untidy. Plus, I would always forget which one it was.
I also want to avoid having a superseding Concatenate object with two separate sub-methods, like Concatenate.strings() and Concatenate.using() or similar.
Here are some of my failed attempts so far...
Attempt #1:
function concatenate()
{
    var result = "";
    if (this.separator === undefined) { var separator = false; }
    for (var i = 0; i < arguments.length; i++)
    { result += arguments[i] + ((separator && (i < arguments.length - 1)) ? separator : ''); }
    this.using = function(x)
    {
        this.separator = x;
        return this;
    }
    return result;
}
So what I'm trying to do is:
Check if the separator variable is undefined; that means it wasn't set from a sub-method yet.
If it's undefined, declare it with the value false for later evaluation.
Run the concatenation, and if separator has another value than false then use it in each concatenation step - as long as it's not the last iteration.
Then return the result.
The sub-method .using(x) should somewhere along the way set the
value of the separator variable.
Naturally, this doesn't work.
Attempt #2:
var concatenate = function()
{
    var result = "";
    var separator = "";
    for (var i = 0; i < arguments.length; i++)
    { result += arguments[i] + ((separator && (i < arguments.length - 1)) ? separator : ''); }
    return result;
}
concatenate.prototype.using = function(x)
{
    this.separator = x;
    return this;
}
It also doesn't work. I assume that when this is returned from the using() sub-method, the var separator="" of the main concatenate() function just overwrites the value with "" again.
I tried doing this 4 or 5 different ways now, but I don't want to bore you with all the others as well.
Does anyone know a solution for this puzzle?
Thanks a lot in advance!
What you are trying to do is impossible.
You cannot chain something to a method call that returns a primitive, because primitives do not have (custom) methods1.
And you cannot make the first function return different things depending on whether something is chained or not, because it doesn't know about its call context and has to return the result before the method call is evaluated.
Your best bet is to return an object that can be stringified using a custom toString method, and also offers that using thing. It would be something along the lines of
function concatenate() {
return {
args: Array.from(arguments), // ES6 for simplicity
using: function(separator) {
return this.args.join(separator);
},
toString: function() {
return this.args.join("");
}
};
}
console.log(String(concatenate('a','b','c'))); // result: 'abc'
// alternatively, use ""+… or explicitly call the ….toString() method
console.log(concatenate('a','b','c').using('-')); // result: 'a-b-c'
1: No, you don't want to know workarounds.
If JavaScript will readily coerce between primitives and objects, why does adding a property to a primitive result in undefined?
var a = "abc";
var b = a.length
console.log(b); // outputs 3
Does coercion allow me to assign values to primitives? If not, why?
Does coercion allow me to assign values to primitives?
Yes. The primitive is wrapped in an object, and a property is created on that. No exception will be thrown.
Why does adding a property to it result in undefined?
The assignment itself does evaluate to the value:
var str = "abc";
console.log(str.someProperty = 5); // 5
Yet, what you're asking about is getting a property from a primitive. This will return undefined, since the primitive is wrapped in a new wrapper object - not the one you assigned the property on:
console.log(str.someProperty); // undefined
It only works for special properties like .length that are created with the object, or inherited ones like slice or charAt methods (see the docs for those).
If you wanted such a thing, you'd need to create the wrapper object explicitly and store it somewhere:
var strObj = new String("abc");
strObj.someProperty = 5;
console.log(strObj.someProperty); // 5
// coercion and the `toString` method will even make the object act like a string
console.log(strObj + "def"); // "abcdef"
Why don't you do it like this?
var abc=Number(3);
abc.foo="bar";
abc; //3
abc.foo; //bar
@Bergi indeed, sorry for this. I missed the new keyword. Here it is:
var abc=new Number(3);
abc.foo="bar";
abc; //3
abc.foo; //bar
At least it works just right. I don't know what else someone may need :)
primitives... coercion... blah.
Sorry for the newbie question. Can you please explain what the difference is between:
1. var a = [];
a['b'] = 1;
2. var a = {};
a['b'] = 1;
I could not find an article on the internet, so I'm asking here.
Literals
The [] and {} are called the array and object literals respectively.
var x = [] is short for var x = new Array();
and var y = {} is short for var y = new Object();
Arrays
Arrays are structures with a length property. You can access values via their numeric index.
var x = [] or var x = new Array();
x[0] = 'b';
x[1] = 'c';
And if you want to list all the properties you do:
for(var i = 0; i < x.length; i++)
console.log(x[i]);// numeric index based access.
Performance tricks and gotchas
1. Inner-caching the length property
The standard array iteration:
for (var i = 0; i < arr.length; i++) {
// do stuff
};
Little known fact: In the above scenario, the arr.length property is read at every step of the for loop. Just like any function you call there:
for (var i = 0; i < getStopIndex(); i++) {
// do stuff
};
This decreases performance for no reason. Inner caching to the rescue:
for (var i = 0, len = arr.length; i < len; i++) {
// enter code here
};
Here's proof of the above.
2. Don't specify the Array length in the constructor.
// doing this:
var a = new Array(100);
// is very pointless in JS. It will result in an array with 100 undefined values.
// not even this:
var a = new Array();
// is the best way.
var a = [];
// using the array literal is the fastest and easiest way to do things.
Test cases for array definition are available here.
3. Avoid using Array.prototype.push(arr.push)
If you are dealing with large collections, direct assignment is faster than using the Array.prototype.push(); method.
myArray[i] = 0; is faster than myArray.push(0);, according to jsPerf.com test cases.
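For illustration, the two forms being compared look like this (a rough sketch; actual timings depend on the engine, so measure with your own data):
var byIndex = [];
for (var i = 0; i < 100000; i++) {
    byIndex[i] = 0;   // direct index assignment
}

var byPush = [];
for (var j = 0; j < 100000; j++) {
    byPush.push(0);   // Array.prototype.push
}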
4. It is wrong to use arrays for associative assignments.
The only reason it works is because Array extends the Object class inside the core of the JS language. You could just as well use a Date() or RegExp() object, for instance. It won't make a difference.
x['property'] = someValue MUST always be used with Objects.
Arrays should only have numeric indexes. SEE THIS, the Google JS development guidelines! Avoid for (x in arr) loops or arr['key'] = 5;.
This can be easily backed up, look HERE for an example.
var x = [];
console.log(Object.prototype.toString.call(x));
will output: [object Array]
This reveals the core language's 'class' inheritance pattern.
var x = new String();
console.log(Object.prototype.toString.call(x));
will output [object String].
5. Getting the minimum and maximum from an array.
A little known, but really powerful trick:
function arrayMax(arr) {
return Math.max.apply(Math, arr);
};
, respectively:
function arrayMin(arr) {
return Math.min.apply(Math, arr);
};
Objects
With an object you can only do:
var y = {} or var y = new Object();
y['first'] = 'firstValue' is the same as y.first = 'firstValue', which is not what arrays are meant for. Objects are designed for associative access with string keys.
And the iteration is something like this:
for (var property in y) {
    if (y.hasOwnProperty(property)) {
        console.log(y[property]);
    }
}
Performance tricks and gotchas
1. Checking if an object has a property.
Most people use Object.prototype.hasOwnProperty. Unfortunately that often gives erroneous results leading to unexpected bugs.
Here's a good way to do it:
function containsKey(obj, key) {
return typeof obj[key] !== 'undefined';
};
2. Replacing switch statements.
One of the simple but efficient JS tricks is switch replacement.
switch (someVar) {
case 'a':
doSomething();
break;
case 'b':
doSomethingElse();
break;
default:
doMagic();
break;
};
In most JS engines the above is painfully slow. When you are looking at three possible outcomes, it doesn't make a difference, but what if you had tens or hundreds?
The above can easily be replaced with an object. Don't add the trailing (), this is not executing the functions, but simply storing references to them:
var cases = {
'a': doSomething,
'b': doSomethingElse,
'c': doMagic
};
Instead of the switch:
var x = ???;
if (containsKey(cases, x)) {
cases[x]();
} else {
console.log("I don't know what to do!");
};
3. Deep-cloning made easy.
function cloneObject(obj) {
var tmp = {};
for (var key in obj) {
tmp[key] = fastDeepClone(obj[key]);
};
return tmp;
}
function cloneArr(arr) {
var tmp = [];
for (var i = 0, len = arr.length; i < len; i++) {
tmp[i] = fastDeepClone(arr[i]);
}
return tmp;
}
function deepClone(obj) {
return JSON.parse(JSON.stringify(obj));
};
function isArray(obj) {
return obj instanceof Array;
}
function isObject(obj) {
var type = typeof obj;
return type === 'object' && obj !== null || type === 'function';
}
function fastDeepClone(obj) {
if (isArray(obj)) {
return cloneArr(obj);
} else if (isObject(obj)) {
return cloneObject(obj);
} else {
return obj;
};
};
HERE is the deep clone function in action.
Auto-boxing
As a dynamically typed language, JavaScript is limited in terms of native object types:
Object
Array
Number
Boolean
Date
RegExp
Error
Null is not a type, typeof null is object.
What's the catch? There is a strong distinction between primitive and non-primitive objects.
var s = "str";
var s2 = new String("str");
They do the same thing, you can call all string methods on s and s2.
Yet:
typeof s == "string"; // raw data type
typeof s2 == "object"; // auto-boxed to the non-primitive wrapper type
Object.prototype.toString.call(s2); // "[object String]"
You may hear in JS everything is an object. That's not exactly true, although it's a really easy mistake to make.
In reality there are 2 types, primitives and objects, and when you call s.indexOf("c"), the JS engine will automatically convert s to its non-primitive wrapper type, in this case object String, where all the methods are defined on the String.prototype.
This is called auto-boxing. Calling Object.prototype.valueOf on a primitive is one way to force the cast from primitive to non-primitive. It's the same behaviour a language like Java has for many of its own primitives, specifically the pairs int - Integer, double - Double, float - Float, etc.
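A small illustration of the boxing and unboxing described above (my own sketch; Object(prim) would show the same thing):
var prim = "str";
var boxed = Object.prototype.valueOf.call(prim); // forces the String wrapper object

typeof prim;              // "string"
typeof boxed;             // "object"
boxed instanceof String;  // true
boxed.valueOf() === prim; // true -- valueOf() on the wrapper returns the primitive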
Why should you care?
Simple:
function isString(obj) {
return typeof obj === "string";
}
isString(s); // true
isString(s2); // false
So if s2 was created with var s2 = new String("test") you are getting a false negative, even for an otherwise conceivably simple type check. More complex objects also bring with themselves a heavy performance penalty.
A micro-optimization as some would say, but the results are truly remarkable, even for extremely simple things such as string initialisation. Let's compare the following two in terms of performance:
var s1 = "this_is_a_test"
and
var s2 = new String("this_is_a_test")
You would probably expect matching performance across the board, but rather surprisingly the latter statement using new String is 92% slower than the first one, as shown here.
Functions
1. Default parameters
The || operator is the simplest possible way of defaulting. Why does it work? Because of truthy and falsy values.
When evaluated in a logical condition, undefined and null values will autocast to false.
A simple example(code HERE):
function test(x) {
var param = x || 5;
// do stuff with x
};
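One caveat worth noting (my addition, not from the original text): || also replaces legitimate falsy arguments such as 0, '' or false, so test(0) above ends up with param set to 5 as well. A stricter default only falls back for undefined:
function testStrict(x) {
    var param = (typeof x === 'undefined') ? 5 : x; // only the missing argument gets the default
    return param;
}
testStrict(0);         // 0
testStrict(undefined); // 5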
2. OO JS
The most important thing to understand is that the JavaScript this object is not immutable. It is simply a reference that can be changed with great ease.
In OO JS, we rely on the new keyword to guarantee implicit scope in all members of a JS Class. Even so, you can easily change the scope, via Function.prototype.call and Function.prototype.apply.
Another very important thing is the object's prototype. Non-primitive values nested on an object's prototype are shared, while primitive ones are not.
Code with examples HERE.
A simple class definition:
function Size(width, height) {
this.width = width;
this.height = height;
};
A simple size class, with two members, this.width and this.height.
In a class definition, whatever has this in front of it, will create a new reference for every instance of Size.
Adding methods to classes, and why the "closure" pattern and other "fancy name" patterns are pure fiction
This is perhaps where the most malicious JavaScript anti-patterns are found.
We can add a method to our Size class in two ways.
Size.prototype.area = function() {
return this.width * this.height;
};
Or:
function Size2(width, height) {
this.width = width;
this.height = height;
this.area = function() {
return this.width * this.height;
}
}
var s = new Size(5, 10);
var s2 = new Size2(5, 10);
var s3 = new Size2(5, 10);
var s4 = new Size(5, 10);
// Looks identical, but lets use the reference equality operator to test things:
s2.area === s3.area // false
s.area === s4.area // true
The area method of Size2 is created for every instance.
This is completely useless and slow, A LOT slower. 89% to be exact. Look HERE.
The above statement is valid for about 99% of all known "fancy name" patterns. Remember the single most important thing in JS: all of those are nothing more than fiction.
There are strong architectural arguments that can be made, mostly revolved around data encapsulation and the usage of closures.
Such things are unfortunately absolutely worthless in JavaScript, the performance loss simply isn't worth it. We are talking about 90% and above, it's anything but negligible.
3. Limitations
Because prototype definitions are shared among all instances of a class, you won't be able to put a non-primitive settings object there.
Size.prototype.settings = {};
Why? size.settings will be the same for every single instance.
So what's with the primitives?
Size.prototype.x = 5; // won't be shared, because it's a primitive.
// see auto-boxing above for primitive vs non-primitive
// if you come from the Java world, it's the same as int and Integer.
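A quick sketch of that sharing behaviour (my own illustration, reusing the Size class from above):
Size.prototype.settings = {};  // one object, shared by all instances
Size.prototype.x = 5;          // primitive: reads are shared, writes create an own property

var a = new Size(1, 2);
var b = new Size(3, 4);

a.settings.theme = 'dark';
console.log(b.settings.theme); // 'dark' -- both instances see the same object

a.x = 10;                      // creates a.x on the instance, shadowing the prototype
console.log(b.x);              // 5  -- b is unaffected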
The point:
The average JS guy will write JS in the following way:
var x = {
doStuff: function(x) {
},
doMoreStuff: 5,
someConstant: 10
}
Which is fine (fine = poor quality, hard-to-maintain code), as long as you understand that it is a singleton object, and that those functions should only be used in the global scope, without referencing this inside them.
But then it gets to absolutely terrible code:
var x = {
width: 10,
height: 5
}
var y = {
width: 15,
height: 10
}
You could have gotten away with: var x = new Size(10, 5); var y = new Size(15, 10);.
The literal version takes longer to type, and you need to type the same thing every time. And again, it's A LOT SLOWER. Look HERE.
Poor standards throughout
This can be seen almost anywhere:
function() {
// some computation
var x = 10 / 2;
var y = 5;
return {
width: x,
height: y
}
}
Again with the alternative:
function() {
var x = 10 / 2;
var y = 5;
return new Size(x, y);
};
The point: USE CLASSES WHEREVER APPROPRIATE!!
Why? Example 1 is 93% Slower. Look HERE.
The examples here are trivial, but they illustrate something being ignored in JS, OO.
It's a solid rule of thumb not to hire people who think JS doesn't have classes, and not to take jobs from recruiters talking about "Object Orientated" JS.
Closures
A lot of people prefer them to the above because it gives them a sense of data encapsulation. Besides the drastic 90% performance drop, here's something equally easy to overlook. Memory leaks.
function Thing(someParam) {
this.someFn = function() {
return someParam;
}
}
You've just created a closure for someParam. Why is this bad? First, it forces you to define class methods as instance properties, resulting in the big performance drop.
Second, it eats up memory, because a closure will never get dereferenced. Look here for proof. Sure, you do get some fake data encapsulation, but you use three times the memory with a 90% performance drop.
Or you can annotate the member as @private and get away with an underscore-prefixed method name.
Other very common ways of generating closures:
function bindSomething(param) {
    someDomElement.addEventListener("click", function() {
        if (param) {
            // do something
        } else {
            // do something else
        }
    }, false);
}
param is now captured in a closure! How do you get rid of it? There are various tricks, some found here. The best possible approach, albeit a more rigorous one, is to avoid using anonymous functions altogether, but this would require a way to specify scopes for event callbacks.
Such a mechanism is only available in Google Closure, as far as I know.
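Outside Google Closure, one rough way to avoid the anonymous-closure problem is to keep a named handler around and read its data from the element instead of a captured variable (a sketch; someDomElement is assumed to exist already):
function onSomeClick(event) {
    var param = event.currentTarget.getAttribute('data-param'); // no closure over param
    if (param) {
        // do something
    } else {
        // do something else
    }
}

someDomElement.setAttribute('data-param', 'value');
someDomElement.addEventListener('click', onSomeClick, false);
// later, the listener (and anything it references) can be released:
someDomElement.removeEventListener('click', onSomeClick, false);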
The singleton pattern
Ok, so what do I do for singletons? I don't want to store random references. Here's a wonderful idea shamelessly stolen from Google Closure's base.js
/**
 * Adds a {@code getInstance} static method that always returns the same instance
 * object.
 * @param {!Function} ctor The constructor for the class to add the static
 *     method to.
 */
function addSingletonGetter(ctor) {
ctor.getInstance = function() {
if (ctor.instance_) {
return ctor.instance_;
}
return ctor.instance_ = new ctor;
};
};
It's Java-esque, but it's a simple and powerful trick. You can now do:
project.some.namespace.StateManager = function() {
this.x_ = 5;
};
project.some.namespace.StateManager.prototype.getX = function() { return this.x_; };
addSingletonGetter(project.some.namespace.StateManager);
How is this useful? Simple. In all other files, every time you need to reference project.some.namespace.StateManager, you can write:
project.some.namespace.StateManager.getInstance(). This is more awesome than it looks.
You can have global state with the benefits of a class definition (inheritance, stateful members, etc.) and without polluting the global namespace.
The single instance pattern
You may now be tempted to do this:
function Thing() {
this.someMethod = function() {..}
}
// and then use it like this:
Thing();
someMethod(); // works, because someMethod leaked onto the global object
That is another big no-no in JavaScript. Remember, the this object is only guaranteed to be immutable when the new keyword is used. The magic behind the above code is interesting. this is actually the global scope, so without meaning to you are adding methods to the global object. And you guessed it, those things never get garbage collected.
There is nothing telling JavaScript to use something else. A function on its own doesn't have a scope. Be really careful what you do with static properties. To reproduce a quote I once read, the JavaScript global object is like a public toilet. Sometimes you have no choice but to go there, yet try and minimise contact with the surfaces as much as possible.
Either stick to the above Singleton pattern or use a settings object nested under a namespace.
Garbage collection in JavaScript
JavaScript is a garbage collected language, but JavaScript GC is often rather poorly understood. The point is again speed. This is perhaps all too familiar.
// This is the top of a JavaScript file.
var a = 5;
var b = 20;
var x = {};//blabla
// more code
function someFn() {..}
That is bad, poor-performance code. The reason is simple: JS will garbage collect a variable and free up the heap memory it holds only when that variable goes out of scope, i.e. when there are no references to it anywhere in memory.
For example:
function test(someArgs) {
var someMoreStuf = // a very big complex object;
}
test();
Three things:
Function arguments are transformed into local definitions
Inner declarations are hoisted.
All the heap memory allocated for inner variables is freed up when the function finishes execution.
Why?
Because they no longer belong to the "current" scope. They are created, used, and destroyed. There are no closures either, so all the memory you've used is freed up through garbage collection.
For that reason, your JS files should never look like this; everything in the global scope just keeps occupying memory.
var x = 5;
var y = {..}; //etc;
Alright, now what?
Namespaces.
JS doesn't have namespaces per se, so this isn't exactly a Java equivalent, yet from a codebase administration perspective you get what you want.
var myProject = {};
myProject.settings = {};
myProject.controllers = {};
myProject.controllers.MainController = function() {
// some class definition here
}
Beautiful. One global variable. Proper project structure.
With a build phase, you can split your project across files, and get a proper dev environment.
There's no limit to what you can achieve from here.
Count your libraries
Having had the pleasure of working on countless codebases, the last and most important argument is to be very mindful of your code dependencies. I've seen programmers casually adding jQuery into the mix of the stack for a simple animation effect and so forth.
Dependency and package management is something the JavaScript world hadn't addressed for the longest time, until the creation of tools like Bower. Browsers are still somewhat slow, and even when they're fast, internet connections are slow.
At Google, for instance, they go to the lengths of writing entire compilers just to save bytes, and that approach is in many ways the right mentality to have in web programming. And I hold Google in very high regard, as their JS library powers apps like Google Maps, which are not only insanely complex but also work everywhere.
Arguably JavaScript has an immense variety of tools available, given its popularity, its accessibility, and, to some extent, the very low quality bar the ecosystem as a whole is willing to accept.
For Hacker News subscribers, a day doesn't go by without a new JS library out there, and they are certainly useful, but one cannot ignore the fact that many of them re-implement the exact same concerns without any strong notion of novelty or any killer ideas or improvements.
It's a strong rule of thumb to resist the urge of mixing in all the new toys before they have the time to prove their novelty and usefulness to the entire ecosystem and to strongly distinguish between Sunday coding fun and production deployments.
If your <head></head> tag is longer than this post, you're doing it all wrong.
Testing your knowledge of JavaScript
A few "perfectionist" level tests:
http://perfectionkills.com/javascript-quiz/, thanks to Kangax.
http://javascript-puzzlers.herokuapp.com/
A collection of objects? Use this notation (JavaScript arrays):
var collection = [ {name:"object 1"} , {name:"object 2"} , {name:"object 3"} ];
To put a new element into your collection:
collection.push( {name:"object 4"} );
In JavaScript, all objects behave like associative arrays. In the first case you create an array; in the second case you create an empty plain object, which can also be used as an associative container :).
So in JS you can use bracket notation on any object, as with an array:
var a = {};
a["temp"] = "test";
And as object:
var a = {};
a.temp = "test";
I would use an array of objects:
collection = [
{ "key":"first key", "value":"first value" },
{ "key":"second key", "value":"second value" }
];
etc
1) Is an Array
2) Is an Object
With an Array, everything works as it does in other languages.
The same goes for an Object.
- You can get a value with a.b == 1
- But in JS you can also get a value with the syntax a["b"] == 1
This can be useful when the key looks like "some key"; in that case you can't use dot notation ("chaining").
It's also useful if the key is stored in a variable.
You can write something like this:
function some(f) {
    var obj = { name: "Boo", age: "foo" }, key;
    if (f == true) {
        key = "name";
    } else {
        key = "age";
    }
    return obj[key];
}
But I want to use it as a collection; which one should I choose?
That depends on what data you want to store.
I have a Javascript object that I'm trying to use as a "hashmap". The keys are always strings, so I don't think I need anything as sophisticated as what's described in this SO question. (I also don't expect the number of keys to go above about 10 so I'm not particularly concerned with lookups being O(n) vs. O(log n) etc.)
The only functionality I want that built-in Javascript objects don't seem to have, is a quick way to figure out the number of key/value pairs in the object, like what Java's Map.size returns. Of course, you could just do something like:
function getObjectSize(myObject) {
var count=0
for (var key in myObject)
count++
return count
}
but that seems kind of hacky and roundabout. Is there a "right way" to get the number of fields in the object?
There is an easier way spec'd in ECMAScript 5.
Object.keys(..) returns an array of the object's own enumerable keys. .length can be called on that. Try it in Chrome:
Object.keys({a: 1, b: 2}).length; // 2
Note that all objects are basically key/value pairs in JavaScript, and they are also very extensible. You could extend the Object.prototype with a size method and get the count there. However, a much better solution is to create a HashMap type interface or use one of the many existing implementations out there, and define size on it. Here's one tiny implementation:
function HashMap() {}
HashMap.prototype.put = function(key, value) {
this[key] = value;
};
HashMap.prototype.get = function(key) {
if(typeof this[key] == 'undefined') {
throw new ReferenceError("key is undefined");
}
return this[key];
};
HashMap.prototype.size = function() {
var count = 0;
for(var prop in this) {
// hasOwnProperty check is important because
// we don't want to count properties on the prototype chain
// such as "get", "put", "size", or others.
if(this.hasOwnProperty(prop)) {
count++;
}
}
return count;
};
Use as (example):
var map = new HashMap();
map.put(someKey, someValue);
map.size();
A correction: you need to check myObject.hasOwnProperty(key) in each iteration, because there can be inherited attributes. For example, if you do Object.prototype.test = 'test' before the loop, test will also be counted.
And talking about your question: you can just define a helper function, if speed doesn't matter. After all, we define helpers for trim and other simple things. A lot of JavaScript is "kind of hacky and roundabout" :)
update
Failure example, as requested.
Object.prototype.test = 'test';
var x = {};
x['a'] = 1;
x['b'] = 2;
The count returned will be 3.
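For completeness, here is the helper from the question with that hasOwnProperty check added (a sketch):
function getObjectSize(myObject) {
    var count = 0;
    for (var key in myObject) {
        if (myObject.hasOwnProperty(key)) {
            count++;
        }
    }
    return count;
}

Object.prototype.test = 'test';
getObjectSize({ a: 1, b: 2 }); // 2, not 3
delete Object.prototype.test;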
You could also just do myObject.length (for arrays).
Never mind, see this: JavaScript object size
That's all you can do. Clearly, JavaScript objects are not designed for this. And this will only give you the number of enumerable properties. Try getObjectSize(Math).
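A quick check of that last point (my own example): Math's built-in properties are non-enumerable, so a for-in based count sees none of them, while getOwnPropertyNames does.
getObjectSize(Math);                     // 0
Object.keys(Math).length;                // 0  -- keys() also lists only enumerable own properties
Object.getOwnPropertyNames(Math).length; // 40+ depending on the engine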