This code from Ryan Niemeyer on this blog post declares an empty function (named result) and then adds properties to the function:
ko.dirtyFlag = function(root, isInitiallyDirty) {
    var result = function() {},
        _initialState = ko.observable(ko.toJSON(root)),
        _isInitiallyDirty = ko.observable(isInitiallyDirty);

    result.isDirty = ko.computed(function() {
        return _isInitiallyDirty() || _initialState() !== ko.toJSON(root);
    });

    result.reset = function() {
        _initialState(ko.toJSON(root));
        _isInitiallyDirty(false);
    };

    return result;
};
What advantage does this serve over simply creating an object and assigning the same properties before returning the object?
edit
In response to the comment requesting how I would expect it to look:
either declaring
var result={};
in the declarations, or as a style thing:
ko.dirtyFlag = function(root, isInitiallyDirty) {
    var _initialState = ko.observable(ko.toJSON(root)),
        _isInitiallyDirty = ko.observable(isInitiallyDirty);

    return {
        isDirty: ko.computed(function() {
            return _isInitiallyDirty() || _initialState() !== ko.toJSON(root);
        }),
        reset: function() {
            _initialState(ko.toJSON(root));
            _isInitiallyDirty(false);
        }
    };
};
but the exact form is irrelevant - what does returning an empty shell of a function provide to the consuming code/developer calling the function?
In the link you posted, the author states
When ko.toJS runs, it will just see a plain function and ignore it.
In other words, he is relying on the fact that the framework will ignore functions in this context, whereas if he had used an object the framework would not ignore it.
He never intends to call the function, just to use it as a place to store his dirty flag while tricking the knockout framework into ignoring it.
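For illustration, here is a minimal sketch of that effect (hedged; it assumes Knockout is loaded and ko.dirtyFlag is defined as above, and the viewModel shape is made up):
var viewModel = { name: ko.observable("Bob") };
viewModel.dirtyFlag = ko.dirtyFlag(viewModel);

// JSON serialization skips function-valued properties, so the dirty flag
// never pollutes the serialized view model:
console.log(ko.toJSON(viewModel)); // {"name":"Bob"}

// Had dirtyFlag been a plain object instead of a function, its isDirty/reset
// members would have been walked and serialized along with the rest.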
It's just another way to create an object; I do not believe there is any real difference between doing it one way or the other. Sometimes it is just a style preference, sometimes just the way a programmer likes to do something (just like using var that = this versus using a function's bind method: both are legitimate ways of passing context).
Here is a detailed post on creating objects in JavaScript from MDN
Creating an object literal and declaring an empty function are both ways to create an object in JavaScript. In JavaScript most things are objects, and there are many ways to create them; no one way is much better than another. Although, as of ECMAScript 5, another good option is Object.create.
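For example, a minimal sketch of Object.create (illustrative only; the names proto, greet and obj are made up):
// The prototype object carries the shared method.
var proto = {
    greet: function() { return 'hello ' + this.name; }
};

// Create a new object whose prototype is proto, then give it its own data.
var obj = Object.create(proto);
obj.name = 'world';

console.log(obj.greet()); // "hello world"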
Suppose I've a Set as a lookup table.
const myset = new Set(["key1", "key2", "key3", "keepMe"]);
I wanted to filter another array of some other keys (say mykeys) which are in myset.
const mykeys = ["ignoreMe", "keepMe", "ignoreMeToo", "key2"];
Question:
Why do I have to use
const filtered = mykeys.filter(k => myset.has(k))
instead of
const filtered = mykeys.filter(myset.has)
// TypeError: Method Set.prototype.has called on incompatible receiver undefined
i.e., why do I have to create an anonymous lambda function in filter? myset.has has the same signature (argument: element, return: boolean). A friend told me it's related to this.
Whereas mykeys.map(console.log) works without error (although it is not of much use).
I came across this article at MDN and I still don't get why "'myset' is not captured as this". I understand the workaround but not the "why". Can anyone explain it with some details and references in a human friendly way?
Update: Thank you all for the responses. Maybe I wasn't clear about what I'm asking. I do understand the workarounds.
@charlietfl understood. Here's his comment, the thing I was looking for:
Because filter() has no implicit this, whereas set.has needs to have the proper this context. Calling it inside an anonymous function and manually adding the argument makes the call self contained.
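As a minimal sketch of the workaround this describes (not part of the quoted comment), you can also fix the receiver explicitly with bind, so the method can be passed around as a plain callback:
const myset = new Set(["key1", "key2", "key3", "keepMe"]);
const mykeys = ["ignoreMe", "keepMe", "ignoreMeToo", "key2"];

// bind permanently ties `this` inside has to myset, so filter can call it safely
const filtered = mykeys.filter(myset.has.bind(myset));

console.log(filtered); // ["keepMe", "key2"]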
You could use thisArg of Array#filter with the set and the prototype of has as callback.
This pattern does not require a binding of an instance of Set to the prototype, because
If a thisArg parameter is provided to filter, it will be used as the callback's this value. Otherwise, the value undefined will be used as its this value. The this value ultimately observable by callback is determined according to the usual rules for determining the this seen by a function.
const
    myset = new Set(["key1", "key2", "key3", "keepMe"]),
    mykeys = ["ignoreMe", "keepMe", "ignoreMeToo", "key2"],
    filtered = mykeys.filter(Set.prototype.has, myset);

console.log(filtered);
This is a fundamental design decision dating back to the first definition of the JavaScript language.
Consider an object
var myObject = {
    someValue: 0,
    someFunction: function() {
        return this.someValue;
    }
};
Now, when you write
var myValue = myObject.someValue;
You get exactly what you have put in it, as if you had written
var myValue = 0;
Similarly, when you write
var myFunction = myObject.someFunction;
You get exactly what you have put in it, as if you had written
var myFunction = (function() {
    return this.someValue;
});
...except now, you are not in an object anymore. So this doesn't mean anything. Indeed, if you try
console.log(myFunction());
you will see
undefined
exactly as if you had written
console.log(this.someValue);
outside of any object.
So, what is this? Well, JavaScript decides it as follows:
If you write myObject.myFunction(), then when executing the myFunction() part, this is myObject.
If you just write myFunction(), then this is the current global object, which is generally window (not always, there are many special cases).
A number of functions can inject a this in another function (e.g. call, apply, map, ...)
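For example, a minimal sketch using the myObject defined above (assuming non-strict mode, as in the rest of this discussion):
var myFunction = myObject.someFunction;

console.log(myObject.someFunction());   // 0         - this is myObject
console.log(myFunction());              // undefined - this is the global object
console.log(myFunction.call(myObject)); // 0         - this is injected by call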
Now, why does it do this? The answer is that this is necessary for prototypes. Indeed, if you now define
var myDerivedObject = Object.create(myObject);
myDerivedObject.someValue = 42;
you now have an object based on myObject, but with its own someValue property:
console.log(myObject.someFunction()); // Shows 0
console.log(myDerivedObject.someFunction()); // Shows 42
That's because myObject.someFunction() uses myObject for this, while myDerivedObject.someFunction() uses myDerivedObject for this.
If this had been captured during the definition of someFunction, we would have obtained 0 in both lines, but this would also have made prototypes much less useful.
I'm using a function to create other functions that will be used as document event handlers, so the signature of the returned functions must match that of the event handler, e.g. function (event, ui).
The code is as follows
function createEventHandler(refFn, additionalMods) {
    var createdEvent = function (event, ui) {
        // Merge the properties of additionalMods with an object { event, ui }
        // call function with refFn and the resulting object as parameters
    };

    createdEvent.ready = true;
    return createdEvent;
}
I removed the body of the generated function for clarity, but the refFn and additionalMods variables are processed inside it.
Now when processing the user input I call the following line
var handler = events[i].handler.ready ?
    events[i].handler :
    createEventHandler(events[i].handler);
Basically, I process an array of data where each item has a property called handler, which is either a plain function or the result of calling createEventHandler.
The bottom line is: if a function in the array has already been processed, pass it along 'as is'; if not, process it and store the result, so that in the end all the functions are processed.
Right now I'm attaching a property called ready to signal that the function was processed, since it turns out there is no reliable method to obtain a function's name according to this post, but this doesn't feel right.
I also tried to use the prototype for comparison, but this doesn't work because a new function is created every time inside a scope, so I cannot get a reference for comparison.
I even tried
events[i].handler.prototype == createEventHandler().prototype
but of course it didn't work.
Does anyone know how I can generate these functions and have a reliable way to compare them, to know whether they were generated by my code or not?
{Edit}
To add further clarification
All the code above is in the same scope, meaning the code that processes the array has visibility of the createEventHandler function. I can modify this code all I want; what I cannot modify is the content of the array once it is created. I have to iterate over it as it comes and generate or not based on whether the work was done already.
The createEventHandler is also exposed to the user through an API function. Let's say the user calls evt.generate('model'): this will generate an event handler that does a specific piece of work, using the createEventHandler function under the hood. You can then call evt.generate('bind') and another handler will be generated that does different work.
A lot of behaviour is provided to the users by default, but they can choose to add their own custom behaviour if none of the predefined ones fit the task.
All the data is declared once, but the content of the array is disparate because I can write the following and it is supposed to work. I omitted most of the other irrelevant properties.
var events = [
    {
        event: 'pageshow',
        handler: evt.generate('model')
    },
    {
        event: 'pagebeforeshow',
        handler: function (params, last) {
            // My custom handler for this case
        }
    }
];
After looping over the array, all the handlers are in the same format and ready to be bound. The createEventHandler is necessary in all cases because I use dependency injection to supply data for those parameters, so it's basically "if not called already, then call it and do all the dependency injection work"; this is why I need to compare the functions.
I found an elegant solution and I post it here in case someone runs into the same problem.
The problem with my code is that a user can write a function with a property named ready, which is a common name, and a value of true, which is also a common value, and the processing will fail.
Maybe the user didn't write the property; maybe it is present because it is inherited from its prototype. The goal is to be as certain as possible that the code you are processing was the output of your own functions, which in JavaScript is almost an impossible task.
The most accurate way that I found was when I was reading about equality comparisons and sameness, which tells me that a non-primitive object is only equal to itself when you use the === equality operator. That is:
undefined === undefined => true
null === null => true
"foo" === "foo" => true
0 === 0 => true
But
{a:1} === {a:1} => false
So you can set a property ready equal to an object, and as long as you hold the reference to that object, all the comparisons will fail unless the property was set by you.
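A minimal sketch of that sentinel-object idea (the READY_TOKEN name is made up for illustration; the rest reuses the names from the question):
var READY_TOKEN = {}; // only our code holds a reference to this object

function createEventHandler(refFn, additionalMods) {
    var createdEvent = function (event, ui) { /* ... */ };
    createdEvent.ready = READY_TOKEN; // tag with our private token
    return createdEvent;
}

// Only functions we tagged can pass this test, because any other object,
// even one that looks identical, is not === to READY_TOKEN.
var alreadyProcessed = events[i].handler.ready === READY_TOKEN;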
This is good, but it feels bad to have an extra property called ready holding a random object just for comparison; maybe there is a better way, and yes, there is.
In JavaScript there are no classes, but there is prototypal inheritance, so you can write a function, use one of the inheritance patterns to set this function as the ancestor of yours, and use that for comparisons.
var parentEvent = function () {}; // This is the function that will be used as parent

function createEventHandler(refFn, additionalMods) {
    var createdEvent = function (event, ui) {
        // Merge the properties of additionalMods with an object { event, ui }
        // call function with refFn and the resulting object as parameters
    };

    //createdEvent.ready = true  -- this is no longer necessary

    // This is the "sharing prototype" inheritance pattern
    createdEvent.prototype = parentEvent.prototype;

    return createdEvent;
}
Now the prototype of your returned function is pointing to a function that you hold in a variable. Then you can compare them using
// Replace the property comparison with the prototype comparison
var handler = events[i].handler.prototype === parentEvent.prototype ?
    events[i].handler :
    createEventHandler(events[i].handler);
This is not foolproof, I know, but it is good enough for most cases.
{Edit}
Thanks to @Bergi for pointing out that this is not inheritance in the strict sense of the word. The reason is that most JavaScript inheritance patterns require constructor functions, and I'm using a factory function here. To make it work you have to write something like this:
function createEventHandler(refFn, additionalMods) {
    // Same code as before
    createdEvent.prototype = parentEvent.prototype;
    return new createdEvent();
}
And the comparison is done with
events[i].handler.__proto__ === parentEvent.prototype
Note the difference in the way the function is returned and the way the new prototype property is accessed. This is good when you do have other properties in the parent function that you want to return as well.
Do the contents of the events array change during the execution of your program, aside from replacing them with the converted versions?
If not, a simple solution is just to make a copy of the handlers before you start converting them, and use that for the comparison:
// keep this somewhere that you can access it later
var origHandlers = events.map(function (e) { return e.handler; });

var handler = events[i].handler === origHandlers[i] ?
    createEventHandler(events[i].handler) :
    events[i].handler;
I was reading the source code for pallet.js and came across this.
var ret = (function(proto) {
    return {
        slice: function(arr, opt_begin, opt_end) {
            return proto.slice.apply(arr, proto.slice.call(arguments, 1));
        },
        extend: function(arr, arr2) {
            proto.push.apply(arr, arr2);
        }
    };
})(Array.prototype);

var slice = ret.slice;
var extend = ret.extend;
Why is this necessary? Why could they not simply write this:
var slice = function(arr, opt_begin, opt_end) {
    return Array.prototype.slice.apply(arr, [opt_begin, opt_end]);
};

var extend = function(arr, arr2) {
    return Array.prototype.push.apply(arr, arr2);
};
EDIT 1:
In response to the duplicate question: I don't think it is a duplicate, but that question definitely does address mine. So it is an optimization. But won't each one only be evaluated once? Is there really a significant improvement here for two function calls?
Also, if we are worried about performance, why are we calling proto.slice.call(arguments, 1) instead of constructing the array of two elements by hand, [opt_begin, opt_end]? Is slice faster?
Because the syntax is just so much cooler. Plus you can rationalize its use by telling yourself that it's more DRY: you didn't have to type Array.prototype twice.
I can't be sure what the original rationale behind that code was (only the author knows), but I can see a few differences:
proto is a closed-over local variable, whereas Array is a global. It's possible for a smart enough JavaScript engine to optimize access because proto is never reassigned, so it could even be captured by value rather than by reference. proto.slice can be faster than Array.prototype.slice because one fewer lookup is needed.
passing opt_begin and opt_end as undefined is not, in general, the same as not passing them. The called function can tell whether a parameter was actually passed and happens to be undefined, or whether it wasn't passed at all. Using proto.slice.call(arguments, 1) ensures that the parameters are forwarded to slice only if they were actually passed to the closure, as the sketch below illustrates.
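A minimal sketch of that last point (illustrative only, not from the pallet.js source; countArgs is a made-up name):
function countArgs(a, b) {
    // arguments.length reflects how many arguments were actually passed,
    // regardless of how many of them are undefined.
    return arguments.length;
}

console.log(countArgs(1));            // 1 - only one argument was passed
console.log(countArgs(1, undefined)); // 2 - undefined was explicitly passed

// proto.slice.call(arguments, 1) forwards only the arguments that were really
// given, so slice never receives explicitly passed undefined values.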
I am receiving an ajax feed of documents that looks something like this (much simplified):
aDocs = [{title:'new doc', ext:'pdf'}, {title:'another', ext:'xlsx'}];
I am going to iterate through the aDocs array and display information about each doc, while adding some methods to each doc that will allow for modifying the HTML for display and making API calls to update the database.
I read here that in order to add methods to existing objects, you can use the __proto__ attribute. Something along the lines of:
function Doc() {}

Doc.prototype.getExt = function() { return this.ext; };
Doc.prototype.getTitle = function() { return this.title; };

for (var i = 0; i < aDocs.length; i++) {
    aDocs[i].__proto__ = Doc.prototype;
}
According to that article above, this isn't official JavaScript, isn't supported by IE (and never will be), and will likely be deprecated in WebKit browsers.
Here's an alternative stab at it:
function getExt() { return this.ext; }
function getTitle() { return this.title; }

for (var i = 0; i < aDocs.length; i++) {
    aDocs[i].getExt = getExt;
    aDocs[i].getTitle = getTitle;
}
Is this second alternative viable and efficient? Or am I re-creating those functions and thereby creating redundant overhead?
Again the above examples are simplified (I know aDocs[i].ext will solve the problem above, but my methods for display and API calls are more complicated).
Is this second alternative viable and efficient?
Yes.
Or am I re-creating those functions and thereby creating redundant overhead?
No, the functions are reused, not re-created. All of the objects will share the single copy of the getExt and getTitle functions. During the call to the functions from (say) aDocs[1], within the call, this will refer to the object the function is attached to. (This only applies if you call it as part of an expression retrieving it from the object, e.g., var title = aDocs[1].getTitle();)
Alternately, if you liked, you could create new objects which share a prototype and copy the properties from the aDocs objects to the new objects, but you've asked about assigning new functions to existing objects, so...
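For completeness, a minimal sketch of that alternative (assuming Object.assign is available; docProto and wrapped are made-up names):
// A shared prototype object carrying the methods.
var docProto = {
    getExt: function() { return this.ext; },
    getTitle: function() { return this.title; }
};

// Build new objects that share docProto, copying each doc's own data across.
var wrapped = aDocs.map(function (doc) {
    return Object.assign(Object.create(docProto), doc);
});

console.log(wrapped[0].getExt()); // "pdf"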
Augmenting (adding methods to) the prototype is often the best way to go, but since you're dealing with object literals (or JSON.parse results), you'd have to either augment Object.prototype, which is not done, or create a wrapper constructor with the methods you need attached to its prototype. The problem would then be getting to grips with this in that case. I'd leave things as they are and use the second approach: a simple loop will do just fine. Besides, prototype methods are (marginally) slower anyway...
The function objects themselves are created as soon as possible (if they are defined in the global namespace, they're created as soon as the script is parsed). By simply looping through those objects and assigning a reference to a function to each object, you're not creating additional functions at all.
Just try this:
var someObj = {name: 'someObj'},
    anotherObj = {name: 'anotherObj'},
    someFunction = function() {
        console.log(this);
    };

someObj.func = someFunction;
anotherObj.func = someFunction;

//or, shorter
someObj.func = anotherObj.func = someFunction;

//therefore:
console.log(someObj.func === anotherObj.func); // logs true! there is only 1 function object

someObj.func();    // logs {name: 'someObj'}
anotherObj.func(); // logs {name: 'anotherObj'}
There have been posted many questions (and answers) that deal with this matter more in-depth, so if you're interested:
Objects and functions in javascript
Print subclass name instead of 'Class' when using John Resig's JavaScript Class inheritance implementation
What makes my.class.js so fast?
What are the differences between these three patterns of "class" definitions in JavaScript?
Are all more or less related to your question
In this case, I would just pass the object to the constructor of Doc:
function Doc(obj) {
    this.obj = obj;
}

Doc.prototype.getExt = function() {
    return this.obj.ext;
};

Doc.prototype.getTitle = function() {
    return this.obj.title;
};

var docs = [];
for (var i = 0; i < aDocs.length; i++) {
    docs.push(new Doc(aDocs[i]));
}
There are two problems with your approach:
You have to copy each method individually for every instance.
Your "class" is not documented anywhere, making it a class makes it clearer that your object has those methods.
I saw these 2 basic ways of namespacing in JavaScript.
Using object:
var Namespace = { };
Namespace.Class1 = function() { ... };
Using function:
function Namespace() { };
Namespace.Class1 = function() { ... };
How do they differ? Thank you.
As others have pointed out, a function is an object so the two forms can be interchangeable. As a side note, jQuery utilizes the function-as-namespace approach in order to support invocation and namespacing (in case you're wondering who else does that sort of thing or why).
However with the function-as-namespace approach, there are reserved properties that should not be touched or are otherwise immutable:
function Namespace(){}
Namespace.name = "foo"; // does nothing, "name" is immutable
Namespace.length = 3; // does nothing, "length" is immutable
Namespace.caller = "me"; // does nothing, "caller" is immutable
Namespace.call = "1-800-555-5555" // prob not a good idea, because...
// some user of your library tries to invoke the
// standard "call()" method available on functions...
Namespace.call(this, arg); // Boom, TypeError
These properties do not intersect with Object so the object-as-namespace approach will not have these behaviours.
The first one declares a simple object while the second one declares a function. In JavaScript, functions are also objects, so there is almost no difference between the two except that in the second example you can call Namespace() as a function.
Well, if all you're doing is using that "Namespace" thing as a way to "contain" other names, then those two approaches are pretty much exactly the same. A function instance is just an object, after all.
Now, generally one would use a function like that if the function itself were to be used as a constructor, or as a "focal point" for a library (as is the case with jQuery).
They don't. Functions are "first-class objects". All this means is that conceptually and internally they are stored and used in the same ways. Casablanca's point about one difference, that you can call it as a function, is a good one though. You can also test whether or not it was defined through a function with the typeof operator.
typeof {}
returns "object"
typeof (function(){})
returns "function"