Is there a benefit to using function.apply(obj) vs function(obj)? - JavaScript

In JavaScript we can perform operations on an object in a few different ways. My question is primarily about the two ways described below:
function op() {
    this.x;
    this.y;
    // etc.
}
op.apply(someObject);
or
function op(ob) {
    ob.x;
    ob.y;
    // etc.
}
op(someObject);
Both of these seem to get the same end result. Are there scenarios where using one has advantages over the other?
Advantages can be things that make the code:
more flexible
more declarative
easier to maintain
simpler
or other advantages I haven't thought of

The first (using apply to set this) is a bit more obscure. Looking just at the function definition, one will spend some time wondering what this is supposed to be. If by any chance the function is defined as a method, then one would naturally assume that this refers to the object it is defined on.
The second variant (without apply) is clearer: the parameter has a name that can (should) reveal its role, while this can remain available for when the function is assigned to an object property, and it is called as a method on that object.
As a side note: apply allows (and expects) the (other) arguments to be passed as an array. This used to be a "plus" in some scenarios, but has become less significant now that the spread syntax is available.
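To make the comparison concrete, here is a minimal runnable sketch of the two styles side by side (the function names, the property x, and the values are invented for illustration):
function opWithThis(dx) {
    return this.x + dx;
}

function opWithParam(ob, dx) {
    return ob.x + dx;
}

var point = { x: 1 };
opWithThis.apply(point, [10]); // 11 - `this` is bound explicitly, arguments supplied as an array
opWithThis.call(point, 10);    // 11 - same binding, arguments passed individually
opWithParam(point, 10);        // 11 - explicit parameter; `this` is not involved
The apply/call form is what you need when a function must slot into an API that sets this for you (callbacks, methods, event handlers); otherwise the explicit parameter reads more directly at the call site.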

Related

What is the purpose of JavaScript's call and apply methods?

What is the purpose of the call and apply methods in the language?
It seems to be an example of an anti-pattern that breaks OOP principles. Or does it just reflect a multi-paradigm approach?
This example made me ask this question.
In one place of the application (a part of a library) there is code like this:
socket.emit = function () {
    if (!socket.banned) {
        emit.apply(socket, arguments);
    }
};
In another part of the application (self-written) there is this piece of code:
socket.emit = function () {
    // `arguments` here is the implicit arguments object, not a named parameter
    var data = Array.prototype.slice.call(arguments);
    emit.apply(socket, data);
};
In the first place, the object should not emit anything if it is banned. But the second piece knows nothing about the library's logic and works independently.
How can such situations be resolved? We use libraries to ease the most difficult parts.
Function.prototype.call exists for one purpose only, and this is to allow the function to be called with a specific value of this, i.e. the one that's passed as the first parameter.
Function.prototype.apply is the same, but with the added advantage that the array passed as the second parameter is split out into multiple parameters when the function is invoked. This is useful when you want to call a function that takes a variable sized list of arguments but you have those arguments ready in an array instead of as individual values.
Take, for example, Math.max. If I have a set of unknown size and I wish to find the largest, there are two approaches:
successively compare each element against the current maximum until you find the largest
pass all of them directly to Math.max.
I can't call Math.max(a, b, c, etc) because I don't know how many variables there are. Instead I can call Math.max.apply(null, myArray) and that gets expanded to Math.max(myArray[0], myArray[1], etc).
NB1: in ES6 .apply isn't strictly required any more because you can use .call with the ... spread operator, e.g. Math.max.call(null, ...myArray).
NB2: this has nothing to do with breaking OOP principles.
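To illustrate NB1 concretely, both of the following calls produce the same result (myArray stands for any array of numbers):
var myArray = [3, 41, 7];
Math.max.apply(null, myArray); // 41 - apply spreads the array into separate arguments
Math.max(...myArray);          // 41 - ES6 spread syntax, no apply (or call) needed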

What are the harmful effects of manipulating an object passed in as a parameter? [duplicate]

Eclipse has an option to warn on assignment to a method's parameter (inside the method), as in:
public void doFoo(int a) {
    if (a < 0) {
        a = 0; // this will generate a warning
    }
    // do stuff
}
Normally I try to activate (and heed) almost all available compiler warnings, but in this case I'm not really sure whether it's worth it.
I see legitimate cases for changing a parameter in a method (e.g. allowing a parameter to be "unset" (say, null) and automatically substituting a default value), but few situations where it would cause problems, except that it might be a bit confusing to reassign a parameter in the middle of the method.
Do you use such warnings? Why / why not?
Note:
Avoiding this warning is of course equivalent to making the method parameter final (except then it's a compiler error :-)). So this question Why should I use the keyword "final" on a method parameter in Java? might be related.
The confusing part is the reason for the warning. If you reassign a new value to a parameter within the method (possibly conditionally), it is not clear what a refers to at any given point. That's why it is considered good style to leave method parameters unchanged.
For me, as long as you do it early and clearly, it's fine. As you say, doing it buried deep in four conditionals half-way into a 30-line function is less than ideal.
You also obviously have to be careful when doing this with object references, since calling methods on the object you were given may change its state and communicate information back to the caller, but of course if you've subbed in your own placeholder, that information is not communicated.
The flip side is that declaring a new variable and assigning the argument (or a default if argument needs defaulting) to it may well be clearer, and will almost certainly not be less efficient -- any decent compiler (whether the primary compiler or a JIT) will optimize it out when feasible.
Assigning a method parameter is not something most people expect to happen in most methods. Since we read the code with the assumption that parameter values are fixed, an assignment is usually considered poor practice, if only by convention and the principle of least astonishment.
There are always alternatives to assigning method parameters: usually a local temporary copy is just fine. But generally, if you find you need to control the logic of your function through parameter reassignment, it could benefit from refactoring into smaller methods.
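For illustration, here is the local-copy alternative as a JavaScript sketch (the function name, parameter, and default value are all invented):
function connect(timeout) {
    // take a local copy, substituting a default, instead of reassigning the parameter
    var effectiveTimeout = (timeout == null) ? 30 : timeout;
    // ... use effectiveTimeout; `timeout` still holds what the caller passed
}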
Reassigning to the method parameter variable is usually a mistake if the parameter is a reference type.
Consider the following code:
MyObject myObject = new MyObject();
myObject.Foo = "foo";
doFoo(myObject);
// what's the value of myObject.Foo here?

public void doFoo(MyObject myFoo) {
    myFoo = new MyObject("Bar");
}
Many people will expect that after the call to doFoo, myObject.Foo will equal "Bar". Of course, it won't, because Java is not pass-by-reference but passes references by value: a copy of the reference is handed to the method. Reassigning that copy only has an effect in the local scope, not at the call site. This is one of the most commonly misunderstood concepts.
Different compiler warnings can be appropriate for different situations. Sure, some are applicable to most or all situations, but this does not seem to be one of them.
I would think of this particular warning as the compiler giving you the option to be warned about a method parameter being reassigned when you need it, rather than a rule that method parameters should not be reassigned. Your example constitutes a perfectly valid case for it.
I sometimes use it in situations like these:
void countdown(int n)
{
    for (; n > 0; n--) {
        // do something
    }
}
to avoid introducing a variable i in the for loop. Typically I only use these kind of 'tricks' in very short functions.
Personally I very much dislike 'correcting' parameters inside a function this way. I prefer to catch these by asserts and make sure that the contract is right.
I usually don't need to assign new values to method parameters.
As to best practices, the warning also avoids confusion when facing code like:
public void foo() {
    int a = 1;
    bar(a);
    System.out.println(a); // prints 1 - bar() incremented its own copy
}

public void bar(int a) {
    a++;
}
You should write code with no side effects: every method should be a function that doesn't change anything. Otherwise it's a command, and it can be dangerous.
See the definitions of function and command on the DDD website:
Function: an operation that computes and returns a result without observable side effects.
Command: an operation that effects some change to the system (for example, setting a variable); an operation that intentionally creates a side effect.

Why can Array.prototype.forEach not be chained?

I learned today that forEach() returns undefined. What a waste!
If it returned the original array, it would be far more flexible without breaking any existing code. Is there any reason forEach returns undefined?
Is there any way to chain forEach with other methods like map and filter?
For example:
var obj = someThing.keys()
    .filter(someFilter)
    .forEach(passToAnotherObject)
    .map(transformKeys)
    .reduce(reduction);
This wouldn't work because forEach doesn't play nice: it forces you to run all the methods before the forEach again just to get the object into the state needed for the forEach.
What you want is known as method cascading via method chaining. Describing them in brief:
Method chaining is when a method returns an object that has another method that you immediately invoke. For example, using jQuery:
$("#person")
.slideDown("slow")
.addClass("grouped")
.css("margin-left", "11px");
Method cascading is when multiple methods are called on the same object. For example, in some languages (such as Dart) you can do:
foo
    ..bar()
    ..baz();
Which is equivalent to the following in JavaScript:
foo.bar();
foo.baz();
JavaScript doesn't have any special syntax for method cascading. However, you can simulate method cascading using method chaining if the first method call returns this. For example, in the following code if bar returns this (i.e. foo) then chaining is equivalent to cascading:
foo
    .bar()
    .baz();
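For instance, here is a minimal object where chaining behaves like cascading because every method returns this (the object and method names are invented):
var foo = {
    bar: function () { console.log("bar"); return this; },
    baz: function () { console.log("baz"); return this; }
};

foo.bar().baz(); // same effect as: foo.bar(); foo.baz();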
Some methods like filter and map are chainable but not cascadable because they return a new array, not the original array.
On the other hand the forEach function is not chainable because it doesn't return a new object. Now, the question arises whether forEach should be cascadable or not.
Currently, forEach is not cascadable. However, that's not really a problem, as you can simply save the intermediate array in a variable and use it later:
var arr = someThing.keys()
    .filter(someFilter);

arr.forEach(passToAnotherObject);

var obj = arr
    .map(transformKeys)
    .reduce(reduction);
Yes, this solution looks uglier than your desired solution. However, I like it more than your code for several reasons:
It is consistent because chainable methods are not mixed with cascadable methods. Hence, it promotes a functional style of programming (i.e. programming with no side effects).
Cascading is inherently an effectful operation because you are calling a method and ignoring the result. Hence, you're calling the operation for its side effects and not for its result.
On the other hand, chainable functions like map and filter don't have any side effects (if their input function doesn't have any side effects). They are used solely for their results.
In my humble opinion, mixing chainable methods like map and filter with cascadable functions like forEach (if it was cascadable) is sacrilege because it would introduce side effects in an otherwise pure transformation.
It is explicit. As The Zen of Python teaches us, “Explicit is better than implicit.” Method cascading is just syntactic sugar. It is implicit and it comes at a cost. The cost is complexity.
Now, you might argue that my code looks more complex than yours. If so, you would be judging the book by its cover. In their famous paper Out of the Tar Pit, the authors Ben Moseley and Peter Marks describe different types of software complexities.
The second biggest software complexity on their list is complexity caused by explicit concern with control flow. For example:
var obj = someThing.keys()
    .filter(someFilter)
    .forEach(passToAnotherObject)
    .map(transformKeys)
    .reduce(reduction);
The above program is explicitly concerned with control flow because you are explicitly stating that .forEach(passToAnotherObject) should happen before .map(transformKeys), even though it shouldn't have any effect on the overall transformation.
In fact, you can remove it from the equation altogether and it wouldn't make any difference:
var obj = someThing.keys()
    .filter(someFilter)
    .map(transformKeys)
    .reduce(reduction);
This suggests that the .forEach(passToAnotherObject) didn't have any business being in the equation in the first place. Since it's a side effectful operation, it should be kept separate from pure code.
When you write it explicitly as I did above, not only are you separating pure code from side effectful code but also you can choose when to evaluate each computation. For example:
var arr = someThing.keys()
    .filter(someFilter);

var obj = arr
    .map(transformKeys)
    .reduce(reduction);

arr.forEach(passToAnotherObject); // evaluate after the pure computation
Yes, you are still explicitly concerned with control flow. However, at least now you know that .forEach(passToAnotherObject) has nothing to do with the other transformations.
Thus, you have eliminated some (but not all) of the complexity caused by explicit concern with control flow.
For these reasons, I believe that the current implementation of forEach is actually beneficial because it prevents you from writing code that introduces complexity due to explicit concern with control flow.
I know from personal experience from when I used to work at BrowserStack that explicit concern with control flow is a big problem in large-scale software applications. It is indeed a real world problem.
It's easy to write complex code because complex code is usually shorter (implicit) code. So it's always tempting to drop in a side effectful function like forEach in the middle of a pure computation because it requires less code refactoring.
However, in the long run it makes your program more complex. Think of what would happen a few years down the line when you quit the company that you work for and somebody else has to maintain your code. Your code now looks like:
var obj = someThing.keys()
    .filter(someFilter)
    .forEach(passToAnotherObject)
    .forEach(doSomething)
    .map(transformKeys)
    .forEach(doSomethingElse)
    .reduce(reduction);
The person reading your code now has to assume that all the additional forEach calls in the chain are essential, put in extra work to understand what each one does, figure out by herself that these extra forEach calls are not essential to computing obj, eliminate them from her mental model of your code, and concentrate only on the essential parts.
That's a lot of unnecessary complexity added to your program, and you thought that it was making your program more simple.
It's easy to implement a chainable forEach function:
Array.prototype.forEachChain = function () {
    this.forEach(...arguments);
    return this;
};

const arr = [1, 2, 3, 4];
const dbl = (v, i, a) => {
    a[i] = 2 * v;
};

arr.forEachChain(dbl).forEachChain(dbl);

console.log(arr); // [4, 8, 12, 16]

AngularJS - Best practice: model properties on view or function calls?

For quite some time I've been wondering about this question: when working with AngularJS, should I use the model object's properties directly in the view, or can I use a function to get that property value?
I've been doing some minor home projects in Angular, and (especially when working with read-only directives or controllers) I tend to create scope functions to access and display scope objects and their property values in the views. But performance-wise, is this a good way to go?
This way seems easier for maintaining the view code, since, if for some reason the object changes (due to a server implementation or any other particular reason), I only have to change the directive's JS code instead of the HTML.
Here's an example:
// this goes inside the directive's link function
scope.getPropertyX = function () {
    return scope.object.subobject.propX;
};
In the view I could simply do
<span>{{ getPropertyX() }}</span>
instead of
<span>{{ object.subobject.propX }}</span>
which is harder to maintain amidst the HTML clutter that is sometimes involved.
Another case is using scope functions to test property values for evaluation in an ng-if, instead of using that test expression directly:
scope.testCondition = function () {
    return scope.obj.subobj.propX === 1 &&
           scope.obj.subobj.propY === 2 /* && ... */;
};
So, are there any pros/cons to this approach? Could you provide me with some insight on this issue? It's been bothering me lately: how might a heavy app behave when, for example, a directive gets really complex and, on top of that, is used inside an ng-repeat that could generate hundreds or thousands of instances?
Thank you
I don't think creating functions for all of your properties is a good idea. Not only will there be more function calls made every digest cycle to see if each function's return value has changed, but it really seems less readable and maintainable to me. It could add a lot of unnecessary code to your controllers and is sort of turning your controller into a view model. Your second case seems perfectly fine: complex operations are exactly what you would want your controller to handle.
As for performance, it does make a difference according to a test I wrote (fiddle; I tried to use jsPerf but couldn't get a different setup per test). Properties are almost twice as fast: 223,000 digests/sec using properties versus 120,000 digests/sec using getter functions. Watches are created for bindings that use Angular's $parse.
One thing to think about is inheritance. If you uncomment the ng-repeat list in the fiddle and inspect the scope of one of the elements, you can see what I'm talking about. Each child scope inherits the parent scope's properties. For objects it inherits a reference, so if you have 50 properties on your object, only the object reference is copied to the child scope. If you have 50 manually created functions, each of those functions is copied to every child scope that inherits them. The timings are slower for both methods: 126,000 digests/sec for properties and 80,000 digests/sec with getter functions.
I really don't see how it would be any easier to maintain your code; it seems more difficult to me. If you don't want to have to touch your HTML when the server object changes, it would probably be better to handle that in a JavaScript object instead of putting getter functions directly on your scope, i.e.:
$scope.obj = new MyObject(obj); // MyObject class
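Such a wrapper might look roughly like this (the field names are invented; the point is that a server-side rename only touches the constructor, not the HTML):
function MyObject(serverObj) {
    // adapt the server's shape to the shape the view binds to
    this.propX = serverObj.subobject.propX; // the view keeps binding {{ obj.propX }}
}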
In addition, Angular 2.0 will be using Object.observe() which should increase performance even more, but would not improve the performance using getter functions on your scope.
It looks like this code is all executed for each function call. It calls contextGetter(), fnGetter(), and ensureSafeFunction(), as well as ensureSafeObject() for each argument, for the context, and for the return value.
return function $parseFunctionCall(scope, locals) {
    var context = contextGetter ? contextGetter(scope, locals) : scope;
    var fn = fnGetter(scope, locals, context) || noop;
    if (args) {
        var i = argsFn.length;
        while (i--) {
            args[i] = ensureSafeObject(argsFn[i](scope, locals), expressionText);
        }
    }
    ensureSafeObject(context, expressionText);
    ensureSafeFunction(fn, expressionText);
    // IE stupidity! (IE doesn't have apply for some native functions)
    var v = fn.apply
        ? fn.apply(context, args)
        : fn(args[0], args[1], args[2], args[3], args[4]);
    return ensureSafeObject(v, expressionText);
};
By contrast, simple properties are compiled down to something like this:
(function (s, l /**/) {
    if (s == null) return undefined;
    s = ((l && l.hasOwnProperty("obj")) ? l : s).obj;
    if (s == null) return undefined;
    s = s.subobj;
    if (s == null) return undefined;
    s = s.A;
    return s;
})
Performance-wise: it is likely to matter little
Jason Goemaat did a great job providing a benchmarking fiddle, where you can change the last line from:
setTimeout(function() { benchmark(1); }, 500);
to
setTimeout(function() { benchmark(0); }, 500);
to see the difference.
But he also frames the answer as properties being twice as fast as function calls. In fact, on my mid-2014 MacBook Pro, properties are three times faster.
But equally, the difference between calling a function and accessing the property directly is 0.00001 seconds - or 10 microseconds.
This means that if you have 100 getters, they will be slower by 1ms compared to having 100 properties accessed.
Just to put things in context, the time it takes sensory input (a photon hitting the retina) to reach our consciousness is 300ms (yes, conscious reality is 300ms delayed). So you'd need 30,000 getters on a single view to get the same delay.
Code-quality-wise: it could matter a great deal
In the days of assembler, software was looked at like this:
A collection of executable lines of code.
But nowadays, specifically for software that has even the slightest level of complexity, the view is:
A social interaction between communicating objects.
The latter is concerned much more with how behaviour is established via communicating objects than with the actual low-level implementation. In turn, the communication is granted by an interface, which typically follows the query/command principle. What matters is the interface of (or the contract between) collaborating objects, not the low-level implementation.
By inspecting properties directly, you hit the internals of an object, bypassing its interface, and thus couple the caller to the callee.
Obviously, with getters, you may rightly ask "what's the point?". Well, consider these changes to the implementation that won't affect the interface:
Checking for edge cases, such as whether the property is defined at all.
Changing getName() to return first + last name rather than just the name property.
Deciding to store the property in a mutable construct.
So even a seemingly simple implementation may change in a way that, with getters, requires a single change, but without them requires many changes.
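As a small sketch of that point (the names are made up), every {{ getName() }} binding in the view survives this implementation change, whereas direct bindings to {{ user.name }} would not:
// originally: $scope.getName = function () { return $scope.user.name; };
$scope.getName = function () {
    var user = $scope.user || {}; // edge case: user not loaded yet
    return (user.firstName || "") + " " + (user.lastName || "");
};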
I vote getters
So I argue that unless you have a profiled case for optimisation, you should use getters.
Whatever you put in the {{ }} is going to be evaluated A LOT. It has to be evaluated each digest cycle in order to know if the value has changed. Thus, one very important rule in Angular is to make sure you don't have expensive operations in any $watches, including those registered through {{ }}.
Now, the difference between referencing the property directly and having a function do nothing but return it seems negligible to me. (Please correct me if I'm wrong.)
So, as long as your functions aren't performing expensive operations, I think it's really a matter of personal preference.

Getting data back out of closures

Is there a way to extract a variable that is closed over by a function?
In the (JavaScript-like) R language, values that are closed over can be accessed by looking up the function's scope directly. For example, the constant combinator takes a value and returns a function that always yields said value.
K = function (self) {
    function () {
        self
    }
}

TenFunction = K(10)
TenFunction() # yields 10
In R the value bound to "self" can be looked up directly:
environment(TenFunction)[["self"]] # yields 10
In R this is a perfectly normal and acceptable thing to want to do. Is there a similar mechanism in JavaScript?
My motivation is that I'm working with functions that I create with an enclosed value called "self". I'd like to be able to extract that data back out of the function. A mock example loosely related to my problem is:
var Velocity = function (self) {
    return function (time) {
        return self.vx0 + self.ax * time;
    };
};

var f = Velocity({ vx0: 10, ax: 100 });
I'd really like to extract the values of self.vx0 and self.ax, as they are difficult to recover by other means. Is there a function "someFun" that does this?
someFun(f).self // yields { vx0: 10, ax: 100 }
Any help or insights would be appreciated. If any clarification is needed, leave a comment below and I'll edit my question.
Not as you have described, no. Function objects support very few reflective methods, most of which are deprecated or obsolete. There is a good reason for this: while closures are a common way to implement lexically scoped functions, they are not the only way, and in some cases they may not be the fastest. Javascript probably avoids exposing such details to allow implementations more flexibility to improve performance.
That said, you can get around this in various ways. One approach is to add an argument to the inner function telling it that it should return the value of a certain variable rather than doing what it usually does. Alternatively, you can store the variable alongside the function.
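For example, here is a minimal sketch of the second workaround, storing the enclosed value alongside the function (the property name self is only a convention here):
var Velocity = function (self) {
    var f = function (time) {
        return self.vx0 + self.ax * time;
    };
    f.self = self; // expose the enclosed value explicitly
    return f;
};

var f = Velocity({ vx0: 10, ax: 100 });
f.self; // { vx0: 10, ax: 100 }
f(5);   // 510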
For an example of an alternative implementation technique, look up "lambda lifting". Some implementations may use different approaches in different situations.
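As a rough illustration of lambda lifting (the names are invented), the free variable self becomes an explicit parameter of a top-level function, and the closure is reduced to a thin partial application:
function velocityLifted(self, time) {
    return self.vx0 + self.ax * time;
}

var f = velocityLifted.bind(null, { vx0: 10, ax: 100 });
f(5); // 510 - behaves like the closure, but `self` is now an ordinary argument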
Edit
An even better reason not to allow that sort of reflection is that it breaks the function abstraction rather horribly, and in doing so exposes hairy details of how the function was produced. If you want that sort of access, you really want an object, not a function.
