Say I have a step in a procedure that requires the retrieval of two objects. I would use join() to coordinate the retrievals:
    return promise.join(retrieveA(), retrieveB())
        .spread(function (A, B) {
            // create something out of A and B
        });
The documentation shows that you can also pass the handler as the last parameter:
    return promise.join(retrieveA(), retrieveB(), function (A, B) {
        // create something out of A and B
    });
I'm curious as to what the rationale is behind the existence of this option.
Fact time: The reason .join was added was to make #spion happy. Not without reason, though: using .join means you have a static and known number of promises, which makes using it with TypeScript a lot easier. Petka (Esailija) liked the idea, and also the fact that it can be optimised further because it doesn't have to abide by the awkward guarantees the other form has to provide.
Over time, people (myself included) started using it for other use cases, namely using promises as proxies.
So, let's talk about what it does better:
Static Analysis
It's hard to statically analyse Promise.all since it works on an array of an unknown number of promises of potentially different types. Promise.join can be typed since it can be seen as taking a tuple - so, for example, for the three-promise case you can give it a type signature of (Promise<S>, Promise<U>, Promise<T>, ((S, U, T) -> Promise<K> | K)) -> Promise<K>, which simply can't be done in a type-safe way for Promise.all.
Proxying
It's very clean to use when writing promise code in the proxying style:
    var user = getUser();
    var comments = user.then(getComments);
    var related = Promise.join(user, comments, getRelated);

    Promise.join(user, comments, related, (user, comments, related) => {
        // use all 3 here
    });
It's faster
Since it doesn't need to cache the values of the given promises in an array or perform all the checks that .all(...).spread(...) does, it will perform slightly faster.
But... you really usually shouldn't care.
you can also pass the handler as the last parameter. I'm curious as to what the rationale is behind the existence of this option.
It is not an "option". It's the sole purpose of the join function.
    Promise.join(promiseA, promiseB, …, function(a, b, …) { … })
is exactly equivalent to
    Promise.all([promiseA, promiseB, …]).spread(function(a, b, …) { … })
But, as mentioned in the documentation, it
is much easier (and more performant) to use when you have a fixed amount of discrete promises
It relieves you of needing to use that array literal, and it doesn't need to create that intermediate promise object for the array result.
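For illustration, here is a minimal runnable sketch contrasting the two forms (assuming Bluebird is installed and required as bluebird; retrieveA and retrieveB are stand-in fetchers, not anything from the original question's codebase):

    var Promise = require("bluebird");

    // stand-in fetchers for the two retrievals
    function retrieveA() { return Promise.resolve({ id: 1 }); }
    function retrieveB() { return Promise.resolve({ id: 2 }); }

    // join: a fixed number of discrete promises, handler last
    Promise.join(retrieveA(), retrieveB(), function (A, B) {
        console.log("join:", A.id + B.id);
    });

    // all + spread: same result, but it builds an intermediate promise for the array
    Promise.all([retrieveA(), retrieveB()]).spread(function (A, B) {
        console.log("all/spread:", A.id + B.id);
    });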
Suppose you have a function which takes a union type, narrows the type, and delegates to one of two other pure functions.
    function foo(arg: string|number) {
        if (typeof arg === 'string') {
            return fnForString(arg)
        } else {
            return fnForNumber(arg)
        }
    }
Assume that fnForString() and fnForNumber() are also pure functions, and they have already themselves been tested.
How should one go about testing foo()?
Should you treat the fact that it delegates to fnForString() and fnForNumber() as an implementation detail, and essentially duplicate the tests for each of them when writing the tests for foo()? Is this repetition acceptable?
Should you write tests which "know" that foo() delegates to fnForString() and fnForNumber(), e.g. by mocking them out and checking that it delegates to them?
The best solution would be just testing for foo.
fnForString and fnForNumber are an implementation detail that you may change in the future without necessarily changing the behaviour of foo.
If that happens, your tests may break for no reason; this kind of problem makes your tests too expensive to maintain and ultimately useless.
Your interface exposes only foo, so just test that.
If you have to test fnForString and fnForNumber, keep that kind of test apart from your public interface tests.
This is my interpretation of the following principle stated by Kent Beck:
Programmer tests should be sensitive to behaviour changes and insensitive to structure changes. If the program’s behavior is stable from an observer’s perspective, no tests should change.
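To make this concrete, here is a minimal sketch of what "just test foo" can look like, assuming purely for illustration that fnForString upper-cases its input and fnForNumber doubles it (the real expectations come from foo's actual specification):

    const assert = require("assert");

    // hypothetical implementations, assumed only for this sketch
    const fnForString = (s) => s.toUpperCase();
    const fnForNumber = (n) => n * 2;

    function foo(arg) {
        return typeof arg === "string" ? fnForString(arg) : fnForNumber(arg);
    }

    // black-box tests: they only know foo's observable behaviour,
    // not which helper it delegates to
    assert.strictEqual(foo("abc"), "ABC");
    assert.strictEqual(foo(21), 42);
    console.log("foo behaves as specified");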
Short answer: the specification of a function determines the manner in which it should be tested.
Long answer:
Testing = using a set of test cases (hopefully representative of all cases that may be encountered) to verify that an implementation meets its specification.
In the example foo is stated without specification, so one should go about testing foo by doing nothing at all (or at most some silly tests to verify the implicit requirement that "foo terminates in one way or another").
If the specification is something operational like "this function returns the result of applying either fnForString or fnForNumber to arg, according to the type of arg", then mocking the delegates (option 2) is the way to go. No matter what happens to fnForString/Number, foo remains in accordance with its specification.
If the specification does not depend on fnForString/fnForNumber in such a manner, then re-using the tests for those functions (option 1) is the way to go (assuming those tests are good).
Note that operational specifications remove much of the usual freedom to replace one implementation by another (one that is more elegant/readable/efficient/etc). They should only be used after careful consideration.
In an ideal world, you would write proofs instead of tests. For example, consider the following functions.
    const negate = (x: number): number => -x;
    const reverse = (x: string): string => x.split("").reverse().join("");

    const transform = (x: number|string): number|string => {
        switch (typeof x) {
            case "number": return negate(x);
            case "string": return reverse(x);
        }
    };
Say you want to prove that transform applied twice is idempotent, i.e. for all valid inputs x, transform(transform(x)) is equal to x. Well, you would first need to prove that negate and reverse applied twice are idempotent. Now, suppose that proving the idempotence of negate and reverse applied twice is trivial, i.e. the compiler can figure it out. Thus, we have the following lemmas.
    const negateNegateIdempotent = (x: number): negate(negate(x))≡x => refl;
    const reverseReverseIdempotent = (x: string): reverse(reverse(x))≡x => refl;
We can use these two lemmas to prove that transform is idempotent as follows.
    const transformTransformIdempotent = (x: number|string): transform(transform(x))≡x => {
        switch (typeof x) {
            case "number": return negateNegateIdempotent(x);
            case "string": return reverseReverseIdempotent(x);
        }
    };
There's a lot going on here, so let's break it down.
Just as a|b is a union type and a&b is an intersection type, a≡b is an equality type.
A value x of an equality type a≡b is a proof of the equality of a and b.
If two values, a and b, are not equal then it's impossible to construct a value of type a≡b.
The value refl, short for reflexivity, has the type a≡a. It's the trivial proof of a value being equal to itself.
We used refl in the proof of negateNegateIdempotent and reverseReverseIdempotent. This is possible because the propositions are trivial enough for the compiler to prove automatically.
We use the negateNegateIdempotent and reverseReverseIdempotent lemmas to prove transformTransformIdempotent. This is an example of a non-trivial proof.
The advantage of writing proofs is that the compiler verifies the proof. If the proof is incorrect, then the program fails to type check and the compiler throws an error. Proofs are better than tests for two reasons. First, you don't have to create test data. It's difficult to create test data that handles all the edge cases. Second, you won't accidentally forget to test any edge cases. The compiler will throw an error if you do.
Unfortunately, TypeScript doesn't have an equality type because it doesn't support dependent types, i.e. types that depend upon values. Hence, you can't write proofs in TypeScript. You can write proofs in dependently typed functional programming languages like Agda.
However, you can write propositions in TypeScript.
    const negateNegateIdempotent = (x: number): boolean => negate(negate(x)) === x;
    const reverseReverseIdempotent = (x: string): boolean => reverse(reverse(x)) === x;

    const transformTransformIdempotent = (x: number|string): boolean => {
        switch (typeof x) {
            case "number": return negateNegateIdempotent(x);
            case "string": return reverseReverseIdempotent(x);
        }
    };
You can then use a library such as jsverify to automatically generate test data for multiple test cases.
    const jsc = require("jsverify");

    jsc.assert(jsc.forall("number", transformTransformIdempotent)); // OK, passed 100 tests
    jsc.assert(jsc.forall("string", transformTransformIdempotent)); // OK, passed 100 tests
You can also call jsc.forall with "number | string" but I can't seem to get it to work.
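If the "number | string" DSL string doesn't work for you, one workaround that may help (assuming jsverify's oneof combinator accepts an array of arbitraries, as I recall) is to build the combined arbitrary explicitly:

    // generate either a number or a string for each test case
    const numberOrString = jsc.oneof([jsc.number, jsc.string]);

    jsc.assert(jsc.forall(numberOrString, transformTransformIdempotent));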
So to answer your questions.
How should one go about testing foo()?
Functional programming encourages property-based testing. For example, I tested the negate, reverse, and transform functions applied twice for idempotence. If you follow property-based testing, then your proposition functions should be similar in structure to the functions that you're testing.
Should you treat the fact that it delegates to fnForString() and fnForNumber() as an implementation detail, and essentially duplicate the tests for each of them when writing the tests for foo()? Is this repetition acceptable?
Yes, it is acceptable. In fact, you could forgo testing fnForString and fnForNumber entirely, because the tests for those are subsumed by the tests for foo. However, for completeness I would recommend keeping all the tests, even if that introduces some redundancy.
Should you write tests which "know" that foo() delegates to fnForString() and fnForNumber(), e.g. by mocking them out and checking that it delegates to them?
The propositions that you write in property-based testing follow the structure of the functions you're testing. Hence, they "know" about the dependencies by using the propositions of the other functions being tested. There's no need to mock them; you'd only need to mock things like network calls, file system calls, etc.
Assume that fnForString() and fnForNumber() are also pure functions, and they have already themselves been tested.
Well, since the implementation details are delegated to fnForString() and fnForNumber() for strings and numbers respectively, testing foo boils down to merely making sure that it calls the right function. So yes, I would mock them and ensure that they are called accordingly:
foo("a string")
fnForNumberMock.hasNotBeenCalled()
fnForStringMock.hasBeenCalled()
Since fnForString() and fnForNumber() have been tested individually, you know that when you call foo(), it calls the right function and you know the function does what it is supposed to do.
foo should also return something. You could return a different value from each mock and ensure that foo passes it through correctly (catching, for example, a forgotten return inside foo).
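A minimal sketch of this approach with Jest, assuming (hypothetically) that fnForString and fnForNumber are exported from a module ./fns and foo is exported from ./foo:

    // foo.test.js - illustration only; module paths and names are assumptions
    jest.mock("./fns");
    const { fnForString, fnForNumber } = require("./fns");
    const foo = require("./foo");

    beforeEach(() => jest.clearAllMocks());

    test("foo delegates strings to fnForString", () => {
        fnForString.mockReturnValue("from string fn");
        expect(foo("a string")).toBe("from string fn");
        expect(fnForString).toHaveBeenCalledWith("a string");
        expect(fnForNumber).not.toHaveBeenCalled();
    });

    test("foo delegates numbers to fnForNumber", () => {
        fnForNumber.mockReturnValue(42);
        expect(foo(7)).toBe(42);
        expect(fnForNumber).toHaveBeenCalledWith(7);
        expect(fnForString).not.toHaveBeenCalled();
    });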
And all things have been covered.
I think it's pointless to test the type of your argument; the language can handle that on its own if you give the same method name to each of the object types you're interested in.
Sample code:
    // fnForStringorNumber String wrapper
    String.prototype.fnForStringorNumber = function () {
        return this.repeat(3);
    };

    // fnForStringorNumber Number wrapper
    Number.prototype.fnForStringorNumber = function () {
        return this * 3;
    };

    function foo(arg) {
        return arg.fnForStringorNumber(4321);
    }

    console.log(foo(1234));     // 3702
    console.log(foo('abcd_'));  // abcd_abcd_abcd_

    // or simply:
    console.log((12).fnForStringorNumber());    // 36
    console.log('xyz_'.fnForStringorNumber());  // xyz_xyz_xyz_
I'm probably not a great theorist on coding techniques, but I have done a lot of code maintenance. I think one can really judge the effectiveness of a way of coding only on concrete cases; speculation cannot serve as proof.
Recently I've been getting into the JavaScript ecosystem. After some time with JavaScript's callbacks I started asking myself whether JavaScript interpreters are capable of doing conditional evaluation of callback arguments. Take the following two examples:
    var a = 1;
    var b = 2;

    // example 1
    abc.func(a, b, function (res) {
        // do something with res
    });

    // example 2
    abc.func(a, b, function () {
        // do something
    });
From what I understand, JavaScript uses the arguments object to keep track of what is passed into a function, regardless of what the function definition says. So assuming that:
    abc.func = function (a, b, cb) {
        // do stuff
        var res = {};
        // expensive computation to populate res
        cb(res);
    };
In both examples (1, 2) the res object will be passed to arguments[0]. In example 1 res === arguments[0] since the res parameter is defined.
Let's assume that computing res is expensive. In example 1 it's fine to go through this computation since the res object is used. In example 2, since the res object is not used, there really is no point in doing that computation. However, since the arguments object needs to be populated, the computation to populate res is done in both cases. Is this correct?
Assuming that's true, this seems like a (potentially) huge waste. Why compute something that's going to go out of scope and be garbage collected? Think of all the libraries out there that use callbacks. A lot of them pass multiple arguments to the callback function, but sometimes none of them are used.
Is there a way to prevent this behaviour - essentially, to make the JavaScript interpreter smart enough not to compute those specific variables that will turn into unused arguments? In example 2 the res object would then never actually be computed, since it is never actually used.
I understand that until this point things like this were used:
    function xyz(a, b /*, rest */) {
        // use arguments to iterate over rest
    }
So by default it makes sense to still compute those arguments. Now let's look forward to ECMAScript 2015, which introduces rest parameters (...rest). For engines that support the new version, is there a way to enable conditional evaluation? It would make much more sense now, since there is an explicit way to ask for all extra arguments to be evaluated and passed in to a function.
No, JavaScript is not a lazy call-by-name language. This is mostly because expressions can have side effects, and the ES standard requires them to be executed in the order the programmer expects them.
Yes, JS engines are smart. If they detect that a piece of code has no side effects and its result is not used anywhere, they simply drop it (dead code elimination). I'm not sure whether this works across function boundaries - I guess it doesn't - but if you are in a hot code path and the call does get inlined, it might be optimised away.
So if you know that you are doing a heavy computation, you may want to make it lazy explicitly by passing a thunk. In an eagerly evaluated language, this is typically simply represented by a function that takes no parameters. In your case:
    abc.func = function (a, b, cb) {
        // do stuff
        var res = {};
        cb(() => {
            // expensive computation to populate res
            return res;
        });
    };

    // example 1
    abc.func(a, b, function (getRes) {
        var res = getRes();
        // do something with res
    });

    // example 2
    abc.func(a, b, function () {
        // no heavy computation
        // do something
    });
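If the callback might call the thunk more than once, you can memoise it so the expensive work runs at most once - a small sketch of that idea:

    abc.func = function (a, b, cb) {
        // do stuff
        var res = null;
        cb(function () {
            if (res === null) {
                res = {};
                // expensive computation to populate res runs here, once
            }
            return res;
        });
    };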
You couldn't do that at the interpreter level; it's not feasible to determine whether or not computing one argument depends on computing another argument, and even if you could, this would create inconsistent behaviour for the user. And because passing variables into a function is extremely cheap, this becomes a pointless exercise.
It could be done at the function level: if you wanted to, you could pass the arguments the callback expects as a parameter to the function, thereby changing the function's behaviour based on its parameters, which is commonplace.
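A minimal sketch of that idea, with a hypothetical options argument that tells func whether the caller actually wants res computed:

    abc.func = function (a, b, options, cb) {
        // do stuff
        var res;
        if (options.wantRes) {
            res = {};
            // expensive computation to populate res, only when requested
        }
        cb(res);
    };

    // caller that needs res
    abc.func(a, b, { wantRes: true }, function (res) {
        // do something with res
    });

    // caller that doesn't
    abc.func(a, b, { wantRes: false }, function () {
        // do something
    });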
I learned today that forEach() returns undefined. What a waste!
If it returned the original array, it would be far more flexible without breaking any existing code. Is there any reason forEach returns undefined?
Is there any way to chain forEach with other methods like map and filter?
For example:
    var obj = someThing.keys()
        .filter(someFilter)
        .forEach(passToAnotherObject)
        .map(transformKeys)
        .reduce(reduction);
This wouldn't work because forEach doesn't play nice: it breaks the chain, forcing you to run all the methods before the forEach again to get the object into the state needed for the forEach.
What you want is known as method cascading via method chaining. Describing them in brief:
Method chaining is when a method returns an object that has another method that you immediately invoke. For example, using jQuery:
$("#person")
.slideDown("slow")
.addClass("grouped")
.css("margin-left", "11px");
Method cascading is when multiple methods are called on the same object. For example, in some languages you can do:
    foo
      ..bar()
      ..baz();
Which is equivalent to the following in JavaScript:
    foo.bar();
    foo.baz();
JavaScript doesn't have any special syntax for method cascading. However, you can simulate method cascading using method chaining if the first method call returns this. For example, in the following code if bar returns this (i.e. foo) then chaining is equivalent to cascading:
    foo
        .bar()
        .baz();
Some methods like filter and map are chainable but not cascadable because they return a new array, but not the original array.
On the other hand the forEach function is not chainable because it doesn't return a new object. Now, the question arises whether forEach should be cascadable or not.
Currently, forEach is not cascadable. However, that's not really a problem as you can simply save the result of the intermediate array in a variable and use that later:
    var arr = someThing.keys()
        .filter(someFilter);

    arr.forEach(passToAnotherObject);

    var obj = arr
        .map(transformKeys)
        .reduce(reduction);
Yes, this solution looks uglier than your desired solution. However, I like it more than your code for several reasons:
It is consistent because chainable methods are not mixed with cascadable methods. Hence, it promotes a functional style of programming (i.e. programming with no side effects).
Cascading is inherently an effectful operation because you are calling a method and ignoring the result. Hence, you're calling the operation for its side effects and not for its result.
On the other hand, chainable functions like map and filter don't have any side effects (if their input function doesn't have any side effects). They are used solely for their results.
In my humble opinion, mixing chainable methods like map and filter with cascadable functions like forEach (if it was cascadable) is sacrilege because it would introduce side effects in an otherwise pure transformation.
It is explicit. As The Zen of Python teaches us, “Explicit is better than implicit.” Method cascading is just syntactic sugar. It is implicit and it comes at a cost. The cost is complexity.
Now, you might argue that my code looks more complex than yours. If so, you would be judging the book by its cover. In their famous paper Out of the Tar Pit, the authors Ben Moseley and Peter Marks describe different types of software complexities.
The second biggest software complexity on their list is complexity caused by explicit concern with control flow. For example:
    var obj = someThing.keys()
        .filter(someFilter)
        .forEach(passToAnotherObject)
        .map(transformKeys)
        .reduce(reduction);
The above program is explicitly concerned with control flow because you are explicitly stating that .forEach(passToAnotherObject) should happen before .map(transformKeys), even though it shouldn't have any effect on the overall transformation.
In fact, you can remove it from the equation altogether and it wouldn't make any difference:
    var obj = someThing.keys()
        .filter(someFilter)
        .map(transformKeys)
        .reduce(reduction);
This suggests that the .forEach(passToAnotherObject) didn't have any business being in the equation in the first place. Since it's a side effectful operation, it should be kept separate from pure code.
When you write it explicitly as I did above, not only are you separating pure code from side effectful code but also you can choose when to evaluate each computation. For example:
    var arr = someThing.keys()
        .filter(someFilter);

    var obj = arr
        .map(transformKeys)
        .reduce(reduction);

    arr.forEach(passToAnotherObject); // evaluate after pure computation
Yes, you are still explicitly concerned with control flow. However, at least now you know that .forEach(passToAnotherObject) has nothing to do with the other transformations.
Thus, you have eliminated some (but not all) of the complexity caused by explicit concern with control flow.
For these reasons, I believe that the current implementation of forEach is actually beneficial because it prevents you from writing code that introduces complexity due to explicit concern with control flow.
I know from personal experience from when I used to work at BrowserStack that explicit concern with control flow is a big problem in large-scale software applications. It is indeed a real world problem.
It's easy to write complex code because complex code is usually shorter (implicit) code. So it's always tempting to drop in a side effectful function like forEach in the middle of a pure computation because it requires less code refactoring.
However, in the long run it makes your program more complex. Think of what would happen a few years down the line when you quit the company that you work for and somebody else has to maintain your code. Your code now looks like:
    var obj = someThing.keys()
        .filter(someFilter)
        .forEach(passToAnotherObject)
        .forEach(doSomething)
        .map(transformKeys)
        .forEach(doSomethingElse)
        .reduce(reduction);
The person reading your code now has to assume that all the additional forEach methods in your chain are essential, put in extra work to understand what each function does, figure out by herself that these extra forEach methods are not essential to compute obj, eliminate them from her mental model of your code and only concentrate on the essential parts.
That's a lot of unnecessary complexity added to your program, and you thought it was making your program simpler.
It's easy to implement a chainable forEach function:
    Array.prototype.forEachChain = function () {
        this.forEach(...arguments);
        return this;
    };

    const arr = [1, 2, 3, 4];
    const dbl = (v, i, a) => {
        a[i] = 2 * v;
    };

    arr.forEachChain(dbl).forEachChain(dbl);

    console.log(arr); // [4, 8, 12, 16]
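If you'd rather not touch Array.prototype, a small standalone tap helper (just a sketch) gives the same cascading effect while keeping the built-ins untouched:

    // run a side effect over the array, then hand the same array back
    const tap = (fn) => (arr) => {
        arr.forEach(fn);
        return arr;
    };

    const result = tap((x) => console.log("saw", x))([1, 2, 3])
        .map((x) => x * 2)
        .reduce((a, b) => a + b, 0);

    console.log(result); // 12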
I want to formulate algebraic expressions in such a way that the underlying number types can be exchanged. If you want to, think about complex numbers, big integers, matrices and the likes. For this reason, I'd write either add(a, b) or a.add(b) instead of a + b. In a statically typed language, I'd simply use type-based overloading of the function add to implement the various alternatives. But for JavaScript this doesn't work, so I'm looking for alternatives. The executed method depends on the type of both operands.
One way which I've come up with would be the following double dispatch mechanism:
Write the expression as a.add(b).
Implement that method for a given type (e.g. my own Complex type, or the built-in Number type) in the following way:
    add: function (that) { return that.addComplex(this); }
So the method name of the second call encodes the type of one of the operands.
Implement specialized methods to deal with all combinations. For example, set
    Number.prototype.addComplex = function (that) {
        return new Complex(that.real + this, that.imaginary);
    };
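Putting these pieces together, a minimal runnable sketch of the double dispatch described above (Complex here is a hypothetical constructor, and only the combinations needed for the demo are implemented):

    function Complex(real, imaginary) {
        this.real = real;
        this.imaginary = imaginary;
    }

    // first dispatch: on the type of the left operand
    Complex.prototype.add = function (that) { return that.addComplex(this); };
    Number.prototype.add = function (that) { return that.addNumber(this); };

    // second dispatch: the method name encodes the type of the other operand
    Complex.prototype.addComplex = function (that) {
        return new Complex(that.real + this.real, that.imaginary + this.imaginary);
    };
    Complex.prototype.addNumber = function (that) {
        return new Complex(this.real + that, this.imaginary);
    };
    Number.prototype.addComplex = function (that) {
        return new Complex(that.real + this, that.imaginary);
    };

    console.log(new Complex(1, 2).add(new Complex(3, 4))); // Complex { real: 4, imaginary: 6 }
    console.log(new Complex(1, 2).add(5)); // Complex { real: 6, imaginary: 2 }
    console.log((5).add(new Complex(1, 2))); // Complex { real: 6, imaginary: 2 }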
Let's assume I know all types, so I can ensure all combinations get handled. What has me troubled right now is more the creation of these objects.
The above approach relies heavily on virtual method dispatch, so the way I see it, it requires some kind of inheritance. No problem with classical constructor functions, but according to this jsperf I just did, object creation using constructor functions tends to be slower than using object literals - sometimes by quite a large factor, as in the case of Firefox for this example. So I'm reluctant to incur this kind of overhead for every (e.g. complex-valued) numerical intermediate just to make my operator overloading work.
The other approach I tried in this jsperf would be to not use a prototype, but instead store the virtual methods as properties of each single object instance. This works quite fast on pretty much all tested browsers, but here I'm worried about the size of the objects: having objects with two actual floating-point values but perhaps as many as 50 different member functions just to handle all pairs of operator overloading.
A third approach would be having a single add function which somehow inspects the types of its arguments and then makes its decision based on that. Possibly looking up the actual implementation in some list indexed by a combination of some numerical type identifiers. I haven't written this out for a test yet, but this kind of type checking feels pretty slow, and I also have doubts that the JIT compiler will be able to optimize this exotic kind of function dispatch.
Is there some way to trick current JavaScript implementations into doing proper optimized double dispatch with objects which are cheap to create and don't take excessive amounts of memory either?
The third approach looks quite viable:
    function Complex(re, im) {
        return { type: 'c', re: re, im: im };
    }

    function Real(n) {
        return { type: 'r', n: n };
    }

    var funcs = {
        add_c_r: function (a, b) {
            console.log('add compl to real');
        },
        add_r_c: function (a, b) {
            console.log('add real to compl');
        }
    };

    function add(a, b) {
        return funcs["add_" + a.type + "_" + b.type](a, b);
    }

    add(Complex(1, 2), Real(5));
    add(Real(5), Complex(1, 2));
One extra field + one indirection is a reasonable cost.
I want to create a JavaScript pipeline like PowerShell or bash (|), or F# (|>), i.e. something equivalent to

    getstuff() | sort() | grep("foo") | take(5)

I saw a discussion about this in a CoffeeScript forum, but in the end they shelved it because everybody said you could do the same thing with function chaining. But as far as I can see, that requires getstuff to return something that has a sort method on it; the sort method must return something that has a grep method on it, and so on. This is pretty restrictive, as it requires all potential pipeline members to know about each other in advance. I know JavaScript has some pretty clever tricks in it and I am still at the 101 level - so is this doable

    getstuff().sort().grep().take()

without that constraint?
is this doable

    getstuff().sort().grep().take()

without that constraint?
No.
I like short answers! Can you suggest any way that something like it could be done?
At a high level, you could do something similar to what jQuery does under the hood to allow chaining. Create an array-like wrapper object type which has all of the functions you want to be able to call; each successive chained call can operate on an internal stack in addition to the explicitly-passed arguments.
Not to keep beating the dead jQuery horse, but one of the best ways to understand what I'm talking about is to just start digging through the jQuery core source code and figure out how the chaining works.
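As a rough sketch of that idea (the names here are made up, and this is nothing like jQuery's actual implementation - it just has the same general shape):

    // a tiny wrapper that keeps the current values internally and
    // returns itself so calls can be chained
    function Pipeline(values) {
        this.values = values;
    }

    Pipeline.prototype.sort = function () {
        this.values = this.values.slice().sort();
        return this;
    };

    Pipeline.prototype.grep = function (pattern) {
        this.values = this.values.filter(function (v) {
            return String(v).indexOf(pattern) !== -1;
        });
        return this;
    };

    Pipeline.prototype.take = function (n) {
        return this.values.slice(0, n);
    };

    function getstuff() {
        return new Pipeline(["foobar", "baz", "foo", "quux"]);
    }

    console.log(getstuff().sort().grep("foo").take(5)); // [ 'foo', 'foobar' ]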
Defining an object to support the kind of function chaining you want is actually quite easy:
    getStuff = ->
      sort: ->
        # set @stuff...
        this
      grep: (str) ->
        # modify @stuff...
        this
      take: (num) ->
        @stuff[num]
That's all you need to make getstuff().sort().grep('foo').take(5) work.
You can make those calls without worrying about the return values having the appropriate methods like so:
take(5, grep("foo", sort(getstuff())));
But that doesn't get around the problem of each function needing to be passed data that is meaningful to it. Even JavaScript isn't that slippery. You can call sort() on an image, for example, but there's no meaningful way to generate results.
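If the nesting bothers you, a small pipe helper lets you write the stages left to right instead. This is just a sketch; the getstuff/sort/grep/take stand-ins below are made up so the example runs on its own:

    // pipe(value, f, g, h) === h(g(f(value)))
    function pipe(value) {
        var fns = Array.prototype.slice.call(arguments, 1);
        return fns.reduce(function (acc, fn) { return fn(acc); }, value);
    }

    // stand-ins, just to make the sketch runnable
    function getstuff() { return ["foobar", "baz", "foo"]; }
    function sort(xs) { return xs.slice().sort(); }
    function grep(str, xs) { return xs.filter(function (x) { return x.indexOf(str) !== -1; }); }
    function take(n, xs) { return xs.slice(0, n); }

    var result = pipe(
        getstuff(),
        sort,
        function (xs) { return grep("foo", xs); },
        function (xs) { return take(5, xs); }
    );

    console.log(result); // [ 'foo', 'foobar' ]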
You could do something similar by returning a special object that has all required methods on it, but can be used instead of the final value. For example, you could return an Array instance that has all these methods on it.
    var getstuff = function () {
        var obj = Array.apply(this, arguments);

        obj.take = function (n) {
            return this[n];
        };

        obj.grep = function (regexp) {
            return getstuff.apply(this, Array.prototype.filter.apply(this, [function (item) {
                return item.toString().search(regexp) !== -1;
            }]));
        };

        obj.splice = function () {
            return getstuff.apply(this, Array.prototype.splice.apply(this, arguments));
        };

        return obj;
    };
    // shows [-8, 1]
    console.log(getstuff(3, 1, 2, 'b', -8).sort().grep(/\d+/).splice(0, 2));

    // shows 3
    var stuff = getstuff(3, 1, 2, 'b', -8).grep(/\d+/);
    console.log(stuff.sort()[stuff.length - 1]);
Note that the above is not a particularly fast implementation, but it returns arrays with the special methods while still keeping the global Array's prototype clean, so it won't interfere with other code.
You could make it faster by defining these special methods on the Array.prototype, but you should be careful with that...
Or, if your browser supports subclassing Array, then all you need is a subclass and a handy constructor, getstuff().
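With ES2015 class syntax, that subclass might look something like this (a sketch; the method names mirror the ones used above):

    class Stuff extends Array {
        grep(regexp) {
            // filter() on a subclass returns another Stuff, so chaining keeps working
            return this.filter((item) => item.toString().search(regexp) !== -1);
        }
        take(n) {
            return this[n];
        }
    }

    const getstuff = (...args) => Stuff.from(args);

    console.log(getstuff(3, 1, 2, 'b', -8).grep(/\d+/).sort().take(3)); // 3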