I am stuck with the curry examples in "Professor Frisby's..." when using Sanctuary instead of Ramda. I get the error "‘curry2’ expected at most three arguments but received five arguments.", while with Ramda it works fine. I am sure I am doing something wrong, but I can't figure it out.
Following the book's example:
var match = curry2((what, str) => {return str.match(what);});
var hasSpaces = match(/\s+/g);
var filter = curry2((f, ary) => {return ary.filter(f);});
var f2 = filter(hasSpaces, ["tori_spelling", "tori amos"]);
I get
TypeError: Function applied to too many arguments
curry2 :: ((a, b) -> c) -> a -> b -> c
‘curry2’ expected at most three arguments but received five arguments.
Sanctuary is much stricter than Ramda. It ensures that functions are only ever applied to the correct number of arguments and that the arguments are of the expected types. S.add(42, true), for example, is a type error, whereas R.add(42, true) evaluates to 43.
The problem in your case is that Array#filter applies the given function to three arguments (element, index, array). hasSpaces, though, expects exactly one argument.
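You can verify this in plain JavaScript (a minimal sketch; nothing here is Sanctuary-specific):

```javascript
// Array#filter calls its callback with three arguments:
// (element, index, array) — not just the element.
const seen = [];
['a', 'b'].filter((...args) => {
  seen.push(args.length);
  return true;
});
console.log(seen); // [3, 3] — each call received three arguments
```

Sanctuary's curried `match(/\s+/g)` expects exactly one more argument, so being applied to three is a type error.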
The solution is to use S.filter rather than Array#filter:
const match = S.curry2((what, str) => str.match(what));
const hasSpaces = match(/\s+/g);
const f2 = S.filter(hasSpaces, ['tori_spelling', 'tori amos']);
Having made this change, another type error is revealed:
TypeError: Invalid value
filter :: (Applicative f, Foldable f, Monoid f) => (a -> Boolean) -> f a -> f a
^^^^^^^
1
1) null :: Null
The value at position 1 is not a member of ‘Boolean’.
See https://github.com/sanctuary-js/sanctuary-def/tree/v0.12.1#Boolean for information about the Boolean type.
S.filter expects a predicate as its first argument. In strict terms a predicate is a function which returns either true or false. String#match, though, returns either null or an array of matches.
The solution is to use S.test rather than String#match:
const hasSpaces = S.test(/\s+/);
const f2 = S.filter(hasSpaces, ['tori_spelling', 'tori amos']);
At this point the definition of hasSpaces is so clear that there's not much value in giving it a name. We can write the code as a single expression:
S.filter(S.test(/\s/), ['tori_spelling', 'tori amos'])
Note that the pattern can be simplified from /\s+/g to /\s/. The g flag has no effect when using S.test, and the + isn't necessary since we're interested in strings with spaces but we're not interested in counting the spaces.
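For comparison, the same one-liner can be approximated without Sanctuary (a sketch; note that, unlike `S.test`, the native `RegExp#test` is stateful when the `g` flag is present, which is one more reason to drop that flag):

```javascript
// Plain-JS approximation of S.filter(S.test(/\s/), ...):
const hasSpaces = s => /\s/.test(s);
const result = ['tori_spelling', 'tori amos'].filter(hasSpaces);
console.log(result); // ['tori amos']
```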
Does functional programming have a standard construct for this logic?
const passAround = (f) => (x) => {
f(x);
return x;
};
This enables me to compose functions that have side effects and no return values, like console.log. It's not like a Task because I don't want to represent the state of the side effect.
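A quick sketch of the kind of pipeline I mean (`compose` here is an assumed helper, composing right-to-left like `_.flowRight`):

```javascript
const passAround = (f) => (x) => {
  f(x);
  return x;
};
const compose = (f, g) => (x) => f(g(x));

const double = (x) => x * 2;

// Logs the intermediate value, then keeps the pipeline going:
const result = compose(double, passAround(console.log))(21);
console.log(result); // 42
```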
If you are talking about pure functional programming, then you need to challenge this starting point:
functions that have side effects and no return values
In functional programming, there is no such thing. Every function is defined as a transformation on some input into some output.
So the obvious question is, how would you represent console.log without a side effect? To answer, we need to challenge another assumption in your question:
I don't want to represent the state of the side effect
This is exactly how functional programming represents the problem: consider your input and output to be "the state of the world". In other words, given the state of the world before the function, return the state of the world after the function. In this case, you would be representing the state of the console: given a console with x lines of output, return a console with x+1 lines of output. Crudely, you could write something like this:
(x, console) => { return [x, console.withExtraLine(x)]; }
The more powerful mechanism generally used for representing this is called a "monad" - a special kind of object which wraps a series of steps along with some extra meaning. In the case of the IO monad, each step is wrapped with an action which will transform the state of the world. (I/O is just one of many useful applications of the monad concept.)
You write the steps as functions which only know about the "unwrapped" value of some part of that state (e.g. a parameter which ultimately came from user input), and the monad handles the messy details of actually executing that outside the realm of the functional program. So rather than thinking about your input and output as "the state of the world", you think about your input as "a chain of computations", and your output as "a slightly longer chain of computations".
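As a very rough sketch of that idea in JavaScript (the names here are made up for illustration; this is a toy, not a real IO monad):

```javascript
// A toy IO: wraps a thunk; nothing runs until run() is called.
const IO = effect => ({
  run: effect,
  map: f => IO(() => f(effect())),
  chain: f => IO(() => f(effect()).run()),
});

const log = [];                      // stands in for "the console"
const putLine = s => IO(() => { log.push(s); });

// Build a chain of computations; no side effect happens yet...
const program = putLine('hello').chain(() => putLine('world'));
console.log(log.length); // 0 — nothing has run

program.run();           // ...until we explicitly run it
console.log(log);        // ['hello', 'world']
```

The value `program` is exactly the "slightly longer chain of computations" described above: composing it is pure, and only `run()` touches the world.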
There are many introductions to this that are far better than any I could give, just search for "monad" or "functional programming io".
The SKI combinator calculus might interest you. Let's pretend that f is always a pure function:
const S = g => f => x => g(x)(f(x)); // S combinator of SKI combinator calculus
const K = x => y => x; // K combinator of SKI combinator calculus
const passAround = S(K); // Yes, the passAround function is just SK
console.log(passAround(console.log)(10) + 20);
Anyway, the reason why I bring up the SKI combinator calculus is because I want to introduce you to the concept of Applicative Functors. In particular, the Reader applicative functor is equivalent to the SKI combinator calculus. The S combinator is equivalent to the ap method of Reader and the K combinator is equivalent to the pure method of Reader.
In JavaScript, the equivalent of Reader is Function. Hence, we can define ap and pure for functions in JavaScript as follows:
Function.prototype.ap = function (f) {
return x => this(x)(f(x));
};
Function.pure = x => y => x;
const print = Function.pure.ap(console.log);
console.log(print(10) + 20);
But wait, there's so much more that you can do with applicative functors. Every applicative functor is also a functor. This means that applicative functors must also have a map method. For Reader the map method is just function composition. It's equivalent to the B combinator. Using map you can do really interesting things like:
Function.prototype.ap = function (f) {
return x => this(x)(f(x));
};
Function.pure = x => y => x;
const id = x => x; // I combinator of SKI combinator calculus
Function.prototype.map = function (f) {
return x => this(f(x));
};
Function.prototype.seq = function (g) {
return Function.pure(id).map(this).ap(g);
};
const result = console.log.seq(x => x + 20);
console.log(result(10));
The seq function is in fact equivalent to the (*>) method of the Applicative class. This enables a functional style of method cascading.
So in Haskell terminology, you want this:
passAround :: Monad m => (a -> m b) -> a -> m a
passAround f x = do
f x
return x
Read the type signature as “passAround takes a function f :: a -> m b, whose result is a monadic action (i.e., something that may have side-effects which can be sequenced in a well-defined order, thus the Monad m constraint) with arbitrary result-type b, and a value a to pass this function. It yields a monadic action with result-type a.”
To see what “functional programming construct” this might correspond to, let's first unroll this syntax. In Haskell, do sequencing notation is just syntactic sugar for monadic combinators, namely,
do
foo
bar
is sugar for foo >> bar. (This is a bit trivial really, the whole thing really only gets interesting when you also bind local results to variables.)
So,
passAround f x = f x >> return x
>> itself is shorthand for the general monadic-chaining operator, namely
passAround f x = f x >>= const (return x)
or
passAround f x = f x >>= \y -> return x
(That backslash denotes a lambda function; in JavaScript-style pseudocode it would read f(x) >>= (y) => { return x; }.)
Now, what you really want all this for is chaining multiple actions. In JavaScript you would write g(passAround(f)(x)); in Haskell this is not just a function argument, because it's still a monadic action, so you want another monadic chaining operator: g =<< passAround f x or
passAround f x >>= g
If we expand passAround here, we get
(f x >>= \y -> return x) >>= g
Now, here we can apply the monad laws, namely the associativity law, giving us
f x >>= (\y -> return x >>= g)
and now the left unit law
f x >>= (\y -> g x)
IOW, the whole composition collapses down to just f x >> g x, which could also be written
do
f x
g x
...which is kind of, duh. What of it all? Well, the nice thing is that we can abstract over this monad-rewrapping with a monad transformer; in Haskell, it's called ReaderT. If you know that f and g both use the variable x, you could exchange
f :: a -> m b
g :: a -> m c
with
f' :: ReaderT a m b
f' = ReaderT f
g' :: ReaderT a m c
g' = ReaderT g
The ReaderT value constructor corresponds conceptually to your passAround function.
Note that ReaderT a m c has the form (ReaderT a m) c or, ignoring the details, m' c, where m' is again a monad! And, using the do syntax for that monad, you can simply write
runReaderT (do
f'
g'
) x
which would in JavaScript look, theoretically, like
runReaderT (() => {
f';
g';
}, x)
Unfortunately you can't actually write it this way because, unlike Haskell, imperative languages always use the same monad for sequencing their operations (which roughly corresponds to Haskell's IO monad). Incidentally, that's one of the standard descriptions of what a monad is: an overloaded semicolon operator.
What you can certainly do however is implement a monad transformer on dynamic types in the functional part of the JavaScript language. I'm just not sure if it's worth the effort.
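For what it's worth, the plain Reader part (without the transformer layer) is only a few lines of JavaScript; all names here are made up for illustration:

```javascript
// A minimal Reader: a computation is a function env => value.
// "chain" threads the same environment through every step,
// which is exactly what passAround was doing by hand.
const Reader = run => ({
  run,
  chain: f => Reader(env => f(run(env)).run(env)),
});
const of = v => Reader(_ => v);

const f = x => x + 1;   // both steps read the same x...
const g = x => x * 10;  // ...without x being passed explicitly

// do { a <- f; b <- g; return [a, b] } over one shared environment:
const program = Reader(f).chain(a => Reader(g).chain(b => of([a, b])));

console.log(program.run(5)); // [6, 50]
```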
I want to do some function composition. I know already this:
If f3(x) shall be the same as f1(f2(x))
then f3 = _.flowRight(f1,f2);
If f3(x,y) shall be the same as f1(x, f2(y))
then …?
(The use case is the composition of node.js/express middleware functions.)
In the following, I use {_} as a placeholder for a value. Think of it as a hole in the code where we pass something in.
Ok let's imagine what your function would have to do...
Does this seem like a generic transformation? I.e., do you think we can use this in many places? Functional programming promotes building functions which are highly reusable and can be combined in various ways.
What is the difference between f1 and f2? f1 is a unary function which will only get one arg, f2 is a binary function which will get two. Are you going to remember which one goes in which place?
What governs the position that f1(x) gets placed in f2?
Compare f2(y,f1(x)) ...
to f2(f1(x),y)
is one of those more useful than the other?
are you going to remember which position f1 gets?
Recall that function composition should be able to chain as many functions together as you want. To help you see the futility of such a function, let's imagine it accepting up to 3 functions and 3 arguments.
Is there even a pattern here? Maybe, but you still have the awkward unary function f1 that only gets one arg, while f2 and f3 each get 2
Is it true that f2 and f3 will always need the value of the previous function call on the right side?
Compare f3(z,f2(y,f1(x)))
to f3(f2(y,f1(x)),z)
Maybe f3 needs to chain left, but f2 chains from the right?
I can't imagine your entire API of binary functions would magically need chained arguments in the same place
You've already mixed unary with binary functions in your composition; why arbitrarily limit it to just functions of those type then? What about a function of 3 or more arguments?
The answer reveals itself.
Function composition is being misused here. Function composition pretty much only works when you're composing unary functions exclusively (functions accepting one argument each). It immediately breaks down and cannot be generalised when mixing in functions of higher arity.
Going back to your code now, if f3 needs a name and it is the combination of f1, f2, and two parameters, it should be plainly expressed as …
const f3 = (x,y) => f1(x, f2(y))
Because it makes so many arbitrary choices, it cannot be generalized in any useful way. Just let it be as it is.
"So is there any way to compose functions of varying arity?"
Sure, there are a couple techniques of varied practicality. I'll demonstrate use of the highly practical partial function here
const partial = (f,...xs) => (...ys) => f(...xs, ...ys)
const add = (x,y) => x + y
const mult = (x,y) => x * y
const sq = x => mult (x,x)
// R.I.P. lodash.flowRight
const compose = ([f,...fs]) => x =>
f === undefined ? x : f (compose (fs) (x))
let f = compose([partial(add, 1), sq, partial(mult, 3)])
console.log(f(2))
// add(1, sq(mult(3, 2)))
// add(1, sq(6))
// add(1, 36)
// => 37
Oh, by the way, we replaced Lodash's flowRight (wrapper of the complex flow) with a single line of code.
It sounds like you have a very specific requirement that may not have a lodash equivalent.
Why not just write your own helper function for this?
function composeFuncs(f1, f2) {
return function(x, y) {
return f1.call(this, x, f2.call(this, y));
};
}
var myObj = {
add: function(val1, val2) {
return this.myVal + val1 + val2
},
mult: function(val) {
return this.myVal * val
},
myVal: 7
};
myObj.newFunc = composeFuncs(myObj.add, myObj.mult);
// 7 + 1 + 7 * 2 = 22
console.log(myObj.newFunc(1, 2));
Edit: Updated to handle this the same way _.flowRight does.
As a functional Javascript developer with only a vague understanding of Haskell I really have a hard time to understand Haskell idioms like monads. When I look at >>= of the function instance
(>>=) :: (r -> a) -> (a -> (r -> b)) -> r -> b
instance Monad ((->) r) where
f >>= k = \ r -> k (f r) r
and its application with Javascript
const bind = f => g => x => g(f(x)) (x);
const inc = x => x + 1;
const f = bind(inc) (x => x <= 5 ? x => x * 2 : x => x * 3);
f(2); // 4
f(5); // 15
the monadic function (a -> (r -> b)) (or (a -> m b)) provides a way to choose the next computation depending on the previous result. More generally, the monadic function along with its corresponding bind operator seems to give us the capability to define what function composition means in a specific computational context.
It is all the more surprising that the monadic function doesn't supply the result of the previous computation to the subsequent one. Instead, the original value is passed. I'd expect f(2)/f(5) to yield 6/18, similar to normal function composition. Is this behavior specific to functions as monads? What do I misunderstand?
I think your confusion arises from using functions that are too simple. In particular, you write
const inc = x => x + 1;
whose type is a function that returns values in the same space as its input. Let's say inc is dealing with integers. Because both its input and output are integers, if you have another function foo that takes integers, it is easy to imagine using the output of inc as an input to foo.
The real world includes more exciting functions, though. Consider the function tree_of_depth that takes an integer and creates a tree of strings of that depth. (I won't try to implement it, because I don't know enough javascript to do a convincing job of it.) Now all of a sudden it's harder to imagine passing the output of tree_of_depth as an input to foo, since foo is expecting integers and tree_of_depth is producing trees, right? The only thing we can pass on to foo is the input to tree_of_depth, because that's the only integer we have lying around, even after running tree_of_depth.
Let's see how that manifests in the Haskell type signature for bind:
(>>=) :: (r -> a) -> (a -> r -> b) -> (r -> b)
This says that (>>=) takes two arguments, each functions. The first function can be of any old type you like -- it can take a value of type r and produce a value of type a. In particular, you don't have to promise that r and a are the same at all. But once you pick its type, then the type of the next function argument to (>>=) is constrained: it has to be a function of two arguments whose types are the same r and a as before.
Now you can see why we have to pass the same value of type r to both of these functions: the first function produces an a, not an updated r, so we have no other value of type r to pass to the second function! Unlike your situation with inc, where the first function happened to also produce an r, we may be producing some other very different type.
This explains why bind has to be implemented the way it is, but maybe doesn't explain why this monad is a useful one. There is writing elsewhere on that. But the canonical use case is for configuration variables. Suppose at program start you parse a configuration file; then for the rest of the program, you want to be able to influence the behavior of various functions by looking at information from that configuration. In all cases it makes sense to use the same configuration information -- it doesn't need to change. Then this monad becomes useful: you can have an implicit configuration value, and the monad's bind operation makes sure that the two functions you're sequencing both have access to that information without having to manually pass it in to both functions.
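In JavaScript terms, the configuration use case looks something like this (a sketch; `bind` is the function-monad `>>=` from the question, and the config object and field names are made up):

```javascript
const bind = f => g => cfg => g(f(cfg))(cfg); // (>>=) for functions

// Both steps read the same implicit configuration:
const getGreeting = cfg => cfg.greeting;       // r -> a
const greetUser   = greeting => cfg =>         // a -> (r -> b)
  `${greeting}, ${cfg.user}!`;

const program = bind(getGreeting)(greetUser);

console.log(program({ greeting: 'Hello', user: 'Ada' })); // "Hello, Ada!"
```

Neither step takes the configuration as an explicit parameter at the call site; `bind` supplies the same `cfg` to both.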
P.S. You say
It is all the more surprising that the monadic function doesn't supply the result of the previous computation to the subsequent one.
which I find slightly imprecise: in fact in m >>= f, the function f gets both the result of m (as its first argument) and the original value (as its second argument).
More generally, the monadic function along with its corresponding bind
operator seems to give us the capability to define what function
composition means in a specific computational context.
I'm not sure what you mean by the "monadic function". Monads (which in Haskell consist of a bind function and a pure function) let you express how a series of monadic actions can be chained together ((<=<) is the monad equivalent of composition, equivalent to (.) for the Identity monad). In that sense, you do sort of get composition, but only composition of actions (functions of the form a -> m b).
(This is further abstracted in the Kleisli newtype around functions of the type a -> m b. Its category instance really lets you write the sequencing of monadic actions as composition.)
I'd expect f(2)/f(5) to yield 6/18, similar to normal function composition.
Then, you can just use normal function composition! Don't use a monad if you don't need one.
It is all the more surprising that the monadic function doesn't supply
the result of the previous computation to the subsequent one. Instead,
the original value is passed. ... Is this behavior specific to
functions as monads?
Yes, it is. The monad Monad ((->) r) is also known as the "reader monad" because it only reads from its environment. That said, as far as monads are concerned, you are still passing the monadic result of the previous action to the subsequent one - but those results are themselves functions!
As already mentioned by chi, this line
const f = bind(inc) (x => x <= 5 ? x => x * 2 : x => x * 3);
would be clearer as something like
const f = bind(inc) (x => x <= 5 ? y => y * 2 : y => y * 3);
The Monad instance for functions is basically the Reader monad. You have a value x => x + 1 that depends on an environment (it adds 1 to the environment).
You also have a function which, depending on its input, returns one value that depends on an environment (y => y * 2) or another value that depends on an environment (y => y * 3).
In your bind, you are only using the result of x => x + 1 to choose between these two functions. You are not returning the previous result directly. But you could, if you returned constant functions which ignore their environments and return a fixed value depending only on the previous result:
const f = bind(inc) (x => x <= 5 ? _ => x * 2 : _ => x * 3);
(With this version, f(2) yields 6 and f(5) yields 18, matching ordinary composition.)
I've read that the composition of g :: A -> B and f :: B -> C, pronounced (“f composed of g”), results in another function (arrow) from A -> C. This can be expressed more formally as
f • g = f(g) = compose :: (B -> C) -> (A -> B) -> (A -> C)
Can the above composition be also defined as below? Please clarify.
In this case the compose function takes the same two functions f and g and return a new function from A -> C.
f • g = f(g) = compose :: ((B -> C), (A -> B)) -> (A -> C)
First we need to get some things right:
f ○ g means something quite different from f(g).
The former is a function that, given an argument x, will first feed it to g, then pass on the result to f, and output that final result, i.e. f(g(x)).
OTOH, f(g) means you apply the function f to the value g right away, without waiting for any argument. (g just happens to have a function type, but in functional languages, functions can be passed around just like any other values / arguments).
Unless you're dealing with some pretty wacky polymorphic functions, one of these will be ill-typed. For example, a well-typed composition might be
sqrt ○ abs :: Double -> Double
whereas a well-typed application could be (at least in Haskell)
map(sqrt) :: [Double] -> [Double]
I'll assume in the following you're talking about f ○ g.
Type signatures must be given for a function itself, not for a function applied to some arguments. This is something that loads of people get utterly wrong: in f(x), you have a function f and an argument x. But f(x) is not a function, only the value that's the result of applying a function to a value! So, you shouldn't write something like f ○ g :: ... (unless you're actually talking only about the type that results from the composition). Better write just ○ :: ... (or, in Haskell, (○) :: ...).
Function arrows aren't associative. Most mathematicians likely won't even know what X -> Y -> Z is supposed to mean. What it means in languages like Haskell may actually be somewhat surprising:
X -> Y -> Z ≡ X -> (Y -> Z)
i.e. this is the type of a function that first takes only an argument of type X. The result will be again a function, but one that takes only an argument of type Y. This function will have, if you like, the X value already built-in (in a so-called closure, unless the compiler optimises that away). Giving it also the Y value will allow the function to actually do its job and finally yield the Z result.
At this point you already have your answer, pretty much: indeed the signatures X -> Y -> Z and (X, Y) -> Z are essentially equivalent. The process of rewriting this is called currying.
To answer your question in particular: most languages don't normally do any currying, so the signature ((B -> C), (A -> B)) -> (A -> C) is actually more correct. It corresponds to a function you can call as
compose(f,g)
OTOH, the curried signature (B -> C) -> (A -> B) -> (A -> C) means that you need to feed in the arguments one by one:
compose(f)(g)
Only in languages like Haskell is this the standard style, but you don't need the parens there: all the following are parsed the same in Haskell
compose(f)(g)
compose f g
(compose) f g
(.) f g
f . g
where . is in fact the composition operator, which as you can see from the documentation has type
(.) :: (b -> c) -> (a -> b) -> a -> c
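In JavaScript terms, the two shapes and the mechanical conversion between them can be sketched like this (the helper names `curry2` and `uncurry2` are my own):

```javascript
// Curried and uncurried compose, plus converters between the shapes.
const compose  = f => g => x => f(g(x));      // (B->C) -> (A->B) -> (A->C)
const compose2 = (f, g) => x => f(g(x));      // ((B->C), (A->B)) -> (A->C)

const curry2   = h => f => g => h(f, g);      // ((X,Y)->Z) -> X -> Y -> Z
const uncurry2 = h => (f, g) => h(f)(g);      // (X -> Y -> Z) -> ((X,Y)->Z)

const inc = x => x + 1;
const dbl = x => x * 2;

console.log(compose(inc)(dbl)(10));           // 21
console.log(compose2(inc, dbl)(10));          // 21
console.log(curry2(compose2)(inc)(dbl)(10));  // 21
console.log(uncurry2(compose)(inc, dbl)(10)); // 21
```

All four calls produce the same function of `x`; only the shape of the call site differs.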
Since you marked your question with Javascript here is an answer from a Javascript point of view.
Assuming I understand your signature properly, you want to adapt the composition function as follows: (f, g) => x => f(g(x));. Sure, that works, but you lose flexibility and gain uhm, nothing.
The original curry function is defined in curried form that means, it expects always a single argument. If every function in your whole code expects exactly one argument, then there is no more arity (well, in most cases). It is abstracted away. Currying facilitates function composition, because functions always return a single value. Curried functions are like building blocks. You can put them together in almost any way:
const comp = f => g => x => f(g(x)),
comp2 = comp(comp)(comp),
add = y => x => x + y,
inc = x => x + 1,
sqr = x => x * x;
console.log(comp(sqr)(inc)(2)); // 9
console.log(comp(add)(sqr)(2)(3)); // 7
console.log(comp2(sqr)(add)(2)(3)); // 25
As you can see only in the latter case we must consider the arity.
Currying can only develop its benefits if it is consistently applied for each function of your codebase, because it has a systemic effect.
First, an open circle is more commonly used: f ∘ g.
Second, it would more properly be pronounced "f composed with g". ("f composed of g" sounds like f is made up of g, rather than a new function made up of both.)
Finally, the two types are essentially the same, differing only in how you expect to pass functions to the compose function. The first defines the type of a fully curried function, such that compose takes one function as an argument, and returns a new function that takes the second function as an argument and returns the composed. This means with f :: B -> C and g :: A -> B, you can define either (using Haskell syntax)
compose :: (B -> C) -> (A -> B) -> (A -> C)
compose f g = \x -> f (g x)
or the uncurried version
compose' :: ((B -> C), (A -> B)) -> (A -> C)
compose' (f, g) = \x -> f (g x)
Either way, the return value is the same; the only difference is in how the arguments are passed. You could write h = compose f g or you could write h = compose' (f, g).