Compose function signature - javascript

I've read that the composition of g :: A -> B and f :: B -> C, pronounced (“f composed of g”), results in another function (arrow) from A -> C. This can be expressed more formally as
f • g = f(g) = compose :: (B -> C) -> (A -> B) -> (A -> C)
Can the above composition also be defined as below? Please clarify.
In this case the compose function takes the same two functions f and g and returns a new function from A -> C.
f • g = f(g) = compose :: ((B -> C), (A -> B)) -> (A -> C)

First we need to get some things right:
f ○ g means something quite different from f(g).
The former is a function that, given an argument x, will first feed it to g, then pass on the result to f, and output that final result, i.e. f(g(x)).
OTOH, f(g) means you apply the function f to the value g right away, without waiting for any argument. (g just happens to have a function type, but in functional languages, functions can be passed around just like any other values / arguments).
Unless you're dealing with some pretty wacky polymorphic functions, one of these will be ill-typed. For example, a well-typed composition might be
sqrt ○ abs :: Double -> Double
whereas a well-typed application could be (at least in Haskell)
map(sqrt) :: [Double] -> [Double]
I'll assume in the following you're talking about f ○ g.
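To make the distinction concrete in JavaScript (a rough sketch, with Math.abs and Math.sqrt standing in for the Haskell examples):

```javascript
// Composition: builds a new function that waits for an argument.
const compose = (f, g) => x => f(g(x));

const sqrtAbs = compose(Math.sqrt, Math.abs); // roughly Number -> Number
console.log(sqrtAbs(-16)); // 4

// Application: a function applied to another function as a value, right away.
// Here .map is applied to Math.sqrt immediately; no composition happens.
console.log([1, 4, 9].map(Math.sqrt)); // [1, 2, 3]
```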
Type signatures must be given for a function itself, not for a function applied to some arguments. This is something that loads of people get utterly wrong: in f(x), you have a function f and an argument x. But f(x) is not a function, only the value that's the result of applying a function to a value! So, you shouldn't write something like f ○ g :: ... (unless you're actually talking only about the type that results from the composition). Better to write just ○ :: ... (or, in Haskell, (○) :: ...).
Function arrows aren't associative. Most mathematicians likely won't even know what X -> Y -> Z is supposed to mean. What it means in languages like Haskell may actually be somewhat surprising:
X -> Y -> Z ≡ X -> (Y -> Z)
i.e. this is the type of a function that first takes only an argument of type X. The result will be again a function, but one that takes only an argument of type Y. This function will have, if you like, the X value already built-in (in a so-called closure, unless the compiler optimises that away). Giving it also the Y value will allow the function to actually do its job and finally yield the Z result.
At this point you already have your answer, pretty much: indeed the signatures X -> Y -> Z and (X, Y) -> Z are essentially equivalent. The process of rewriting this is called currying.
To answer your question in particular: most languages don't normally do any currying, so the signature ((B -> C), (A -> B)) -> (A -> C) is actually more correct. It corresponds to a function you can call as
compose(f,g)
OTOH, the curried signature (B -> C) -> (A -> B) -> (A -> C) means that you need to feed in the arguments one by one:
compose(f)(g)
Only in languages like Haskell is this the standard style, and there you don't need the parens: all of the following are parsed the same in Haskell
compose(f)(g)
compose f g
(compose) f g
(.) f g
f . g
where . is in fact the composition operator, which as you can see from the documentation has type
(.) :: (b -> c) -> (a -> b) -> a -> c
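Translating both signatures into JavaScript, a minimal sketch of the two calling conventions might look like this (compose and composeC are illustrative names, not standard functions):

```javascript
// Uncurried: both functions in one call -- ((B -> C), (A -> B)) -> (A -> C)
const compose = (f, g) => x => f(g(x));

// Curried: one argument at a time -- (B -> C) -> (A -> B) -> (A -> C)
const composeC = f => g => x => f(g(x));

const inc = x => x + 1;
const dbl = x => x * 2;

console.log(compose(dbl, inc)(3));  // 8
console.log(composeC(dbl)(inc)(3)); // 8 -- same result, different call style
```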

Since you marked your question with JavaScript, here is an answer from a JavaScript point of view.
Assuming I understand your signature properly, you want to adapt the composition function as follows: (f, g) => x => f(g(x));. Sure, that works, but you lose flexibility and gain uhm, nothing.
The original compose function is defined in curried form, which means it always expects a single argument. If every function in your whole codebase expects exactly one argument, then there is no more arity to worry about (well, in most cases): it is abstracted away. Currying facilitates function composition, because functions always return a single value. Curried functions are like building blocks; you can put them together in almost any way:
const comp = f => g => x => f(g(x)),
comp2 = comp(comp)(comp),
add = y => x => x + y,
inc = x => x + 1,
sqr = x => x * x;
console.log(comp(sqr)(inc)(2)); // 9
console.log(comp(add)(sqr)(2)(3)); // 7
console.log(comp2(sqr)(add)(2)(3)); // 25
As you can see, only in the last case do we have to consider the arity at all.
Currying can only develop its benefits if it is consistently applied for each function of your codebase, because it has a systemic effect.
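For completeness, converting between the two conventions can be sketched with a pair of helpers (binary-only and purely illustrative; the names curry/uncurry are conventional but this is not a library implementation):

```javascript
// Convert between the two calling conventions (two-argument case only).
const curry = f => x => y => f(x, y);
const uncurry = f => (x, y) => f(x)(y);

const add = (x, y) => x + y;
const addC = curry(add);

console.log(addC(2)(3));          // 5
console.log(uncurry(addC)(2, 3)); // 5
```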

First, an open circle is more commonly used: f ∘ g.
Second, it would more properly be pronounced "f composed with g". ("f composed of g" sounds like f is made up of g, rather than a new function made up of both.)
Finally, the two types are essentially the same, differing only in how you expect to pass functions to the compose function. The first defines the type of a fully curried function, such that compose takes one function as an argument, and returns a new function that takes the second function as an argument and returns the composed. This means with f :: B -> C and g :: A -> B, you can define either (using Haskell syntax)
compose :: (B -> C) -> (A -> B) -> (A -> C)
compose f g = \x -> f (g x)
or the uncurried version
compose' :: ((B -> C), (A -> B)) -> (A -> C)
compose' (f, g) = \x -> f (g x)
Either way, the return value is the same; the only difference is in how the arguments are passed. You could write h = compose f g or you could write h = compose' (f, g).

Related

fantasy-land confusion on ap method signature

In fantasy-land spec, the signature for ap method is defined as
fantasy-land/ap :: Apply f => f a ~> f (a -> b) -> f b
This translates as: the container f with value a has a method ap which takes as a parameter a container f whose value is a function (a -> b), and returns a container f with value b. I hope I am right in this interpretation.
However, if I test this with Folktale, I see different results:
const Maybe = require("data.maybe")
Maybe.of(5).ap(Maybe.of(x => x + 1)) // Uncaught TypeError: f is not a function
Maybe.of(x=>x+1).ap(Maybe.of(5)) // Maybe { value: 6 }
Maybe.of(x=>x+1).ap(Either.of(5)) // Either { value: 6 }
If I test this with Sanctuary, I see similar results (though Sanctuary does not have it as a "method")
const S = require("sanctuary")
let a = S.of(S.Maybe)(5)
let fn = S.of(S.Maybe)(x => x + 1)
S.ap(fn)(a) // Just (6)
S.ap(a)(fn) // Uncaught TypeError: Invalid value
This brings me to the conclusion that perhaps the fantasy-land specs for ap method could be:
fantasy-land/ap :: Apply f => f (a -> b) ~> f a -> f b
I am a newbie on FP and fantasy-land as well. I am happy to get corrected :)
Fantasyland specifies an interoperability layer, not a public API (although it used to be one), hence the fantasy-land/ prefix, which would otherwise not be user-friendly at all. As a result you can find different conventions in different libraries. Oftentimes libraries implement both ap and fantasy-land/ap, with the arguments flipped.
The specification for ap also changed at some point. Some articles still mention the old spec. As for implementations, they don't want to break their users and the old spec is arguably easier to use (you can chain ap calls).
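To see the two conventions side by side, here is a toy, Just-only Maybe (purely illustrative; not the Folktale or Sanctuary implementation) with both argument orders:

```javascript
// Minimal Just-only Maybe, enough to contrast the two ap conventions.
const Just = value => ({
  value,
  map: f => Just(f(value)),
  // Old-spec style: the container of values receives a container of functions.
  ap: mf => mf.map(f => f(value)),
  // New-spec style (f (a -> b) ~> f a -> f b): this container holds the function.
  apNew: ma => ma.map(value), // here `value` is the wrapped function
});

console.log(Just(5).ap(Just(x => x + 1)).value);    // 6
console.log(Just(x => x + 1).apNew(Just(5)).value); // 6
```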

How can R.head be of type 'chain a'

I am trying to understand buzzdecafe's Chain chain chain article
That article explains how one can append the first value in an array to the (end) of that array with R.chain, and why that works.
const f = chain(append, head); //=> f :: [x] -> [x]
f([1, 2, 3]); //=> [1, 2, 3, 1]
In the fifth-to-last paragraph he writes that
head is of type m a
As someone who has just started to experiment with functional programming, I don't get that.
I don't fully understand the substitution that goes on in the article from different types of chains from array to function and vice versa.
The type of R.chain is:
(a -> m b) -> m a -> m b
I understand that a chain can be a function. So R.append that has the type x → [x] → [x] can be rewritten as a -> m b and that fits the first part of the R.chain type. I assume that means that we've now defined (or whatever the word is) m b to be [x] -> [x] so that the last m b also must be replaced with [x] -> [x]?
In that case what we have would look like this:
(a -> [x] -> [x]) -> m a -> ([x] -> [x])
And since a and x will be of the same type (in this case number) we have:
(x -> [x] -> [x]) -> m x -> ([x] -> [x])
So the first part matches R.append. The end matches the type of the returned function. Great, I sort of understand it, I think...
But... that m x in between, how does that fit R.head? m x can be a function that returns something of type x? Okay? But what about the inputs to that function? How can I see, and understand, that [x] would be a valid input compatible with the type of R.chain and the rest of the formula manipulation we did?
Going from this:
chain :: (a -> (x -> b)) -> (x -> a) -> (x -> b)
As it seems you've already understood, here we are interpreting m b to be "a function that takes an x and returns a b". So it follows that m a would be "a function that takes an x and returns an a".
Comparing this side-by-side with the signature of append (I'll use y to avoid confusion between the different x es):
(a -> (x -> b))
(y -> [y] -> [y])
We can see that a is y, x is [y], and b is also [y]. So a function that takes an x and returns an a would have the signature [y] -> y, which is precisely the signature that head has.
So what we have at the end is:
      append                head
(a -> m b)          ->  m a         ->  m b
(a -> (x -> b))     ->  (x -> a)    ->  (x -> b)
(y -> [y] -> [y])   ->  ([y] -> y)  ->  ([y] -> [y])
Does that help clear it up?
One other way to look at this is that if f and g are both functions, then:
chain(f, g)(x) is equivalent to f(g(x), x)
Which is pretty much what we see in the Ramda source:
fn(monad(x))(x)
From this, we can see that functions f and g are chain-able when the following are both true:
g(x) has the same type as the first parameter of f
f's second parameter has the same type as g's first parameter
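Putting those two conditions together, a minimal sketch of this function Chain instance (not Ramda's actual source) reproduces the article's example:

```javascript
// chain for the function "monad": chain(fn, monad)(x) === fn(monad(x))(x)
const chain = (fn, monad) => x => fn(monad(x))(x);

const append = y => xs => xs.concat([y]); // y -> [y] -> [y]
const head = xs => xs[0];                 // [y] -> y

const f = chain(append, head);            // [y] -> [y]
console.log(f([1, 2, 3]));                // [1, 2, 3, 1]
```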

Functional programming construct for composing identity and side effect

Does functional programming have a standard construct for this logic?
const passAround = (f) => (x) => {
f(x);
return x;
};
This enables me to compose functions that have side effects and no return values, like console.log. It's not like a Task because I don't want to represent the state of the side effect.
If you are talking about pure functional programming, then you need to challenge this starting point:
functions that have side effects and no return values
In functional programming, there is no such thing. Every function is defined as a transformation on some input into some output.
So the obvious question is, how would you represent console.log without a side effect? To answer, we need to challenge another assumption in your question:
I don't want to represent the state of the side effect
This is exactly how functional programming represents the problem: consider your input and output to be "the state of the world". In other words, given the state of the world before the function, return the state of the world after the function. In this case, you would be representing the state of the console: given a console with x lines of output, return a console with x+1 lines of output. Crudely, you could write something like this:
(x, console) => { return [x, console.withExtraLine(x)]; }
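Making that crude idea runnable, here is a sketch where the "console state" is just an array of lines that gets threaded through by hand (logPure is a hypothetical name, not a real API):

```javascript
// Instead of printing, return the value together with the new console state.
const logPure = x => lines => [x, lines.concat([String(x)])];

// Thread the console state through two "effectful" steps manually.
const [a, s1] = logPure(42)([]);
const [b, s2] = logPure(a + 1)(s1);

console.log(s2); // ["42", "43"] -- the accumulated "output"
console.log(b);  // 43
```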
The more powerful mechanism generally used for representing this is called a "monad" - a special kind of object which wraps a series of steps along with some extra meaning. In the case of the IO monad, each step is wrapped with an action which will transform the state of the world. (I/O is just one of many useful applications of the monad concept.)
You write the steps as functions which only know about the "unwrapped" value of some part of that state (e.g. a parameter which ultimately came from user input), and the monad handles the messy details of actually executing that outside the realm of the functional program. So rather than thinking about your input and output as "the state of the world", you think about your input as "a chain of computations", and your output as "a slightly longer chain of computations".
There are many introductions to this that are far better than any I could give, just search for "monad" or "functional programming io".
See also, this answer, this question, and probably many others in the "Related" sidebar auto-generated when you view this question.
The SKI combinator calculus might interest you. Let's pretend that f is always a pure function:
const S = g => f => x => g(x)(f(x)); // S combinator of SKI combinator calculus
const K = x => y => x; // K combinator of SKI combinator calculus
const passAround = S(K); // Yes, the passAround function is just SK
console.log(passAround(console.log)(10) + 20);
Anyway, the reason why I bring up the SKI combinator calculus is because I want to introduce you to the concept of Applicative Functors. In particular, the Reader applicative functor is equivalent to the SKI combinator calculus. The S combinator is equivalent to the ap method of Reader and the K combinator is equivalent to the pure method of Reader.
In JavaScript, the equivalent of Reader is Function. Hence, we can define ap and pure for functions in JavaScript as follows:
Function.prototype.ap = function (f) {
return x => this(x)(f(x));
};
Function.pure = x => y => x;
const print = Function.pure.ap(console.log);
console.log(print(10) + 20);
But wait, there's so much more that you can do with applicative functors. Every applicative functor is also a functor. This means that applicative functors must also have a map method. For Reader the map method is just function composition. It's equivalent to the B combinator. Using map you can do really interesting things like:
Function.prototype.ap = function (f) {
return x => this(x)(f(x));
};
Function.pure = x => y => x;
const id = x => x; // I combinator of SKI combinator calculus
Function.prototype.map = function (f) {
return x => this(f(x));
};
Function.prototype.seq = function (g) {
return Function.pure(id).map(this).ap(g);
};
const result = console.log.seq(x => x + 20);
console.log(result(10));
The seq function is in fact equivalent to the (*>) method of the Applicative class. This enables a functional style of method cascading.
So in Haskell terminology, you want this:
passAround :: Monad m => (a -> m b) -> a -> m a
passAround f x = do
f x
return x
Read the type signature as “passAround takes a function f :: a -> m b, whose result is a monadic action (i.e., something that may have side-effects which can be sequenced in a well-defined order, thus the Monad m constraint) with arbitrary result-type b, and a value a to pass this function. It yields a monadic action with result-type a.”
To see what “functional programming construct” this might correspond to, let's first unroll this syntax. In Haskell, do sequencing notation is just syntactic sugar for monadic combinators, namely,
do
foo
bar
is sugar for foo >> bar. (This is a bit trivial really, the whole thing really only gets interesting when you also bind local results to variables.)
So,
passAround f x = f x >> return x
>> itself is shorthand for the general monadic-chaining operator, namely
passAround f x = f x >>= const (return x)
or
passAround f x = f x >>= \y -> return x
(That backslash denotes a lambda function; in JavaScript it would read something like f(x) >>= ((y) => return x).)
Now, what you really want all this for is, chaining multiple actions. In Javascript you would write g(passAround(f, x)), in Haskell this is not just a function argument because it's still a monadic action, so you want another monadic chaining operator: g =<< passAround f x or
passAround f x >>= g
If we expand passAround here, we get
(f x >>= \y -> return x) >>= g
Now, here we can apply the monad laws, namely the associativity law, giving us
f x >>= (\y -> return x >>= g)
and now the left unit law
f x >>= (\y -> g x)
IOW, the whole composition collapses down to just f x >> g x, which could also be written
do
f x
g x
...which is kind of, duh. What of it all? Well, the nice thing is that we can abstract over this monad-rewrapping with a monad transformer. In Haskell, it's called ReaderT. If you know that f and g both use the variable x, you could exchange
f :: a -> m b
g :: a -> m c
with
f' :: ReaderT a m b
f' = ReaderT f
g' :: ReaderT a m c
g' = ReaderT g
The ReaderT value constructor corresponds conceptually to your passAround function.
Note that ReaderT a m c has the form (ReaderT a m) c or, ignoring the details, m' c, where m' is again a monad! And, using the do syntax for that monad, you can simply write
runReaderT (do
f'
g'
) x
which would in JavaScript look, theoretically, like
runReaderT (() => {
f';
g';
}, x)
Unfortunately you can't actually write it this way, because unlike Haskell, imperative languages always use the same monad for sequencing their operations (which roughly corresponds to Haskell's IO monad). Incidentally, that's one of the standard descriptions of what a monad is: an overloaded semicolon operator.
What you can certainly do however is implement a monad transformer on dynamic types in the functional part of the JavaScript language. I'm just not sure if it's worth the effort.
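As a rough everyday analogy, if we let Promise play the role of the monad (an analogy only; Promise is not a law-abiding monad), the passAround collapse discussed above can be sketched like this:

```javascript
// Promise version of passAround: run f for its effect, keep the original x.
const passAround = f => x => f(x).then(() => x);

const logAsync = x => Promise.resolve(console.log("side effect:", x));
const g = x => Promise.resolve(x * 10);

// passAround(logAsync)(5).then(g) collapses to: logAsync(5), then g(5).
passAround(logAsync)(5)
  .then(g)
  .then(result => console.log(result)); // 50
```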

how do you read the ramda docs?

I'm having trouble understanding the signature of Ramda docs. For example if you look at map you see this
Functor f => (a → b) → f a → f b
I don't see how this pattern fits the example:
var double = x => x * 2;
R.map(double, [1, 2, 3]); //=> [2, 4, 6]
The functor in this example is [1,2,3], so how does that get placed into the signature of f in Functor f => (a → b) → f a → f b? Also, what does the → mean?
I'll give a brief answer here, but a more complete one is spread across two answers to a similar question, which in turn was taken from the Ramda wiki page. (Disclaimer: I'm the author of that page and one of the principals in Ramda itself.)
This is broken into two parts:
Functor f => (a → b) → f a → f b
Before the fat arrow (=>) we have constraints on the remainder. The single constraint in this example is that the variable f must be a Functor. A Functor is a type whose members have a map method which obeys certain laws. And the declaration is parameterized over another type, so we don't write just f but f String, f Number, or more generically, f a for some unknown type a.
The skinny arrow (->) is an abbreviation for the type Function. So instead of writing
Function x y
we can instead write
x -> y
or, when needed to avoid ambiguity,
(x -> y)
Putting these together, we can note that in R.map(double, [1, 2, 3]), we have a function (double) from Number to Number, which means that our a and b are both Number. And our functor is Array. So, specializing the definitions with these types, we have map accepting a function from Number to Number, and returning a function that takes an array of Numbers and returns a new array of Numbers. (That's because in this system, -> binds to the right, so (a -> b -> c) is equivalent to (a -> (b -> c)).) In Ramda, all functions are curried in such a way that you can call them with any initial set of parameters, and until all the terms have been supplied, you continue to get back functions. Thus with Ramda functions there is no real difference between R.map(double)([1, 2, 3]) and R.map(double, [1, 2, 3]).
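As a sketch (not Ramda's actual source), a hand-rolled curried map makes the specialization above concrete:

```javascript
// A curried map: supply the function now, the array now or later.
const map = f => xs => xs.map(f);

const double = x => x * 2;

console.log(map(double)([1, 2, 3])); // [2, 4, 6]

const doubleAll = map(double);       // partially applied: f a -> f b
console.log(doubleAll([4, 5]));      // [8, 10]
```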

Why does bind of the function instance supply the original value to the next computation?

As a functional JavaScript developer with only a vague understanding of Haskell, I really have a hard time understanding Haskell idioms like monads. When I look at >>= of the function instance
(>>=) :: (r -> a) -> (a -> (r -> b)) -> r -> b
instance Monad ((->) r) where
f >>= k = \ r -> k (f r) r
and its application with Javascript
const bind = f => g => x => g(f(x)) (x);
const inc = x => x + 1;
const f = bind(inc) (x => x <= 5 ? x => x * 2 : x => x * 3);
f(2); // 4
f(5); // 15
the monadic function (a -> (r -> b)) (or (a -> m b)) provides a way to choose the next computation depending on the previous result. More generally, the monadic function along with its corresponding bind operator seems to give us the capability to define what function composition means in a specific computational context.
It is all the more surprising that the monadic function doesn't supply the result of the previous computation to the subsequent one. Instead, the original value is passed. I'd expect f(2)/f(5) to yield 6/18, similar to normal function composition. Is this behavior specific to functions as monads? What do I misunderstand?
I think your confusion arises from using functions that are too simple. In particular, you write
const inc = x => x + 1;
whose type is a function that returns values in the same space as its input. Let's say inc is dealing with integers. Because both its input and output are integers, if you have another function foo that takes integers, it is easy to imagine using the output of inc as an input to foo.
The real world includes more exciting functions, though. Consider the function tree_of_depth that takes an integer and creates a tree of strings of that depth. (I won't try to implement it, because I don't know enough javascript to do a convincing job of it.) Now all of a sudden it's harder to imagine passing the output of tree_of_depth as an input to foo, since foo is expecting integers and tree_of_depth is producing trees, right? The only thing we can pass on to foo is the input to tree_of_depth, because that's the only integer we have lying around, even after running tree_of_depth.
Let's see how that manifests in the Haskell type signature for bind:
(>>=) :: (r -> a) -> (a -> r -> b) -> (r -> b)
This says that (>>=) takes two arguments, each functions. The first function can be of any old type you like -- it can take a value of type r and produce a value of type a. In particular, you don't have to promise that r and a are the same at all. But once you pick its type, then the type of the next function argument to (>>=) is constrained: it has to be a function of two arguments whose types are the same r and a as before.
Now you can see why we have to pass the same value of type r to both of these functions: the first function produces an a, not an updated r, so we have no other value of type r to pass to the second function! Unlike your situation with inc, where the first function happened to also produce an r, we may be producing some other very different type.
This explains why bind has to be implemented the way it is, but maybe doesn't explain why this monad is a useful one. There is writing elsewhere on that. But the canonical use case is for configuration variables. Suppose at program start you parse a configuration file; then for the rest of the program, you want to be able to influence the behavior of various functions by looking at information from that configuration. In all cases it makes sense to use the same configuration information -- it doesn't need to change. Then this monad becomes useful: you can have an implicit configuration value, and the monad's bind operation makes sure that the two functions you're sequencing both have access to that information without having to manually pass it in to both functions.
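A sketch of that configuration use case, using the same bind as in the question (the config object shape and helper names are made up for illustration):

```javascript
const bind = f => g => cfg => g(f(cfg))(cfg);

// Both steps read the same implicit configuration object.
const getGreeting = cfg => (cfg.formal ? "Dear" : "Hi");
const greetUser = greeting => cfg => `${greeting} ${cfg.userName}`;

// bind wires the config through both functions without passing it by hand.
const greet = bind(getGreeting)(greetUser);

console.log(greet({ formal: true, userName: "Ada" }));  // "Dear Ada"
console.log(greet({ formal: false, userName: "Ada" })); // "Hi Ada"
```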
P.S. You say
It is all the more surprising that the monadic function doesn't supply the result of the previous computation to the subsequent one.
which I find slightly imprecise: in fact in m >>= f, the function f gets both the result of m (as its first argument) and the original value (as its second argument).
More generally, the monadic function along with its corresponding bind
operator seems to give us the capability to define what function
composition means in a specific computational context.
I'm not sure what you mean by the "monadic function". Monads (which in Haskell consist of a bind function and a pure function) let you express how a series of monadic actions can be chained together ((<=<) is the monad equivalent of composition, equivalent to (.) for the Identity monad). In that sense, you do sort of get composition, but only composition of actions (functions of the form a -> m b).
(This is further abstracted in the Kleisli newtype around functions of the type a -> m b. Its category instance really lets you write the sequencing of monadic actions as composition.)
I'd expect f(2)/f(5) to yield 6/18, similar to normal function composition.
Then, you can just use normal function composition! Don't use a monad if you don't need one.
It is all the more surprising that the monadic function doesn't supply
the result of the previous computation to the subsequent one. Instead,
the original value is passed. ... Is this behavior specific to
functions as monads?
Yes, it is. The monad Monad ((->) r) is also known as the "reader monad" because it only reads from its environment. That said, as far as monads are concerned, you are still passing the monadic result of the previous action to the subsequent one - but those results are themselves functions!
As already mentioned by chi, this line
const f = bind(inc) (x => x <= 5 ? x => x * 2 : x => x * 3);
would be clearer as something like
const f = bind(inc) (x => x <= 5 ? y => y * 2 : y => y * 3);
The Monad instance for functions is basically the Reader monad. You have a value x => x + 1 that depends on an environment (it adds 1 to the environment).
You also have a function which, depending on its input, returns one value that depends on an environment (y => y * 2) or another value that depends on an environment (y => y * 3).
In your bind, you are only using the result of x => x + 1 to choose between these two functions. You are not returning the previous result directly. But you could, if you returned constant functions which ignored their environments and returned a fixed value depending only on the previous result:
const f = bind(inc) (x => x <= 5 ? _ => x * 2 : _ => x * 3);
(not sure about the syntax)
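For the record, that syntax works; a quick sketch with the same bind as in the question confirms it yields the 6 and 18 the asker expected:

```javascript
const bind = f => g => x => g(f(x))(x);
const inc = x => x + 1;

// The inner functions ignore the environment (_) and use only inc's result.
const f = bind(inc)(x => (x <= 5 ? _ => x * 2 : _ => x * 3));

console.log(f(2)); // 6  (inc(2) = 3, 3 <= 5, so 3 * 2)
console.log(f(5)); // 18 (inc(5) = 6, 6 > 5,  so 6 * 3)
```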
