how do you read the ramda docs? - javascript

I'm having trouble understanding the type signatures in the Ramda docs. For example, if you look at map you see this
Functor f => (a → b) → f a → f b
I don't see how this pattern fits the example:
var double = x => x * 2;
R.map(double, [1, 2, 3]); //=> [2, 4, 6]
The functor in this example is [1,2,3], so how does that get placed into the signature of f in Functor f => (a → b) → f a → f b? Also, what do the → mean?

I'll give a brief answer here, but a more complete one is spread across two answers to a similar question, which in turn was taken from the Ramda wiki page. (Disclaimer: I'm the author of that page and one of the principals in Ramda itself.)
This is broken into two parts:
Functor f => (a → b) → f a → f b
Before the fat arrow (=>) we have constraints on the remainder. The single constraint in this example is that the variable f must be a Functor. A Functor is a type whose members have a map method which obeys certain laws. And the declaration is parameterized over another type, so we don't write just f but f String, f Number, or more generically, f a for some unknown type a.
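To make the Functor constraint concrete, here is a minimal sketch of such a type. The Box name and shape are my own illustration, not a Ramda type: it's just a container of one value whose map method obeys the functor laws.

```javascript
// Box is a hypothetical minimal Functor: a container of one value whose
// map applies a function to the contents and re-wraps the result.
const Box = value => ({
  value,
  map: f => Box(f(value)),
});

// map's signature (a -> b) -> f a -> f b, specialized to f = Box, a = b = Number:
const doubled = Box(3).map(x => x * 2);
console.log(doubled.value); // 6
```

Because Box has a lawful map, it counts as "f" in signatures like Functor f => (a → b) → f a → f b, just as Array does.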
The skinny arrow (->) is an abbreviation for the type Function. So instead of writing
Function x y
we can instead write
x -> y
or, when needed to avoid ambiguity,
(x -> y)
Putting these together, we can note that in R.map(double, [1, 2, 3]), we have a function (double) from Number to Number, which means that our a and b are both Number. And our functor is Array. So, specializing the definitions with these types, map accepts a function from Number to Number, and returns a function that takes an array of Numbers and returns a new array of Numbers. (That's because in this system -> associates to the right, so a -> b -> c is equivalent to a -> (b -> c).) In Ramda, all functions are curried in such a way that you can call them with any initial set of parameters, and until all the terms have been supplied, you continue to get back functions. Thus with Ramda functions there is no real difference between R.map(double)([1, 2, 3]) and R.map(double, [1, 2, 3]).
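To see that currying concretely, here is a hand-rolled sketch of a curried array map. It's a stand-in for R.map, not Ramda's actual implementation, and unlike Ramda it only supports the one-argument-at-a-time call style:

```javascript
// (a -> b) -> [a] -> [b], curried by hand
const map = fn => xs => xs.map(fn);

const double = x => x * 2;

// Supplying only the function returns a new function awaiting the array:
const doubleAll = map(double);     // [Number] -> [Number]
console.log(doubleAll([1, 2, 3])); // [2, 4, 6]
```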

Related

How can R.head be of type 'chain a'

I am trying to understand buzzdecafe's Chain chain chain article
That article explains how one can append the first value in an array to the (end) of that array with R.chain, and why that works.
const f = chain(append, head); //=> f :: [x] -> [x]
f([1, 2, 3]); //=> [1, 2, 3, 1]
In the fifth-to-last paragraph he writes that
head is of type m a
As someone who has just started to experiment with functional programming, I don't get that.
I don't fully understand the substitution that goes on in the article from different types of chains from array to function and vice versa.
The type of R.chain is:
(a -> m b) -> m a -> m b
I understand that a chain can be a function. So R.append, which has the type x → [x] → [x], can be rewritten as a -> m b, and that fits the first part of the R.chain type. I assume that means that we've now defined (or whatever the word is) m b to be [x] -> [x], so the last m b also must be replaced with [x] -> [x]?
In that case what we have would look like this:
(a -> [x] -> [x]) -> m a -> ([x] -> [x])
And since a and x will be of the same type (in this case number) we have:
(x -> [x] -> [x]) -> m x -> ([x] -> [x])
So the first part matches R.append. The end matches the type of the returned function. Great, I sort of understand it, I think...
But... that m x in between, how does that fit R.head? m x can be a function that returns something of type x? Okay? But what about the inputs to that function? How can I see, and understand, that [x] would be a valid input compatible with the type of R.chain and the rest of the formula manipulation we did?
Going from this:
chain :: (a -> (x -> b)) -> (x -> a) -> (x -> b)
As it seems you've already understood, here we are interpreting m b to be "a function that takes an x and returns a b". So it follows that m a would be "a function that takes an x and returns an a".
Comparing this side-by-side with the signature of append (I'll use y to avoid confusion between the different xs):
(a -> (x -> b))
(y -> [y] -> [y])
We can see that a is y, x is [y], and b is also [y]. So a function that takes an x and returns an a would have the signature [y] -> y, which is precisely the signature that head has.
So what we have at the end is:
       append                     head
    (a -> m b)       ->          m a       ->         m b
(a -> (x -> b))      ->       (x -> a)     ->      (x -> b)
(y -> [y] -> [y])    ->      ([y] -> y)    ->    ([y] -> [y])
Does that help clear it up?
One other way to look at this is that if f and g are both functions, then:
chain(f, g)(x) is equivalent to f(g(x), x)
Which is pretty much what we see in the Ramda source:
fn(monad(x))(x)
From this, we can see that functions f and g are chain-able when the following are both true:
g(x) has the same type as the first parameter of f
f's second parameter has the same type as g's first parameter
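Those two conditions can be checked directly with a plain-JavaScript sketch of chain specialized to the function monad (simplified from Ramda's implementation; append and head here are hand-rolled stand-ins for R.append and R.head):

```javascript
// chain for functions: (a -> (x -> b)) -> (x -> a) -> (x -> b)
// i.e. chain(f, g)(x) === f(g(x))(x)
const chain = (f, g) => x => f(g(x))(x);

const head = xs => xs[0];                 // [y] -> y
const append = y => xs => xs.concat([y]); // y -> [y] -> [y] (curried)

const f = chain(append, head); // [y] -> [y]
console.log(f([1, 2, 3]));     // [1, 2, 3, 1]
```

Tracing the call: head([1, 2, 3]) is 1, so append(1) is a function awaiting an array, which then receives the same [1, 2, 3] again, yielding [1, 2, 3, 1].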

Functional programming construct for composing identity and side effect

Does functional programming have a standard construct for this logic?
const passAround = (f) => (x) => {
  f(x);
  return x;
};
This enables me to compose functions that have side effects and no return values, like console.log. It's not like a Task because I don't want to represent the state of the side effect.
If you are talking about pure functional programming, then you need to challenge this starting point:
functions that have side effects and no return values
In functional programming, there is no such thing. Every function is defined as a transformation on some input into some output.
So the obvious question is, how would you represent console.log without a side effect? To answer, we need to challenge another assumption in your question:
I don't want to represent the state of the side effect
This is exactly how functional programming represents the problem: consider your input and output to be "the state of the world". In other words, given the state of the world before the function, return the state of the world after the function. In this case, you would be representing the state of the console: given a console with x lines of output, return a console with x+1 lines of output. Crudely, you could write something like this:
(x, console) => { return [x, console.withExtraLine(x)]; }
The more powerful mechanism generally used for representing this is called a "monad" - a special kind of object which wraps a series of steps along with some extra meaning. In the case of the IO monad, each step is wrapped with an action which will transform the state of the world. (I/O is just one of many useful applications of the monad concept.)
You write the steps as functions which only know about the "unwrapped" value of some part of that state (e.g. a parameter which ultimately came from user input), and the monad handles the messy details of actually executing that outside the realm of the functional program. So rather than thinking about your input and output as "the state of the world", you think about your input as "a chain of computations", and your output as "a slightly longer chain of computations".
There are many introductions to this that are far better than any I could give, just search for "monad" or "functional programming io".
See also, this answer, this question, and probably many others in the "Related" sidebar auto-generated when you view this question.
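For reference, the pragmatic (impure) version of the construct from the question is often called tap (Ramda ships it as R.tap). Here is a hand-rolled sketch, with a hand-rolled pipe standing in for a library compose, showing the pass-through behaviour in a pipeline:

```javascript
// passAround / tap: run f for its side effect, return x unchanged
const passAround = f => x => { f(x); return x; };

// minimal left-to-right composition
const pipe = (...fns) => x => fns.reduce((v, fn) => fn(v), x);

const seen = [];
const result = pipe(
  passAround(x => seen.push(x)), // side effect; x flows through unchanged
  x => x + 1,
  passAround(x => seen.push(x))
)(10);
console.log(result); // 11
console.log(seen);   // [10, 11]
```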
The SKI combinator calculus might interest you. Let's pretend that f is always a pure function:
const S = g => f => x => g(x)(f(x)); // S combinator of SKI combinator calculus
const K = x => y => x; // K combinator of SKI combinator calculus
const passAround = S(K); // Yes, the passAround function is just SK
console.log(passAround(console.log)(10) + 20);
Anyway, the reason why I bring up the SKI combinator calculus is because I want to introduce you to the concept of Applicative Functors. In particular, the Reader applicative functor is equivalent to the SKI combinator calculus. The S combinator is equivalent to the ap method of Reader and the K combinator is equivalent to the pure method of Reader.
In JavaScript, the equivalent of Reader is Function. Hence, we can define ap and pure for functions in JavaScript as follows:
Function.prototype.ap = function (f) {
  return x => this(x)(f(x));
};
Function.pure = x => y => x;
const print = Function.pure.ap(console.log);
console.log(print(10) + 20);
But wait, there's so much more that you can do with applicative functors. Every applicative functor is also a functor. This means that applicative functors must also have a map method. For Reader the map method is just function composition. It's equivalent to the B combinator. Using map you can do really interesting things like:
Function.prototype.ap = function (f) {
  return x => this(x)(f(x));
};
Function.pure = x => y => x;
const id = x => x; // I combinator of SKI combinator calculus
Function.prototype.map = function (f) {
  return x => this(f(x));
};
Function.prototype.seq = function (g) {
  return Function.pure(id).map(this).ap(g);
};
const result = console.log.seq(x => x + 20);
console.log(result(10));
The seq function is in fact equivalent to the (*>) method of the Applicative class. This enables a functional style of method cascading.
So in Haskell terminology, you want this:
passAround :: Monad m => (a -> m b) -> a -> m a
passAround f x = do
  f x
  return x
Read the type signature as “passAround takes a function f :: a -> m b, whose result is a monadic action (i.e., something that may have side-effects which can be sequenced in a well-defined order, thus the Monad m constraint) with arbitrary result-type b, and a value of type a to pass to this function. It yields a monadic action with result-type a.”
To see what “functional programming construct” this might correspond to, let's first unroll this syntax. In Haskell, do sequencing notation is just syntactic sugar for monadic combinators, namely,
do
  foo
  bar
is sugar for foo >> bar. (This is a bit trivial really, the whole thing really only gets interesting when you also bind local results to variables.)
So,
passAround f x = f x >> return x
>> itself is shorthand for the general monadic-chaining operator, namely
passAround f x = f x >>= const (return x)
or
passAround f x = f x >>= \y -> return x
(That backslash denotes a lambda function; in JavaScript it would read f(x) >>= ((y) => x).)
Now, what you really want all this for is, chaining multiple actions. In Javascript you would write g(passAround(f, x)), in Haskell this is not just a function argument because it's still a monadic action, so you want another monadic chaining operator: g =<< passAround f x or
passAround f x >>= g
If we expand passAround here, we get
(f x >>= \y -> return x) >>= g
Now, here we can apply the monad laws, namely the associativity law, giving us
f x >>= (\y -> return x >>= g)
and now the left unit law
f x >>= (\y -> g x)
IOW, the whole composition collapses down to just f x >> g x, which could also be written
do
  f x
  g x
...which is kind of, duh. What of it all? Well, the nice thing is that we can abstract over this monad-rewrapping with a monad transformer. In Haskell, it's called ReaderT. If you know that f and g both use the variable x, you can exchange
f :: a -> m b
g :: a -> m c
with
f' :: ReaderT a m b
f' = ReaderT f
g' :: ReaderT a m c
g' = ReaderT g
The ReaderT value constructor corresponds conceptually to your passAround function.
Note that ReaderT a m c has the form (ReaderT a m) c or, ignoring the details, m' c, where m' is again a monad! And, using the do syntax for that monad, you can simply write
runReaderT (do
    f'
    g'
  ) x
which would in JavaScript look, theoretically, like
runReaderT (() => {
  f';
  g';
}, x)
Unfortunately you can't actually write it this way because, unlike Haskell, imperative languages always use the same monad for sequencing their operations (which roughly corresponds to Haskell's IO monad). Incidentally, that's one of the standard descriptions of what a monad is: it's an overloaded semicolon operator.
What you can certainly do however is implement a monad transformer on dynamic types in the functional part of the JavaScript language. I'm just not sure if it's worth the effort.

ES6 - Attempting to console.log(arguments.length) [duplicate]

(() => console.log(arguments))(1,2,3);
// Chrome, FF, Node give "1,2,3"
// Babel gives "arguments is not defined" from parent scope
According to Babel (and from what I can tell initial TC39 recommendations), that is "invalid" as arrow functions should be using their parent scope for arguments. The only info I've been able to find that contradicts this is a single comment saying this was rejected by TC39, but I can't find anything to back this up.
Just looking for official docs here.
Chrome, FF, and Node seem to be wrong here; Babel is correct:
Arrow functions do not have an own arguments binding in their scope; no arguments object is created when calling them.
looking for official docs here
Arrow function expressions evaluate to functions that have their [[ThisMode]] set to lexical, and when such a function is called the declaration instantiation does not create an arguments object. There is even a specific note (18 a) stating that "Arrow functions never have an arguments objects.".
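A quick sketch of that lexical behaviour: inside an arrow, arguments resolves to the arguments object of the nearest enclosing non-arrow function.

```javascript
function outer() {
  // The arrow has no own `arguments`; this lookup reaches outer's arguments.
  const arrow = () => arguments[0];
  return arrow("ignored"); // the arrow's own call arguments are invisible to it
}
console.log(outer("from outer")); // "from outer"
```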
As noted by Bergi, arrow functions do not have their own arguments variable.
However, if you do want to capture the args for your arrow function, you can simply use a rest parameter
const myFunc = (...args) =>
  console.log ("arguments", args)
myFunc (1, 2, 3)
// arguments [1, 2, 3]
Rest parameters can be combined with other positional parameters, but must always be included as the last parameter
const myFunc = (a, b, c, ...rest) =>
  console.log (a, b, c, rest)
myFunc (1, 2, 3, 4, 5, 6, 7)
// 1 2 3 [ 4, 5, 6, 7 ]
If you make the mistake of writing a rest parameter in any other position, you will get an Error
const myFunc = (...rest, a, b, c) =>
  console.log (a, b, c, rest)
myFunc (1, 2, 3, 4, 5, 6, 7)
// Error: Rest parameter must be last formal parameter

Compose function signature

I've read that the composition of g :: A -> B and f :: B -> C, pronounced (“f composed of g”), results in another function (arrow) from A -> C. This can be expressed more formally as
f • g = f(g) = compose :: (B -> C) -> (A -> B) -> (A -> C)
Can the above composition be also defined as below? Please clarify.
In this case the compose function takes the same two functions f and g and return a new function from A -> C.
f • g = f(g) = compose :: ((B -> C), (A -> B)) -> (A -> C)
First we need to get some things right:
f ○ g means something quite different from f(g).
The former is a function that, given an argument x, will first feed it to g, then pass on the result to f, and output that final result, i.e. f(g(x)).
OTOH, f(g) means you apply the function f to the value g right away, without waiting for any argument. (g just happens to have a function type, but in functional languages, functions can be passed around just like any other values / arguments).
Unless you're dealing with some pretty wacky polymorphic functions, one of these will be ill-typed. For example, a well-typed composition might be
sqrt ○ abs :: Double -> Double
whereas a well-typed application could be (at least in Haskell)
map(sqrt) :: [Double] -> [Double]
I'll assume in the following you're talking about f ○ g.
Type signatures must be given for a function itself, not for a function applied to some arguments. This is something that loads of people get utterly wrong: in f(x), you have a function f and an argument x. But f(x) is not a function, only the value that's the result of applying a function to a value! So, you shouldn't write something like f ○ g :: ... (unless you're actually talking only about the type that results from the composition). Better write just ○ :: ... (or, in Haskell, (○) :: ...).
Function arrows aren't associative. Most mathematicians likely won't even know what X -> Y -> Z is supposed to mean. What it means in languages like Haskell may actually be somewhat surprising:
X -> Y -> Z ≡ X -> (Y -> Z)
i.e. this is the type of a function that first takes only an argument of type X. The result will be again a function, but one that takes only an argument of type Y. This function will have, if you like, the X value already built-in (in a so-called closure, unless the compiler optimises that away). Giving it also the Y value will allow the function to actually do its job and finally yield the Z result.
At this point you already have your answer, pretty much: indeed the signatures X -> Y -> Z and (X, Y) -> Z are essentially equivalent. The process of rewriting this is called currying.
To answer your question in particular: most languages don't normally do any currying, so the signature ((B -> C), (A -> B)) -> (A -> C) is actually more correct. It corresponds to a function you can call as
compose(f,g)
OTOH, the curried signature (B -> C) -> (A -> B) -> (A -> C) means that you need to feed in the arguments one by one:
compose(f)(g)
Only in languages like Haskell is this the standard style, but you don't need the parens there: all the following are parsed the same in Haskell
compose(f)(g)
compose f g
(compose) f g
(.) f g
f . g
where . is in fact the composition operator, which as you can see from the documentation has type
(.) :: (b -> c) -> (a -> b) -> a -> c
Since you marked your question with JavaScript, here is an answer from a JavaScript point of view.
Assuming I understand your signature properly, you want to adapt the composition function as follows: (f, g) => x => f(g(x)). Sure, that works, but you lose flexibility and gain, uhm, nothing.
The original compose function is defined in curried form, that is, it always expects a single argument. If every function in your whole codebase expects exactly one argument, then arity is (well, in most cases) abstracted away. Currying facilitates function composition, because functions always return a single value. Curried functions are like building blocks. You can put them together in almost any way:
const comp = f => g => x => f(g(x)),
      comp2 = comp(comp)(comp),
      add = y => x => x + y,
      inc = x => x + 1,
      sqr = x => x * x;

console.log(comp(sqr)(inc)(2)); // 9
console.log(comp(add)(sqr)(2)(3)); // 7
console.log(comp2(sqr)(add)(2)(3)); // 25
As you can see, only in the last case do we need to consider arity.
Currying can only develop its benefits if it is consistently applied for each function of your codebase, because it has a systemic effect.
First, an open circle is more commonly used: f ∘ g.
Second, it would more properly be pronounced "f composed with g". ("f composed of g" sounds like f is made up of g, rather than a new function made up of both.)
Finally, the two types are essentially the same, differing only in how you expect to pass functions to the compose function. The first defines the type of a fully curried function, such that compose takes one function as an argument, and returns a new function that takes the second function as an argument and returns the composed. This means with f :: B -> C and g :: A -> B, you can define either (using Haskell syntax)
compose :: (B -> C) -> (A -> B) -> (A -> C)
compose f g = \x -> f (g x)
or the uncurried version
compose' :: ((B -> C), (A -> B)) -> (A -> C)
compose' (f, g) = \x -> f (g x)
Either way, the return value is the same; the only difference is in how the arguments are passed. You could write h = compose f g or you could write h = compose' (f, g).
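Since the question is tagged JavaScript, the same two shapes can be sketched there too (hand-rolled illustrations, not a library compose):

```javascript
// Curried:  (B -> C) -> (A -> B) -> (A -> C)
const compose = f => g => x => f(g(x));

// Tupled:   ((B -> C), (A -> B)) -> (A -> C)
const composeT = (f, g) => x => f(g(x));

const inc = x => x + 1; // A -> B
const sqr = x => x * x; // B -> C

console.log(compose(sqr)(inc)(2));  // 9
console.log(composeT(sqr, inc)(2)); // 9
```

The composed function is identical either way; only the calling convention for supplying f and g differs.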

passing anonymous functions as parameters in livescript

What's the correct way to pass a function as a parameter in LiveScript?
For instance, let's say I want to use the array reduce function. In conventional JavaScript I would write it as the following:
myArray.reduce(function (a,b) {return a + b});
This translates to liveScript quite nicely as:
myArray.reduce (a,b) -> a + b
Now, I want to set the initial value by providing a second parameter:
myArray.reduce(function (a,b) {return a + b},5);
How would I translate this to liveScript? It seems that the first function overrides any ability to pass additional parameters to reduce.
Apologies if I have missed something obvious, but I can't seem to find anything pertaining to this scenario in the docs
For more complex functions I'd recommend using this style:
[1, 2, 3].reduce do
  (a, b) ->
    # your code here
  0
You can use ~ to bind the this argument, then call flip on it to swap the first and second parameters:
flip [1, 2, 3]~reduce, 0, (a, b) -> a + b
This may be more readable if the callback body is very long.
You have to wrap the closure in ()
[1,2,3].reduce ((a,b) -> a + b), 0
Compiles to
[1, 2, 3].reduce(function(a, b){
  return a + b;
}, 0);
Just to complement the other answer, LiveScript offers binops, just put parentheses around the operator.
[1 2 3].reduce (+), 0
