Does this contrived example reflect the purpose of the Reader monad? - javascript

The Reader monad covers computations that share a common environment. The only meaningful way to achieve this with plain functions is to implicitly thread an argument through a composition. So I attempted to create the most minimal example that still reflects the notion of such a computation.
Please note that I only use the applicative part of the monad, because the computation neither depends on a previous value nor needs short-circuiting. Besides, I know that Reader is almost always used as part of a transformer stack, but I want to keep this example as simple as possible:
// Applicative
const funAp = tf => tg => x =>
  tf(x) (tg(x));

const id = x => x;

const ask = id;

// functions with a common parameter
const myInc = env => x => {
  const r = x + 1;
  if (env.debug)
    console.log(r);
  return r;
};

const mySqr = env => x => {
  const r = x * x;
  if (env.debug)
    console.log(r);
  return r;
};

const myAdd = env => x => y => {
  const r = x + y;
  if (env.debug)
    console.log(r);
  return r;
};

// inside sharedEnv we can use f and g without explicitly passing env
// if a fun (env.op) isn't lifted yet we ask for env
const sharedEnv = env => f => g => x =>
  env.defaultOp
    ? f(g(x))
    : funAp(env.op) (ask) (f(x)) (g(x));

// MAIN
const main = funAp(
  funAp(sharedEnv)
    (myInc))
      (mySqr)
        ({debug: true, defaultOp: false, op: myAdd});

console.log(
  main(5));
Does this example conform with the notion of a shared environment?
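For comparison, the canonical monadic Reader interface can be sketched like this. The names Reader, of, ask and chain below are my own choices and not part of the code above; this is only a minimal sketch:

```javascript
// A minimal monadic Reader for comparison (helper names are assumptions):
const Reader = run => ({ run });            // wrap a function env => a
const of = x => Reader(_ => x);             // ignore the environment
const ask = Reader(env => env);             // expose the environment itself
const chain = f => r =>                     // sequence two env-dependent steps
  Reader(env => f(r.run(env)).run(env));

// the step reads env without it being passed around explicitly
const greet = name =>
  chain(env => of(`${env.greeting}, ${name}${env.punctuation}`)) (ask);

console.log(greet("Alice").run({greeting: "Hi", punctuation: "!"}));
// "Hi, Alice!"
```

The environment argument only appears once, in the final call to run; every intermediate step obtains it through ask/chain.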

Related

How to merge 2 serializable functions into 1 serializable function in JS?

As you know, there are certain cases (e.g., use in workers) when functions need to be serializable (and, on the other side, deserializable of course). I have such a case: a library function doSomething(serializableFunction) (i.e. I cannot change it) expects such a function from me. It also expects serializableFunction to accept one parameter x, and it passes x to it.
So far, this was not a problem for me; I had one function f(x) from another library that fulfilled my purposes (so I just called doSomething(f) so far). But now, I have a second serializable function g(x), also from a library, and I want the function I give to doSomething to execute either f or g, dependent on x. So I tried:
doSomething(function h(x) {
  if (x < 5) return f(x);
  else return g(x);
});
This would work well without serialization, but with it, an error is thrown of course, because h is not serializable anymore: it calls f and g, which are no longer known at deserialization time!
So currently I'm doing:
doSomething(eval(`(function h(x) {
  const f = ${f.toString()};
  const g = ${g.toString()};
  if (x < 5) return f(x);
  else return g(x);
})`));
This works, but it's a nightmare! So my question is: Is there any better approach to merge 2 serializable functions (like f and g) into 1 function that is also serializable? Note that I cannot change any of f, g, or doSomething, just h!
I can think of a solution that replaces eval with new Function.
Not the best, but a bit better.
Example below:
function f(a) {
  console.log("f", a);
}

function g(b) {
  console.log("g", b);
}

function doSomething(func) {
  const y = 1;
  func(y);
}

const funcBody = `return function () {
  const f = ${f.toString()};
  const g = ${g.toString()};
  if (x < 5) return f(x);
  else return g(x);
}()`;

const func = new Function("x", funcBody);
doSomething(func);
Or you can add some abstraction like below:
function f(a) {
  console.log("f", a);
}

function g(b) {
  console.log("g", b);
}

function doSomething(func) {
  const y = 1;
  func(y);
}

const getFuncArray = (name = "x") => [
  name,
  `return function () {
    const f = ${f.toString()};
    const g = ${g.toString()};
    if (${name} < 5) return f(${name});
    else return g(${name});
  }()`,
];

const res = new Function(...getFuncArray());
doSomething(res);
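The two snippets above can be folded into one reusable helper. The name mergeSerializable is hypothetical, and this assumes f, g and the predicate are self-contained functions whose source survives Function.prototype.toString():

```javascript
// Hypothetical helper: merge two serializable functions behind a predicate.
// The sources of f, g and pred are inlined, so the result is self-contained.
const mergeSerializable = (f, g, pred) =>
  new Function("x", `
    const f = ${f.toString()};
    const g = ${g.toString()};
    const pred = ${pred.toString()};
    return pred(x) ? f(x) : g(x);
  `);

const f = x => x * 2;
const g = x => x + 100;
const h = mergeSerializable(f, g, x => x < 5);

console.log(h(3)); // 6   (f branch)
console.log(h(7)); // 107 (g branch)
console.log(h.toString().includes("x * 2")); // true – h still carries its source
```

The same caveats as with new Function apply: closures over outer variables are lost, so only functions that don't rely on their defining scope can be merged this way.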

Can the idea of Affine (relaxed Linear) Types be implemented in an untyped setting to enable safe mutations?

It'd be useful if I could use safe in-place destructive updates on Arrays and Maps every now and then. Linear types are a technique to allow safe mutations by restricting the consumption of values to an exactly-once semantics. While it seems impossible to enforce exactly-once in Javascript, here is an implementation of the relaxed at-most-once variant, which corresponds to affine types:
class LinearProxy {
  constructor() {
    this.once = false;
  }
  get(o, k) {
    if (this.once)
      throw new TypeError("non-linear type usage");
    else this.once = true;

    if (k === "run")
      return once(f => {
        const r = f(o);
        if (r === o)
          throw new TypeError("non-linear type usage");
        else return r;
      });

    return o[k];
  }
  set(o, k, v) {
    if (this.once)
      throw new TypeError("non-linear type usage");
    o[k] = v;
    return true;
  }
}

const linear = o => new Proxy(o, new LinearProxy());

const once = f => {
  let called = false;
  return x => {
    if (called)
      throw new TypeError("non-linear type usage");
    else {
      called = true;
      return f(x);
    }
  };
};

const run = f => tx =>
  tx["run"] (f);

const id = x => x;
const last = xs => xs[xs.length - 1];
const dup = xs => [...xs, ...xs];

const xs = linear([1,2,3]),
  ys = linear([1,2,3]),
  zs = linear([1,2,3]);

xs[3] = 4;
xs[4] = 5;

console.log(
  "run(last) (xs):",
  run(last) (xs)); // 5

try {run(last) (xs)}
catch(e) {console.log("run(last) (xs):", e.message)} // type error (A)

try {const x = xs[4]}
catch(e) {console.log("x = xs[4]:", e.message)} // type error (A)

try {xs[0] = 11}
catch(e) {console.log("xs[0] = 11:", e.message)} // type error (B)

try {run(id) (ys)}
catch(e) {console.log("run(id) (ys):", e.message)} // type error (C)

console.log(run(dup) (zs)); // [1,2,3,1,2,3] (D)
Subsequent read access (A) and write access (B) to an already consumed linear value throw a type error. The attempt to gain immediate access to the reference of a linear value (C) also throws a type error. The latter is just a naive check to prevent returning the reference by accident; one could easily bypass it, so we must rely on convention in this case.
However, D is not affine, because the argument is consumed twice. Does the fact that the non-linear use happens within a function scope mean that it is still (relatively) safe? Is there a more clever way to implement linear types in Javascript?
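For comparison, the same at-most-once guarantee can be made explicit instead of transparent: the value has to be moved out with a method call, which makes each consumption visible at the call site. Once and take are names I made up for this sketch:

```javascript
// Explicit move semantics instead of a transparent Proxy (hypothetical names):
const Once = value => {
  let consumed = false;
  return {
    take() {
      if (consumed) throw new TypeError("non-linear type usage");
      consumed = true;
      return value; // the value is "moved out" exactly once
    }
  };
};

const box = Once([1, 2, 3]);
console.log(box.take().length); // 3
try { box.take(); }
catch (e) { console.log(e.message); } // "non-linear type usage"
```

This loses the convenience of indexing the wrapper directly, but it cannot be bypassed by accident the way returning the raw reference from run can.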

How to implement guarded recursion in strictly evaluated languages?

I implemented a Scott-encoded List type in Javascript along with an overloaded append function that mimics the Semigroup typeclass.
append works just fine, but for large lists it will blow the stack. Here is the decisive part of my implementation:
appendAdd("List/List", tx => ty => tx.runList({
  Nil: ty,
  Cons: x => tx_ => Cons(x) (append(tx_) (ty))
}));
Usually I use a trampoline to avoid a growing stack, but this presupposes tail recursion and thus won't work in this case.
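For reference, this is the kind of trampoline I mean. It only helps when the recursive call is in tail position, which Cons(x) (append(tx_) (ty)) is not, because Cons still has to be applied to the recursive result:

```javascript
// A standard trampoline: tail calls are returned as thunks and unwound in a loop
const trampoline = f => (...args) => {
  let r = f(...args);
  while (typeof r === "function") r = r(); // iterative, so the stack stays flat
  return r;
};

const sumTo = trampoline(function go(n, acc = 0) {
  return n === 0 ? acc : () => go(n - 1, acc + n); // tail call wrapped in a thunk
});

console.log(sumTo(100000)); // 5000050000 – no stack overflow
```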
Since this implementation is based on Haskell's, I guess lazy evaluation and guarded recursion/tail recursion modulo cons make the difference:
(++) [] ys = ys
(++) (x:xs) ys = x : xs ++ ys
Provided I understand it correctly, due to lazy evaluation (almost) nothing happens in Haskell when I append a list to another, until I actually do something with this new list. In the example below I fold it.
What I don't understand is how guarded recursion can keep the recursion from growing the call stack and whether this behavior can be implemented explicitly in a strictly evaluated language like Javascript.
Hopefully this question isn't too broad.
For a better understanding, here is the full implementation/example:
// type constructor
const Type = name => {
  const Type = tag => Dcons => {
    const t = new Tcons();
    Object.defineProperty(
      t,
      `run${name}`,
      {value: Dcons});
    t[TAG] = tag;
    return t;
  };

  const Tcons =
    Function(`return function ${name}() {}`) ();

  Tcons.prototype[Symbol.toStringTag] = name;
  return Type;
};

const TAG = Symbol("TAG");
const VALUE = Symbol("VALUE"); // marker used by the overloading machinery below
const List = Type("List");

// data constructors
const Cons = x => tx => List("Cons") (cases => cases.Cons(x) (tx));
const Nil = List("Nil") (cases => cases.Nil);

// overload binary functions
const overload2 = (name, dispatch) => {
  const pairs = new Map();

  return {
    [`${name}Add`]: (k, v) => pairs.set(k, v),
    [`${name}Lookup`]: k => pairs.get(k),
    [name]: x => y => {
      if (typeof x === "function" && (VALUE in x))
        x = x(y);
      else if (typeof y === "function" && (VALUE in y))
        y = y(x);

      const r = pairs.get(dispatch(x, y));

      if (r === undefined)
        throw new TypeError("...");
      else if (typeof r === "function")
        return r(x) (y);
      else return r;
    }
  };
};

const dispatcher = (...args) => args.map(arg => {
  const tag = Object.prototype.toString.call(arg);
  return tag.slice(tag.lastIndexOf(" ") + 1, -1);
}).join("/");

// Semigroup "typeclass"
const {appendAdd, appendLookup, append} =
  overload2("append", dispatcher);

// List instance for Semigroup
appendAdd("List/List", tx => ty => tx.runList({
  Nil: ty,
  Cons: x => tx_ => Cons(x) (append(tx_) (ty))
}));

// fold
const foldr = f => acc => {
  const aux = tx =>
    tx.runList({
      Nil: acc,
      Cons: x => tx_ => f(x) (aux(tx_))});

  return aux;
};

// data
const tx = Cons(1) (Cons(2) (Nil));
const ty = Cons(3) (Cons(4) (Nil));
const tz = append(tx) (ty);

// run
console.log(
  foldr(x => acc => `${x}${acc}`) (0) (tz) // "12340"
);
This isn't a real answer but rather the conclusions I drew after further study:
Tail recursion modulo cons (TRMC) – and "modulo" other operations – refers only to a strictly evaluated context, whereas guarded recursion refers to a lazily evaluated one
TRMC is an expensive compiler technique and it (probably) doesn't make sense to implement it in userland
TRMC requires the operation to be associative (to form at least a Semigroup), so that the parentheses can be rearranged
This Q&A is also helpful: a tail-recursion version list appending function
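Guarded recursion can be simulated explicitly in a strict language by hiding the recursive call behind a thunk. In this sketch (using plain objects rather than the Scott encoding above), append returns immediately and the recursion only proceeds as far as the consumer forces it:

```javascript
const Nil = {tag: "Nil"};
const Cons = (head, tailThunk) => ({tag: "Cons", head, tailThunk});

// the recursive call sits inside the tail thunk, so nothing recurses yet
const append = (xs, ys) =>
  xs.tag === "Nil"
    ? ys
    : Cons(xs.head, () => append(xs.tailThunk(), ys));

const fromArray = arr =>
  arr.reduceRight((acc, x) => Cons(x, () => acc), Nil);

// consumption is a loop, so forcing the thunks doesn't grow the stack either
const toArray = xs => {
  const out = [];
  while (xs.tag === "Cons") {
    out.push(xs.head);
    xs = xs.tailThunk();
  }
  return out;
};

const big = fromArray(Array.from({length: 100000}, (_, i) => i));
console.log(toArray(append(big, fromArray([-1]))).length); // 100001
```

Each forced thunk performs only one step of append before returning a new guarded Cons, which is essentially what guarded recursion buys you in a lazy language.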

How to reconcile Javascript with currying and function composition

I love currying, but there are a couple of reasons why a lot of Javascript devs reject this technique:
aesthetic concerns about the typical curry pattern: f(x) (y) (z)
concerns about performance penalties due to the increased number of function calls
concerns about debugging issues because of the many nested anonymous functions
concerns about readability of point-free style (currying in connection with composition)
Is there an approach that can mitigate these concerns so that my coworkers don't hate me?
Note: #ftor answered his/her own question. This is a direct companion to that answer.
You're already a genius
I think you might've re-imagined the partial function – at least, in part!
const partial = (f, ...xs) => (...ys) => f(...xs, ...ys);
and its counterpart, partialRight
const partialRight = (f, ...xs) => (...ys) => f(...ys, ...xs);
partial takes a function, some args (xs), and always returns a function that takes some more args (ys), then applies f to (...xs, ...ys)
Initial remarks
The context of this question is set in how currying and composition can play nice with a large user base of coders. My remarks will be in the same context
just because a function may return a function does not mean that it is curried – tacking on a _ to signify that a function is waiting for more args is confusing. Recall that currying (or partial function application) abstracts arity, so we never know when a function call will result in the value of a computation or another function waiting to be called.
curry should not flip arguments; that is going to cause some serious wtf moments for your fellow coder
if we're going to create a wrapper for reduce, the reduceRight wrapper should be consistent – e.g., your foldl uses f(acc, x, i) but your foldr uses f(x, acc, i) – this will cause a lot of pain amongst coworkers who aren't familiar with these choices
For the next section, I'm going to replace your composable with partial, remove _-suffixes, and fix the foldr wrapper
Composable functions
const partial = (f, ...xs) => (...ys) => f(...xs, ...ys);
const partialRight = (f, ...xs) => (...ys) => f(...ys, ...xs);
const comp = (f, g) => x => f(g(x));
const foldl = (f, acc, xs) => xs.reduce(f, acc);
const drop = (xs, n) => xs.slice(n);
const add = (x, y) => x + y;
const sum = partial(foldl, add, 0);
const dropAndSum = comp(sum, partialRight(drop, 1));
console.log(
dropAndSum([1,2,3,4]) // 9
);
Programmatic solution
const partial = (f, ...xs) => (...ys) => f(...xs, ...ys);
// restore consistent interface
const foldr = (f, acc, xs) => xs.reduceRight(f, acc);
const comp = (f,g) => x => f(g(x));
// added this for later
const flip = f => (x,y) => f(y,x);
const I = x => x;
const inc = x => x + 1;
const compn = partial(foldr, flip(comp), I);
const inc3 = compn([inc, inc, inc]);
console.log(
inc3(0) // 3
);
A more serious task
const partial = (f, ...xs) => (...ys) => f(...xs, ...ys);
const filter = (f, xs) => xs.filter(f);
const comp2 = (f, g, x, y) => f(g(x, y));
const len = xs => xs.length;
const odd = x => x % 2 === 1;
const countWhere = f => partial(comp2, len, filter, f);
const countWhereOdd = countWhere(odd);
console.log(
countWhereOdd([1,2,3,4,5]) // 3
);
Partial power !
partial can actually be applied as many times as needed
const partial = (f, ...xs) => (...ys) => f(...xs, ...ys)
const p = (a,b,c,d,e,f) => a + b + c + d + e + f
let f = partial(p,1,2)
let g = partial(f,3,4)
let h = partial(g,5,6)
console.log(p(1,2,3,4,5,6)) // 21
console.log(f(3,4,5,6)) // 21
console.log(g(5,6)) // 21
console.log(h()) // 21
This makes it an indispensable tool for working with variadic functions, too
const partial = (f, ...xs) => (...ys) => f(...xs, ...ys)
const add = (x,y) => x + y
const p = (...xs) => xs.reduce(add, 0)
let f = partial(p,1,1,1,1)
let g = partial(f,2,2,2,2)
let h = partial(g,3,3,3,3)
console.log(h(4,4,4,4))
// 1 + 1 + 1 + 1 +
// 2 + 2 + 2 + 2 +
// 3 + 3 + 3 + 3 +
// 4 + 4 + 4 + 4 => 40
Lastly, a demonstration of partialRight
const partial = (f, ...xs) => (...ys) => f(...xs, ...ys);
const partialRight = (f, ...xs) => (...ys) => f(...ys, ...xs);
const p = (...xs) => console.log(...xs)
const f = partialRight(p, 7, 8, 9);
const g = partial(f, 1, 2, 3);
const h = partial(g, 4, 5, 6);
p(1, 2, 3, 4, 5, 6, 7, 8, 9) // 1 2 3 4 5 6 7 8 9
f(1, 2, 3, 4, 5, 6) // 1 2 3 4 5 6 7 8 9
g(4, 5, 6) // 1 2 3 4 5 6 7 8 9
h() // 1 2 3 4 5 6 7 8 9
Summary
OK, so partial is pretty much a drop-in replacement for composable but also tackles some additional corner cases. Let's see how this stacks up against your initial list:
aesthetic concerns: avoids f (x) (y) (z)
performance: unsure, but i suspect performance is about the same
debugging: still an issue because partial creates new functions
readability: i think readability here is pretty good, actually. partial is flexible enough to remove points in many cases
I agree with you that there's no replacement for fully curried functions. I personally found it easy to adopt the new style once I stopped being judgmental about the "ugliness" of the syntax – it's just different and people don't like different.
The currently prevailing approach is to wrap each multi-argument function in a dynamic curry function. While this helps with concern #1, it leaves the remaining ones untouched. Here is an alternative approach.
Composable functions
A composable function is curried only in its last argument. To distinguish them from normal multi argument functions, I name them with a trailing underscore (naming is hard).
const comp_ = (f, g) => x => f(g(x)); // composable function
const foldl_ = (f, acc) => xs => xs.reduce((acc, x, i) => f(acc, x, i), acc);
const curry = f => y => x => f(x, y); // fully curried function
const drop = (xs, n) => xs.slice(n); // normal, multi argument function
const add = (x, y) => x + y;
const sum = foldl_(add, 0);
const dropAndSum = comp_(sum, curry(drop) (1));
console.log(
dropAndSum([1,2,3,4]) // 9
);
With the exception of drop, dropAndSum consists exclusively of multi argument or composable functions and yet we've achieved the same expressiveness as with fully curried functions - at least with this example.
You can see that each composable function expects either uncurried or other composable functions as arguments. This will increase speed especially for iterative function applications. However, this is also restrictive as soon as the result of a composable function is a function again. Look into the countWhere example below for more information.
Programmatic solution
Instead of defining composable functions manually we can easily implement a programmatic solution:
// generic functions
const composable = f => (...args) => x => f(...args, x);
const foldr = (f, acc, xs) =>
  xs.reduceRight((acc, x, i) => f(x, acc, i), acc);
const comp_ = (f, g) => x => f(g(x));
const I = x => x;
const inc = x => x + 1;
// derived functions
const foldr_ = composable(foldr);
const compn_ = foldr_(comp_, I);
const inc3 = compn_([inc, inc, inc]);
// and run...
console.log(
inc3(0) // 3
);
Operator functions vs. higher order functions
Maybe you noticed that curry (from the first example) swaps arguments, while composable does not. curry is meant to be applied only to operator functions like drop or sub, which have a different argument order in curried and uncurried form respectively. An operator function is any function that expects only non-functional arguments. In this sense...
const I = x => x;
const eq = (x, y) => x === y; // are operator functions
// whereas
const A = (f, x) => f(x);
const U = f => f(f); // are not operator functions but higher order functions
Higher order functions (HOFs) don't need swapped arguments, but you will regularly encounter them with arities higher than two, hence the composable function is useful.
HOFs are one of the most awesome tools in functional programming. They abstract from function application. This is the reason why we use them all the time.
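Two tiny examples of what "abstracting from function application" means in practice: the HOF, not the caller, decides how often and in which order its function arguments are applied (twice and on are conventional combinator names, not from the answer above):

```javascript
const twice = f => x => f(f(x));              // applies f two times
const on = f => g => (x, y) => f(g(x), g(y)); // applies g to both args before f

const inc = x => x + 1;
const add = (x, y) => x + y;

console.log(twice(inc)(0));            // 2
console.log(on(add)(Math.abs)(-3, 4)); // 7
```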
A more serious task
We can solve more complex tasks as well:
// generic functions
const composable = f => (...args) => x => f(...args, x);
const filter = (f, xs) => xs.filter(f);
const comp2 = (f, g, x, y) => f(g(x, y));
const len = xs => xs.length;
const odd = x => x % 2 === 1;
// compositions
const countWhere_ = f => composable(comp2) (len, filter, f); // (A)
const countWhereOdd = countWhere_(odd);
// and run...
console.log(
countWhereOdd([1,2,3,4,5]) // 3
);
Please note that in line A we were forced to pass f explicitly. This is one of the drawbacks of composable compared with fully curried functions: sometimes we need to pass the data explicitly. However, if you dislike point-free style, this is actually an advantage.
Conclusion
Making functions composable mitigates the following concerns:
aesthetic concerns (less frequent use of the curry pattern f(x) (y) (z))
performance penalties (far fewer function calls)
However, point #4 (readability) is only slightly improved (less point-free style) and point #3 (debugging) not at all.
While I am convinced that a fully curried approach is superior to the one presented here, I think composable higher order functions are worth thinking about. Just use them as long as you or your coworkers don't feel comfortable with proper currying.

Why does function composition compose from right to left in Javascript?

Function composition composes from right to left:
const comp = f => g => x => f(g(x));
const inc = x => x + 1;
const dec = x => x - 1;
const sqr = x => x * x;
let seq = comp(dec)(comp(sqr)(inc));
seq(2); // 8
seq(2) is transformed to dec(sqr(inc(2))) and the application order is inc(2)...sqr...dec. Thus the functions are invoked in the reverse order in which they are passed to comp. This isn't intuitive for Javascript programmers, since they're used to method chaining, which goes from left to right:
o = {
  x: 2,
  inc() { this.x += 1; return this },
  dec() { this.x -= 1; return this },
  sqr() { this.x *= this.x; return this }
}

o.dec().sqr().inc().x; // 2
I consider that confusing. Here's a reversed composition:
const flipped = f => g => x => g(f(x));
let seql = flipped(dec)(flipped(sqr)(inc));
seql(2); // 2
Are there any reasons why function composition goes from right to left?
To answer the original question: Why does function composition compose from right to left?
That is how it is traditionally done in mathematics
comp(f)(g)(x) has the same order as f(g(x))
It is trivial to create a reversed or forward composition (see example)
Forward function composition:
const comp = f => g => x => f(g(x));
const flip = f => x => y => f(y)(x);
const flipped = flip(comp);
const inc = a => a + 1;
const sqr = b => b * b;
comp(sqr)(inc)(2); // 9, since 2 is first put into inc then sqr
flipped(sqr)(inc)(2); // 5, since 2 is first put into sqr then inc
This way of calling functions is called currying, and works like this:
// the original:
comp(sqr)(inc)(2); // 9
// is interpreted by JS as:
( ( ( comp(sqr) ) (inc) ) (2) ); // 9 still (yes, this actually executes!)
// it is even clearer when we separate it into discrete steps:
const compSqr = comp(sqr); // g => x => sqr(g(x))
compSqr(inc)(2); // 9 still
const compSqrInc = compSqr(inc); // x => sqr(x + 1)
compSqrInc(2); // 9 still
const compSqrInc2 = compSqrInc(2); // sqr(3)
compSqrInc2; // 9 still
So functions are composed and interpreted (by the JS interpreter) left to right, while on execution, their values flow through each function from right to left. In short: first outside-in, then inside-out.
But flip has the restriction that a flipped composition can't be combined with itself to form a "higher order composition":
const comp2 = comp(comp)(comp);
const flipped2 = flipped(flipped)(flipped);
const add = x => y => x + y;
comp2(sqr)(add)(2)(3); // 25
flipped2(sqr)(add)(2)(3); // "x => f(g(x))3" which is nonsense
Conclusion: The right-to-left order is traditional/conventional but not intuitive.
Your question is actually about the order of arguments in a definition of the function composition operator rather than right- or left-associativity. In mathematics, we usually write "f o g" (equivalent to comp(f)(g) in your definition) to mean the function that takes x and returns f(g(x)). Thus "f o (g o h)" and "(f o g) o h" are equivalent and both mean the function that maps each argument x to f(g(h(x))).
That said, we sometimes write f;g (equivalent to flipped(f)(g) in your code) to mean the function which maps x to g(f(x)). Thus, both (f;g);h and f;(g;h) mean the function mapping x to h(g(f(x))).
A reference: https://en.wikipedia.org/wiki/Function_composition#Alternative_notations
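Both reading directions are easy to offer side by side as variadic helpers. compose follows the mathematical right-to-left convention, pipe the left-to-right chaining intuition (both names are common community conventions, not from the question):

```javascript
const compose = (...fns) => x => fns.reduceRight((acc, f) => f(acc), x);
const pipe    = (...fns) => x => fns.reduce((acc, f) => f(acc), x);

const inc = x => x + 1;
const sqr = x => x * x;

console.log(compose(sqr, inc)(2)); // 9 – right to left: sqr(inc(2))
console.log(pipe(inc, sqr)(2));    // 9 – left to right: same computation
```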
