I'm wondering if I should think of using contramap when I find myself writing code like (. f) . g, where f is in practice preprocessing the second argument to g.
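To spell the pattern out with a minimal, entirely made-up example (f, g, and h below are hypothetical):

-- f prepares the second argument that g consumes
g :: Int -> String -> Bool
g n s = length s > n

f :: Double -> String
f = show

h :: Int -> Double -> Bool
h = (. f) . g    -- h x y = g x (f y)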
I will describe how I came up with the code that made me think of the question in the title.
Initially, I had two inputs, a1 :: In and a2 :: In, wrapped in a pair (a1, a2) :: (In, In), and I needed to do two interacting processing steps on those inputs. Specifically, I had a function binOp :: In -> In -> Mid to generate a "temporary" result, and a function fun :: Mid -> In -> In -> Out to be fed with binOp's inputs and output.
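Summing up, the signatures in play at this point were these:

binOp :: In -> In -> Mid
fun :: Mid -> In -> In -> Out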
Given the part "function fed with the inputs and output of another function" above, I thought of using the function monad, so I came up with this,
finalFun = uncurry . fun =<< uncurry binOp
which isn't very complicated to read: binOp takes the inputs as a pair, and passes its output followed by its inputs to fun, which takes the inputs as a pair too.
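To be explicit, in the function monad f =<< g = \r -> f (g r) r, so the above is equivalent to the pointful definition

finalFun :: (In, In) -> Out
finalFun (a1, a2) = fun (binOp a1 a2) a1 a2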
However, I noticed that in the implementation of fun I was actually using only a "reduced" version of the inputs, i.e. I had a definition like fun a b c = fun' a (reduce b) (reduce c), so I thought that, instead of fun, I could use fun', alongside reduce, in the definition of finalFun; I came up with
finalFun = (. both reduce) . uncurry . fun' =<< uncurry binOp
which is far less easy to read, especially because I believe it features an unnatural order of the parts. All I could think of was to use a more descriptive name, as in
finalFun = preReduce . uncurry . fun' =<< uncurry binOp
  where preReduce = (. both reduce)
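Written pointfully, just to spell out what it does, this amounts to

finalFun (a1, a2) = fun' (binOp a1 a2) (reduce a1) (reduce a2)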
Since preReduce is actually pre-processing the 2nd and 3rd arguments of fun', I was wondering if this is the right moment to use contramap.
lmap f . g (rather than contramap, as that would also require an Op wrapper) might indeed be an improvement over (. f) . g in terms of clarity. If your readers are familiar with Profunctor, seeing lmap will immediately suggest that the input to something is being modified, without the need for them to perform dot plumbing in their heads. Note that it isn't a widespread idiom yet, though. (For reference, here are Serokell Hackage Search queries for the lmap version and the dot section one.)
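For instance, with lmap from Data.Profunctor (in the profunctors package), and hypothetical f and g matching the shapes in your question, the two spellings line up like this; for plain functions, lmap f is simply (. f):

import Data.Profunctor (lmap)

g :: Int -> String -> Bool
g n s = length s > n

f :: Double -> String
f = show

h :: Int -> Double -> Bool
h = lmap f . g    -- same as (. f) . g, i.e. h x y = g x (f y)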
As for your longer example, the idiomatic thing to do would probably be not writing it pointfree. That said, we can get a more readable pointfree version by changing the argument order of fun/fun', so that you can use the equivalent Applicative instance instead of the Monad one:
import Data.Bifunctor (bimap)

binOp :: In -> In -> Mid
fun' :: In -> In -> Mid -> Out
reduce :: In -> In

-- both applies a function to both components of a pair
both :: (a -> b) -> (a, a) -> (b, b)
both f = bimap f f

finalFun :: (In, In) -> Out
finalFun = uncurry fun' . both reduce <*> uncurry binOp
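Expanding it, since f <*> g = \x -> f x (g x) for functions, this amounts to

finalFun (a1, a2) = fun' (reduce a1) (reduce a2) (binOp a1 a2)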
Pointfree (<*>) for functions is arguably less hard to make sense of than pointfree (=<<), as both of its arguments are computations in the relevant function functor. Also, this change removes the need for the dot section trick. Lastly, since (.) is fmap for functions, we can further rephrase finalFun as...
finalFun = uncurry fun' <$> both reduce <*> uncurry binOp
... thus getting an applicative style expression, which in my (not so popular!) opinion is a reasonably readable way of using the function applicative. (We might further streamline it using liftA2, but I feel that in this specific scenario it would make what is going on less obvious.)
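For completeness, the liftA2 spelling (with liftA2 from Control.Applicative) would be

finalFun = liftA2 (uncurry fun') (both reduce) (uncurry binOp)

And, as a quick sanity check, here is a self-contained version with made-up concrete types and placeholder implementations (the choices In = Int, Mid = Int, Out = String, and the function bodies, are mine, purely for illustration):

import Data.Bifunctor (bimap)

type In = Int
type Mid = Int
type Out = String

binOp :: In -> In -> Mid
binOp = (+)

fun' :: In -> In -> Mid -> Out
fun' a b m = show (a, b, m)

reduce :: In -> In
reduce = (`div` 2)

both :: (a -> b) -> (a, a) -> (b, b)
both f = bimap f f

finalFun :: (In, In) -> Out
finalFun = uncurry fun' <$> both reduce <*> uncurry binOp

-- ghci> finalFun (4, 6)
-- "(2,3,10)"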