Tags: haskell, monads, applicative

Difference between Monad and Applicative in Haskell


I just read the following from typeclassopedia about the difference between Monad and Applicative. I can understand that there is no join in Applicative. But the following description looks vague to me and I couldn't figure out what exactly is meant by "the result" of a monadic computation/action. So, if I put a value into Maybe, which makes a monad, what is the result of this "computation"?

Let’s look more closely at the type of (>>=). The basic intuition is that it combines two computations into one larger computation. The first argument, m a, is the first computation. However, it would be boring if the second argument were just an m b; then there would be no way for the computations to interact with one another (actually, this is exactly the situation with Applicative). So, the second argument to (>>=) has type a -> m b: a function of this type, given a result of the first computation, can produce a second computation to be run. ... Intuitively, it is this ability to use the output from previous computations to decide what computations to run next that makes Monad more powerful than Applicative. The structure of an Applicative computation is fixed, whereas the structure of a Monad computation can change based on intermediate results.

Is there a concrete example illustrating "ability to use the output from previous computations to decide what computations to run next", which Applicative does not have?


Solution

  • My favorite example is the "purely applicative Either". We'll start by analyzing the base Monad instance for Either

    instance Monad (Either e) where
      return = Right
      Left e  >>= _ = Left e
      Right a >>= f = f a
    

    This instance embeds a very natural short-circuiting notion: we proceed from left to right and once a single computation "fails" into the Left then all the rest do as well. There's also the natural Applicative instance that any Monad has

    instance Applicative (Either e) where
      pure  = return
      (<*>) = ap
    

    where ap is nothing more than left-to-right sequencing before a return:

    ap :: Monad m => m (a -> b) -> m a -> m b
    ap mf ma = do 
      f <- mf
      a <- ma
      return (f a)
    
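    To make the short-circuiting concrete, here is a small sketch (the names `example1`..`example3` are ours) showing that, with the stock `Either` instances, `ap` stops at the first `Left` it meets, so at most one error ever survives:

```haskell
import Control.Monad (ap)

-- With the standard Either instances, ap sequences left to right
-- and stops at the first Left it meets:
example1, example2, example3 :: Either String Int
example1 = Right (+1) `ap` Right 2    -- Right 3
example2 = Left "e1"  `ap` Right 2    -- Left "e1"
example3 = Left "e1"  `ap` Left "e2"  -- Left "e1": "e2" is discarded
```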

    Now the trouble with this Either instance comes to light when you'd like to collect error messages which occur anywhere in a computation and somehow produce a summary of errors. This flies in the face of short-circuiting. It also flies in the face of the type of (>>=)

    (>>=) :: m a -> (a -> m b) -> m b
    

    If we think of m a as "the past" and m b as "the future" then (>>=) produces the future from the past so long as it can run the "stepper" (a -> m b). This "stepper" demands that the past really produced a value of type a... and once an Either computation has failed into a Left, no such value exists. Therefore (>>=) demands short-circuiting.
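    As a tiny sketch (the name `step` is ours): on a `Left` there is simply no `a` to feed the stepper, so `(>>=)` has nothing to do but pass the `Left` along.

```haskell
-- A hypothetical stepper run with (>>=): it can only fire when
-- the past actually produced an `a`.
step :: Either String Int -> Either String Int
step m = m >>= \a -> Right (a + 1)

-- step (Right 1)     == Right 2
-- step (Left "boom") == Left "boom"  (the stepper never runs)
```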

    So instead we'll implement an Applicative instance which cannot have a corresponding Monad.

    instance Monoid e => Applicative (Either e) where
      pure = Right
    

    Now the implementation of (<*>) is the special part worth considering carefully. It performs some amount of "short-circuiting" in its first 3 cases, but does something interesting in the fourth.

      Right f <*> Right a = Right (f a)     -- neutral
      Left  e <*> Right _ = Left e          -- short-circuit
      Right _ <*> Left  e = Left e          -- short-circuit
      Left e1 <*> Left e2 = Left (e1 <> e2) -- combine!
    

    Notice again that if we think of the left argument as "the past" and the right argument as "the future" then (<*>) is special compared to (>>=) as it's allowed to "open up" the future and the past in parallel instead of necessarily needing results from "the past" in order to compute "the future".

    This means, directly, that we can use our purely Applicative Either to collect errors, ignoring Rights if any Lefts exist in the chain

    > Right (+) <*> Left [1] <*> Left [2]
    Left [1,2]
    
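    As written, this instance overlaps the Monad-derived one GHC already provides for Either, so it won't compile verbatim; a common workaround is to wrap Either in a newtype. Here is a self-contained sketch (the name `Validation` is ours; the `validation` package offers an equivalent type):

```haskell
-- A newtype wrapper so the error-collecting Applicative can
-- coexist with the standard Either instances.
newtype Validation e a = Validation { runValidation :: Either e a }
  deriving (Eq, Show)

instance Functor (Validation e) where
  fmap f (Validation x) = Validation (fmap f x)

instance Monoid e => Applicative (Validation e) where
  pure = Validation . Right
  Validation (Right f) <*> Validation (Right a) = Validation (Right (f a))
  Validation (Left e)  <*> Validation (Right _) = Validation (Left e)
  Validation (Right _) <*> Validation (Left e)  = Validation (Left e)
  Validation (Left e1) <*> Validation (Left e2) = Validation (Left (e1 <> e2))
```

    With it, `pure (+) <*> Validation (Left [1]) <*> Validation (Left [2])` evaluates to `Validation (Left [1,2])`, collecting both errors.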

    So let's flip this intuition on its head. What can we not do with a purely applicative Either? Well, since (<*>) may examine "the future" before "the past" has produced a value, the structure of the future must be determined without depending upon values from the past. In other words, we cannot write

    ifA :: Applicative f => f Bool -> f a -> f a -> f a
    

    which satisfies the following equations

    ifA (pure True)  t e == t
    ifA (pure False) t e == e
    

    while we can write ifM

    ifM :: Monad m => m Bool -> m a -> m a -> m a
    ifM mbool th el = do
      bool <- mbool
      if bool then th else el
    

    such that

    ifM (return True)  t e == t
    ifM (return False) t e == e
    

    This impossibility arises because ifA embodies exactly what Applicative forbids: choosing the result computation based on the values embedded in the argument computations.
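    To see the gap concretely, here is a sketch (`ifA'` and the examples are ours; `ifM` is reproduced from above). The closest Applicative gets is to run all three computations and only then select among their values, so the effects of both branches always happen and the ifA laws fail:

```haskell
-- Best Applicative approximation: every argument's effect runs,
-- and only then is a value chosen. This does NOT satisfy the ifA laws.
ifA' :: Applicative f => f Bool -> f a -> f a -> f a
ifA' b t e = (\b' t' e' -> if b' then t' else e') <$> b <*> t <*> e

-- The monadic version picks which computation to run next
-- based on the Bool produced so far.
ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM mbool th el = do
  bool <- mbool
  if bool then th else el

-- ifM  (Just True) (Just 1) Nothing == Just 1   -- else branch never runs
-- ifA' (Just True) (Just 1) Nothing == Nothing  -- Nothing's effect still counts
-- ifM  [False] [1,2] [3] == [3]    -- structure depends on the Bool
-- ifA' [False] [1,2] [3] == [3,3]  -- structure fixed by all three lists
```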