Tags: haskell, recursion, declaration, corecursion

How does (co)recursive definition work in Haskell?


I'm playing around with the language to start learning it, and I am puzzled about how a recursive definition works.

For example, let's take the sequence of triangular numbers (TN n = sum [1..n]).

The solution provided was:

triangularNumbers = scanl1 (+) [1..]

So far, so good.

But the solution I came up with was:

triangularNumbers = zipWith (+) [1..] $ 0 : triangularNumbers

Which is also correct.
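A quick sanity check in GHCi (both definitions give the same prefix):

ghci> take 5 triangularNumbers
[1,3,6,10,15]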

Now my question is: how does this translate to a lower-level implementation? What exactly happens behind the scenes when such a recursive definition is encountered?


Solution

  • Here is a simple recursive function that gives you the nth triangular number:

    triag :: Integer -> Integer
    triag 0 = 0                  -- base case
    triag n = n + triag (n - 1)  -- reduce to a strictly smaller input
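
    For instance, triag 4 reduces to smaller and smaller inputs until it hits the base case:

    triag 4 = 4 + triag 3
            = 4 + (3 + triag 2)
            = 4 + (3 + (2 + (1 + triag 0)))
            = 4 + (3 + (2 + (1 + 0)))
            = 10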
    

    Your solution triag' = zipWith (+) [1..] $ 0 : triag' is something fancier: it's corecursive. Instead of calculating the nth number by reducing it to the values of smaller inputs, you define the whole infinite sequence of triangular numbers by recursively specifying the next value, given an initial segment, as the unfolding below shows.
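
    To see the self-reference in action, here is a hand unfolding of the first few elements (illustrative reduction steps under call-by-need, not literal GHC internals):

    triag' :: [Integer]
    triag' = zipWith (+) [1..] (0 : triag')

    -- triag' = (1+0) : zipWith (+) [2..] triag'             -- 1st element: 1
    --        = 1 : (2+1) : zipWith (+) [3..] (tail triag')  -- 2nd element: 3
    --        = 1 : 3 : (3+3) : ...                          -- 3rd element: 6
    --        = 1 : 3 : 6 : (4+6) : ...                      -- 4th element: 10
    --
    -- The recursive reference only ever consumes elements that have already
    -- been produced, so each value is computed once and then shared.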

    How does Haskell handle such corecursion? When it encounters your definition, no calculation is actually performed; it is deferred until a result is needed for a further computation. When you access a particular element of your list triag', Haskell computes the elements of the list, following the definition, up to the element being accessed. For more detail, I found this article on lazy evaluation helpful. In summary, lazy evaluation is great to have, unless you need to predict memory usage.
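
    You can watch this demand-driven behaviour directly. Here is a small sketch (the step helper and the trace messages are purely for illustration):

    import Debug.Trace (trace)

    -- The same corecursive definition, except each addition announces
    -- itself when it is actually forced.
    triag' :: [Integer]
    triag' = zipWith step [1..] (0 : triag')
      where
        step n prev = trace ("computing element " ++ show n) (n + prev)

    main :: IO ()
    main = print (triag' !! 3)
    -- Prints "computing element 4" down to "computing element 1"
    -- (outermost thunk first), then 10. No other element of the
    -- infinite list is ever computed.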

    Here is a similar SO question, with a step-by-step explanation of the evaluation of fibs = 0 : 1 : zipWith (+) fibs (tail fibs), a corecursive definition of the Fibonacci sequence.
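
    For reference, that definition, with the start of its unfolding sketched in comments:

    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    -- fibs = 0 : 1 : (0+1) : zipWith (+) (tail fibs) (tail (tail fibs))
    --      = 0 : 1 : 1 : (1+1) : ...
    --      = 0 : 1 : 1 : 2 : (1+2) : ...
    --      = 0 : 1 : 1 : 2 : 3 : (2+3) : ...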