Tags: recursion, optimization, scheme, sicp, coin-change

SICP making change


So, I'm a hobbyist trying to work through SICP (it's free!), and there is an example procedure in the first chapter that is meant to count the possible ways to make change with American coins: (change-maker 100) => 292. It's implemented something like:

(define (change-maker amount)
  (define (coin-value n)                 ; value in cents of coin type n
    (cond ((= n 1) 1)
          ((= n 2) 5)
          ((= n 3) 10)
          ((= n 4) 25)
          ((= n 5) 50)))

  (define (iter amount coin-type)        ; ways to make amount from types 1..coin-type
    (cond ((= amount 0) 1)
          ((or (= coin-type 0) (< amount 0)) 0)
          (else (+ (iter amount                ; ways without this coin type
                         (- coin-type 1))
                   (iter (- amount (coin-value coin-type))
                         coin-type)))))        ; ways with at least one of it

  (iter amount 5))

Anyway, this is a tree-recursive procedure, and the authors "leave as a challenge" finding an iterative procedure that solves the same problem (i.e., in fixed space). I haven't had any luck figuring it out, and I couldn't find an answer after getting frustrated. I'm wondering if I'm just stuck, or if the authors are playing with me.


Solution

  • The simplest and most general way to eliminate recursion is to use an auxiliary stack: instead of making the recursive calls, you push their arguments onto the stack and iterate. When you need the result of a recursive call in order to proceed, the general case is a tad more complicated, because you also have to be able to push a "continuation request" that comes off the auxiliary stack once its inputs are known. In this case, though, since all you do with the recursive calls' results is sum them, it's enough to keep an accumulator: every time you get a number result instead of a need for more calls, add it to the accumulator, as in the sketch below.
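
    For instance, here is a minimal sketch of that idea in Scheme (the procedure name change-maker/stack, and lifting the question's coin-value helper to top level so the later sketches can share it, are my choices, not anything from SICP):

        (define (coin-value n)            ; the question's table, at top level
          (cond ((= n 1) 1)
                ((= n 2) 5)
                ((= n 3) 10)
                ((= n 4) 25)
                ((= n 5) 50)))

        (define (change-maker/stack amount)
          ;; stack holds pending (amount . coin-type) calls;
          ;; acc sums the calls that have already bottomed out
          (let loop ((stack (list (cons amount 5)))
                     (acc 0))
            (if (null? stack)
                acc
                (let ((amt  (caar stack))
                      (ct   (cdar stack))
                      (rest (cdr stack)))
                  (cond ((= amt 0) (loop rest (+ acc 1)))
                        ((or (= ct 0) (< amt 0)) (loop rest acc))
                        (else
                         ;; replace one pending call by its two subcalls
                         (loop (cons (cons amt (- ct 1))
                                     (cons (cons (- amt (coin-value ct)) ct)
                                           rest))
                               acc)))))))

    Each step pops one pending call and either adds its base-case value to the accumulator or pushes its two subcalls, exactly mirroring the recursion tree.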

    However, this per se is not fixed space, since that stack will grow. So here is another helpful idea: since this is a pure function (no side effects), any time you have computed the function's value for a certain set of arguments, you can memoize the arguments-to-result correspondence, which limits the number of calls. Another conceptual approach that leads to much the same computations is dynamic programming (aka DP), though with DP you usually work bottom-up, "preparing results to be memoized" so to speak, rather than starting with a recursion and working to eliminate it.
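
    A memoized version might look like this (again my sketch, reusing coin-value from above; a flat vector serves as the table, though any hash table your Scheme provides would do as well):

        (define (change-maker/memo amount)
          ;; table entry #f means "not computed yet" -- safe here only
          ;; because every memoized count is at least 1
          (let ((table (make-vector (* (+ amount 1) 6) #f)))
            (define (idx amt ct) (+ (* amt 6) ct))
            (define (ways amt ct)
              (cond ((= amt 0) 1)
                    ((or (= ct 0) (< amt 0)) 0)
                    (else
                     (or (vector-ref table (idx amt ct))
                         (let ((result (+ (ways amt (- ct 1))
                                          (ways (- amt (coin-value ct)) ct))))
                           (vector-set! table (idx amt ct) result)
                           result)))))
            (ways amount 5)))

    This still recurses, but each (amt, ct) pair is computed only once, so the total work drops from exponential in the amount to O(amount).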

    Take bottom-up DP on this function, for example. You know you'll repeatedly need "how many ways to make change for amount X with just the smallest coin" (as various coin combinations whittle the original amount down to X), so you start by computing those values with a simple iteration: f(X) = 1 if X is exactly divisible by the smallest coin's value, else 0 -- there's exactly one such way, or none. Here that value is 1, so f(X) = 1 for all X >= 0. Now you continue by computing a new function g(X), the ways to make change for X with the two smallest coins: again a simple iteration for increasing X, with g(X) = f(X) + g(X - value), where value is that of the second-smallest coin. It's a simple iteration because by the time you're computing g(X) you've already computed and stored f(X) and all g(Y) for Y < X; of course, g(X) = 0 for all X < 0. And again for h(X), the ways to make change for X with the three smallest coins -- h(X) = g(X) + h(X - value), with value now the third coin's -- and from then on you won't need f(X) any more, so that space can be reused. All told, this needs space 2 * amount (see the sketch below) -- not "fixed space" yet, but getting closer...
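
    Concretely, a sketch of that bottom-up scheme with just two rows (my code; the coin values are inlined as a list, and prev starts as the all-ones f, since with only pennies there is exactly one way to make any amount):

        (define (change-maker/dp amount)
          (let ((prev (make-vector (+ amount 1) 1))   ; f: pennies only
                (curr (make-vector (+ amount 1) 0)))
            (for-each
             (lambda (value)                 ; fold in one more coin type
               (do ((x 0 (+ x 1)))
                   ((> x amount))
                 (vector-set! curr x
                              (+ (vector-ref prev x)   ; without this coin
                                 (if (< x value)       ; with >= 1 of it
                                     0
                                     (vector-ref curr (- x value))))))
               (do ((x 0 (+ x 1)))           ; curr becomes the next prev
                   ((> x amount))
                 (vector-set! prev x (vector-ref curr x))))
             '(5 10 25 50))
            (vector-ref prev amount)))

    (change-maker/dp 100) evaluates to 292 with two vectors of size amount + 1 and no recursion at all.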

    To make the final leap to "fixed space", ask yourself: do you need to keep around all the values of both arrays at each step (the one you last computed and the one you're currently computing), or only some of those values, if you rearrange your looping a little bit...?
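
    (Spoiler, in case you want to check your answer: the new row reads the old row only at the same index X, and reads itself only below X, so a single array updated in place suffices. Strictly speaking that's still space proportional to amount rather than truly constant, but it is the rearrangement the hint points at. My sketch:)

        (define (change-maker/one-row amount)
          (let ((ways (make-vector (+ amount 1) 0)))
            (vector-set! ways 0 1)           ; one way to make 0: no coins
            (for-each
             (lambda (value)
               ;; ways[x] still holds the previous row's value when read,
               ;; while ways[x - value] already holds the new row's value
               (do ((x value (+ x 1)))
                   ((> x amount))
                 (vector-set! ways x
                              (+ (vector-ref ways x)
                                 (vector-ref ways (- x value))))))
             '(1 5 10 25 50))
            (vector-ref ways amount)))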