algorithm, data-structures, backtracking, divide-and-conquer, coin-change

Divide and Conquer vs Backtracking


Let’s use LeetCode 322. Coin Change as an example.

I know it is best solved by using Dynamic Programming, but I want to focus on my Brute Force solution:

from typing import List

class Solution:
    def coinChange(self, coins: List[int], amount: int) -> int:
        def helper(amount):
            # A negative remainder means this branch overshot the target.
            if amount < 0:
                return float('inf')

            # Exact change has been made; no further coins are needed.
            if amount == 0:
                return 0

            # Try every coin and keep whichever branch uses the fewest coins.
            curr_min = float('inf')
            for coin in coins:
                curr_min = min(curr_min, helper(amount - coin) + 1)

            return curr_min

        ans = helper(amount)
        return -1 if ans == float('inf') else ans

The recursion tree looks like this: [Recursion Tree image]

I can say it is Divide and Conquer: we are dividing the problem into smaller sub-problems, solving each one individually, and using those results to construct the answer to the original problem.

I can also say it is Backtracking: we are enumerating all combinations of coin frequencies which satisfy the constraints.

I know both are implemented via Recursion, but I would like to know which paradigm my Brute Force solution belongs to: Divide and Conquer or Backtracking.


Solution

  • A complication in categorizing your algorithm is that there aren’t clear, well-defined boundaries between different classes of algorithms, and different people may have slightly different definitions in mind.

    For example, generally speaking, divide-and-conquer algorithms involve breaking the problem apart into non-overlapping subproblems. (See, for example, mergesort, quicksort, binary search, closest pair of points, etc.) In that sense, your algorithm doesn’t map neatly onto the divide-and-conquer paradigm, since the subproblems you’re considering overlap with one another. (Then again, not all divide-and-conquer algorithms have this property; see, for example, stoogesort.) For contrast, a minimal mergesort sketch is included at the end of this answer.

    Similarly, backtracking algorithms usually, but not always, work by committing to a decision, recursively searching to see whether a solution exists given that decision, then unwinding the choice if it turns out not to lead to a solution. Your algorithm doesn’t have this property, since it explores all options and then takes the best. (When I teach intro programming, I usually classify algorithms this way, but my colleagues sometimes describe what you’re doing as backtracking!) A sketch of that commit-and-unwind style is also included at the end of this answer.

    I would classify your algorithm as belonging to a different family: exhaustive search. The algorithm you’ve proposed essentially works by enumerating all possible ways of making change, then returning the one that uses the fewest coins. Exhaustive search algorithms are ones that work by trying all possible options and returning the best, and I think that’s the best way of classifying your strategy. A sketch that makes this framing explicit appears below as well.
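
    For contrast, here is a minimal mergesort sketch (the function name merge_sort is mine, not something from your question). Notice that the two halves it recurses on are disjoint, which is the non-overlapping structure I described above:

    def merge_sort(items):
        # Base case: a list of length 0 or 1 is already sorted.
        if len(items) <= 1:
            return items

        # Divide: split into two non-overlapping halves.
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])

        # Combine: merge the two sorted halves.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged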
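
    And here is a rough sketch of what the commit/recurse/unwind style of backtracking could look like for this same problem. The names coin_change_backtracking and chosen are illustrative, not part of your code; the point is only the shape of the search:

    def coin_change_backtracking(coins, amount):
        best = float('inf')   # fewest coins seen in any complete solution so far
        chosen = []            # coins committed to along the current path

        def backtrack(remaining):
            nonlocal best
            # Prune: this path already uses at least as many coins as the best known.
            if len(chosen) >= best:
                return
            # A complete solution: record it if it beats the best so far.
            if remaining == 0:
                best = len(chosen)
                return
            for coin in coins:
                if coin <= remaining:
                    chosen.append(coin)            # commit to a decision
                    backtrack(remaining - coin)    # search given that decision
                    chosen.pop()                   # unwind the choice

        backtrack(amount)
        return -1 if best == float('inf') else best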
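
    Finally, to make the exhaustive-search framing concrete, here is a deliberately naive sketch that literally enumerates every combination of coin frequencies and keeps the cheapest one that makes exact change (coin_change_exhaustive is again just an illustrative name; this is far too slow to submit, it only spells out the idea):

    from itertools import product

    def coin_change_exhaustive(coins, amount):
        # For each coin, its count can range from 0 up to amount // coin.
        count_ranges = [range(amount // coin + 1) for coin in coins]

        best = float('inf')
        # Enumerate every combination of coin frequencies...
        for counts in product(*count_ranges):
            total = sum(coin * count for coin, count in zip(coins, counts))
            # ...and keep the one that makes exact change with the fewest coins.
            if total == amount:
                best = min(best, sum(counts))

        return -1 if best == float('inf') else best

    This is exactly the "enumerating all combinations of coin frequencies which satisfy the constraints" idea from your question, written out explicitly.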