time-complexity, backtracking, space-complexity, n-queens, recursive-backtracking

How to identify time and space complexity of recursive backtracking algorithms with step-by-step analysis


Background Information: I solved the N-Queens problem with the C# algorithm below, which returns the total number of solutions for a board of size n x n. It works, but I do not understand why its time complexity would be O(n!), or whether it is something else entirely. I am also unsure of the space used by the recursion stack (though I am aware of the extra space used by the boolean jagged array). I cannot seem to wrap my mind around analyzing the time and space complexity of such solutions. This understanding would be especially useful during technical interviews, where complexity must be analyzed without the ability to run code.

Preliminary Investigation: I have read several SO posts where the author directly asks the community to provide the time and space complexity of their algorithms. Rather than doing the same and asking for the quick and easy answers, I would like to understand how to calculate the time and space complexity of backtracking algorithms so that I can do so moving forward.

I have also read, in numerous places both on and off SO, that recursive backtracking algorithms are generally O(n!) in time because at each of the n levels you consider one fewer item: n, then n - 1, then n - 2, ..., 1. However, I have not found any explanation of why this is the case, nor any explanation of the space complexity of such algorithms.
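Written out, that intuition would mean the total work is proportional to

n * (n - 1) * (n - 2) * ... * 1 = n!

but I do not see how to derive something like this from actual code.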

Question: Can someone please explain the step-by-step problem-solving approach to identify time and space complexities of recursive backtracking algorithms such as these?

public class Solution {
    public int NumWays { get; set; }
    public int TotalNQueens(int n) {
        if (n <= 0)
        {
            return 0;
        }
        
        NumWays = 0;
        
        bool[][] board = new bool[n][];
        for (int i = 0; i < board.Length; i++)
        {
            board[i] = new bool[n];
        }
        
        Solve(n, board, 0);
        
        return NumWays;
    }
    
    private void Solve(int n, bool[][] board, int row)
    {
        if (row == n)
        {
            // We've placed a queen in every row: record one complete solution
            NumWays++;
            return;
        }
        
        for (int col = 0; col < n; col++)
        {
            if (CanPlaceQueen(board, row, col))
            {
                board[row][col] = true; // Place queen
                Solve(n, board, row + 1);
                board[row][col] = false; // Remove queen
            }
        }
    }
    
    private bool CanPlaceQueen(bool[][] board, int row, int col)
    {
        // We only need to check straight up, diagonal-up-left, and diagonal-up-right.
        // Queens are placed row by row, so no queen can be in the current row or in any later row.
        for (int i = 1; i <= row; i++)
        {
            if (board[row - i][col]) return false; // row - i >= 0 always holds, since i <= row
            if (col - i >= 0 && board[row - i][col - i]) return false;
            if (col + i < board[0].Length && board[row - i][col + i]) return false;
        }
        
        return true;
    }
}

Solution

  • First of all, it's definitely not true that all recursive backtracking algorithms are in O(n!): it depends on the algorithm, and it could well be worse. Having said that, the general approach is to write down a recurrence relation for the time complexity T(n), and then try to solve it, or at least characterize its asymptotic behaviour.

    Step 1: Make the question precise

    Are we interested in the worst-case, best-case or average-case? What are the input parameters?

    In this example, let us assume we want to analyze the worst-case behaviour, and the relevant input parameter is n in the Solve method.

    In recursive algorithms, it is useful (though not always possible) to find a parameter that starts off with the value of the input parameter and then decreases with every recursive call until it reaches the base case.

    In this example, we can define k = n - row. So with every recursive call, k is decremented starting from n down to 0.
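    For example, the chain of recursive calls in Solve looks like this (with the value of k at each level):

    Solve(n, board, 0)          // k = n
      Solve(n, board, 1)        // k = n - 1
        Solve(n, board, 2)      // k = n - 2
          ...
            Solve(n, board, n)  // k = 0, base case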

    Step 2: Annotate and strip down the code

    Now we look at the code, strip it down to just the relevant bits, and annotate it with complexities.

    We can boil your example down to the following:

    private void Solve(int n, bool[][] board, int row)
    {
        if (row == n) // base case
        {
            [...] // O(1)
            return;
        }

        for (...) // loop k times
        {
            if (CanPlaceQueen(board, row, col)) // O(k)
            {
                [...] // O(1)
                Solve(n, board, row + 1); // recurse on k - 1 = n - (row + 1)
                [...] // O(1)
            }
        }
    }
    

    Step 3: Write down the recurrence relation

    The recurrence relation for this example can be read off directly from the code:

    T(0) = 1         // base case
    T(k) = k *       // loop k times
           (O(k) +   // if (CanPlaceQueen(...))
           T(k-1))   // Solve(n, board, row + 1)
         = k T(k-1) + O(k^2)
    
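    Writing f(k) for the non-recursive work per call, unrolling the recurrence for the first few values of k shows where the factorial comes from:

    T(1) = 1 T(0) + f(1)
    T(2) = 2 T(1) + f(2) = 2*1 T(0) + 2 f(1) + f(2)
    T(3) = 3 T(2) + f(3) = 3*2*1 T(0) + 3*2 f(1) + 3 f(2) + f(3)
         = 3! T(0) + (lower-order terms in f)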

    Step 4: Solve the recurrence relation

    For this step, it is useful to know a few general forms of recurrence relations and their solutions. The relation above is of the general form

    T(n) = n T(n-1) + f(n)
    

    which has the exact solution

    T(n) = n!(T(0) + Sum { f(i)/i!, for i = 1..n })
    

    which we can easily prove by induction:

    T(n) = n T(n-1) + f(n)                                          // by def.
         = n((n-1)!(T(0) + Sum { f(i)/i!, for i = 1..n-1 })) + f(n) // by ind. hyp.
         = n!(T(0) + Sum { f(i)/i!, for i = 1..n-1 } + f(n)/n!)
         = n!(T(0) + Sum { f(i)/i!, for i = 1..n })                 // qed
    
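    As a quick numeric check of the closed form, compare both sides for n = 2:

    recurrence:  T(2) = 2 T(1) + f(2) = 2(T(0) + f(1)) + f(2) = 2 T(0) + 2 f(1) + f(2)
    closed form: T(2) = 2!(T(0) + f(1)/1! + f(2)/2!)          = 2 T(0) + 2 f(1) + f(2)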

    Now, we don't need the exact solution; we just need the asymptotic behaviour when n approaches infinity.

    So let's look at the infinite series

    Sum { f(i)/i!, for i = 1..infinity }
    

    In our case, f(n) = O(n^2), but let's look at the more general case where f(n) is an arbitrary polynomial in n (because it will turn out that it really doesn't matter). It is easy to see that the series converges, using the ratio test:

    L = lim { | (f(n+1)/(n+1)!) / (f(n)/n!) |, for n -> infinity }
      = lim { | f(n+1) / (f(n)(n+1)) |, for n -> infinity }
      = 0  // if f is a polynomial
      < 1, and hence the series converges
    
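    For a concrete instance matching our case, take f(i) = i^2; this series has a well-known value:

    Sum { i^2/i!, for i = 1..infinity } = 2e

    so it really does contribute only a constant.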

    Therefore, for n -> infinity,

    T(n) -> n!(T(0) + Sum { f(i)/i!, for i = 1..infinity })
          = C n!, for some constant C, if f is a polynomial
    

    Step 5: The result

    Since T(n) approaches C n! for a constant C, we can write

    T(n) ∈ Θ(n!)
    

    which is a tight bound on the worst-case complexity of your algorithm.

    In addition, we've proven that it doesn't matter how much work you do within the for-loop on top of the recursive calls: as long as it's polynomial, the complexity stays Θ(n!) (for this form of recurrence relation).
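    For example, even if each call did cubic extra work, f(i) = i^3 would give

    Sum { i^3/i!, for i = 1..infinity } = 5e

    which is still just a constant; only the constant factor in front of n! changes.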

    For a similar analysis with a different form of recurrence relation, see here.

    Update

    I made a mistake in the annotation of the code (I'll leave it because it is still instructive). Actually, both the loop and the work done within the loop do not depend on k = n - row but on the initial value n (let's call it n0 to make it clear).

    So the recurrence relation becomes

    T(k) = n0 T(k-1) + n0
    

    for which the exact solution is

    T(k) = n0^k (T(0) + Sum { n0^(1-i), for i = 1..k })
    

    But since initially n0 = k, we have

    T(k) = k^k (T(0) + Sum { k^(1-i), for i = 1..k })
         ∈ Θ(k^k)
    

    which is worse than Θ(k!) by an exponential factor: by Stirling's approximation, k^k / k! grows like e^k / sqrt(2πk).
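    Finally, to connect this back to the space part of the question, and to let you check the growth empirically, here is a sketch of an instrumented version of your algorithm (the class and member names here are my own, not from your post). It counts Solve invocations and tracks the maximum recursion depth. Each recursive call adds one stack frame, and the depth never exceeds n + 1, so the recursion stack is O(n), on top of the O(n^2) board.

    using System;

    public class CountingSolution
    {
        private long _calls;   // number of Solve invocations
        private int _maxDepth; // deepest recursion level reached

        public (long Calls, int MaxDepth) Count(int n)
        {
            _calls = 0;
            _maxDepth = 0;
            bool[][] board = new bool[n][];
            for (int i = 0; i < n; i++) board[i] = new bool[n];
            Solve(n, board, 0);
            return (_calls, _maxDepth);
        }

        private void Solve(int n, bool[][] board, int row)
        {
            _calls++;
            _maxDepth = Math.Max(_maxDepth, row + 1); // one frame per row => depth <= n + 1
            if (row == n) return; // found a complete placement

            for (int col = 0; col < n; col++)
            {
                if (CanPlaceQueen(board, row, col))
                {
                    board[row][col] = true;  // place queen
                    Solve(n, board, row + 1);
                    board[row][col] = false; // remove queen (backtrack)
                }
            }
        }

        private bool CanPlaceQueen(bool[][] board, int row, int col)
        {
            for (int i = 1; i <= row; i++)
            {
                if (board[row - i][col]) return false;                               // straight up
                if (col - i >= 0 && board[row - i][col - i]) return false;           // up-left diagonal
                if (col + i < board.Length && board[row - i][col + i]) return false; // up-right diagonal
            }
            return true;
        }
    }

    For instance, new CountingSolution().Count(8) should report a maximum depth of 9 frames, while the call count stays far below both 8! and 8^8: the recurrences above bound the worst case, and the pruning in CanPlaceQueen cuts off most branches in practice.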