python, time-complexity, complexity-theory, backtracking, pruning

Can't tell the difference between two Python n-queens solutions


Reading up on backtracking led me to a page on geeksforgeeks.org about solutions to the n-queens problem. The first solution is introduced as the "naive approach" that generates all possible permutations, and is the least efficient at O(n! * n). The second solution is described as follows: "Instead of generating all possible permutations, we build the solution incrementally, while doing this we can make sure at each step that the partial solution remains valid. If a conflict occur then we’ll backtrack immediately, this helps in avoiding unnecessary computations." It is allegedly O(n!).
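To make sure I understand the description of the first approach, this is roughly what I would expect "generating all possible permutations" to look like (my own sketch, not from the article; naive_permutations is just a name I made up):

from itertools import permutations

def naive_permutations(n):
    # Enumerate every permutation of rows 1..n (one queen per column,
    # all rows distinct), then test the *complete* board for diagonal
    # conflicts. The O(n) test per permutation gives O(n! * n) overall.
    res = []
    cols = range(1, n + 1)
    for perm in permutations(cols):
        # Two queens clash diagonally exactly when they share a
        # row+col value or a row-col value.
        if (len({r + c for r, c in zip(perm, cols)}) == n
                and len({r - c for r, c in zip(perm, cols)}) == n):
            res.append(list(perm))
    return res

if __name__ == "__main__":
    print(naive_permutations(4))  # [[2, 4, 1, 3], [3, 1, 4, 2]]

Neither of the two snippets below seems to work this way, since both check partial boards and backtrack as soon as a conflict appears.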

Analyzing the code, I simply cannot see the difference between the two (in terms of recursion/backtracking/pruning), apart from differences in comments, some variable names, the way diagonal conflicts are calculated/recorded, and some other trivial things that make no difference, like using [:] instead of .copy().

First solution (least efficient):

# Python program to find all solutions of the N-Queens problem
# using recursion

# Function to check if placement is safe
def isSafe(board, currRow, currCol):
    for i in range(len(board)):
        placedRow = board[i]
        placedCol = i + 1
        
        # Check diagonals
        if abs(placedRow - currRow) == \
           abs(placedCol - currCol):
            return False # Not safe
          
    return True # Safe to place

# Recursive utility to solve N-Queens
def nQueenUtil(col, n, board, res, visited):
  
    # If all queens placed, add to res
    if col > n:
        res.append(board.copy())
        return
      
    # Try each row in column
    for row in range(1, n+1):
      
        # If row not used
        if not visited[row]:
          
            # Check safety
            if isSafe(board, row, col):
              
                # Mark row
                visited[row] = True
                
                # Place queen
                board.append(row)
                
                # Recur for next column
                nQueenUtil(col+1, n, board, res, visited)
                
                # Backtrack
                board.pop()
                visited[row] = False

# Main N-Queen solver
def nQueen(n):
    res = []
    board = []
    visited = [False] * (n + 1)
    nQueenUtil(1, n, board, res, visited)
    return res

if __name__ == "__main__":
    n = 4
    res = nQueen(n)
    for row in res:
        print(row)

Second solution (backtracking with pruning, allegedly more efficient):

# Python program to find all solutions of the N-Queens problem
# using backtracking and pruning

def nQueenUtil(j, n, board, rows, diag1, diag2, res):

    if j > n:
        # A solution is found
        res.append(board[:])
        return

    for i in range(1, n + 1):
        if not rows[i] and not diag1[i + j] and not diag2[i - j + n]:

            # Place queen
            rows[i] = diag1[i + j] = diag2[i - j + n] = True
            board.append(i)

            # Recurse to the next column
            nQueenUtil(j + 1, n, board, rows, diag1, diag2, res)

            # Remove queen (backtrack)
            board.pop()
            rows[i] = diag1[i + j] = diag2[i - j + n] = False

def nQueen(n):
    res = []
    board = []

    # Rows occupied
    rows = [False] * (n + 1)

    # Major diagonals (row + col) and minor diagonals (row - col + n)
    diag1 = [False] * (2 * n + 1)
    diag2 = [False] * (2 * n + 1)

    # Start solving from the first column
    nQueenUtil(1, n, board, rows, diag1, diag2, res)
    return res

if __name__ == "__main__":
    n = 4
    res = nQueen(n)

    for temp in res:
        print(temp)

Solution

  • The difference is in the diagonal checks.

    The second, efficient version can tell in O(1) time whether a given square sits on any occupied diagonal. For this it uses the diag1 and diag2 lists, which hold one flag per diagonal indicating whether that diagonal is occupied.

    The less efficient version has no such data structure; for a given square it iterates over the queens already placed to check whether any of them occupies the same diagonal. This happens in the isSafe function and costs O(n) per checked square instead of O(1), which is exactly the extra factor of n separating O(n! * n) from O(n!) (see the sketch below).
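To make this concrete, here is my own minimal illustration of just the diagonal test in each version (not taken from the article; the names diag_check_linear and diag_check_flags are made up):

def diag_check_linear(board, curr_row, curr_col):
    # O(k) per probe, where k = queens placed so far: compare the
    # candidate square against every queen already on the board.
    for i in range(len(board)):
        placed_row, placed_col = board[i], i + 1
        if abs(placed_row - curr_row) == abs(placed_col - curr_col):
            return False
    return True

def diag_check_flags(row, col, n, diag1, diag2):
    # O(1) per probe: two lookups, independent of how many queens are placed.
    return not diag1[row + col] and not diag2[row - col + n]

if __name__ == "__main__":
    n = 4
    # One queen placed so far: column 1, row 2 (same 1-based convention).
    board = [2]
    diag1 = [False] * (2 * n + 1)
    diag2 = [False] * (2 * n + 1)
    diag1[2 + 1] = diag2[2 - 1 + n] = True

    # Probe every row of column 2 with both checks.
    for row in range(1, n + 1):
        print(row, diag_check_linear(board, row, 2),
              diag_check_flags(row, 2, n, diag1, diag2))

Both checks return the same answers for every probed square; the only difference is that the first loops over all queens placed so far while the second does two constant-time lookups.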