Question: How can I efficiently compute the minimum distance between two axis-aligned boxes in n dimensions?
Box format: The boxes, A and B, are given by their minimum and maximum points, A_min, A_max, B_min, B_max, each of which is an n-dimensional vector. That is, the boxes may be written mathematically as the following Cartesian products of intervals:
A = [A_min(1), A_max(1)] x [A_min(2), A_max(2)] x ... x [A_min(n), A_max(n)]
B = [B_min(1), B_max(1)] x [B_min(2), B_max(2)] x ... x [B_min(n), B_max(n)]
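For concreteness, here is a minimal sketch of this representation as numpy arrays; the corner values are made up purely for illustration:

import numpy as np

# One possible 3D instance of the format above: each box is a pair of corner vectors.
A_min = np.array([0.0, 0.0, 0.0])
A_max = np.array([1.0, 2.0, 1.0])  # A = [0, 1] x [0, 2] x [0, 1]
B_min = np.array([3.0, 1.0, 4.0])
B_max = np.array([5.0, 3.0, 6.0])  # B = [3, 5] x [1, 3] x [4, 6]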
Picture: [2D illustration of two axis-aligned boxes and the minimum distance between them]
Note: I ask this question and answer it myself because this question (in its general n-dimensional form) appears to be absent from Stack Overflow even after all these years. Good answers to it are hard to find on the internet more generally. After googling around, I eventually had to work this out myself, and I am posting it here to spare future readers the same trouble.
The minimum distance between the boxes is given by:
dist = sqrt(||u||^2 + ||v||^2)
where
u = max(0, A_min - B_max)
v = max(0, B_min - A_max)
The maximization is done entrywise on the vectors (i.e., max(0, w) means replace each negative entry of the vector w with zero, leaving the nonnegative entries unchanged). The notation ||w|| denotes the Euclidean norm of the vector w (the square root of the sum of the squares of its entries). The formula works because, along each axis i, at most one of A_min(i) - B_max(i) and B_min(i) - A_max(i) is positive, and when positive it equals the gap between the boxes' intervals along that axis; if the intervals overlap along axis i, both quantities are nonpositive and that axis contributes nothing.
This requires no case-by-case analysis and works in any dimension, regardless of how the boxes are positioned relative to each other.
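As a sanity check, here is the formula worked out on a small 2D example (the boxes are made up for illustration):

A = [0, 1] x [0, 1],  B = [3, 5] x [2, 4]

u = max(0, A_min - B_max) = max(0, (0 - 5, 0 - 4)) = (0, 0)
v = max(0, B_min - A_max) = max(0, (3 - 1, 2 - 1)) = (2, 1)
dist = sqrt(0^2 + 0^2 + 2^2 + 1^2) = sqrt(5) ≈ 2.236

This matches the distance between the nearest corners of the two boxes, (1, 1) on A and (3, 2) on B.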
Python code:
import numpy as np

def boxes_distance(A_min, A_max, B_min, B_max):
    # Entrywise gaps; along each axis, at most one of these is positive.
    delta1 = A_min - B_max
    delta2 = B_min - A_max
    # Clamp negative entries (axes where the boxes overlap) to zero.
    u = np.maximum(0, delta1)
    v = np.maximum(0, delta2)
    # sqrt(||u||^2 + ||v||^2), computed as the norm of the stacked vector.
    dist = np.linalg.norm(np.concatenate([u, v]))
    return dist
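A quick usage sketch, reusing the made-up 2D boxes from the worked example above (the second call shows that overlapping boxes correctly give distance zero):

# Disjoint boxes from the worked example: expect sqrt(5) ≈ 2.2360679...
print(boxes_distance(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                     np.array([3.0, 2.0]), np.array([5.0, 4.0])))

# Overlapping boxes: every entry of u and v clamps to zero, so dist == 0.0.
print(boxes_distance(np.array([0.0, 0.0]), np.array([2.0, 2.0]),
                     np.array([1.0, 1.0]), np.array([3.0, 3.0])))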