I implemented a standard depth-first search over a tree of nodes, where each node encapsulates a state of the problem I'm solving. I also added the method below to make sure I don't repeat a move by expanding a node whose state I have already checked in some ancestor node. My question is: does this method alter the time or space complexity of the algorithm, or do they stay at the typical DFS values of O(b^m) and O(bm) respectively (where b is the branching factor and m the maximum depth)?
//additional method which prevents us from repeating moves
public boolean checkForRepeatedMoves(Node node) {
    //if the node's state matches the state of its parent, its parent's parent,
    //and so on up the tree, then we are repeating a move
    for (Node ancestor = node.getParent(); ancestor != null; ancestor = ancestor.getParent()) {
        if (ancestor.getState().isEqualTo(node.getState())) {
            return true;
        }
    }
    //we have reached the root and no repetition was detected
    return false;
}
Space complexity
checkForRepeatedMoves allocates no new objects and, being iterative, does not pile up invocations on the call stack, so it leaves the overall space complexity of the DFS unchanged at O(bm).
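For concreteness, here is a runnable sketch of that ancestor check. The Node and State classes below are illustrative assumptions modeled on the getters used in the question (getParent, getState, isEqualTo), not the actual classes:

```java
// Self-contained sketch with assumed minimal Node/State types.
public class DfsSketch {
    static class State {
        final int value; // placeholder payload; a real State would be richer
        State(int value) { this.value = value; }
        boolean isEqualTo(State other) { return this.value == other.value; }
    }

    static class Node {
        final State state;
        final Node parent; // null for the root
        Node(State state, Node parent) { this.state = state; this.parent = parent; }
        State getState() { return state; }
        Node getParent() { return parent; }
    }

    // The ancestor check from the question: walk up the parent chain and
    // report true if any ancestor holds the same state as 'node'.
    static boolean checkForRepeatedMoves(Node node) {
        for (Node ancestor = node.getParent(); ancestor != null; ancestor = ancestor.getParent()) {
            if (ancestor.getState().isEqualTo(node.getState())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Path root(0) -> a(1) -> b(2); a child holding state 0 repeats the root's state.
        Node root = new Node(new State(0), null);
        Node a = new Node(new State(1), root);
        Node b = new Node(new State(2), a);
        Node repeat = new Node(new State(0), b);
        System.out.println(checkForRepeatedMoves(b));      // false: state 2 not seen on this path
        System.out.println(checkForRepeatedMoves(repeat)); // true: state 0 repeats the root's state
    }
}
```

Note that the walk touches only existing nodes and a single loop variable, which is why the space bound is unaffected.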
Time complexity
Let's assume checkForRepeatedMoves is called for every node of the tree, and that the tree is fully populated at every level (loosely speaking). Let's call c the unit of work to compare the state of the node being checked against the state of one of its ancestors, and let's assume c is constant.
Let's break down the cost at each level of the tree, from level 1 (the root) down to level m + 1:

Level 1: 0 (0 parents visited for 1 node)
Level 2: b.c (1 parent visited for each of the root's b children)
Level 3: 2.b^2.c (2 parents visited for each of b^2 nodes)
...
Level m + 1: m.b^m.c (m parents visited for each of b^m nodes)

The total cost C(m) can be written as C(m) = c.S(m), where S(m) is the sum:
[1]: S(m) = 0 + b + 2.b^2 + ... + m.b^m
Let's find an asymptotic equivalent of S(m). First, let's observe that
[2]: b.S(m) = 0 + b^2 + 2.b^3 + ... + m.b^(m+1)
Subtracting [1] from [2] gives:
[3]: (b - 1).S(m) = 0 - b - b^2 - b^3 - ... - b^m + m.b^(m+1)
Here we can identify the geometric sum b + b^2 + ... + b^m, which has the closed form [4]: (b^(m+1) - 1) / (b - 1) - 1.
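The closed form [4] can be sanity-checked numerically. A small sketch (the values b = 3, m = 5 are purely illustrative):

```java
// Check that b + b^2 + ... + b^m equals (b^(m+1) - 1)/(b - 1) - 1
// for small integer values of b and m (chosen arbitrarily for illustration).
public class GeoSumCheck {
    // Direct summation of the geometric series from b^1 to b^m.
    static long direct(long b, int m) {
        long sum = 0, pow = b;
        for (int k = 1; k <= m; k++) {
            sum += pow;
            pow *= b;
        }
        return sum;
    }

    // The closed form [4]: (b^(m+1) - 1)/(b - 1) - 1.
    static long closedForm(long b, int m) {
        long pow = 1;
        for (int k = 0; k <= m; k++) pow *= b; // b^(m+1)
        return (pow - 1) / (b - 1) - 1;
    }

    public static void main(String[] args) {
        System.out.println(direct(3, 5) + " == " + closedForm(3, 5)); // both print 363
    }
}
```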
Substituting the geometric sum on the right-hand side of equality [3] with [4], then grouping terms by descending asymptotic dominance, gives
(b - 1).S(m) = m.b^(m+1) + p.b^m + q
where p and q are constants with respect to m.
When m -> +Inf, we have (b - 1).S(m) ~ (is asymptotically equivalent to) m.b^(m+1).
Therefore S(m) ~ [m/(b - 1)].b^(m+1) = [b/(b - 1)].m.b^m, which is Θ(m.b^m).
Hence the cost is C(m) = c.S(m) = Θ(m.b^m), so in particular C(m) = O(m.b^m).
Since m.b^m dominates b^m when m -> +Inf, the overall time complexity of your algorithm becomes O(m.b^m), up from O(b^m) for a conventional DFS. The time complexity is therefore increased.
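As a final numeric sanity check on the analysis, one can compute S(m) directly and compare it with the dominant term m.b^m. The analysis predicts S(m) ~ [b/(b - 1)].m.b^m, so for b = 2 the ratio S(m) / (m.b^m) should approach 2. A sketch with illustrative parameters (b = 2, m = 20):

```java
// Numeric check of S(m) = 0 + b + 2.b^2 + ... + m.b^m against m.b^m,
// using b = 2 and m = 20 as arbitrary illustrative parameters.
public class CostCheck {
    // Direct summation of S(m): level k+1 visits k parents for each of b^k nodes.
    static long s(int m, long b) {
        long sum = 0, pow = 1; // pow holds b^k
        for (int k = 0; k <= m; k++) {
            sum += k * pow;
            pow *= b;
        }
        return sum;
    }

    public static void main(String[] args) {
        int m = 20;
        long b = 2;
        long dominant = m * (1L << m); // m.b^m for b = 2
        double ratio = (double) s(m, b) / dominant;
        // The analysis predicts the ratio tends to b/(b - 1) = 2 for b = 2.
        System.out.printf("S(%d) = %d, m.b^m = %d, ratio = %.3f%n", m, s(m, b), dominant, ratio);
    }
}
```

The ratio stays bounded between constants, which is exactly the Θ(m.b^m) claim: the extra factor of m over plain O(b^m) is real, but there is no additional exponential blow-up.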