Tags: algorithm, binary-tree, space-complexity, inorder

How can Morris inorder tree traversal have a space complexity of O(1)?


I was reading about Morris inorder traversal (GeeksforGeeks, another SO question).

It seems that the advantage of Morris traversal over recursion or iteration with a stack is that its space complexity is O(1) instead of O(h), where h is the height of the tree.
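For reference, here is a minimal Python sketch of the algorithm as I understand it (the `Node` class and variable names are my own, not from the paper). Apart from the tree itself, it only uses the two pointer variables `cur` and `pre`:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def morris_inorder(root):
    result = []
    cur = root
    while cur is not None:
        if cur.left is None:
            result.append(cur.val)   # visit, then move right
            cur = cur.right
        else:
            # find the inorder predecessor of cur in its left subtree
            pre = cur.left
            while pre.right is not None and pre.right is not cur:
                pre = pre.right
            if pre.right is None:
                pre.right = cur      # create a thread back to cur
                cur = cur.left
            else:
                pre.right = None     # remove the thread, restoring the tree
                result.append(cur.val)
                cur = cur.right
    return result
```

For the spindly tree below, `morris_inorder` returns `[4, 3, 2, 1]`, and by the end of the traversal all the threads have been removed again.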

But looking at the algorithm, it seems you could end up adding up to h new pointers. Consider this spindly tree as a simple example:

      1
     /
    2
   / 
  3
 /
4

By the time you reach node 4, every node except the root has a new pointer to its parent stored as its right child, which is O(h) new pointers.

Is the reason that, in most programming languages, a null pointer takes the same space as a non-null pointer, so overwriting the right pointer doesn't consume any new space?

If that is the reason, I find it strange for a language-agnostic algorithm to rely on an implementation detail for its complexity. It looks like a convenient trick more than anything.

For example, the space complexity claim would be wrong in JavaScript, where changing {left: node2} to {left: node2, right: node1} takes more space.


Solution

  • In the article "Traversing Binary Trees Simply and Cheaply", Joseph M. Morris, 16 December 1979, the author described the traversal that came to bear his name. The introduction states:

    The present algorithm uses no auxiliary storage other than two pointers.

    It continues with the following assumptions:

    Each node in a tree has a unique name (in machine terms, an address), the empty node having the name nil. A ‘pointer’ is a variable of type ‘node name’.

    Each node t has two fields of interest, the left and right pointer fields, which are denoted by t.l and t.r, respectively. A pointer field (or ‘link’) whose value is not nil corresponds to an ‘edge’ of the tree.

    Here we can see that the author defines a node as always having a field for the left child pointer and one for the right child pointer. It is only with that assumption that we can say no other auxiliary memory is needed.

    If we define binary trees differently, for instance with a dynamically sized node structure (e.g. a dynamic array for the node's children), then this assumption no longer holds, and so we cannot take the O(1) bound for granted either.
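    To illustrate the distinction (a hypothetical Python sketch; the class names are mine, not from the paper): with fixed-slot nodes, the right field always exists and merely holds nil until threaded, so overwriting it allocates nothing. With a dynamic children list, adding a thread would have to grow the list:

    ```python
    class FixedNode:
        # Both link fields are always allocated, matching Morris's assumption;
        # None plays the role of nil.
        __slots__ = ('val', 'left', 'right')
        def __init__(self, val, left=None, right=None):
            self.val, self.left, self.right = val, left, right

    class DynamicNode:
        # Only as many links as actual children exist.
        def __init__(self, val):
            self.val = val
            self.children = []

    a, b = FixedNode(1), FixedNode(2)
    a.right = b                          # overwrites the pre-allocated slot: no new storage

    c = DynamicNode(1)
    c.children.append(DynamicNode(2))    # the list must grow: new storage is allocated
    ```

    Under the first representation, threading is free; under the second, each thread would cost memory, and the O(1) analysis would no longer apply.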