python, algorithm, bitwise-operators, space-complexity

Which Python method is more efficient for checking powers of 2?


Consider the following two Python implementations for checking if a number is a power of 2:

Method 1: Using math.log()

import math

def is_power_of_two(n):
    if n <= 0:
        return False
    v = math.log(n, 2)
    return int(v) == v

Method 2: Using Bitwise Operator

def is_power_of_two(n):
    return n > 0 and ((n & (n - 1)) == 0)

Both methods return the correct results. However, can you explain the difference in time and space efficiency between these two methods?

Could you explain the computational complexity of each method and identify any scenarios where one method would be more efficient or appropriate to use than the other?


Solution

  • math.log converts its argument to a floating-point value and computes the logarithm on that representation. If the conversion is inexact, so is the result. For double-precision (64-bit) floats, that puts the upper bound at 2^53: larger integers can no longer be represented exactly.

    So only the latter method is correct in general. As for speed, in CPython I doubt you will see any significant difference, but a few integer instructions are generally going to be faster than computing a logarithm in floating point (plus the conversion).

    Out of curiosity, you could also try:

    def is_power_of_two(n):
        # n > 0 guard: n == 0 would give a negative shift count (ValueError)
        return n > 0 and (1 << (n.bit_length() - 1)) == n
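As for why the n & (n - 1) trick works: a positive power of two has exactly one bit set, and subtracting 1 turns that bit off while setting every lower bit, so the AND is zero only for powers of two. A quick illustration:

```python
def is_power_of_two(n):
    return n > 0 and (n & (n - 1)) == 0

# 8 is 0b1000 and 7 is 0b0111: no bits in common, so the AND is 0.
print(bin(8), bin(7), 8 & 7)   # 0b1000 0b111 0

# 12 is 0b1100; n & (n - 1) only clears the lowest set bit (0b0100),
# leaving 0b1000, so 12 is correctly rejected.
print(bin(12 & 11))            # 0b1000

print([n for n in range(1, 33) if is_power_of_two(n)])  # [1, 2, 4, 8, 16, 32]
```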
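To make the precision limit concrete, here is a small sketch; the benchmark numbers are only illustrative and will vary by machine and CPython version:

```python
import math
import timeit

def is_power_of_two_log(n):
    if n <= 0:
        return False
    v = math.log(n, 2)
    return int(v) == v

def is_power_of_two_bitwise(n):
    return n > 0 and (n & (n - 1)) == 0

# 2**53 + 1 is not representable as a 64-bit float: it rounds to 2**53,
# so the log-based method never even sees the real input.
print(float(2**53 + 1) == float(2**53))    # True: the conversion is lossy
print(is_power_of_two_bitwise(2**53 + 1))  # False: Python ints are exact

# Rough microbenchmark (numbers are machine-dependent):
for f in (is_power_of_two_log, is_power_of_two_bitwise):
    t = timeit.timeit(lambda: f(12345678), number=100_000)
    print(f"{f.__name__}: {t:.4f}s")
```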