algorithm, time-complexity, complexity-theory, factorial

Why log(n!) is O(n log(n)) and not O(log(n!))


I understand the math here:

n! < n^n. 

But why can't the complexity simply be O(log(n!))?

Why is there a need, given f(n) = O(g(n)), for f(n) != g(n) to hold? I repeatedly see this pattern in textbooks and can't comprehend the need for it.

Is it because we want to map all f(n)'s to a limited list of g(n)'s: log(n), n, n * log(n), n^2, n^3, etc.?

Example: Is log(n!) = Θ(n * log(n))? Why can't it just be O(log(n!))?


Solution

  • When working with n!, Stirling's approximation can be used:

    n! ~ sqrt(2 * pi * n) * n^n / exp(n) 
    
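    As a quick sanity check, here is a throwaway Python snippet (illustrative only, not part of the argument) showing that the relative error of Stirling's approximation shrinks as n grows:

    import math

    # Compare n! with Stirling's approximation sqrt(2*pi*n) * (n/e)^n.
    for n in (5, 10, 50, 100):
        exact = math.factorial(n)
        stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
        print(n, stirling / exact)  # the ratio approaches 1 from below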

    For log(n!) we have:

    log(n!) ~ log(sqrt(2*pi*n) * n^n / exp(n)) =
            = log(sqrt(2*pi*n)) + log(n^n) - log(exp(n)) =
            = (1/2) * log(2*pi*n) + n * log(n) - n

    The n * log(n) term dominates the others, so log(n!) = Θ(n * log(n)); in particular, O(log(n!)) = O(n * log(n)).
    
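    The Θ relationship can also be eyeballed numerically. A minimal sketch, assuming Python's math.lgamma (where lgamma(n + 1) equals log(n!)):

    import math

    # log(n!) / (n * log(n)) should tend to 1 as n grows, consistent
    # with log(n!) = Theta(n * log(n)); convergence is slow because
    # of the -n term in the expansion above.
    for n in (10, 100, 10_000, 1_000_000):
        log_factorial = math.lgamma(n + 1)  # == log(n!)
        print(n, log_factorial / (n * math.log(n)))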

    Note that O(n * log(n)) is a simpler form of O(log(n!)); the two describe the same set of functions, which is why textbooks usually prefer O(n * log(n)) to the original expression.
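
    For completeness, the same bound follows without Stirling, using the n! < n^n fact from the question together with the lower bound n! >= (n/2)^(n/2) (each of the largest n/2 factors of n! is at least n/2):

    log(n!) <= log(n^n)         = n * log(n)
    log(n!) >= log((n/2)^(n/2)) = (n/2) * log(n/2)

    Since both bounds are within a constant factor of n * log(n), log(n!) = Θ(n * log(n)).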