According to the definition of big O, f(n) <= C*g(n) (for some constant C > 0 and all sufficiently large n) means f(n) = O(g(n)). From that it could be deduced that:
f(n) <= C (taking g(n) = 1, i.e. O(1))
f(n) <= 2C (taking g(n) = 2, i.e. O(2))
I don't think there is any big difference between these two. The best I could come up with is:
f1(n) = 1 - 1 / n
f2(n) = 2 - 1 / n
C = 1 (for both)
But what distinguishes these two complexities, since both are constant?
Could you show some real-world code to demonstrate the difference between O(1) and O(2)?
There is no difference between O(1) and O(2). Algorithms classified as O(1) are also O(2), and vice versa. In fact, O(c1) = O(c2) for any positive constants c1 and c2.
O(c), where c is a positive constant, simply means that the runtime is bounded by a constant, independent of the input or problem size. From this it is already clear (informally) that O(1) and O(2) are equal.
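To address the request for real-world code, here is a minimal sketch (the function names are made up for illustration). Both functions below take essentially the same amount of time whether the list holds ten elements or ten million; the second simply does about twice as much constant work. Both are O(1), and equally O(2):

    def first_element(items):
        # One constant-time operation: a single index access.
        return items[0]

    def first_plus_last(items):
        # Roughly two constant-time operations: two index accesses and an addition.
        return items[0] + items[-1]

No amount of measurement will separate them asymptotically: the step count of each is bounded by a constant that does not depend on len(items), which is all that O(1) (or O(2)) asserts.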
Formally, consider a function f in O(1). Then there is a constant c such that f(n) <= c * 1 for all n. Let d = c / 2. Then f(n) <= c = (c / 2) * 2 = d * 2, which shows that f is O(2).

Similarly, if g is in O(2), there is a constant c such that g(n) <= c * 2 for all n. Let d = 2 * c. Then g(n) <= c * 2 = d = d * 1, which shows that g is O(1). Therefore O(1) = O(2).
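For readers who prefer it symbolically, the same two containments can be written as a pair of inequality chains (same symbols as in the paragraph above):

    \begin{aligned}
    f(n) &\le c \cdot 1 = \tfrac{c}{2} \cdot 2 = d \cdot 2 \quad (d = c/2) &&\implies f \in O(2),\\
    g(n) &\le c \cdot 2 = d \cdot 1 \quad (d = 2c) &&\implies g \in O(1).
    \end{aligned}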