How is computational complexity defined if the algorithm produces a set of k results? Is the estimate given for producing the entire set (then it cannot be faster than O(k)) or per element (then the estimate must be multiplied by the set size to compare it with non-set-producing algorithms)?
An example that suits both cases is selecting k elements from N. E.g., is there a difference in the estimate depending on whether ~k or ~N steps are required?
I'd like to see some hard evidence: the formal definition of the term for these cases, and/or how such ambiguities are resolved in CS papers, rather than just random thoughts and/or personal experience. The goal is to work out a complete, state-of-the-art way of eliminating these ambiguities in my (and others') texts once and for all.
The questions that had me puzzled about this are: "Unique (non-repeating) random numbers in O(1)?", "How do you efficiently generate a list of K non-repeating integers between 0 and an upper bound N", "Algorithm to select a single, random combination of values?", and "Efficiently selecting a set of random elements from a linked list".
According to the answer at CS.SE:
For instance, an O(log n) delay algorithm returns results one at a time, taking at most O(log n) steps (running time) to output each result.
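As an illustration of that per-result ("delay") accounting, here is a small Python sketch of my own (not taken from the cited answer): after a one-time heapify, the generator emits its n inputs in sorted order, paying O(log n) heap work per emitted result, while the total over all outputs is O(n log n).

```python
import heapq

def sorted_stream(items):
    """Yield the items in ascending order, one result at a time."""
    heap = list(items)
    heapq.heapify(heap)            # one-time O(n) preprocessing
    while heap:
        yield heapq.heappop(heap)  # O(log n) work per yielded result (the "delay")

# Per-element view: O(log n) delay.  Whole-set view: O(n log n) total.
for x in sorted_stream([5, 1, 4, 2, 3]):
    print(x)
```

The same procedure can thus be described either as "O(log n) delay" (per result) or as "O(n log n)" (for the whole set), which is precisely the distinction the quoted answer draws.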