algorithm, complexity-theory, computer-science-theory

Computational complexity for the case of many answers or multiple parameters


How is computational complexity defined if the algorithm:

  • produces many answers (a whole collection of elements rather than a single value), or
  • depends on multiple parameters?

An example that fits both cases is selecting k elements out of N. For instance, does the complexity estimate differ depending on whether the algorithm needs ~k steps or ~N steps? A sketch of both variants follows below.
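
For concreteness, here is a minimal sketch (in Python; the function names are illustrative, not taken from the linked questions) of two well-known ways to pick k distinct elements from {0, …, N−1}: one whose cost grows with N, and one (Floyd's sampling) whose cost grows only with k. Describing either as simply "linear" without naming the variable is exactly the ambiguity in question.

```python
import random


def sample_k_of_n_shuffle(n, k):
    """~N-step approach: shuffle all N candidates, keep the first k.

    Time and space are Theta(N) no matter how small k is.
    """
    pool = list(range(n))          # materialise all N candidates
    random.shuffle(pool)           # Theta(N) work
    return set(pool[:k])


def sample_k_of_n_floyd(n, k):
    """~k-step approach (Floyd's sampling): O(k) time and space.

    Produces a uniformly random k-subset of {0, ..., n-1} without
    ever touching the remaining N - k elements.
    """
    chosen = set()
    for j in range(n - k, n):
        t = random.randint(0, j)   # uniform in [0, j]
        chosen.add(j if t in chosen else t)
    return chosen
```

With N = 10^9 and k = 10, the two sketches behave very differently, even though both could loosely be called "linear" if the variable is left unstated.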

I'd like to see hard evidence rather than just opinions or personal experience: a formal definition of the term for these cases, and/or how such ambiguities are resolved in CS papers. The goal is to work out a complete, state-of-the-art convention for eliminating these ambiguities from my (and others') texts once and for all.

The questions that had me puzzled about this are:

  • Unique (non-repeating) random numbers in O(1)?
  • How do you efficiently generate a list of K non-repeating integers between 0 and an upper bound N
  • Algorithm to select a single, random combination of values?
  • Efficiently selecting a set of random elements from a linked list


Solution

  • According to the answer at CS.SE: