On p. 296 of Sedgewick et al.'s *Algorithms*, 4th edition, the authors write:
The optimum value of the cutoff M is system-dependent, but any value between 5 and 15 is likely to work well in most situations.
But I don't understand what it means for the cutoff value to be system-dependent. Isn't the performance of an algorithm measured by the number of operations it performs, independent of the speed of the computer's processor?
I have a 2nd Edition, from 1984. On page 112, under the heading "Small Subfiles" in a discussion of Quicksort:
The second improvement stems from the observation that a recursive program is guaranteed to call itself for many small subfiles, so it should be changed to use a better method when small subfiles are encountered. One obvious way to do this is to change the test at the beginning of the recursive routine from
if r>l then

to a call on insertion sort (modified to accept parameters defining the subfile to be sorted), that is,

if r-l <= M then insertion(l,r)

Here, M is some parameter whose exact value depends upon the implementation. The value chosen for M need not be the best possible: the algorithm works about the same for M in the range from about 5 to about 25. The reduction in the running time is on the order of 20% for most applications.
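In modern code, the modification the 2nd edition describes looks roughly like this. This is a minimal Java sketch: the class and method names and the Lomuto-style partition are my own assumptions for illustration, not the book's actual code.

```java
import java.util.Arrays;

public class QuickWithCutoff {
    // Cutoff below which we fall back to insertion sort; per the text,
    // anything in roughly 5..25 performs about the same.
    static final int M = 10;

    static void sort(int[] a, int l, int r) {
        if (r - l <= M) {            // small subfile: use insertion sort instead
            insertion(a, l, r);
            return;
        }
        int p = partition(a, l, r);  // ordinary Quicksort step
        sort(a, l, p - 1);
        sort(a, p + 1, r);
    }

    // Insertion sort on the subfile a[l..r] -- "modified to accept
    // parameters defining the subfile to be sorted", as the book says.
    static void insertion(int[] a, int l, int r) {
        for (int i = l + 1; i <= r; i++) {
            int v = a[i];
            int j = i - 1;
            while (j >= l && a[j] > v) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = v;
        }
    }

    // Lomuto partition around the last element (an assumption; the book
    // uses a different partitioning scheme).
    static int partition(int[] a, int l, int r) {
        int pivot = a[r];
        int i = l;
        for (int j = l; j < r; j++) {
            if (a[j] < pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[r]; a[r] = t;
        return i;
    }

    public static void main(String[] args) {
        int[] a = {9, 4, 7, 1, 8, 2, 6, 3, 5, 0, 15, 12, 14, 11, 13, 10};
        sort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // the numbers 0..15 in order
    }
}
```

Note that the change is a single guard at the top of the recursion: subfiles of size M or smaller never pay Quicksort's partitioning and recursion costs at all.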
There are a couple of points here.
You're right that, in theory, the performance of an algorithm is measured by the number of operations it performs, not by the speed of the processor. But that kind of analysis deliberately ignores constant factors, and the constants (the cost of a comparison, a swap, a recursive call, a cache miss) do depend on the processor, the compiler, and the runtime. The best cutoff M is the point where those constant factors make insertion sort and Quicksort break even, which is why it's system-dependent.
As I explained in my answer to a similar question, Quicksort is a hugely complicated algorithm when compared to Insertion sort. There is considerable bookkeeping overhead involved. That is, there are certain fixed costs with Quicksort, regardless of how large or small the subarray you're sorting. As the array gets smaller, the percentage of time spent in overhead increases.
Insertion sort is a very simple sorting algorithm. There is very little bookkeeping overhead. Insertion sort can be faster than Quicksort for small arrays because it actually performs fewer operations than Quicksort does when the arrays are very small. Exactly where the crossover falls (as the text says, from 5 to 25) depends on the particular implementations.
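Because the crossover is a property of the whole system, the practical way to pick M is to measure it on your own machine. Here's a rough harness along those lines (my own sketch, not from either edition of the book; the sort and partition code are illustrative assumptions) that times the same sort at several cutoffs:

```java
import java.util.Random;

public class TuneCutoff {
    static int cutoff;  // cutoff under test; set before each run

    static void sort(int[] a, int l, int r) {
        if (r - l <= cutoff) { insertion(a, l, r); return; }
        int p = partition(a, l, r);
        sort(a, l, p - 1);
        sort(a, p + 1, r);
    }

    // Insertion sort restricted to the subfile a[l..r].
    static void insertion(int[] a, int l, int r) {
        for (int i = l + 1; i <= r; i++) {
            int v = a[i], j = i - 1;
            while (j >= l && a[j] > v) a[j + 1] = a[j--];
            a[j + 1] = v;
        }
    }

    // Lomuto partition around the last element.
    static int partition(int[] a, int l, int r) {
        int pivot = a[r], i = l;
        for (int j = l; j < r; j++)
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        int t = a[i]; a[i] = a[r]; a[r] = t;
        return i;
    }

    public static void main(String[] args) {
        int[] master = new Random(42).ints(200_000).toArray();
        for (int m : new int[]{0, 5, 10, 15, 25}) {
            cutoff = m;
            int[] a = master.clone();  // same input for every cutoff
            long t0 = System.nanoTime();
            sort(a, 0, a.length - 1);
            System.out.printf("cutoff=%2d: %.1f ms%n",
                    m, (System.nanoTime() - t0) / 1e6);
        }
    }
}
```

On typical hardware the differences across the 5 to 25 range are small, which matches the book's remark that the value chosen for M need not be the best possible; the interesting comparison is against cutoff 0, i.e. plain Quicksort.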
Sedgewick's book, like many other algorithms texts, often blurs the line between the theoretical and the practical. I think it's good to keep the practical in mind, but either the author should make it clear when he's talking about actual performance versus theoretical performance, or the instructor should clarify the distinction in class.