So I encountered this very weird behavior on an edge case when sorting a vector using a custom comparator.
When I run this code, it does not halt and runs forever:
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    auto comp = [](int lhs, int rhs) {
        return true;
    };
    vector<int> vec{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
    sort(vec.begin(), vec.end(), comp);
    for (int num : vec)
        cout << num;
    return 0;
}
However, when I change the true to false, it works perfectly:
auto comp = [](int lhs, int rhs) {
    return false;
};
What's weirder, when I decrease the number of 0's to be sorted, it also works perfectly. It works with 16 or fewer 0's; when I add one more 0 to make it 17, the program will not halt again. (Will g++ switch to another sorting algorithm if the length exceeds 16?)
vector<int> vec{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
Why is this the case? Am I missing some important concept in C++'s sort() function?
This comparator:
auto comp = [](int lhs, int rhs) {
    return true;
};
violates the requirement of std::sort that the comparator must establish a strict weak ordering. Note that this function returns true regardless of the order of its arguments, implying that two elements can each be less than the other, which doesn't make sense. Violating this requirement of std::sort invokes undefined behavior, which is sufficient to explain the differing behavior you see with different vector sizes.
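To make the violation concrete, here is a small sketch (not from the original post) that just prints what the always-true comparator claims: it says an element is less than itself, and it says two elements are each less than the other, both of which a strict weak ordering forbids.

#include <iostream>

int main() {
    // Same behavior as the comparator in the question: always "less than".
    auto comp = [](int, int) { return true; };

    std::cout << std::boolalpha;

    // Irreflexivity: a valid comparator must never report x < x,
    // but this one always answers "yes".
    std::cout << "comp(0, 0) = " << comp(0, 0) << '\n';   // true -> violation

    // Asymmetry: comp(a, b) and comp(b, a) must never both be true,
    // yet here both calls return true.
    std::cout << "comp(1, 2) = " << comp(1, 2)
              << ", comp(2, 1) = " << comp(2, 1) << '\n'; // both true -> violation
}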
On the other hand, this comparator:
auto comp = [](int lhs, int rhs) {
    return false;
};
is perfectly fine. It basically says that no element compares less than any other (i.e. they are all equivalent). This satisfies the strict weak ordering requirement, so std::sort will work just fine with it.
Of course, std::sort won't do anything useful with the second comparator, since all the elements are considered already "sorted". It might still reorder the elements, though, because std::sort makes no guarantee about the relative order of equivalent elements; if you use std::stable_sort instead, the original order is guaranteed to be preserved.
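As a small illustration (using chars rather than the question's ints, just so any reordering would actually be visible), here is a sketch of that difference:

#include <algorithm>
#include <iostream>
#include <string>

int main() {
    // Same idea as the second comparator above: nothing is "less than"
    // anything else, so every element is equivalent to every other.
    auto comp = [](char, char) { return false; };

    std::string a = "hello";
    std::string b = a;

    // std::sort may reorder equivalent elements however it likes.
    std::sort(a.begin(), a.end(), comp);

    // std::stable_sort must preserve the relative order of equivalent
    // elements, so b is guaranteed to still read "hello".
    std::stable_sort(b.begin(), b.end(), comp);

    std::cout << a << '\n';   // some unspecified permutation of "hello"
    std::cout << b << '\n';   // always "hello"
}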