This question is about whether there is some abstract similarity between the solutions that leads to the appearance of log in problems such as sorting and searching. Or, more simply, why does log appear so frequently in algorithmic complexity?
Logarithms often show up when a problem can be repeatedly reduced in size by a multiplicative factor. By definition, it takes only a logarithmic number of such reductions to shrink the problem to constant size (e.g. size 1).
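To make that concrete, here's a minimal Python sketch (the function name is my own) that counts how many halvings it takes to reach size 1; the count is exactly floor(log2(n)):

```python
def halving_steps(n: int) -> int:
    """Count how many times n can be halved before reaching size 1."""
    steps = 0
    while n > 1:
        n //= 2      # reduce the problem size by a multiplicative factor of 2
        steps += 1
    return steps

print(halving_steps(1024))  # 10, since 2**10 == 1024
```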
A typical example is repeatedly eliminating half of the dataset, as in binary search: each comparison discards half of the remaining candidates, so the search finishes in O(log2(n)) = O(log n) steps (the base only changes a constant factor, so it drops out of big-O). Sorting algorithms such as merge sort likewise work by repeatedly splitting the dataset in half, so their recursion is O(log n) levels deep, which is where the logarithmic factor in their O(n log n) time complexity comes from.
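For illustration, here's a standard iterative binary search sketch in Python (the function name and type hints are my own choices, not from any particular library):

```python
from typing import Sequence

def binary_search(items: Sequence[int], target: int) -> int:
    """Return the index of target in a sorted sequence, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4
```

Each iteration of the loop halves the search interval, so at most about log2(n) iterations are needed.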
More generally, logarithms frequently show up in the solutions to divide-and-conquer recurrence relations. See the Master theorem article on Wikipedia for further discussion.
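As a worked example, merge sort's recurrence falls under case 2 of the Master theorem:

```latex
T(n) = 2\,T(n/2) + \Theta(n), \qquad a = 2,\; b = 2
% Since f(n) = \Theta(n) = \Theta(n^{\log_b a}) = \Theta(n^1),
% case 2 of the Master theorem applies, giving
T(n) = \Theta(n \log n)
```

The log n factor here is exactly the depth of the recursion tree: the number of times n can be divided by b = 2 before reaching the base case.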