Doesn't a list require O(n) time to increase its size? How, then, could `heapq.heappush` be O(log n)?
A `list` has amortized O(1) appends; every once in a long while, it needs to expand the underlying capacity, but usually an append just needs to claim already-allocated capacity.
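You can observe this overallocation directly. The sketch below (a demonstration, relying on the CPython implementation detail that `sys.getsizeof` reports a list's current allocation) counts how often the allocation actually changes across many appends:

```python
import sys

# Append 10,000 items and count how often the list's underlying
# allocation changes. CPython overallocates on each resize, so the
# reported size jumps only occasionally, not on every append.
lst = []
resizes = 0
last_size = sys.getsizeof(lst)
for i in range(10_000):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != last_size:
        resizes += 1
        last_size = size

print(f"{len(lst)} appends, {resizes} reallocations")
```

The reallocation count comes out orders of magnitude smaller than the append count, which is exactly what "amortized O(1)" means: the occasional O(n) copy is spread across many cheap appends.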
So yes, every once in a while, `heapq.heappush` will incur O(n) work to reallocate the underlying `list`'s storage, but the vast majority of the time, adding the extra item (done via `append` internally) is O(1). That append is followed by an O(log n) sift operation (named `_siftdown` in CPython's `heapq`, though it moves the new item up toward the root) that puts it in the correct position, reestablishing the heap invariant. The sift is implemented with element swaps, which are each O(1), not insertions and deletions (which would be O(n) each).
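A minimal sketch of that push-then-sift logic (not CPython's actual implementation, which avoids repeated swaps by moving items once) makes the cost structure visible: one amortized O(1) append, then at most log2(n) parent comparisons and O(1) swaps:

```python
def sketch_heappush(heap, item):
    """Append item, then swap it up toward the root until the
    min-heap invariant (parent <= child) holds again."""
    heap.append(item)                 # amortized O(1)
    i = len(heap) - 1
    while i > 0:                      # at most log2(n) iterations
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:
            break                     # invariant restored
        heap[i], heap[parent] = heap[parent], heap[i]  # O(1) swap
        i = parent

heap = []
for x in [5, 1, 4, 2, 3]:
    sketch_heappush(heap, x)
print(heap[0])  # → 1 (smallest item sits at the root)
```

Each loop iteration touches only two existing slots of the list; nothing is inserted into or deleted from the middle of the list, so no O(n) element shifting ever happens inside the sift.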