Edited to clarify the question
I'm about to turn in a laboratory project at uni on two well-known compression algorithms (Huffman coding and Lempel-Ziv 77). My implementation of Huffman coding follows the standard greedy approach, in which the tree is built with the following steps (roughly sketched in code after the list):
1. Calculate frequencies for all unique characters and place them in a minimum heap
2. While there are more than two nodes in heap:
2.1 Extract the minimum node from the heap and make it the left child
2.2 Extract the next minimum node from the heap and make it the right child
2.3 Create a parent node whose frequency is the sum of the left child's and right child's frequencies
2.4 Push the parent node back into the minimum heap and continue
3. Finalize the tree with a root node (its children are the last two nodes in the minimum heap)
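To make the steps concrete, here is a rough Python sketch of the construction (just an illustration of the steps above, not my actual project code; the Node class is a stand-in). It collapses steps 2 and 3 into a single loop, which produces the same tree:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Node:
    freq: int
    symbol: str | None = field(compare=False, default=None)
    left: "Node | None" = field(compare=False, default=None)
    right: "Node | None" = field(compare=False, default=None)

def build_tree(frequencies: dict[str, int]) -> Node:
    # Step 1: one leaf per unique character, all pushed into a min-heap.
    heap = [Node(freq, symbol) for symbol, freq in frequencies.items()]
    heapq.heapify(heap)
    # Steps 2-3: keep merging the two lowest-frequency nodes; the last
    # node left in the heap is the root.
    while len(heap) > 1:
        left = heapq.heappop(heap)
        right = heapq.heappop(heap)
        heapq.heappush(heap, Node(left.freq + right.freq, left=left, right=right))
    return heap[0]
```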
I've been able to find good sources on the time complexities of all the other steps for both algorithms, except for the decompression phase of Huffman coding. My current understanding is that even though this kind of Huffman tree is unbalanced and in the worst case has a longest path of length n-1 (where n is the number of unique characters), the traversal times are balanced out by the frequencies of the different characters.
In an unbalanced Huffman tree the leaf nodes with higher frequencies are more common and have shorter traversal paths. This balances out the total time cost, and my intuition says the total time approaches a complexity of O(k log n), where k is the length of the uncompressed content and n is the number of unique characters in the Huffman tree.
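To spell out where that intuition comes from, here's a rough sketch of the decompression loop I have in mind (illustrative only, reusing the Node sketch above; my actual code differs in the details). Each decoded character walks from the root to a leaf, so it costs one tree step per bit of its code:

```python
def decompress(bits: str, root: Node) -> str:
    # bits: compressed content as a string of '0'/'1' characters.
    # Each decoded character costs exactly as many steps as its code has
    # bits, so the total work is proportional to len(bits).
    output = []
    node = root
    for bit in bits:
        node = node.left if bit == "0" else node.right
        if node.symbol is not None:   # reached a leaf
            output.append(node.symbol)
            node = root               # restart at the root for the next character
    return "".join(output)
```

So the total work is k times the average (frequency-weighted) code length, and my question is really how that average relates to log n.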
I'd feel much more comfortable if I had a reliable source to reference regarding the time complexity of the decompression phase that either supports my intuition or counters it. If anyone has good tips on books or not-too-difficult-to-read articles that cover this particular question, I'd very much appreciate the help! I've put quite a lot of effort into my project, and I don't think this one detail is going to make a big dent even if I don't find the answer in time. I'm mainly just super fascinated by the topic and want to learn as much as possible.
And just in case anyone ever stumbles onto this question while looking into something similar, here's my project. Keep in mind that it's a student project, so it's good to keep a critical mind while reviewing it.
Generating a Huffman code takes O(n) time for n symbols being coded, when the frequencies are already sorted. The sorting in general takes O(n log n), so that is what dominates the time complexity of the Huffman algorithm.
The priority queue in your example is one way to do that sorting. There are other approaches where the sorting is done up front, before the Huffman construction itself (see the sketch below).
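For example, here is a rough sketch (mine, not from your project) of the classic two-queue construction: with the frequencies already sorted in ascending order, the smallest remaining weight is always at the front of one of two FIFO queues, so the construction loop itself is O(n). For brevity it only computes the total weighted code length instead of building the tree nodes:

```python
from collections import deque

def weighted_code_length(freqs: list[int]) -> int:
    """Total weighted Huffman code length (sum of freq * code length)
    for frequencies given in ascending order; the loop itself is O(n)."""
    leaves = deque(freqs)    # original symbols, pre-sorted ascending
    merged = deque()         # internal-node weights, produced in order

    def pop_min() -> int:
        # The smallest remaining weight is always at the front of one
        # of the two queues, so each extraction is O(1).
        if not merged or (leaves and leaves[0] <= merged[0]):
            return leaves.popleft()
        return merged.popleft()

    total = 0
    while len(leaves) + len(merged) > 1:
        a, b = pop_min(), pop_min()
        total += a + b        # every symbol under this merge gains one bit
        merged.append(a + b)
    return total
```

For instance, weighted_code_length([1, 1, 2, 3, 5]) returns 25, the total number of bits the compressed content would use for that frequency distribution.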
Generally, LZ77 and Huffman coding are combined, as in the DEFLATE format used by gzip and zlib.