
Starvation vs Convoy Effect


Is the only difference between starvation and the convoy effect that the convoy effect is mainly defined for FCFS scheduling algorithms, while starvation is defined for priority-based scheduling?

I researched both effects but couldn't find a direct comparison. This is based on the operating systems theory I learned for my college degree.


Solution

  • Starvation and convoys can occur under both algorithms. The simplest, starvation, can be simulated by a task entering this loop (I hope it isn't undefined behavior):

    while (1) {
        /* busy-wait forever: never blocks and never yields the CPU */
    }

    In FCFS, this task will never surrender the CPU, so every task queued behind it will starve. In a priority-based system, the same task will starve every task of lower priority, as the sketch below illustrates.
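
    To make the priority case concrete, here is a minimal sketch (not part of the original answer) of a scheduler that always runs the highest-priority unfinished task; the task names and the tick count are made up for illustration:

    #include <stdio.h>

    struct task {
        const char *name;
        int priority;   /* higher number = higher priority */
        int finished;   /* the spinning task never sets this */
    };

    int main(void) {
        struct task tasks[] = {
            { "spinner", 3, 0 },   /* the while (1) task above */
            { "editor",  2, 0 },
            { "backup",  1, 0 },
        };
        int n = sizeof tasks / sizeof tasks[0];

        for (int tick = 0; tick < 5; tick++) {
            /* always pick the highest-priority unfinished task */
            struct task *next = NULL;
            for (int i = 0; i < n; i++)
                if (!tasks[i].finished &&
                    (next == NULL || tasks[i].priority > next->priority))
                    next = &tasks[i];
            /* "spinner" never finishes, so it wins every tick and
             * "editor" and "backup" starve */
            printf("tick %d: running %s\n", tick, next->name);
        }
        return 0;
    }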

    Convoys are more generally a resource-contention problem: one task holds a resource (the CPU, a lock), and other tasks have to wait until it is done with it. In a priority-based system this manifests as priority inversion, where a high-priority task is blocked because it needs a resource owned by a lower-priority task. There are ways to mitigate this, including priority inheritance and priority ceiling protocols (see the sketch below). Absent these mechanisms, tasks contending for a resource form a convoy much as in FCFS; unlike FCFS, tasks not contending for the resource are free to execute at will.
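
    One of those mitigations, priority inheritance, can be requested directly from a POSIX mutex. The sketch below only shows how such a mutex is configured; the real-time thread priorities (e.g. SCHED_FIFO) and the threads that would actually contend for the lock are left out, and the error message is my own:

    #include <pthread.h>
    #include <stdio.h>

    int main(void) {
        pthread_mutexattr_t attr;
        pthread_mutex_t lock;

        pthread_mutexattr_init(&attr);
        /* With PTHREAD_PRIO_INHERIT, a low-priority thread holding the
         * lock is temporarily boosted to the priority of the highest-
         * priority thread blocked on it, so a medium-priority thread
         * cannot keep preempting it (the classic inversion scenario). */
        if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT) != 0) {
            fprintf(stderr, "priority inheritance not supported here\n");
            return 1;
        }
        pthread_mutex_init(&lock, &attr);

        /* ... threads of different priorities would lock/unlock `lock` ... */

        pthread_mutex_destroy(&lock);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }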

    The aspirations of responsiveness, throughput, and fairness are often at odds, which is partly why there is no single true solution to scheduling problems.