In this article, in the section titled Fairness with difficult workloads, they talk about unfairness and quantify it on a scale that goes from 0% (best) to 50% (worst). How is that measured? What do these numbers indicate?
Unfairness compares actual running time to ideal running time. If you have C CPUs, T tasks, and S seconds, then the ideal running time for each task is
t(ideal) = C * S / T
If you have one CPU and two tasks, and one task gets all of the CPU time when it should only get half, then its running time deviates from the ideal share by half of the total available time, so the unfairness is 50%. As the article says, 50% is the worst-case unfairness.
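Here is a minimal sketch of that calculation, assuming unfairness is measured as the largest deviation of any task's actual running time from its ideal share t(ideal), expressed as a percentage of the total available CPU time C * S. The function names and the exact aggregation (taking the worst single task) are my own illustration, not taken from the article:

# Sketch of the unfairness metric described above (assumed definition,
# not the article's exact code): worst per-task deviation from the ideal
# share, as a percentage of total CPU time.

def ideal_runtime(cpus: int, seconds: float, tasks: int) -> float:
    """Ideal running time for each task: t(ideal) = C * S / T."""
    return cpus * seconds / tasks

def unfairness(actual_runtimes: list[float], cpus: int, seconds: float) -> float:
    """Largest |actual - ideal| across tasks, as a percentage of C * S."""
    ideal = ideal_runtime(cpus, seconds, len(actual_runtimes))
    total = cpus * seconds
    worst = max(abs(actual - ideal) for actual in actual_runtimes)
    return 100.0 * worst / total

# The example from the text: 1 CPU, 2 tasks, one task hogs the CPU for all
# 10 seconds while the other gets nothing.
print(unfairness([10.0, 0.0], cpus=1, seconds=10.0))  # 50.0 -- worst case
print(unfairness([5.0, 5.0], cpus=1, seconds=10.0))   #  0.0 -- perfectly fair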