Tags: performance, cpu, branch-prediction

How much does a mispredicted conditional branch cost?


On x86-64 (whatever the microarchitecture) and on ARM64 devices, how many clock cycles does a mispredicted conditional branch cost? And I suppose I should also ask what the figure is for a correctly predicted branch, taken or not taken. I can try to find this in Agner Fog's tables, but I'm equally interested in ARM.

Is there a reasonably easy way of getting this data out of the processor itself?


Solution

  • Mispredicted branches stall the front-end, not the entire pipeline, so the cost in terms of overall performance depends on the surrounding code. If the code is bottlenecked purely on front-end throughput, losing 15 to 19 cycles of front-end throughput costs that many cycles of total time; but many programs can partly hide the bubble because they still have older, independent work in flight for the back-end to execute.


    It's something you can microbenchmark, though it's somewhat tricky to construct a meaningful benchmark (see the sketch after the quote below). https://www.7-cpu.com/ has numbers for many CPUs.

    I suspect those numbers are from vendor manuals, unless 7-cpu has a standard benchmark they use.

    Also, yes, Agner Fog has attempted to microbenchmark this for many x86 CPUs, but hard numbers are hard to obtain; he reports that the measurements were quite noisy on some CPUs. For example, for Haswell/Broadwell he writes in his microarchitecture PDF:

    There may be a difference in branch misprediction penalty between the three sources of µops, but I have not been able to verify such a difference because the variance in the measurements is high. The measured misprediction penalty varies between 16 and 20 clock cycles in all three cases.
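
    To make the microbenchmarking idea above concrete, here is a minimal sketch (my own illustration, not from the answer, and all names in it are mine): it times the same data-dependent branch over random data, where it mispredicts roughly half the time, and over sorted data, where it predicts essentially perfectly, then attributes the time difference to the ~N/2 mispredictions. It assumes a POSIX system for clock_gettime().

```c
// Minimal misprediction microbenchmark (illustrative sketch; names are my own).
// Compile e.g. with: cc -O2 -std=c11 bench.c -o bench
// Caveat: an optimizing compiler may turn the `if` into a branchless cmov/csel,
// which would defeat the test -- check the generated asm for a real branch.
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000u

static volatile uint64_t sink;   // keeps the loop's result observable

static uint64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000u + (uint64_t)ts.tv_nsec;
}

static uint64_t time_loop(const int *data, size_t n) {
    uint64_t sum = 0;
    uint64_t t0 = now_ns();
    for (size_t i = 0; i < n; i++) {
        if (data[i] >= 128)              // the conditional branch under test
            sum += (uint64_t)data[i];
    }
    uint64_t t1 = now_ns();
    sink = sum;                          // prevent the compiler from deleting the loop
    return t1 - t0;
}

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    int *data = malloc(N * sizeof *data);
    if (!data) return 1;
    for (size_t i = 0; i < N; i++)
        data[i] = rand() % 256;          // random: branch is taken ~50% of the time

    time_loop(data, N);                       // warm-up pass
    uint64_t t_random = time_loop(data, N);   // mispredicts roughly N/2 times

    qsort(data, N, sizeof *data, cmp_int);    // sorted: branch becomes predictable
    uint64_t t_sorted = time_loop(data, N);   // near-zero mispredictions

    // Attribute the extra time to the ~N/2 mispredictions in the random run.
    double penalty_ns = (double)(t_random - t_sorted) / (N / 2.0);
    printf("random %llu ns, sorted %llu ns -> ~%.2f ns per mispredict\n",
           (unsigned long long)t_random, (unsigned long long)t_sorted,
           penalty_ns);
    printf("multiply by the core clock in GHz to get cycles\n");
    free(data);
    return 0;
}
```

    As for getting the data out of the processor itself: the hardware exposes misprediction counts (though not the penalty in cycles) through its performance counters. On Linux you can cross-check the benchmark with e.g. `perf stat -e branches,branch-misses ./bench`, which should show a large branch-miss count for the random run and very few misses once the data is sorted.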