Tags: .net, powershell, boolean, clr, compiler-optimization

How does the CLR optimize boolean comparison operations under the hood?


NOTE: I'm using PowerShell for examples in this question, but the same applies to any .NET language.


I'm curious how the .NET CLR evaluates the different ways of writing what is, on the surface, the same boolean test.
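
For example, a minimal PowerShell sketch of the kinds of seemingly equivalent tests I mean (illustrative only):

    $b = $false
    # equivalent positive tests
    if ($b)            { 'ran' }   # implicit truth test
    if ($b -eq $true)  { 'ran' }   # explicit comparison to $true
    if ($b -ne $false) { 'ran' }   # explicit comparison to $false
    # equivalent negative tests
    if (-not $b)       { 'ran' }   # implicit negative test
    if ($b -eq $false) { 'ran' }   # explicit comparison to $false
    if ($b -ne $true)  { 'ran' }   # explicit comparison to $true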



Generally speaking (I'm not asking for a comprehensive list of every modern CPU architecture's instruction set), how does the .NET CLR "optimize" each of these boolean comparisons, and, at the end of the day, does the CPU end up executing different instructions for these seemingly identical comparisons?


Solution

  • Charlieface has provided the crucial pointer:

    sharplab.io is a great site that allows you to inspect what a given snippet of C# / F# / Visual Basic code compiles to in terms of IL (or JIT ASM)

    Using it, you can observe the following for this C# code:

    public class C {
        bool b = false;
        int dummy = 0;
        public void M() {
            // equivalent positive tests
            if(b) { ++dummy; }
            if(b == true) { ++dummy; }
            if(b != false) { ++dummy; }
            // equivalent negative tests
            if(!b) { ++dummy; } 
            if(b == false) { ++dummy; }
            if(b != true) { ++dummy; }
        }
    }
    

    What you will find, at least in a Release (optimized) build, is that the three equivalent positive tests compile to identical IL, as do the three equivalent negative tests: the C# compiler simply drops the redundant comparison against true or false.

    As an aside, note how the branch logic is inverted: testing for (effective) true results in a brfalse.s instruction, i.e. a branch past the body that is taken when the test is not true, and testing for (effective) false results in brtrue.s.
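
    To make that concrete, here is a hand-written sketch of the shape of IL you can expect to see for one positive and one negative test (labels are invented and offsets are omitted; treat it as an approximation rather than an exact sharplab dump):

    // if (b) { ++dummy; }      (same IL for b == true and b != false)
    ldarg.0
    ldfld     bool C::b
    brfalse.s AFTER_POS         // jump past the body when b is false
    ldarg.0
    ldarg.0
    ldfld     int32 C::dummy
    ldc.i4.1
    add
    stfld     int32 C::dummy
    AFTER_POS:

    // if (!b) { ++dummy; }     (same IL for b == false and b != true)
    ldarg.0
    ldfld     bool C::b
    brtrue.s  AFTER_NEG         // jump past the body when b is true
    // ... the same increment sequence as above ...
    AFTER_NEG: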


    To experiment with the results yourself, paste the C# code above into sharplab.io and switch the output between IL and JIT ASM.