Tags: c, bitmask

Bit masking: why do we need the bitwise shift operator?


I have seen two kinds of bit masking implementations:

  1. One that uses the bitwise shift operator to set flags (flags |= 1 << MASK_VALUE) and clear them (flags &= ~(1 << MASK_VALUE)). This is the approach used most frequently.
  2. One that uses no shift, only the bitwise AND/OR/NOT operators, to set flags (flags |= MASK_VALUE) and clear them (flags &= ~(MASK_VALUE)).
#include <stdio.h>

#define MASK_VALUE 4

int main(void)
{
    int flags = 0;

    flags |= 1 << MASK_VALUE;    /* 1 << 4 == 16 */
    printf("flags |= 1 << MASK_VALUE: %d\n", flags);

    flags &= ~(1 << MASK_VALUE);
    printf("flags &= ~(1 << MASK_VALUE): %d\n", flags);

    flags |= MASK_VALUE;         /* 4 */
    printf("flags |= MASK_VALUE: %d\n", flags);

    flags &= ~(MASK_VALUE);
    printf("flags &= ~(MASK_VALUE): %d\n", flags);

    return 0;
}

This outputs:

flags |= 1 << MASK_VALUE: 16
flags &= ~(1 << MASK_VALUE): 0
flags |= MASK_VALUE: 4
flags &= ~(MASK_VALUE): 0

Is there any reason to use the bitwise shift operator? Is the first approach preferable to the second one?


Solution

  • In the first case, MASK_VALUE is misnamed: it is not a mask, it is a bit number.

    So, for example, if you wanted to mask bit 4 you would use the value 1 << 4, which is 16.

    A bit mask with the value 4 would be (1 << 2), so your two examples are not semantically identical.
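
    To see the difference, here is a minimal sketch (the variable name is illustrative): both lines below set bit 2, whose mask value is 4:

    int flags = 0;
    flags |= 1 << 2; // shift form: sets bit 2 (mask value 4)
    flags |= 4;      // literal form: sets the same bit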

    In more realistic code, you might have:

    #define BIT_NUM 4
    #define BIT_MASK (1 << BIT_NUM)
        
    flags |= BIT_MASK;    // set bit 4
    flags &= ~(BIT_MASK); // clear bit 4
    

    So the shift is used as a means of calculating a compile-time constant, with self-documenting code that is less error prone and more easily maintained than just using the literal value 16 (or, more likely for bit masks, 0x10u).
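
    For instance (a minimal C11 sketch; the assertion message is illustrative), you can check at compile time that the shifted form and the literal are the same constant:

    _Static_assert((1u << 4) == 0x10u, "bit 4 mask is 0x10");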

    It comes into its own when you create more complex masks:

    #define BIT_FIELD_A_MSK (0x3u << 8) // b8:b9 field
    #define BIT_B_MSK (0x1u << 4)       // b4 flag

    #define MASK_BITS (BIT_FIELD_A_MSK | BIT_B_MSK) // 0x0310 (bits 4, 8 and 9)
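
    Putting it together, here is a minimal compilable sketch of how such masks might be used (the field value 0x2u and the printf labels are illustrative, not part of the original):

    #include <stdio.h>

    #define BIT_FIELD_A_MSK (0x3u << 8) // b8:b9 field
    #define BIT_B_MSK (0x1u << 4)       // b4 flag
    #define MASK_BITS (BIT_FIELD_A_MSK | BIT_B_MSK)

    int main(void)
    {
        unsigned flags = 0;

        flags |= (0x2u << 8) & BIT_FIELD_A_MSK; // write value 2 into the b8:b9 field
        flags |= BIT_B_MSK;                     // set the b4 flag
        printf("flags: 0x%04x\n", flags);       // flags: 0x0210

        printf("field A: %u\n", (flags & BIT_FIELD_A_MSK) >> 8); // field A: 2

        flags &= ~MASK_BITS;              // clear the field and the flag in one step
        printf("flags: 0x%04x\n", flags); // flags: 0x0000

        return 0;
    }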