Tags: language-agnostic, bit-manipulation, protocol-buffers, bitfoo, zigzag-encoding

Zig Zag Decoding


In the Google Protocol Buffers encoding overview, they introduce something called "Zig Zag Encoding", which takes signed numbers of small magnitude and maps them to unsigned numbers of small magnitude.

For example:

Encoded => Plain
0 => 0
1 => -1
2 => 1
3 => -2
4 => 2
5 => -3
6 => 3

And so on: a non-negative n encodes to 2n, and a negative n encodes to -2n - 1, so the sign ends up in the least significant bit. The encoding function they give for this is rather clever:

(n << 1) ^ (n >> 31) // for a 32-bit integer; n >> 31 is an arithmetic (sign-extending) shift

I understand how this works; however, I cannot for the life of me figure out how to reverse it and decode back into signed 32-bit integers.
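
For reference, here is a minimal sketch of that encoder in C (the function name zigzag_encode is my own; like the formula above, it assumes the right shift of a signed value is arithmetic):

#include <stdint.h>

uint32_t zigzag_encode(int32_t n)
{
  /* The left shift moves the magnitude up one bit; n >> 31 is all
     zeros for non-negative n and all ones for negative n, which
     flips the remaining bits so negative values map to odd codes. */
  return ((uint32_t)n << 1) ^ (uint32_t)(n >> 31);
}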


Solution

  • Try this one:

    (n >> 1) ^ (-(n & 1))

    Here (n & 1) extracts the sign bit that the encoder stored in the low position, so -(n & 1) is 0 when n is even and all ones when n is odd. The XOR therefore either passes n >> 1 through unchanged or flips every one of its bits, which in two's complement gives -(n >> 1) - 1, recovering the original negative value.
    

    Edit:

    I'm posting some sample code for verification:

    #include <stdio.h>
    
    int main(void)
    {
      unsigned int n;
      int r;
    
      for(n = 0; n < 10; n++) {
        /* Logical right shift drops the stored sign bit; the mask
           -(n & 1) is all ones for odd n, flipping the result back
           to a negative value. */
        r = (n >> 1) ^ (-(n & 1));
        printf("%u => %d\n", n, r);
      }
    
      return 0;
    }
    

    I get the following results:

    0 => 0
    1 => -1
    2 => 1
    3 => -2
    4 => 2
    5 => -3
    6 => 3
    7 => -4
    8 => 4
    9 => -5
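
    As a further check, here is a round-trip sketch pairing the question's encoder with this decoder (the helper names zigzag_encode32 and zigzag_decode32 are mine; the encoder assumes an arithmetic right shift, just as the question's formula does):

    #include <stdio.h>
    #include <stdint.h>
    
    /* Hypothetical helpers for a round-trip test. */
    static uint32_t zigzag_encode32(int32_t v)
    {
      return ((uint32_t)v << 1) ^ (uint32_t)(v >> 31);
    }
    
    static int32_t zigzag_decode32(uint32_t n)
    {
      return (int32_t)((n >> 1) ^ (0u - (n & 1)));
    }
    
    int main(void)
    {
      int32_t v;
    
      for(v = -5; v <= 5; v++)
        printf("%d => %u => %d\n", v, zigzag_encode32(v),
               zigzag_decode32(zigzag_encode32(v)));
    
      return 0;
    }

    The same decode works unchanged for the 64-bit sint64 variant if the types are widened and the encoder shifts by 63 instead of 31.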