c, matlab, galois-field

Galois field algorithm from C to Matlab


I'm translating a C routine for multiplication by 2 in the Galois field into MATLAB, but my MATLAB code does not display the same values as the C code. As far as I can tell everything matches; I commented the MATLAB code to show how each C line was adapted. Both codes are below.

C:

#include <stdio.h>
#include <stdlib.h>

int main()
{
  unsigned char value = 0xaa;
  signed char temp;
  // cast to signed value
  temp = (signed char) value;
  printf("\n%d",temp);
  // if MSB is 1, then this will signed extend and fill the temp variable with 1's
  temp = temp >> 7;
  printf("\n%d",temp);
  // AND with the reduction variable
  temp = temp & 0x1b;
  printf("\n%d",temp);
  // finally shift and reduce the value
  printf("\n%d",((value << 1)^temp));
}

Output:

-86
-1
27
335

MatLab:

hex = uint8(hex2dec('1B'));                     % define 0x1b
temp = uint8(hex2dec('AA'));                    % temp = (signed char) value;
disp(temp);
value = uint8(hex2dec('AA'));                   % unsigned char value = 0xaa
temp = bitsra(temp,7);                          % temp = temp >> 7;
disp(temp);
temp = bitand(temp,hex);                        % temp = temp & 0x1b
disp(temp);
galois_value =  bitxor(bitsll(value,1),temp);   % ((value << 1)^temp)
disp(galois_value);                             % printf ("\n%u",...)                    

Output:

 170

    1

    1

   85

The C code works correctly. I print with %d in the C code to show the integer value of the variable, because of the cast at the beginning of the code.

Does anyone know what is happening?


Solution

  • The problem is sign extension. MATLAB's uint8 type is unsigned, so bitsra(temp,7) fills with zeros and leaves 1, where the C code's arithmetic shift of a signed char fills with ones and leaves -1. Use typecast to reinterpret the byte as int8 so the right shift is arithmetic, then typecast back to uint8 before masking. Also note that value << 1 needs more than 8 bits to reproduce the C output of 335, hence the uint16. Try this:

    hex = uint8(hex2dec('1B'));                       % reduction constant 0x1b
    
    temp = typecast(uint8(hex2dec('AA')), 'int8');    % reinterpret 0xaa as signed: -86
    disp(temp);
    
    temp = bitshift(temp,-7);                         % arithmetic shift on int8: -86 >> 7 = -1
    disp(temp);
    
    temp = bitand(typecast(temp,'uint8'),hex);        % -1 reinterpreted as 0xff, AND 0x1b = 27
    disp(temp);
    
    galois_value = bitxor(bitshift(uint16(hex2dec('AA')),1),uint16(temp));   % (0xaa << 1) ^ 27 = 335
    disp(galois_value);