I want to XOR a uint16_t with a floating-point number, like the following:
uint16_t a=20000;
double r,x,xo;
r=3.8;
xo=.1;
x=(int) r*xo*(1-xo);
c=a^x;
When I try to compile this, the following error occurs:
invalid operands to binary ^
How can I convert x to a 16-bit integer value?
The problem is that x is still a double. In
x=(int) r*xo*(1-xo);
the cast binds more tightly than multiplication, so only r is converted to int; the product is then computed in double. And since x is declared as double, the result is converted back to double on assignment either way.
To do what you want, either declare x with an integer type, or cast it right at the XOR:
c=a^((int)x);
(Also note that c is not declared in your snippet; it needs an integer type as well.)