The context of this question is the creation of a side-channel resistant implementation of an IEEE-754 compliant single-precision square root for a 32-bit RISC-V platform without hardware support for floating-point arithmetic and without the Zbb extension for advanced bit manipulation. Integer multiplies, in particular the MUL and MULHU instructions, are supported by the hardware and can be assumed to have fixed latency. Counting the leading zero bits is required for the normalization of subnormal operands, and the CLZ emulation should be branchless because of the side-channel resistant design.
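To illustrate why the leading-zero count is needed at all, here is a minimal sketch of how its result could feed the normalization of a subnormal operand; the helper name and structure are illustrative only and not part of the actual square-root code, and zero/infinity/NaN operands are assumed to be dispatched separately:

#include <stdint.h>

uint32_t cntlz (uint32_t a);  /* branchless CLZ, as developed below */

/* Split a finite, non-zero binary32 encoding into a significand with the
   leading 1-bit in position 23 and an unbiased exponent. Subnormals are
   renormalized with the help of CLZ, without branching on the operand class. */
static void normalize_binary32 (uint32_t bits, uint32_t *sig, int32_t *exp)
{
    uint32_t frac   = bits & 0x007fffffu;              /* 23-bit fraction */
    int32_t  bexp   = (int32_t)((bits >> 23) & 0xffu); /* biased exponent */
    uint32_t is_sub = (uint32_t)(bexp == 0);           /* subnormal operand? */
    uint32_t shift  = is_sub * (cntlz (frac) - 8u);    /* move MSB to bit 23 */
    *sig = (frac << shift) | ((1u - is_sub) << 23);    /* add implicit 1-bit */
    *exp = bexp - 127 + (int32_t)is_sub - (int32_t)shift;
}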
I started with C99 code for a 32-bit leading-zero count that I used twenty years ago on ARMv4t processors. This is a full-range implementation, i.e. it returns 32 for an input of zero.
#include <stdint.h>

uint32_t cntlz (uint32_t a)
{
    uint32_t n = 0;
#if 0
    n = __builtin_clz (a);
#else
    n = !a + 1;                                 // start at 1; extra 1 for a == 0 so the result is 32
    if (a < 0x00010000u) { n |= 16; a <<= 16; } // top 16 bits all zero
    if (a < 0x01000000u) { n |=  8; a <<=  8; } // top  8 bits all zero
    if (a < 0x10000000u) { n |=  4; a <<=  4; } // top  4 bits all zero
    if (a < 0x40000000u) { n +=  2; a <<=  2; } // top  2 bits all zero
    n = n - (a >> 31);                          // subtract 1 if the MSB is now set
#endif
    return n;
}
As a sanity check, I compiled the above source with clang 18.1 -marm -march=armv4t, resulting in the following code that, at 16 instructions without the function return, uses one instruction more than the best ARMv4t implementation I am aware of (which uses 15 instructions without the function return):
cntlz:
mov r1, #1
cmp r0, #0
moveq r1, #2
cmp r0, #65536
lsllo r0, r0, #16
orrlo r1, r1, #16
cmp r0, #16777216
lsllo r0, r0, #8
orrlo r1, r1, #8
cmp r0, #268435456
lsllo r0, r0, #4
orrlo r1, r1, #4
cmp r0, #1073741824
addlo r1, r1, #2
lsllo r0, r0, #2
add r0, r1, r0, asr #31
bx lr
I am currently working without access to a RISC-V development platform and used Compiler Explorer to compile for a 32-bit RISC-V target. I could not figure out how to specify extensions properly to turn off floating-point support, so I used clang 18.1 with -march=rv32gc, which resulted in the following assembly code being generated:
cntlz: # @cntlz
seqz a1, a0
srli a2, a0, 16
seqz a2, a2
slli a2, a2, 4
or a1, a1, a2
sll a0, a0, a2
srli a2, a0, 24
seqz a2, a2
slli a2, a2, 3
or a1, a1, a2
sll a0, a0, a2
srli a2, a0, 28
seqz a2, a2
slli a2, a2, 2
or a1, a1, a2
sll a0, a0, a2
srli a2, a0, 30
seqz a2, a2
slli a2, a2, 1
or a1, a1, a2
sll a0, a0, a2
srai a0, a0, 31
add a0, a0, a1
addi a0, a0, 1
ret
I am unable to identify any improvements to the code generated by Clang; it appears to be as tight as possible. I am aware that RISC-V implementations could implement macro-op fusion. See: Christopher Celio, et al., "The Renewed Case for the Reduced Instruction Set Computer: Avoiding ISA Bloat with Macro-Op Fusion for RISC-V", UC Berkeley technical report EECS-2016-130. But none of the fusion idioms discussed in the report appear to apply to this code, leading me to assume an execution time of 24 cycles for this 24-instruction sequence (without the function return). I was curious what __builtin_clz() resolves to. Compiling with that code path enabled results in a 31-instruction sequence that converts the leading zeros into a left-justified mask of 1-bits and then applies a population-count computation to the mask:
srli a1, a0, 1
or a0, a0, a1
srli a1, a0, 2
or a0, a0, a1
srli a1, a0, 4
or a0, a0, a1
srli a1, a0, 8
or a0, a0, a1
srli a1, a0, 16
or a0, a0, a1
not a0, a0 # a0 now left-aligned mask of 1-bits
srli a1, a0, 1
lui a2, 349525
addi a2, a2, 1365
and a1, a1, a2
sub a0, a0, a1
lui a1, 209715
addi a1, a1, 819
and a2, a0, a1
srli a0, a0, 2
and a0, a0, a1
add a0, a0, a2
srli a1, a0, 4
add a0, a0, a1
lui a1, 61681
addi a1, a1, -241
and a0, a0, a1
lui a1, 4112
addi a1, a1, 257
mul a0, a0, a1
srli a0, a0, 24
ret
Again, I am not sure what instructions could be subject to macro-op fusion here, but the most likely candidate seems to be the LUI/ADDI idiom used to load 32-bit immediate data, similar to the way modern ARM processors fuse MOVW/MOVT pairs. With that assumption, the code would still appear to be slower than what I currently have. I tried half a dozen additional integer-based variants of 32-bit CLZ emulation and did not find any that resulted in fewer than 24 instructions. I also searched the internet and was unable to find anything superior to my current code.
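For reference, here is a small worked example of that LUI/ADDI immediate split (illustrative only): because ADDI sign-extends its 12-bit immediate, the upper 20 bits loaded by LUI have to be rounded rather than truncated, which is why the popcount sequence above contains pairs such as lui 61681 / addi -241 for the constant 0x0F0F0F0F:

#include <stdint.h>
#include <stdio.h>

int main (void)
{
    uint32_t x  = 0x0f0f0f0fu;               /* constant from the popcount sequence above   */
    uint32_t hi = (x + 0x800u) >> 12;        /* upper 20 bits, rounded: LUI immediate       */
    int32_t  lo = (int32_t)(x - (hi << 12)); /* remainder in [-2048, 2047]: ADDI immediate  */
    printf ("lui %u / addi %d\n", (unsigned)hi, (int)lo); /* prints: lui 61681 / addi -241  */
    return 0;
}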
Are there any branchless full-range implementations of leading-zero count for 32-bit RISC-V platforms that require fewer than 24 cycles? Conservatively, I want to assume the absence of macro-op fusion as this seems like an expensive feature in a low-end microcontroller, but answers relying on macro-op fusion as present in existing RISC-V implementations are also welcome.
Note: Table-based methods are not suitable in this context as table access could trigger cache misses which can be exploited for side-channel attacks.
Update 6/9/2024: After working on this issue for another 1.5 days, I found a variant that should reduce the number of instructions required from 24 to 22; however, not all compilers can actually deliver the desired code.
The basic observation is that the RISC-V ISA is not orthogonal with regard to SET-type instructions, in that it only supports a "less than" flavor. Other comparisons may require inverting the comparison and then inverting the result, for example by XOR-ing with 1 or applying SEQZ, adding one instruction per instance. My idea now is to avoid this per-instance inversion by turning a positive operand into a negative one once, allowing the direct use of "less than" comparisons. Expressed in portable C++11 code, and annotated with the RISC-V instructions I expect to be generated:
#include <stdint.h>

// https://stackoverflow.com/a/74563384/780717
// Only applies right shift to non-negative values to avoid implementation-defined behavior
int32_t sra (int32_t x, int32_t y)
{
    return (x < 0) ? (~(~x >> y)) : (x >> y);
}

uint32_t cntlz_rv32 (uint32_t a)
{
    uint32_t n, t;
    int32_t as;
    n = !a;                                             // 1: seqz
    t = ((a >> 16) != 0) * 16; n = n - t; a = a >> t;   // 5: srli, snez, slli, sub, srl
    as = (int32_t)(~a);                                 // 1: not
    t = (as < -256) * 8; n = n - t; as = sra (as, t);   // 4: slti, slli, sub, sra
    t = (as <  -16) * 4; n = n - t; as = sra (as, t);   // 4: slti, slli, sub, sra
    t = (as <   -4) * 2; n = n - t; as = sra (as, t);   // 4: slti, slli, sub, sra
    t = (as <   -2) * 1; n = n - t;                     // 2: slti, sub
    n += 31;                                            // 1: addi
    return n;                                           // 22 instructions total w/o ret
}
With gcc 13.3 (not 14.1), the following branchless 22-instruction sequence (not counting the ret) is generated:
cntlz_rv32(unsigned int):
srli a3,a0,16
snez a3,a3
slli a3,a3,4
srl a4,a0,a3
not a4,a4
slti a1,a4,-256
slli a1,a1,3
sra a4,a4,a1
slti a2,a4,-16
slli a2,a2,2
seqz a5,a0
addi a5,a5,31
sra a0,a4,a2
slti a4,a0,-4
sub a5,a5,a3
slli a4,a4,1
sub a5,a5,a1
sub a5,a5,a2
sra a0,a0,a4
sub a5,a5,a4
slti a0,a0,-2
sub a0,a5,a0
ret
For reasons I do not understand, gcc 14.1 refuses to generate the expected SLTI with -256. Instead, it moves the comparison before the application of the one's complement, which then requires an additional SEQZ to invert the result of the comparison. clang 18.1 generates an inefficient 33-instruction sequence for the above source code for no reason that I can discern.
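To make the trade-off concrete, this is the equivalence the -256 comparison relies on, restricted to the value range a has at that point in cntlz_rv32; the assert-based helper is purely illustrative and makes no claim about what either compiler does internally:

#include <assert.h>
#include <stdint.h>

/* After the first reduction step, a < 0x10000 holds, so the signed test on the
   complemented value is the same as the inverted unsigned test on the original:
   the former maps to one SLTI, the latter needs a compare plus SEQZ/XORI. */
static void check_equivalence (uint32_t a)
{
    assert (a < 0x10000u);
    assert (((int32_t)~a < -256) == !(a < 256));
}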
After some more thinking, I came up with a hybrid approach: the long shifts by 16/8/4 are performed as in the first version (binary search), but the last two steps are replaced by a "table lookup" in which the table is just a single 32-bit word. I am not sure whether this is better than the two previous approaches, but it is worth checking as well:
#include <stdint.h>

uint32_t myclz2 (uint32_t x) {
    uint32_t r = !x, c;
    c = (x < 0x00010000) << 4;
    r += c; x <<= c;                 // step over 16 leading zeros
    c = (x < 0x01000000) << 3;
    r += c; x <<= c;                 // step over 8 leading zeros
    c = (x < 0x10000000) << 2;
    r += c; x <<= c;                 // step over 4 leading zeros
    c = (x >> (32 - 4 - 1)) & 0x1e;  // 2 * top nibble of x: even bit index into the packed table
    r += (0x55af >> c) & 3;          // look up the remaining 0..3 leading zeros in the 32-bit "table"
    return r;
}
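For completeness, a quick exhaustive cross-check (illustrative; it assumes the three functions above are compiled into one translation unit, and the C++11 version compiles unchanged as C) that all variants agree over the full 32-bit input range, including zero:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

uint32_t cntlz (uint32_t a);
uint32_t cntlz_rv32 (uint32_t a);
uint32_t myclz2 (uint32_t x);

int main (void)
{
    uint32_t a = 0;
    do {
        uint32_t ref = cntlz (a);
        if (cntlz_rv32 (a) != ref || myclz2 (a) != ref) {
            printf ("mismatch at %08" PRIx32 "\n", a);
            return 1;
        }
    } while (++a != 0);   /* wraps to zero after all 2^32 inputs are tested */
    printf ("all 2^32 inputs match\n");
    return 0;
}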