The `orpd` instruction is a "bitwise logical OR of packed double precision floating point values". Doesn't this do exactly the same thing as `por` ("bitwise logical OR")? If so, what's the point of having it?
Remember that SSE1 `orps` came first. (Well, actually MMX `por mm, mm/mem` came even before SSE1.)
Having the same opcode with a new prefix be the SSE2 `orpd` instruction makes sense for hardware decoder logic, I guess, just like `movapd` vs. `movaps`. Several instructions like this are redundant between the `ps` and `pd` versions, but some aren't, like `addps` vs. `addpd`, or `unpcklps` vs. `unpcklpd` being different shuffles.
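To illustrate that last point, here's a minimal sketch of my own (not part of the original answer) using the standard Intel intrinsics: `_mm_unpacklo_ps` interleaves the two low floats of each source, while `_mm_unpacklo_pd` just pairs up the low double of each.

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128 a_ps = _mm_setr_ps(0.f, 1.f, 2.f, 3.f);
    __m128 b_ps = _mm_setr_ps(10.f, 11.f, 12.f, 13.f);
    __m128 lo_ps = _mm_unpacklo_ps(a_ps, b_ps);   // unpcklps -> {0, 10, 1, 11}

    __m128d a_pd = _mm_setr_pd(0.0, 1.0);
    __m128d b_pd = _mm_setr_pd(10.0, 11.0);
    __m128d lo_pd = _mm_unpacklo_pd(a_pd, b_pd);  // unpcklpd -> {0, 10}

    float f[4];
    double d[2];
    _mm_storeu_ps(f, lo_ps);
    _mm_storeu_pd(d, lo_pd);
    printf("unpcklps: %g %g %g %g\n", f[0], f[1], f[2], f[3]);
    printf("unpcklpd: %g %g\n", d[0], d[1]);
    return 0;
}
```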
The reason for SSE2 also introducing `66 0F EB /r por xmm, xmm/mem` is at least partly for consistency with MMX `0F EB /r por mm, mm/mem`, again the same opcode with a new mandatory prefix. Just like `paddb mm, mm` vs. `paddb xmm, xmm`.
But also for the possibility of different bypass-forwarding domains for vec-integer vs. FP. Different microarchitectures have had different behaviours for how they actually decoded and ran those different instructions. Some ran all the XMM OR instructions the same way, creating extra latency for forwarding between the FP and SIMD-integer domains.
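To make that concrete, here's a small sketch of my own (intrinsics, not from the original answer): the same bitwise OR on float data can be spelled in either domain, and on microarchitectures with separate bypass networks the `por` form can cost extra forwarding latency when it sits between FP math instructions like `addps`.

```c
#include <immintrin.h>

// OR in the FP domain: typically compiles to orps.
static inline __m128 or_in_fp_domain(__m128 a, __m128 b) {
    return _mm_or_ps(a, b);
}

// Same bits, routed through the SIMD-integer domain: typically compiles to por.
// The cast intrinsics are reinterprets only and emit no instructions.
static inline __m128 or_in_int_domain(__m128 a, __m128 b) {
    return _mm_castsi128_ps(_mm_or_si128(_mm_castps_si128(a),
                                         _mm_castps_si128(b)));
}
```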
No CPUs have ever actually had different forwarding domains for FP-float vs. FP-double, so yes, `movapd` and `orpd` are in practice useless wastes of space that you should never use. Use the smaller `orps` encoding instead.
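(At the intrinsics level, a sketch of my own of how you'd nudge the compiler toward the shorter form on double data: the cast intrinsics are free reinterprets, though the final `orps` vs. `orpd` choice is still the compiler's.)

```c
#include <immintrin.h>

// Bitwise OR of two __m128d, expressed through _mm_or_ps so a compiler can use
// the one-byte-shorter orps encoding instead of orpd.
static inline __m128d or_pd_via_ps(__m128d a, __m128d b) {
    return _mm_castps_pd(_mm_or_ps(_mm_castpd_ps(a), _mm_castpd_ps(b)));
}
```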
(Or with VEX encoding it doesn't matter; `vorps` and `vorpd` are the same size: 2-byte prefix + opcode + modrm ...)
For more about bypass delay when using `por` between FP math instructions like `addps`, or `orps` between SIMD-integer insns like `paddb`, see *por vs. orps*.
And in case anyone was wondering, the answer to the other interpretation of the title: bitwise booleans on FP values are mostly used to set, clear, or toggle the sign bit. Or to do stuff with `cmpps/pd` masks like blending, or just zeroing elements where a compare was true or false (ANDN or AND).
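A short sketch of my own showing those idioms with intrinsics: clearing the sign bit for an absolute value, toggling it for negation, and an AND/ANDN/OR blend driven by a `cmpps` mask.

```c
#include <immintrin.h>

// |x|: clear the sign bit of each float (andnps with a sign-bit mask).
static inline __m128 abs_ps(__m128 x) {
    const __m128 sign_bit = _mm_set1_ps(-0.0f);   // 0x80000000 in every lane
    return _mm_andnot_ps(sign_bit, x);            // ~sign_bit & x
}

// -x: toggle the sign bit (xorps).
static inline __m128 neg_ps(__m128 x) {
    return _mm_xor_ps(x, _mm_set1_ps(-0.0f));
}

// Pre-SSE4.1 blend: result = (a < b) ? x : y, built from cmpps + and/andn/or.
static inline __m128 select_lt(__m128 a, __m128 b, __m128 x, __m128 y) {
    __m128 mask = _mm_cmplt_ps(a, b);             // all-ones where a < b, else zero
    return _mm_or_ps(_mm_and_ps(mask, x),         // keep x where the compare was true
                     _mm_andnot_ps(mask, y));     // keep y where it was false
}
```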