When writing a program, as a high-level programmer, n = 0; looks more efficient and clean. But is n = 0; really more efficient than if (n != 0) n = 0;?

1. When n is more likely to be 0.
2. When n is less likely to be 0.
3. When n is absolutely uncertain.
Language: C (C90)
Compiler: Borland's Turbo C++
Minimal reproducible example:
#include <stdio.h>

int main(void)
{
    int n;              /* 2 bytes on this compiler */

    n = 0;              /* Expression 1 */
    scanf("%d", &n);    /* absolutely uncertain */
    if (n != 0) n = 0;  /* Expression 2 */
    return 0;
}
Note: I have included the above code only for your reference; please don't get hung up on its details.
If you're not comfortable with the above language/standard/compiler, then please feel free to explain the above 3 cases in your preferred language/standard/compiler.
If n is a two's complement integral type or an unsigned integral type, then writing n = 0; directly will certainly be no slower than the version with the condition check, and a good optimising compiler will generate the same code for both. Some compilers implement assignment of zero by XORing a register with itself, which is a single instruction.
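A minimal sketch, assuming a reasonably good optimising compiler (typical of GCC or Clang at -O2; not verified on Turbo C++): both functions below are usually compiled to identical code, with the store of zero emitted on x86 as a single xor reg, reg instruction.

int zero_plain(int n)   { n = 0; return n; }           /* always returns 0 */
int zero_checked(int n) { if (n != 0) n = 0; return n; } /* also always returns 0 */

Since both always yield 0, the conditional version carries no extra cost once the optimiser has done its work.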
If n is a floating point type, a ones' complement integral type, or a signed magnitude integral type, then the two code snippets differ in behaviour, e.g. when n holds a negative zero (credit: @chqrlie). Also, if n is a pointer on a system that has multiple null pointer representations, then if (n != 0) n = 0; will not assign to n when n holds one of the alternative null pointers, whereas n = 0; would replace it with the canonical one. In those cases the two forms simply do not have the same functionality.
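A minimal sketch of the floating point case, assuming IEEE-754 doubles (which most current implementations use): negative zero compares equal to zero, so the guarded assignment is skipped and the sign bit survives.

#include <stdio.h>

int main(void)
{
    double n = -0.0;    /* negative zero: compares equal to 0 */
    if (n != 0) n = 0;  /* condition is false, so n keeps its sign bit */
    printf("%g\n", n);  /* typically prints -0; plain n = 0; would give 0 */
    return 0;
}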
"will always be more efficient" is not true. Should reading n
have a low cost, writing n
a high cost (Think of re-writing non-volatile memory that needs to re-write a page) and is likely n == 0
, then n = 0;
is slower, less efficient than if (n != 0) n = 0;
.
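A sketch of that scenario; eeprom_read and eeprom_write below are hypothetical helpers for a part where reads are cheap but any write triggers a slow page rewrite:

extern int  eeprom_read(unsigned addr);              /* cheap */
extern void eeprom_write(unsigned addr, int value);  /* expensive: rewrites a page */

void clear_cell(unsigned addr)
{
    if (eeprom_read(addr) != 0)   /* skip the costly write when already 0 */
        eeprom_write(addr, 0);
}

When the cell is usually already zero, the check-first form avoids nearly all of the expensive writes.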