With the following code:
#include <stdio.h>

int main(){
    printf("%f\n", multiply(2));
    return 0;
}

float multiply(float n){
    return n * 2;
}
When I try to compile, I get one warning: "'%f' expects 'double', but the argument has type 'int'" and two errors: "conflicting types for 'multiply'", "previous implicit declaration of 'multiply' was here."
Question 1: I am guessing that it's because, given the compiler has no knowledge of the function 'multiply' when it first comes across it, it will invent a prototype, and invented prototypes always assume 'int' is both returned and taken as a parameter. So the invented prototype would be "int multiply(int)", and hence the errors. Is this correct?
Now, the previous code won't even compile. However, if I break the code into two files like this:
// file1.c
#include <stdio.h>

int main(){
    printf("%f\n", multiply(2));
    return 0;
}

// file2.c
float multiply(float n){
    return n * 2;
}
and run "gcc file1.c file2.c -o file", it still gives one warning (that printf expects a double but gets an int), but the errors no longer show up and it compiles.
Question 2: How come when I break the code into 2 files, it compiles?
Question 3: Once I run the program above (the version split into 2 files), the result is that 0.0000 is printed on the screen. How come? I am guessing the compiler again invented a prototype that doesn't match the function, but why is 0 printed? And if I change the printf("%f") to printf("%d"), it prints a 1. Again, any explanation of what's going on behind the scenes?
So the invented prototype would be "int multiply(int)", and hence the errors. Is this correct?
Absolutely. This is done for backward compatibility with pre-ANSI C, which lacked function prototypes; anything declared without a type was implicitly int. The compiler compiles your main, creates an implicit declaration of int multiply(int), and then, when it finds the real definition, discovers the lie and tells you about it.
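As a sketch of the fix for the single-file version, declaring a prototype before main gives the compiler the real types up front, so nothing needs to be invented:

#include <stdio.h>

float multiply(float n);            /* prototype: no implicit declaration needed */

int main(){
    printf("%f\n", multiply(2));    /* 2 is converted to 2.0f; the float result is
                                       promoted to double, which is what %f expects */
    return 0;
}

float multiply(float n){
    return n * 2;
}

This prints 4.000000, as you would expect.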
How come when I break the code into 2 files it compiles?
The compiler never discovers the lie about the prototype, because it compiles one file at a time: it assumes that multiply takes an int and returns an int while compiling your main, and it finds no contradiction while compiling file2.c. Running this program produces undefined behavior, though.
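If you want the two-file version to behave correctly, and to let the compiler catch mismatches like this, the usual approach is a shared header. A minimal sketch, assuming a header named multiply.h:

// multiply.h
float multiply(float n);

// file1.c
#include <stdio.h>
#include "multiply.h"

int main(){
    printf("%f\n", multiply(2));    /* prints 4.000000 */
    return 0;
}

// file2.c
#include "multiply.h"               /* the compiler checks the definition against the prototype */

float multiply(float n){
    return n * 2;
}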
Once I run the program above (the version split into 2 files) the result is that 0.0000 is printed on the screen.
That's the result of the undefined behavior described above. The program will compile and link, but because the compiler thinks that multiply takes an int, it never converts 2 to 2.0F, and multiply never finds out. Similarly, the incorrect value computed inside your multiply function by doubling an int reinterpreted as a float will be treated as an int again.
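To see where a number like 0.000000 could come from, here is a sketch of the "int bits reinterpreted as a float" idea (an illustration only; on many real ABIs the float parameter is read from a different register entirely, so the actual garbage value is unpredictable):

#include <stdio.h>
#include <string.h>

int main(){
    int i = 2;
    float f;
    memcpy(&f, &i, sizeof f);   /* reinterpret the bit pattern of the int 2 as a float */
    printf("%e\n", f);          /* roughly 2.8e-45, a denormal */
    printf("%f\n", f * 2);      /* still tiny, so %f shows 0.000000 */
    return 0;
}

Doubling such a tiny value is still effectively zero, which matches the 0.000000 you saw. The 1 you get with %d is equally meaningless: with the mismatched declaration, main reads the return value from wherever an int would be returned, which need not be where multiply actually put its float result.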