In this case
void f(int *);
void f(const int *);
...
int i;
f(&i);
the situation is pretty clear: f(int *) gets called, which seems right.
However, if I have this (it was written like that by mistake(*)):
class aa
{
public:
    operator bool() const;
    operator char *();
};
void func(bool);
aa a;
func(a);
operator char *() gets called. I cannot figure out why such a decision path would be better than going through operator bool(). Any ideas?
(*) If const is added to the second operator, the code works as expected, of course.
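For completeness, a minimal sketch of the fixed declaration the footnote refers to:

class aa
{
public:
    operator bool() const;
    operator char *() const; // const added: func(a) now calls operator bool()
};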
Because, for user-defined conversions via a conversion operator, the conversion of the return type to the destination type (i.e. char* to bool) is considered only after the object argument conversion, i.e. the conversion of the object argument a to the implicit object parameter. [over.match.best]/1:
Given these definitions, a viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then

- for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that,
- the context is an initialization by user-defined conversion (see 8.5, 13.3.1.5, and 13.3.1.6) and the standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the entity being initialized) is a better conversion sequence than the standard conversion sequence from the return type of F2 to the destination type.
So because the implicit object parameter, which is a reference, is not a const reference for operator char *(), it is a better match according to the first bullet point: the non-const object a binds to the aa& of operator char *() exactly, whereas binding it to the const aa& of operator bool() const adds const qualification and is therefore a worse conversion sequence. The return-type tiebreak in the second bullet point is never reached.
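A quick way to convince yourself that it is the object argument, and not the return type, that decides this call is to vary the constness of the object (a hypothetical experiment, not part of the original question; declarations only, mirroring the snippet above):

class aa
{
public:
    operator bool() const;
    operator char *();
};
void func(bool);

void check()
{
    aa a;
    const aa ca = {};
    func(a);  // calls operator char *(): a binds to the non-const implicit
              // object parameter exactly, which beats the const binding
              // needed for operator bool() const
    func(ca); // calls operator bool() const: operator char *() is not even
              // viable, because it cannot be called on a const object
}

And once const is added to operator char *() as in the footnote, both object-argument conversions become equally good, so the second bullet point finally applies: the identity conversion bool to bool beats the boolean conversion char* to bool, and operator bool() is selected, as expected.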