The following succeeds with GCC and Clang but causes an error when compiled with Visual Studio: (compiler explorer link)
#include <functional>
#include <algorithm>
#include <ranges>
#include <print>

namespace r = std::ranges;
namespace rv = std::ranges::views;

int main() {
    auto test = r::count_if(
        rv::cartesian_product(
            rv::iota(0, 20), rv::iota(0, 20), rv::iota(0, 20)
        ),
        [&](auto&&) -> bool {
            return true;
        }
    );
    std::println("{}", test);
    return 0;
}
The error from Visual Studio is basically that the count_if is returning a 128-bit integer and println doesn't know what to do with one of those:
D:\work\stackoverflow\stackoverflow.cpp(19): error C3615: consteval
function 'std::__p2286::_Compile_time_parse_format_specs' cannot
result in a constant expression C:\Program Files\Microsoft Visual
Studio\2022\Community\VC\Tools\MSVC\14.42.34433\include\format(3557):
note: failure was caused by control reaching the end of a consteval
function D:\work\stackoverflow\stackoverflow.cpp(19): note: the call
stack of the evaluation (the oldest call first) is
D:\work\stackoverflow\stackoverflow.cpp(19): note: while evaluating
function 'std::basic_format_string<char,std::_Signed128
&>::basic_format_string<char[3]>(const _Ty (&))'
...
I can fix the above by casting the result of the count_if to an int, as shown below. Further, removing the third iota(0, 20) from the Cartesian product makes the problem go away in Visual Studio too. Is this a bug in Visual Studio, or does the standard allow this behavior?
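For reference, the workaround is just this (the cast target is my choice; the count here is at most 20 * 20 * 20 = 8000, so int is wide enough):

std::println("{}", static_cast<int>(test)); // narrow the 128-bit count to a formattable type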
ranges::count_if(r, pred) returns R's difference_type. In this case, R is a cartesian_product_view, so we have to go see what that is.
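Specifically, the range overload in [alg.count] is declared as (qualified here for readability):

template <std::ranges::input_range R, class Proj = std::identity,
          std::indirect_unary_predicate<
              std::projected<std::ranges::iterator_t<R>, Proj>> Pred>
constexpr std::ranges::range_difference_t<R>
    count_if(R&& r, Pred pred, Proj proj = {});

and range_difference_t<R> is the difference_type of R's iterator.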
[range.cartesian.product] defines cartesian_product's difference_type as:

    iterator::difference_type is an implementation-defined signed-integer-like type.

    Recommended practice: iterator::difference_type should be the smallest signed-integer-like type that is sufficiently wide to store the product of the maximum sizes of all underlying ranges, if such a type exists.
In MSVC's implementation, that choice is based on this helper:
template <class... _Rngs>
    requires (sizeof...(_Rngs) > 0)
_NODISCARD consteval auto _Cartesian_product_optimal_size_type() noexcept {
    constexpr int _Optimal_size_type_bit_width = (_Cartesian_product_max_size_bit_width<_Rngs>() + ...);
    if constexpr (_Optimal_size_type_bit_width <= 8) {
        return uint8_t{};
    } else if constexpr (_Optimal_size_type_bit_width <= 16) {
        return uint16_t{};
    } else if constexpr (_Optimal_size_type_bit_width <= 32) {
        return uint32_t{};
    } else if constexpr (_Optimal_size_type_bit_width <= 64) {
        return uint64_t{};
    } else {
        return _Unsigned128{};
    }
}
Plugging this example in: each iota(0, 20) is an iota_view over int, which in the general case can hold up to 2^32 − 1 elements, so each factor's maximum size takes 32 bits and the sum for three factors is 96. That exceeds 64, which is how we end up with _Unsigned128 for this size type; it later gets turned into the signed _Signed128 to serve as the difference_type. With only two factors the sum is exactly 64, which still fits in uint64_t, and that is why removing the third iota(0, 20) makes the error go away.
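A quick way to observe the resulting width without naming the internal type (my own sketch; given the above, the assertion should hold under MSVC's STL):

#include <ranges>

namespace rv = std::ranges::views;

// The difference_type computed for the three-way product:
using D = std::ranges::range_difference_t<decltype(rv::cartesian_product(
    rv::iota(0, 20), rv::iota(0, 20), rv::iota(0, 20)))>;

static_assert(sizeof(D) == 16); // 128 bits under MSVC's STL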
gcc's implementation, on the other hand, doesn't do this. Instead, it picks the common type of ptrdiff_t and all the underlying ranges' difference_types, which can never produce anything wider than the widest of those. So gcc ends up with a 64-bit signed difference_type here instead.
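A sketch of that computation (my paraphrase, not libstdc++'s literal source):

#include <cstddef>
#include <ranges>
#include <type_traits>

// Common type of ptrdiff_t and each underlying difference_type. For three
// iota_views over int (each with a 64-bit difference_type per IOTA-DIFF-T),
// this yields a plain 64-bit signed integer, never anything wider.
template <class... Rs>
using gcc_style_diff_t = std::common_type_t<
    std::ptrdiff_t, std::ranges::range_difference_t<Rs>...>;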
Both choices are conforming. It looks like libc++ doesn't implement cartesian_product yet.
However, if MSVC is going to yield wider integers from its range operations, I think it's probably an MSVC bug that they are then not formattable. Probably worth opening an issue?
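Using the same alias D as above, the core of such a report could be just this (my sketch; std::formattable is the standard C++23 concept from <format>):

#include <format>

// Fires under MSVC's STL: the library hands back a 128-bit difference_type
// but provides no std::formatter for it, so the result of its own count_if
// can't be printed directly.
static_assert(std::formattable<D, char>);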