I have the following bit of code:
#include <cmath>
#include <stdexcept>
#include <vector>

void zoo(double& i) {
    std::vector<int> v(static_cast<std::size_t>(i));
}

void bar1(double& i) noexcept {
    ++i;
    zoo(i);
}

void foo1(double& i) noexcept {
    bar1(i);
}

void bar2(double& i) noexcept {
    i = std::sin(i) + std::cos(i) / std::exp(2*i);
}

void foo2(double& i) noexcept {
    bar2(i);
}
Godbolt: https://godbolt.org/z/j5o4jcbvc
Where we have two call flows:
1. foo1 -> bar1 -> zoo
2. foo2 -> bar2
For the second flow, gcc and clang don't seem to generate exception handling code (std::terminate). I assume that is because the optimizer can see into the calls to sin/cos/exp and tell that they don't throw exceptions.
For the first flow, it seems GCC doesn't generate exception handling code, whereas clang does.
My question: Does the standard specify what needs to happen in the above situations, or is this an implementation-defined detail?
This has less to do with noexcept and more with whether or not the functions can throw exceptions. If a function cannot be determined not to throw an exception, then there must be handling for noexcept to call std::terminate, eventually.
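To make that concrete: a noexcept function containing a call the compiler cannot prove to be non-throwing must behave roughly as if it were written like the following. This is only a sketch of the required semantics, not the code either compiler actually emits, and bar1_sketch is a hypothetical name for illustration.

#include <exception>

void zoo(double& i); // as declared in the question

// Rough illustration of the guarantee bar1's noexcept has to provide:
// if an exception escapes the body, std::terminate must be called.
void bar1_sketch(double& i) {
    try {
        ++i;
        zoo(i); // potentially-throwing call
    } catch (...) {
        std::terminate();
    }
}

Whether the compiler can drop the catch-all machinery therefore depends entirely on whether it can prove the calls inside the body never throw.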
std::vector<int> v(static_cast<std::size_t>(i)); uses std::allocator<int>::allocate in order to allocate memory. However, zoo also immediately destroys the vector again, which means that std::allocator<int>::deallocate is used to deallocate that memory again.
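Stripped of std::vector's internals, zoo therefore amounts to roughly the following. This is a simplified sketch (zoo_sketch is a made-up name), ignoring the vector's bookkeeping and relying on the fact that constructing and destroying int elements cannot throw.

#include <cstddef>
#include <memory>

void zoo_sketch(double& i) {
    const std::size_t n = static_cast<std::size_t>(i);
    std::allocator<int> a;
    int* p = a.allocate(n);                      // may throw
    std::uninitialized_value_construct_n(p, n);  // value-initialize n ints
    std::destroy_n(p, n);                        // trivial for int
    a.deallocate(p, n);                          // storage released again
}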
std::allocator<int>::allocate uses, in an unspecified manner, the global replaceable ::operator new to obtain memory, and similarly std::allocator<int>::deallocate uses, in an unspecified manner, the global replaceable ::operator delete.
Both of these potentially have side effects: for one, ::operator new can throw an exception on allocation failure, and furthermore these functions are user-replaceable. Generally, a compiler is not allowed to optimize away operations that have an observable effect.
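For example, a program is free to replace the global operator new and operator delete with versions whose behavior is obviously observable. This is a hypothetical replacement for illustration, not part of the question's code:

#include <cstdio>
#include <cstdlib>
#include <new>

// Hypothetical replacement: every allocation/deallocation now has an
// observable side effect (the output), in addition to possibly throwing.
void* operator new(std::size_t size) {
    std::printf("allocating %zu bytes\n", size);
    if (size == 0)
        ++size;                     // malloc(0) may return a null pointer
    if (void* p = std::malloc(size))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept {
    std::printf("deallocating\n");
    std::free(p);
}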
However, given that std::allocator is specified to use these functions in an unspecified manner, that seems sufficient to permit the compiler not to call them if memory isn't actually needed. In particular, because you don't use the vector's contents for anything, the compiler doesn't need the memory at all and can simply elide all access to the elements of the vector completely.
Now, that seems like it should be fine to simply optimize std::vector<int> v(static_cast<std::size_t>(i)); away completely, in which case no exceptions can be thrown and no std::terminate handling for noexcept should be needed.
However, that behavior by GCC is maybe not actually strictly conforming to the standard, while Clang's behavior may be. It is not entirely clear to me whether std::vector<int> is required to call std::allocator<int>::allocate, or whether it may obtain the memory in another way. If it is not required to call that function, then Clang's behavior is simply a missed optimization. Otherwise:
The problem is that std::allocator<int>::allocate is also specified to throw a std::bad_array_new_length exception if std::numeric_limits<size_t>::max() / sizeof(int) < n, where n is the requested number of elements to allocate. This check is not part of the unspecified use of operator new, which may be elided, but of std::allocator<int>::allocate itself. That means the special permission to elide even if the observable behavior is affected is not given here.
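In other words, the specified behavior of std::allocator<int>::allocate amounts to roughly this. It is a sketch of the standard's requirements, not a real implementation, and allocate_sketch is a made-up name:

#include <cstddef>
#include <limits>
#include <new>

// Sketch: the length check belongs to allocate itself, not to the
// unspecified use of ::operator new, so its effect is observable.
int* allocate_sketch(std::size_t n) {
    if (std::numeric_limits<std::size_t>::max() / sizeof(int) < n)
        throw std::bad_array_new_length();
    return static_cast<int*>(::operator new(n * sizeof(int)));
}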
Therefore, according to the standard, the compiler must ensure that if you pass a sufficiently large (but still convertible to size_t) value to zoo, a std::bad_array_new_length exception is thrown, even if the vector is otherwise completely optimized away. This exception in turn can trigger noexcept's mechanism to call std::terminate.
If you make sure that Clang knows std::numeric_limits<size_t>::max() / sizeof(int) < n can never happen, then it will also optimize away the std::terminate handling for noexcept. For example, cast i to unsigned int instead of std::size_t (on a 64-bit architecture).
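That is, something along these lines (a modified version of the question's zoo; the reasoning assumes a 64-bit target where a 32-bit unsigned int count can never exceed std::numeric_limits<size_t>::max() / sizeof(int)):

#include <vector>

void zoo(double& i) {
    // The element count fits in 32 bits, so the bad_array_new_length
    // check is provably dead and the std::terminate handling in the
    // noexcept callers can be dropped as well.
    std::vector<int> v(static_cast<unsigned int>(i));
}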