I was strolling through core::time and was surprised at the implementation of Nanoseconds. Why does instantiating it require an unsafe block? I understand the comments regarding the range restriction of Nanoseconds (the value must be < NANOS_PER_SEC), but how does unsafe help in this case?
If I copy the code into a crate of my own (leaving out the compiler-internal attributes, which are unstable):
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[repr(transparent)]
struct Nanoseconds(u32);
impl Default for Nanoseconds {
    #[inline]
    fn default() -> Self {
        // SAFETY: 0 is within the valid range
        unsafe { Nanoseconds(0) }
    }
}

fn main() {
    let nanos = Nanoseconds(0);
    println!("Nanos: {}", nanos.0);
}
I get the following warning from rustc:
warning: unnecessary `unsafe` block
--> src/main.rs:9:9
|
9 | unsafe { Nanoseconds(0) }
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
warning: `test_nanos` (bin "test_nanos") generated 1 warning
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.00s
Running `target/debug/test_nanos`
Nanos: 0
Since Nanoseconds is just a tuple struct, it should work to instantiate it without a surrounding unsafe block, right?
Because the #[rustc_layout_scalar_valid_range_*(…)] attributes tell the compiler that it can optimize¹ code based on the assumption that the value stored in a Nanoseconds is never outside the bounds. Therefore it is undefined behavior (UB) to create a Nanoseconds with a value outside the bounds:
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[repr(transparent)]
#[rustc_layout_scalar_valid_range_start(0)]
#[rustc_layout_scalar_valid_range_end(999_999_999)]
struct Nanoseconds(u32);
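Since those attributes are unstable, code outside the standard library usually encodes the same invariant with a checked constructor plus an unsafe unchecked one. The sketch below only illustrates that pattern (the type and the new/new_unchecked names are mine, not from core::time); without the attribute, an out-of-range value is merely a logic error, not UB:
const NANOS_PER_SEC: u32 = 1_000_000_000;

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[repr(transparent)]
struct Nanos(u32);

impl Nanos {
    /// Checked constructor: rejects out-of-range values.
    fn new(n: u32) -> Option<Nanos> {
        if n < NANOS_PER_SEC { Some(Nanos(n)) } else { None }
    }

    /// Unchecked constructor: the caller promises n < NANOS_PER_SEC.
    unsafe fn new_unchecked(n: u32) -> Nanos {
        debug_assert!(n < NANOS_PER_SEC);
        Nanos(n)
    }
}

fn main() {
    assert!(Nanos::new(999_999_999).is_some());
    assert!(Nanos::new(1_000_000_000).is_none());

    // SAFETY: 42 is within the valid range.
    let n = unsafe { Nanos::new_unchecked(42) };
    println!("{}", n.0);
}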
The relevant section in the reference about UB:
- Producing an invalid value, even in private fields and locals. "Producing" a value happens any time a value is assigned to or read from a place, passed to a function/primitive operation or returned from a function/primitive operation. The following values are invalid (at their respective type):
[…]
Invalid values for a type with a custom definition of invalid values. In the standard library, this affects NonNull<T> and NonZero*.
Note: rustc achieves this with the unstable rustc_layout_scalar_valid_range_* attributes.
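The stable NonZero* types are an easy way to see that rule in action: the only safe constructor checks for the invalid value, and the unchecked constructor is unsafe because passing 0 would produce an invalid value, i.e. UB. A small example:
use core::num::NonZeroU32;

fn main() {
    // The checked constructor refuses the invalid value 0.
    assert!(NonZeroU32::new(0).is_none());
    let a = NonZeroU32::new(7).unwrap();

    // SAFETY: 7 is non-zero. Calling this with 0 would be UB,
    // which is exactly why the function is unsafe.
    let b = unsafe { NonZeroU32::new_unchecked(7) };
    assert_eq!(a, b);
    println!("{}", a.get());
}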
The compiler is allowed to assume the value in Nanoseconds is always within 0..=999_999_999 and optimizes accordingly. That's similar to bool, for example, which occupies a whole byte, but only the values 0 and 1 are allowed while 2..=255 are invalid values.
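Those invalid bit patterns are what makes niche optimizations possible. For instance, Option<bool> can encode None in one of the 254 invalid byte values, so it needs no extra space (that layout is what current rustc does, not a documented guarantee):
use core::mem::size_of;

fn main() {
    // Both print 1 with current rustc: None fits into an invalid bool pattern.
    println!("bool:         {}", size_of::<bool>());
    println!("Option<bool>: {}", size_of::<Option<bool>>());
}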
1) For example, it can omit bounds checks like nanos.0 <= 999_999_999, or it can use 1_000_000_000 and up as a niche in an enum.
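That niche is observable with Duration itself, which stores its nanosecond part in such a Nanoseconds: with a recent rustc, Option<Duration> typically reports the same size as Duration because None is encoded in the out-of-range nanosecond values (again an implementation detail, not a guarantee):
use core::mem::size_of;
use core::time::Duration;

fn main() {
    // With a recent rustc both typically print 16: None lives in the
    // invalid nanosecond range 1_000_000_000..=u32::MAX.
    println!("Duration:         {}", size_of::<Duration>());
    println!("Option<Duration>: {}", size_of::<Option<Duration>>());
}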