The C++ Core Guidelines recommend a gsl::not_null type. As stated in I.12: "Declare a pointer that must not be null as not_null":
To help avoid dereferencing nullptr errors. To improve performance by avoiding redundant checks for nullptr.
...
By stating the intent in source, implementers and tools can provide better diagnostics, such as finding some classes of errors through static analysis, and perform optimizations, such as removing branches and null tests.
(If interested, Microsoft's implementation of gsl::not_null is on GitHub.)
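The kind of redundant branch the guideline is referring to looks roughly like this. This is a minimal sketch; the function names are just for illustration, and the header path assumes Microsoft's GSL:

```cpp
#include <cstring>

#include <gsl/pointers>  // not_null lives here in Microsoft's GSL (or include <gsl/gsl>)

// With a raw pointer, the callee has to defend against nullptr:
std::size_t length_raw(const char* s) {
    if (s == nullptr) return 0;  // defensive branch and null test
    return std::strlen(s);
}

// With not_null, "never null" is part of the type, so that branch
// (and the caller's own pre-call null test) can be dropped:
std::size_t length_checked(gsl::not_null<const char*> s) {
    return std::strlen(s.get());
}
```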
The guideline doc says it helps performance by "removing branches and null tests". But it also adds overhead, because operator->() and operator*() must be called whenever I want to access the underlying pointer (and that is not counting the Microsoft implementation's runtime checks inside these methods).
Given that method inlining is not guaranteed, how did the doc conclude the net performance gain is positive?
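To make the concern concrete, here is roughly what such an access looks like (a minimal sketch; Widget and read_value are illustrative names, and the header path assumes Microsoft's GSL):

```cpp
#include <gsl/pointers>  // not_null lives here in Microsoft's GSL (or include <gsl/gsl>)

struct Widget { int value = 0; };

int read_value(gsl::not_null<Widget*> w) {
    // This member access goes through not_null::operator->() instead of a
    // raw pointer dereference; if that call were not inlined, every access
    // would pay for an extra function call (plus any runtime check inside).
    return w->value;
}
```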
But it also adds overhead, because operator->() and operator*() must be called ...
Except that those functions are defined inline and are extremely small, so the optimiser will (very likely) expand them inline, which removes that potential overhead entirely.
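For illustration, here is a hypothetical, stripped-down stand-in for not_null (not Microsoft's actual code). The accessors are one-line functions defined in the class body, so they are implicitly inline and trivial for the optimiser to expand:

```cpp
#include <cassert>

// Hypothetical, minimal stand-in for not_null, for illustration only.
template <class T>
class tiny_not_null {
public:
    explicit tiny_not_null(T* p) : ptr_(p) { assert(p != nullptr); }
    T* operator->() const { return ptr_; }  // a single load, no extra logic
    T& operator*() const { return *ptr_; }
private:
    T* ptr_;
};

struct Widget { int value = 42; };

int read_value(tiny_not_null<Widget> w) {
    // With the operator expanded inline, this typically compiles down to the
    // same single load as `raw->value` on a plain Widget*; no call remains.
    return w->value;
}

int main() {
    Widget widget;
    return read_value(tiny_not_null<Widget>(&widget));  // returns 42
}
```

With optimisations enabled, mainstream compilers will typically inline such a trivial accessor, so the generated code for the wrapped access is the same as for a raw pointer access.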
how did the doc conclude the net performance gain is positive?
As you quoted, the document doesn't even acknowledge any associated overhead, so reaching that conclusion was trivial.
If you mean how the authors of the document came to that conclusion, only they know. The answer could range from "they measured its effects" to "they made an assumption".