Most build environments I've seen have at least two strategies: a debug build vs. a final/optimized/release build. With gcc, this usually means some version of -g vs. -O. Now I'm seeing a situation where the optimized build is built with -O3 while the debug version is built with -g3 and -O3. man gcc does indicate that this is possible, but it seems counterintuitive to me for real debugging purposes.

Reviewing http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html reminded me of -Og, which allows optimizations that do not interfere with debugging. That makes sense to me, but what compelling reason is there to debug with -O3 -g3, unless you are basically trying to debug gcc's own optimization abilities?
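For concreteness, here is a minimal sketch of the three builds being contrasted (the file name, function, and variable are my own illustration, not from any particular build system); the compile commands are in the leading comment:

    /* Compile the same file three ways (hypothetical names):
     *
     *   gcc -O0 -g3 demo.c -o demo_debug   # classic debug build
     *   gcc -Og -g  demo.c -o demo_og      # optimized but debug-friendly
     *   gcc -O3 -g3 demo.c -o demo_fast    # what the question describes
     */
    #include <stdio.h>

    static int square(int x)
    {
        int tmp = x * x;    /* a local the optimizer is free to eliminate */
        return tmp;
    }

    int main(void)
    {
        printf("%d\n", square(7));
        return 0;
    }

Stepping through square() in gdb makes the difference visible: at -O0 every line and local behaves as written, while at -O3 the call is typically inlined and tmp may show up as <optimized out>.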
Sometimes people write bad code - code that invokes undefined behaviour, for example. Now let's say that the undefined behaviour appears to work "correctly" at low or no optimization, but that it causes a disastrous crash at -O3. You're going to want to debug this problem at -O3, right? So then you have no choice but to add a -g flag and go to town, even though the debugging experience may be somewhat compromised by the optimizations.
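As a concrete (made-up) illustration of that scenario: signed integer overflow is undefined behaviour in C, and gcc treats it very differently at different optimization levels:

    #include <stdio.h>

    int main(void)
    {
        int i;
        for (i = 0; i < i + 1; i++)   /* UB once i + 1 overflows INT_MAX */
            ;
        printf("loop ended with i = %d\n", i);
        return 0;
    }

Built with -O0 this typically terminates after a few seconds (the overflow wraps and the condition becomes false), but at -O3 gcc may assume the overflow cannot happen, fold i < i + 1 to "always true", and emit an infinite loop - exactly the kind of difference you can only chase with something like gcc -O3 -g3 bug.c.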
There is a big problem in general with build systems conflating the "debug/release" axis with the "optimized/unoptimized" axis. Really, they should be orthogonal - it's often desirable to have a "debug" build with logging, for example, but still have it run fast with optimizations enabled. Similarly, it can be very difficult to track down an optimizer-related bug without having debug symbols available in your optimized build. The table (and the sketch after it) summarizes the combinations:
                   +---------------------------------+
                   |          Optimizations          |
                   +-----------------+---------------+
                   |       On        |      Off      |
+---------+--------+-----------------+---------------+
|         |   On   | Debug optimized | Best debug    |
| Debug   |        | code            | experience    |
| Logging/+--------+-----------------+---------------+
| Symbols |  Off   | Release build   | Probably not  |
|         |        | for customers   | useful        |
+---------+--------+-----------------+---------------+
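One way to keep the two axes independent (a sketch, assuming a hand-rolled LOG macro rather than any particular build system's convention) is to tie debug logging to a preprocessor flag, so it can be combined with any -O level:

    /* Build examples (hypothetical):
     *   gcc -O3 -g3 -DENABLE_DEBUG_LOG app.c   # optimized, with logging
     *   gcc -O0 -g3 -DENABLE_DEBUG_LOG app.c   # classic debug build
     *   gcc -O3 app.c                          # release build for customers
     */
    #include <stdio.h>

    #ifdef ENABLE_DEBUG_LOG
    #define LOG(...) fprintf(stderr, __VA_ARGS__)
    #else
    #define LOG(...) ((void)0)
    #endif

    int main(void)
    {
        LOG("starting up\n");   /* compiled in only when logging is enabled */
        puts("hello");
        return 0;
    }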