I am trying to avoid the situation described in this Stack Overflow entry: Debugging core files generated on a Customer's box. If I compile all the libraries statically, will I avoid having to always gather the shared libraries when it core dumps? I essentially want to be in a situation where I can just load up the core file with gdb and examine the crashed application.
What should I watch out for if I go down the route of statically linking all the libraries we need? I figure glib and pthreads might cause the biggest problems.
Will Valgrind cease to be useful? If I run Valgrind against a binary that has everything statically compiled, will it still find errors? Or should we maintain a binary that isn't statically compiled so that Valgrind continues to work? How about strace?
Our application crashes quite often, as we have a large install base and it is also a legacy application. Gathering up all the shared libraries for every core dump is becoming intractable - I need another solution.
If I compile all the libraries statically, will I avoid having to always gather the shared libraries when it core dumps?
Yes.
However, contrary to popular belief, statically linked binaries are less portable than dynamically linked ones (at least on Linux).
In particular, if you use any of these functions: gethostbyname, gethostbyaddr, getpwent, getpwnam, getpwuid (and a whole lot more), you will get a warning at link time, similar to this:
Using 'getpwuid' in statically linked applications requires at runtime
the shared libraries from the glibc version used for linking.
What this warning means is that IF your customer has a different version of glibc installed than the one you used at link time (i.e. different from your system libc), THEN your program will likely crash. Since that is clearly a worse situation than your current one, I don't believe static linking is a good solution for you.
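To make the failure mode concrete, here is a minimal sketch (the file name and build command are my own illustration, not from your code): building it with something like gcc -static lookup.c -o lookup should reproduce the link-time warning quoted above, because glibc resolves getpwuid() through NSS by loading shared libnss_* modules at runtime.

    #include <pwd.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* getpwuid() goes through NSS (nsswitch.conf); glibc performs the
           lookup by loading the matching libnss_* shared object at runtime,
           even in a statically linked binary - hence the warning above. */
        struct passwd *pw = getpwuid(getuid());
        if (pw != NULL)
            printf("running as user: %s\n", pw->pw_name);
        return 0;
    }

Dynamic linking avoids this particular trap, because the NSS modules and libc then both come from the same glibc installed on the customer's machine.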