I saw a common pattern when installing a C/C++ package from source on Linux (Ubuntu 16.04):
./autogen.sh
./configure
make
make install
I understand make and make install, and I guess configure creates a Makefile based on user preferences, but I don't see why autogen.sh is necessary.
Does anyone know what it is there for?
The steps:
The autogen.sh script generates the configure script (from configure.ac, using autoconf) and any other files it needs, such as Makefile.in generated from Makefile.am using automake. This step requires the autotools to be installed on your system, and it is needed when building from a source-control checkout (if configure isn't checked in). People who download source tarballs can usually skip this step, because the output of this step is included in the tarball.
Note: this is usually equivalent to autoreconf --install. If there is no autogen.sh file, just run autoreconf --install instead. If you have inherited a project with an autogen.sh, consider deleting it if you can use autoreconf --install instead.
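For example, starting from a fresh source-control checkout that ships only configure.ac and Makefile.am, this step might look roughly like the following on Ubuntu (the apt-get package names are Ubuntu's, and libtool is only needed if the project uses it):
sudo apt-get install autoconf automake libtool
./autogen.sh        # or: autoreconf --install
Either way, you should end up with a generated configure script ready to run.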
The configure script generates the Makefile and other files needed to build. Typically, Makefile.in is used as a template to generate the Makefile (and config.h.in to generate config.h). This process uses only standard tools installed on your system, like sed and awk, and doesn't require autotools to be installed.
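To see the substitution at work, you can compare the template with the generated file after running configure; this is just an illustrative check, assuming a typical automake-generated Makefile.in (the --prefix value here is arbitrary):
./configure --prefix=$HOME/.local
grep '@prefix@' Makefile.in
grep '^prefix' Makefile
The first grep shows the @prefix@ placeholder in the template; the second shows the concrete path that configure substituted into the generated Makefile.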
The make command builds the software.
The make install command installs it.
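A few common variations on these last two steps, sketched for illustration (DESTDIR and the /usr/local default prefix are GNU conventions; the staging path is just an example):
make -j"$(nproc)"
sudo make install
make DESTDIR=/tmp/stage install
The -j flag builds in parallel, make install copies the built files into the configured prefix (usually needing sudo for the default /usr/local), and DESTDIR stages the whole install under another directory, which is handy for packaging.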
These are broken into separate steps because they are often run at different times. The autogen.sh step is traditionally run by the people developing the software, since they are expected to have autoconf installed on their systems and they make changes to configure.ac. End-users are not expected to have autotools installed. These expectations have shifted a bit now that end-users are more likely to check a project out of source control instead of downloading source releases.