I was using the nlminb() function for optimization in R.
The function takes an optional argument "gradient": a function that takes the same arguments as the objective function, evaluates the gradient of the objective at its first argument, and must return a vector with the same length as the parameter vector.
I want to know which method nlminb() uses to compute the gradient of the objective function when the "gradient" argument is not supplied.
I have used nlminb() for optimization without supplying the "gradient" argument.
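For concreteness, a minimal call of the kind I mean (the objective here is just a toy sum of squares of my own, not from any package):

```r
# Toy objective with its minimum at target; extra named arguments
# after the objective are passed through to it by nlminb()
f <- function(p, target) sum((p - target)^2)

# No "gradient" argument supplied, so nlminb() must obtain the
# gradient of f some other way internally
fit <- nlminb(start = c(0, 0), objective = f, target = c(2, -3))
fit$par  # close to c(2, -3)
```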
As I suspected, the answer is "finite differences", but it's not easy to find. If you dig deep enough you find the routine DRMNF (for unbounded optimization; DRMNFB is the analogous routine for bounded optimization) from the PORT library, and the comment here:
SUBROUTINE DRMNF(D, FX, IV, LIV, LV, N, V, X)
C
C *** ITERATION DRIVER FOR DMNF...
C *** MINIMIZE GENERAL UNCONSTRAINED OBJECTIVE FUNCTION USING
C *** FINITE-DIFFERENCE GRADIENTS AND SECANT HESSIAN APPROXIMATIONS.
and here:
C [...] INSTEAD OF CALLING CALCG TO OBTAIN THE
C GRADIENT OF THE OBJECTIVE FUNCTION AT X, DRMNF CALLS DS7GRD,
C WHICH COMPUTES AN APPROXIMATION TO THE GRADIENT BY FINITE
C (FORWARD AND CENTRAL) DIFFERENCES USING THE METHOD OF REF. 1.
Ref 1 is Stewart, G. W. 1967. “A Modification of Davidon’s Minimization Method to Accept Difference Approximations of Derivatives.” Journal of the ACM 14 (1): 72–83. https://doi.org/10.1145/321371.321377.
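To illustrate the idea (this is a plain forward-difference sketch of my own, not the adaptive forward/central scheme DS7GRD actually implements), here is a finite-difference gradient in R, plus a check that nlminb() reaches the same optimum whether the analytic gradient is supplied or approximated internally:

```r
# Rosenbrock function and its analytic gradient
f <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2
g <- function(p) c(
  -2 * (1 - p[1]) - 400 * p[1] * (p[2] - p[1]^2),
  200 * (p[2] - p[1]^2)
)

# Simple forward-difference approximation to the gradient
# (DS7GRD is more sophisticated: it chooses step sizes and switches
# between forward and central differences following Stewart 1967)
fd_grad <- function(f, p, h = sqrt(.Machine$double.eps)) {
  vapply(seq_along(p), function(i) {
    e <- replace(numeric(length(p)), i, h * max(abs(p[i]), 1))
    (f(p + e) - f(p)) / e[i]
  }, numeric(1))
}

start <- c(-1.2, 1)
fit_fd   <- nlminb(start, f)               # gradient approximated internally
fit_grad <- nlminb(start, f, gradient = g) # analytic gradient supplied
```

Both calls should converge to the minimum at (1, 1); supplying the analytic gradient typically costs fewer objective evaluations and gives slightly more accurate results.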