Problem Implementing MLE Estimation in bbmle Package (for R)


I am trying to verify the MLEs obtained for $\alpha$, $\beta$ and $\lambda$ for the Logistic-Lomax distribution in the paper entitled A Study of Logistic-Lomax Distribution by Zubair et al. when using Data Set 1. The paper uses the following code to do this (see Appendix B):

library(bbmle)
x <- c(66, 117, 132, 111, 107, 85, 89, 79, 91, 97, 138, 103, 111, 86, 78, 96, 93, 101, 102, 110, 95, 96, 88, 122, 115, 92, 137, 91, 84, 96, 97, 100, 105, 104, 137, 80, 104, 104, 106, 84, 92, 86, 104, 132, 94, 99, 102, 101, 104, 107, 99, 85, 95, 89, 102, 100, 98, 97, 104, 114, 111, 98, 99, 102, 91, 95, 111, 104, 97, 98, 102, 109, 88, 91, 103, 94, 105, 103, 96, 100, 101, 98, 97, 97, 101, 102, 98, 94, 100, 98, 99, 92, 102, 87, 99, 62, 92, 100, 96, 98)
n <- length(x)
ll_LLx <- function(alpha, beta, lambda){
n*log(lambda*alpha/beta)-sum(log(1+x/beta))
-(lambda+1)*sum(log(log((1+x/beta)^alpha)))
-2*sum(log(1+(log((1+x/beta)^alpha))^(-lambda)))
}
mle.res <- mle2(ll_LLx, start=list(alpha=alpha, beta=beta, lambda=lambda),
hessian.opt=TRUE)
summary(mle.res)

The paper obtains the values $\hat{\alpha} = 0.5302, \hat{\beta} = 17.6331, \hat{\lambda} = 35.6364$ for the MLEs for this dataset using this code. My question is simply this: how do I implement this code in R without it spitting out an error? The code lists the parameters as 'alpha', 'beta' and 'lambda', but does not assign them numerical starting values. So I tried assigning reasonable starting values to these parameters before running the code, as follows:

alpha=0.5
beta=17
lambda=35

However, this gave the following unexpected error:

Error in optim(par = c(alpha = 0.5, beta = 17, lambda = 35), fn = function (p)  : 
  non-finite finite-difference value [1]
In addition: There were 50 or more warnings (use warnings() to see the first 50)

What has happened here and how can I fix it?


Solution

  • There are two problems.

    First

    > alpha=0.53
    > beta=17.6
    > lambda=35.6
    > ll_fragment<-function(alpha,beta,lambda) -2*sum(log(1+(log((1+x/beta)^alpha))^(-lambda)))
    > 
    > ll_LLx(alpha,beta,lambda)
    [1] -205.132
    > ll_fragment(alpha,beta,lambda)
    [1] -205.132
    
    

    That is, typesetting the code in the PDF introduced line breaks, and because each continuation line begins with a minus sign, R parses the copied function body as three complete expressions rather than one. The value returned by the function is then just the value of the last expression.
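    The mechanism is easy to reproduce with a toy function: a line that already parses as a complete expression ends there, so a continuation line starting with `-` becomes a new unary-minus expression. (The names `f_broken` and `f_fixed` are made up for illustration.)

    ```r
    # Broken: the second line is parsed as a separate expression, "-a",
    # and a function returns the value of its last expression.
    f_broken <- function(a) {
      a + 1
      - a
    }

    # Fixed: a trailing operator tells the parser the expression continues,
    # so the body is the single expression (a + 1) - a.
    f_fixed <- function(a) {
      a + 1 -
        a
    }

    f_broken(3)  # -3
    f_fixed(3)   # 1
    ```

    This is exactly what happened to `ll_LLx`: only its final `-2*sum(...)` term was ever evaluated as the return value.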

    Second, if you compare the code to the loglikelihood defined in the unnumbered equation just before equation 6.1, the equation starts with $n\log\frac{\lambda\alpha}{\beta}$, and the code has n*log(lambda*alpha/beta). These look the same, but mle2 is a minimiser (its first argument is a *negative* loglikelihood), so they should have opposite sign. The equation for the loglikelihood matches the pdf in equation 2.2, so I assume it's right, and the code as given will attempt to minimise the loglikelihood.
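    Putting both fixes together gives something like the following (`nll_LLx` is my name for the corrected function; it plays the role of `ll1a` in the session below, and assumes `x`, `n` and `library(bbmle)` from the question). Trailing operators keep R parsing a single expression, and the leading minus flips the sign for mle2:

    ```r
    # Negative loglikelihood of the Logistic-Lomax distribution.
    # Assumes x (the data vector) and n <- length(x) are already defined.
    nll_LLx <- function(alpha, beta, lambda) {
      -( n*log(lambda*alpha/beta) - sum(log(1 + x/beta)) -
           (lambda + 1)*sum(log(log((1 + x/beta)^alpha))) -
           2*sum(log(1 + (log((1 + x/beta)^alpha))^(-lambda))) )
    }
    ```

    With this definition, mle2(nll_LLx, start=list(alpha=0.53, beta=17.6, lambda=35.6)) converges to the estimates shown in the session output below.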

    If I fix the line breaks and the sign, I get

    > mle.res <- mle2(ll1a, start=list(alpha=alpha, beta=beta, lambda=lambda),hessian.opts=TRUE)
    Warning messages:
    1: In log(lambda * alpha/beta) : NaNs produced
    2: In log(log((1 + x/beta)^alpha)) : NaNs produced
    3: In log(lambda * alpha/beta) : NaNs produced
    4: In log(log((1 + x/beta)^alpha)) : NaNs produced
    5: In log(lambda * alpha/beta) : NaNs produced
    6: In log(log((1 + x/beta)^alpha)) : NaNs produced
    > summary(mle.res)
    Maximum likelihood estimation
    
    Call:
    mle2(minuslogl = ll1a, start = list(alpha = alpha, beta = beta, 
        lambda = lambda), hessian.opts = TRUE)
    
    Coefficients:
            Estimate Std. Error z value     Pr(z)    
    alpha   0.530499   0.018522  28.641 < 2.2e-16 ***
    beta   17.649226   1.357944  12.997 < 2.2e-16 ***
    lambda 35.636033   2.260395  15.765 < 2.2e-16 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
    
    -2 log L: 771.2541 
    

    in good agreement with the paper.

    This is why papers like this should be required to provide their code in a repository rather than in a PDF, though even getting the code at all is a big step forward for this sort of paper.