I have an optimization problem with no constraints (at least none for now). I'm wondering if the error I'll describe arises from not specifying any constraints? I wouldn't think so.
I have a function with two arguments, lagg and thres.
I have a vector of zeros and ones representing recession dates. I want to manipulate an economic time series (turning it into zeros and ones) and get it as close to "dating" recessions as possible.
Here is the function, which is passed as eval_f, hoping to maximize the F1 score.
fxn_unsmoothed2 <- function(input) {
  lagg  <- input[1]   # lag length used for the percent change
  thres <- input[2]   # threshold below which we call it a recession
  # drop the first `lagg` observations so the truth vector lines up
  truth <- data.matrix(USREC[-1:-lagg, ])
  chng  <- percentchange(goodsmtx, lagg)
  # recode: 1 = predicted recession, 0 = no recession
  chng[chng < thres] <- 1
  chng[chng != 1]    <- 0
  FP <- length(which(chng == 1 & truth == 0))
  FN <- length(which(chng == 0 & truth == 1))
  TP <- length(which(chng == 1 & truth == 1))
  TN <- length(which(chng == 0 & truth == 0))
  # F1 score: harmonic mean of precision and recall
  return(2 * (TP / (TP + FP)) * (TP / (TP + FN)) /
           ((TP / (TP + FP)) + (TP / (TP + FN))))
}
The whole input[1] thing is confusing to me; I saw it in a separate post.
I then do
x0   <- c(11, 0.005)
opts <- list("algorithm" = "NLOPT_GN_ISRES", maxeval = 100000)
res  <- nloptr(x0     = x0,
               eval_f = fxn_unsmoothed2,
               opts   = opts)
It runs zero iterations and tells me:
NLOPT_INVALID_ARGS: Invalid arguments (e.g. lower bounds are bigger than upper bounds, an unknown algorithm was specified, etcetera).
I have no idea what I'm doing wrong. At least I finally figured out how to pass initial arguments, lol.
tl;dr You have to specify some reasonable bound constraints (the range can be very broad). You could also switch optimizers, but you would have to move out of the "GN" (global, derivative-free) class, because according to the NLopt documentation:
All of the global-optimization algorithms currently require you to specify bound constraints on all the optimization parameters.
(I usually use "NLOPT_LN_BOBYQA" for derivative-free local optimization, or you could use optim(..., method = "Nelder-Mead") from base R.)
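For your problem that would look roughly like the sketch below. The bound values are placeholders (I don't know sensible ranges for lagg and thres), and note that nloptr() minimizes, so to maximize the F1 score you'd return its negative:

## sketch only: the bounds here are guesses, not recommendations
lb   <- c(1, -0.05)    # assumed lower bounds for lagg and thres
ub   <- c(24, 0.05)    # assumed upper bounds
opts <- list("algorithm" = "NLOPT_GN_ISRES", maxeval = 100000)
res  <- nloptr(x0     = c(11, 0.005),
               eval_f = function(input) -fxn_unsmoothed2(input),  # minimize -F1
               lb     = lb,
               ub     = ub,
               opts   = opts)
## (lagg is treated as continuous by the optimizer, so you may want
##  round(input[1]) inside the objective, and a guard for F1 being NaN
##  when TP + FP or TP + FN is zero)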
You didn't give us a reproducible example ... so I'll use the example from ?nloptr::isres, which is a convenient wrapper for the NLOPT_GN_ISRES optimizer (you can look at the source code to see; it's not very complicated).
This (example as given) works fine:
library(nloptr)

## Rosenbrock "banana" function (the example from ?isres)
fn <- function(x)
  return( 100 * (x[2] - x[1] * x[1])^2 + (1 - x[1])^2 )

x0 <- c( -1.2, 1 )
lb <- c( -3, -3 )
ub <- c( 3, 3 )
isres(x0 = x0, fn = fn, lower = lb, upper = ub)
If I try to leave out the bounds it complains:
isres(x0 = x0, fn = fn)
argument "lower" is missing, with no default
If I do this "raw" I get the same behaviour you saw:
opts <- list("algorithm" = "NLOPT_GN_ISRES", maxeval = 100000)
res  <- nloptr(x0     = x0,
               eval_f = fn,
               opts   = opts)
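For completeness, the same "raw" call runs without the error once finite bounds are passed via lb/ub (reusing the lb and ub from the isres example above):

res <- nloptr(x0     = x0,
              eval_f = fn,
              lb     = lb,
              ub     = ub,
              opts   = opts)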
If I try to set the limits to -Inf/+Inf (which works for optim()):
isres(x0 = x0, fn = fn, lower = rep(-Inf, 2), upper = rep(Inf, 2))
I also get an "NLOPT_INVALID_ARGS" error.
If I specify finite but ridiculously large bounds, the optimizer [silently!] fails (does all the iterations requested, but stays stuck at the starting point):
isres(x0 = x0, fn = fn, lower = rep(-1e9, 2), upper = rep(1e9, 2))
However, if I use bounds that are way larger than the original choice (±100 instead of ±3) it gives reasonable answers, although not quite as precise as the original fit:
isres(x0 = x0, fn = fn, lower = rep(-100, 2), upper = rep(100, 2))
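If you genuinely need very wide bounds, you can probably compensate by giving the stochastic search a bigger budget, e.g. by raising maxeval and/or pop.size in the isres() call; the values below are guesses, not tuned settings:

## sketch: wide bounds plus a larger evaluation budget and population
isres(x0 = x0, fn = fn,
      lower = rep(-100, 2), upper = rep(100, 2),
      maxeval = 1e5, pop.size = 200)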