I am trying to do MLE (Maximum Likelihood Estimation) with analytical derivatives using Optim.jl. The Optim.jl tutorial says that we can define an in-place gradient function (conventionally given a trailing !) as follows:
function g!(G, x)
G[1] = -2.0 * (1.0 - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1]
G[2] = 200.0 * (x[2] - x[1]^2)
end
optimize(f, g!, x0, LBFGS())
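For context, that tutorial snippet needs an objective and a starting point before it will run. A minimal self-contained completion: the gradient above matches the Rosenbrock function, so that is used as f here, and x0 is an assumed starting point.

using Optim

# Rosenbrock objective; the g! above is its analytical gradient
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

x0 = zeros(2)  # assumed starting point
optimize(f, g!, x0, LBFGS())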
Now, my situation is that I have an analytical gradient function with extra input arguments (Data and ω⃰) that are not subject to optimization:
function g_RPM!(G, ω::Real, λ::Real, Data::Array, ω⃰::Vector)
G[1] = grad_wrt_ω_RPM(ω,λ,Data,ω⃰)
G[2] = grad_wrt_λ_RPM(ω,λ,Data,ω⃰)
end
Then, for the estimation, I redefined the function as follows:
function g_RPM_Data!(G,ω::Real,λ::Real)
g_RPM!(G,ω,λ,Data,ω⃰)
end
When I ran the following code for the actual optimization, it gave an error:
opt_RPM_LBFGS_analytical = optimize(x->-loglikelihood_RPM_Gumbel(x[1],x[2],Data,ω⃰),g_RPM_Data!,ones(2),LBFGS())
ERROR: MethodError: no method matching g_RPM_Data!(::Vector{Float64}, ::Vector{Float64})
Closest candidates are:
g_RPM_Data!(::Any, ::Real, ::Real)
@ Main c:\Users\LG\OneDrive\24 Spring\Income_consid\SCP\simulation013124.jl:160
To me it seems the G argument of the ! function (which overwrites existing data in place) is what causes the problem. I could just eliminate G, but I would like to keep the in-place version to save memory. Could someone help me fix this problem? I expect it can be solved without eliminating G as an input.
As the Julia REPL error explains, the issue is that the solver expects a gradient function with the following signature:
g_RPM_Data!(::Vector{Float64}, ::Vector{Float64})
Optim.jl always calls the gradient as g!(G, x), where x is the full parameter vector, but your g_RPM_Data! takes ω and λ as two separate Real arguments, so no method matches.
You could use keyword arguments for the fixed data so that the positional arguments match this signature (assuming Data and ω⃰ exist in the enclosing, e.g. global, scope):
function g_RPM!(G, x::Vector; Data::Array = Data, ω⃰::Vector = ω⃰)
G[1] = grad_wrt_ω_RPM(x[1],x[2],Data,ω⃰)
G[2] = grad_wrt_λ_RPM(x[1],x[2],Data,ω⃰)
end
Here ω and λ are read from x[1] and x[2], the vector the solver passes in. (Fixing ω and λ themselves as keyword defaults would not work: the solver would then always evaluate the gradient at the default values instead of the current iterate.)
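With this definition the solver can call g_RPM! directly, letting the keywords fall back to their defaults; a sketch of the call, reusing the objective from your question:

optimize(x -> -loglikelihood_RPM_Gumbel(x[1], x[2], Data, ω⃰), g_RPM!, ones(2), LBFGS())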
Alternatively, you can pass an anonymous wrapper that splats the parameter vector into your original five-argument method:
(G, x) -> g_RPM!(G, x[1], x[2], Data, ω⃰) # Data and ω⃰ are captured from the surrounding scope
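Putting it together, a sketch of the corrected call from your question, using only the names you already defined:

opt_RPM_LBFGS_analytical = optimize(
    x -> -loglikelihood_RPM_Gumbel(x[1], x[2], Data, ω⃰),  # negative log-likelihood
    (G, x) -> g_RPM!(G, x[1], x[2], Data, ω⃰),             # in-place gradient wrapper
    ones(2),
    LBFGS(),
)

One caveat: since the objective here is the negative log-likelihood, the gradient you supply must be the gradient of the negative log-likelihood as well. If grad_wrt_ω_RPM and grad_wrt_λ_RPM differentiate the log-likelihood itself, flip the signs inside g_RPM! (or negate in the wrapper).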