I have run a Bayesian regression model using PyMC 5. I would like to get a score indicator (AIC, BIC, or LOO) to compare different models. I am a newcomer to PyMC and I don't understand how to get this indicator. For reference, here is the basic code:
# Bayes model
basic_model = pm.Model()

with basic_model:
    cost = pm.Normal("cost", mu=0, sigma=2)
    a = pm.Normal("a_array", mu=0, sigma=2)
    b = pm.Normal("b_array", mu=0, sigma=2)
    c = pm.HalfNormal("c_array", sigma=2)
    sigma = pm.HalfNormal("sigma", sigma=1)

    # Expected value of the outcome
    mu = cost + a * a_array + b * b_array + c * c_array

    # Likelihood (sampling distribution) of the observations
    Y_obs = pm.Normal("Y_obs", mu=mu, sigma=sigma, observed=y_array)

with basic_model:
    # draw 1000 posterior samples
    idata = pm.sample(1000)
I tried this code:

f_logp = basic_model.compile_logp()
map_est = pm.find_MAP(model=basic_model)
# k is the total number of estimated parameters
k = sum(list(map(len, map_est.values())))
aic = 2 * k - 2 * f_logp(map_est)

but it doesn't work (TypeError: len() of unsized object).
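As far as I can tell, the error comes from calling len() on the 0-d arrays that the MAP estimate contains for scalar parameters; counting with np.size instead seems to work. A minimal sketch with made-up values:

```python
import numpy as np

# pm.find_MAP returns a dict whose values are numpy arrays;
# scalar parameters come back as 0-d arrays, and len() on a
# 0-d array raises "len() of unsized object".
map_est = {"cost": np.array(0.5), "sigma": np.array(1.2)}  # made-up values

# np.size counts elements of scalars and arrays alike
k = sum(np.size(v) for v in map_est.values())
print(k)  # 2
```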
Could anyone help me? All the best
You can have PyMC compute the log likelihood, and then use ArviZ to calculate the metrics:
import arviz as az

with basic_model:
    # draw 1000 posterior samples
    idata_01 = pm.sample(1000)

with basic_model:
    pm.compute_log_likelihood(idata_01)

# for LOO
basic_model_loo = az.loo(idata_01)
Alternatively, you can pass idata_kwargs={"log_likelihood": True} to pm.sample:
with basic_model:
    # draw 1000 posterior samples
    idata_01 = pm.sample(1000, idata_kwargs={"log_likelihood": True})

# for LOO
basic_model_loo = az.loo(idata_01)
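Once each model's InferenceData has a log_likelihood group, az.compare can rank several models in one call. A minimal, self-contained sketch using synthetic posterior and log-likelihood values (the model names and numbers are made up for illustration; in practice you would pass the idata objects from your real models):

```python
import numpy as np
import arviz as az

rng = np.random.default_rng(0)

# Fake pointwise log likelihoods for two hypothetical models,
# shaped (chains, draws, observations).
ll_a = rng.normal(-1.0, 0.1, size=(2, 500, 50))
ll_b = rng.normal(-1.2, 0.1, size=(2, 500, 50))

idata_a = az.from_dict(posterior={"a": rng.normal(size=(2, 500))},
                       log_likelihood={"Y_obs": ll_a})
idata_b = az.from_dict(posterior={"a": rng.normal(size=(2, 500))},
                       log_likelihood={"Y_obs": ll_b})

# Rank the models by expected log predictive density (LOO);
# the best model gets rank 0 and appears first.
comparison = az.compare({"model_a": idata_a, "model_b": idata_b})
print(comparison[["rank", "elpd_loo"]])
```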
See the PyMC documentation for more information on model comparison.