rjags

The reason for using update() on a model created with jags


I'm using rjags to do an analysis in R (based on this blog post: http://www.sumsar.net/blog/2013/08/bayesian-estimation-of-correlation/) and I've got a question. Is there any difference between creating the model with n.adapt = 500 and then calling update() for another 500 iterations, versus simply setting n.adapt to 1000 in the original call to jags.model()?

In other words: is there an important difference between run1 and run2 when I do the following:

require(mvtnorm)
require(rjags)

model_string <- "
  model {
    for(i in 1:n) {
      x[i,1:2] ~ dmnorm(mu[], prec[ , ])
    }

    prec[1:2,1:2] <- inverse(cov[,])
    cov[1,1] <- sigma[1] * sigma[1]
    cov[1,2] <- sigma[1] * sigma[2] * rho
    cov[2,1] <- sigma[1] * sigma[2] * rho
    cov[2,2] <- sigma[2] * sigma[2]

    sigma[1] ~ dunif(0, 1000) 
    sigma[2] ~ dunif(0, 1000)
    rho ~ dunif(-1, 1)
    mu[1] ~ dnorm(0, 0.001)
    mu[2] ~ dnorm(0, 0.001)

    x_rand ~ dmnorm(mu[], prec[ , ])
  }

"

mu <- c(10, 30)
sigma <- c(20, 40)
rho <- -0.7
cov_mat <- rbind(c(     sigma[1]^2       , sigma[1]*sigma[2]*rho ),
                 c( sigma[1]*sigma[2]*rho,      sigma[2]^2       ))
x <- rmvnorm(30, mu, cov_mat)

data_list = list(x = x, n = nrow(x))
inits_list = list(mu = c(mean(x[, 1]), mean(x[, 2])),
                  rho = cor(x[, 1], x[, 2]),
                  sigma = c(sd(x[, 1]), sd(x[, 1])))


jags_model <- jags.model(textConnection(model_string), data = data_list, inits = inits_list,
                         n.adapt = 500, n.chains = 3, quiet = TRUE)
update(jags_model, 500)
mcmc_samples <- coda.samples(jags_model, c("mu", "rho", "sigma", "x_rand"),
                             n.iter = 5000)

run1 <- summary(mcmc_samples)

jags_model <- jags.model(textConnection(model_string), data = data_list, inits = inits_list,
                         n.adapt = 1000, n.chains = 3, quiet = TRUE)
mcmc_samples <- coda.samples(jags_model, c("mu", "rho", "sigma", "x_rand"),
                             n.iter = 5000)
run2 <- summary(mcmc_samples)

Solution

  • The function update() performs burn-in iterations for your Markov chain. If you skip those burn-in iterations, your chain will not have converged to its stationary distribution, and any samples you generate will not actually come from the correct posterior distribution. In short, you absolutely need to update your chain, and you should additionally perform convergence checks (trace plots, autocorrelation plots) on your final samples to make sure the burn-in was long enough.

    The adaptive iterations that JAGS performs are not burn-in iterations. From the documentation: "The sequence of samples generated during this adaptive phase is not a Markov chain, and therefore may not be used for posterior inference on the model." Running your model through the adaptive phase can improve sampling efficiency, but you will still need to update/burn in your chain so that you're actually sampling from the proper posterior distribution. So in your example, run1 and run2 are not equivalent: only run1 includes a true burn-in phase.
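    As a sketch of the distinction, the usual rjags workflow keeps the three phases explicit (adaptation via n.adapt, burn-in via update(), sampling via coda.samples()), and the coda package provides the convergence diagnostics mentioned above. This mirrors the question's own model and data objects:

    ```r
    library(rjags)
    library(coda)

    # Phase 1: adaptation -- JAGS tunes its samplers; these iterations
    # are not a Markov chain and are discarded automatically.
    jags_model <- jags.model(textConnection(model_string), data = data_list,
                             inits = inits_list, n.chains = 3, n.adapt = 500)

    # Phase 2: burn-in -- update() advances the chains without storing
    # samples, letting them move toward the stationary distribution.
    update(jags_model, n.iter = 500)

    # Phase 3: sampling -- only these draws are used for inference.
    mcmc_samples <- coda.samples(jags_model,
                                 variable.names = c("mu", "rho", "sigma"),
                                 n.iter = 5000)

    # Convergence checks from the coda package:
    gelman.diag(mcmc_samples)    # scale reduction factors; values near 1 are good
    traceplot(mcmc_samples)      # traces should look like stationary noise
    autocorr.plot(mcmc_samples)  # high autocorrelation suggests longer runs
    effectiveSize(mcmc_samples)  # effective sample size per parameter
    ```

    Note that gelman.diag() requires multiple chains (here n.chains = 3), and x_rand is excluded from monitoring since it is a posterior predictive draw rather than a parameter whose convergence is meaningful.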