Tags: python, statistics, distribution, pymc3, pymc

How to decide what prior distributions to use for parameters in PyMC3?


I am looking into the PyMC3 package and am interested in applying it to a scenario where I have several different signals, each with a different amplitude.

However, I am stuck on what priors and what likelihood distribution I would need in order to implement this in PyMC3. An example scenario is shown in the following image:

Signal plot

I tried to implement it below, but every time I keep getting the error:

pymc3.exceptions.SamplingError: Bad initial energy
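From what I have read, this error usually means the model's log-probability is -inf or NaN at the starting point. A minimal check I have been trying (PyMC3's check_test_point; it refers to the model defined below) looks like this:

    # Debugging sketch: print the log-probability of each free variable at
    # the model's initial point. A -inf or nan entry points at a prior whose
    # starting value falls outside the support of the likelihood.
    with model:
        print(model.check_test_point())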

My Code

## Signal 1:
    with pm.Model() as model:
        # Parameters:
        # Prior Distributions:
        # BoundedNormal = pm.Bound(pm.Exponential, lower=0.0, upper=np.inf)
        # c = BoundedNormal('c', lam=10)
        # c = pm.Uniform('c', lower=0, upper=300)
        alpha = pm.Normal('alpha', mu = 0, sd = 10)
        beta = pm.Normal('beta', mu = 0, sd = 1)
        sigma = pm.HalfNormal('sigma', sd = 1)
        mu = pm.Normal('mu', mu=0, sigma=1)
        sd = pm.HalfNormal('sd', sigma=1)

        # Observed data is modelled with a Beta distribution:
        # Likelihood distributions:
        # bradford = pm.DensityDist('observed_data', logp=bradford_logp, observed=dict(value=S1, loc=mu, scale=sd, c=c))
        # observed_data = pm.Beta('observed_data', mu=mu, sd=sd, observed=S1)
        observed_data = pm.Beta('observed_data', alpha=alpha, beta=beta, mu=mu, sd=sd, observed=S1)

    with model:
        # obtain starting values via MAP
        startvals = pm.find_MAP(model=model)

        # instantiate sampler
        # step = pm.Metropolis()
        step = pm.HamiltonianMC()
        # step = pm.NUTS()

        # draw 1000 posterior samples
        trace = pm.sample(start=startvals, draws=1000, step=step, tune=500, chains=4, cores=1, discard_tuned_samples=True)

        # Obtaining Posterior Predictive Sampling:
        post_pred = pm.sample_posterior_predictive(trace, samples=500)
        print(post_pred['observed_data'].shape)

    plt.title('Trace Plot of Signal 1')
    pm.traceplot(trace, var_names=['mu', 'sd'], divergences=None, combined=True)
    plt.show(block=False)
    plt.pause(5)  # Pauses the program for 5 seconds
    plt.close('all')

    pm.plot_posterior(trace, var_names=['mu', 'sd'])
    plt.title('Posterior Plot of Signal 1')
    plt.show(block=False)
    plt.pause(5)  # Pauses the program for 5 seconds
    plt.close('all')

Side questions

I have also been looking into implementing a goodness-of-fit test and a Kalman filter using distributions other than the Gaussian, so if you have the time, I would appreciate it if you could have a look at them. Both questions can be found here:

Goodness-of-fit test link: Goodness-of-fit test

Kalman filter link: Kalman Filter


Edit 1

Say I have around 5 signals and would like to apply Bayesian inference in order to see the differences in the PDFs of the signals. How do I approach this problem? Do I need to create multiple models and get their posterior distributions? Like in this image:

Distribution plots

If I need to get the posterior distribution, do I use the following code?

# Obtaining Posterior Predictive Sampling:
post_pred = pm.sample_posterior_predictive(trace, samples=500)
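For concreteness, a minimal sketch of the "multiple models" idea I have in mind (one independent Beta model per signal; signals is a placeholder dict holding my arrays):

    import pymc3 as pm

    # Sketch: fit one independent Beta model per signal and keep every
    # trace so the posteriors can be compared afterwards.
    traces = {}
    for name, data in signals.items():  # e.g. {'S1': S1, 'S2': S2, ...}
        with pm.Model():
            alpha = pm.HalfNormal('alpha', sigma=10)
            beta = pm.HalfNormal('beta', sigma=10)
            pm.Beta('observed_data', alpha=alpha, beta=beta, observed=data)
            traces[name] = pm.sample(draws=1000, tune=500, cores=1)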

Edit 2

If I have multiple signals, can I implement it this way in order to see the changes in alpha and beta across all signals?

        observed_data_S1 = pm.Beta('observed_data_S1', alpha=alpha[0], beta=beta[0], observed=S1[0])
        observed_data_S2 = pm.Beta('observed_data_S2', alpha=alpha[1], beta=beta[1], observed=S2[0])
        observed_data_S3 = pm.Beta('observed_data_S3', alpha=alpha[2], beta=beta[2], observed=S3[0])
        observed_data_S4 = pm.Beta('observed_data_S4', alpha=alpha[3], beta=beta[3], observed=S4[0])
        observed_data_S5 = pm.Beta('observed_data_S5', alpha=alpha[4], beta=beta[4], observed=S5[0])
        observed_data_S6 = pm.Beta('observed_data_S6', alpha=alpha[5], beta=beta[5], observed=S6[0])
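Alternatively (an assumption on my part: S1 through S6 are equal-length 1-D arrays), the priors could be vectorized with the shape argument instead of writing one likelihood per signal:

    import numpy as np
    import pymc3 as pm

    # Sketch: stack the signals column-wise so that a single Beta likelihood
    # with vector-valued priors covers all of them at once.
    S = np.stack([S1, S2, S3, S4, S5, S6], axis=1)  # shape (n_samples, 6)

    with pm.Model() as model:
        alpha = pm.HalfNormal('alpha', sigma=10, shape=6)
        beta = pm.HalfNormal('beta', sigma=10, shape=6)
        # alpha and beta broadcast along the last axis of the observed
        # array, so column i of S is modelled by alpha[i] and beta[i].
        pm.Beta('observed_data', alpha=alpha, beta=beta, observed=S)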

Edit 3:

How can I plot multiple traces in one plot? I am looking at multiple signals and thought of combining all the alphas and betas together in one plot.
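Something like the following is what I had in mind (assuming ArviZ, which PyMC3's plotting is built on, and the per-signal traces dict from Edit 1):

    import arviz as az
    import matplotlib.pyplot as plt

    # Sketch: overlay the alpha/beta posteriors of every signal in a single
    # figure by passing all the traces to one ArviZ call.
    az.plot_forest(list(traces.values()),
                   model_names=list(traces.keys()),
                   var_names=['alpha', 'beta'],
                   combined=True)
    plt.show()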


Solution

  • First Mistake: The Beta distribution's parameters alpha and beta must be positive. You have used a Normal prior on them, which allows those RVs to take negative and zero values. You can easily fix that by applying pm.Bound to the pm.Normal distribution or by using the pm.HalfNormal distribution instead (a pm.Bound sketch is shown after the corrected version below).

    Second Mistake: The other inconsistency is specifying mu and sigma alongside the alpha and beta parameters. Beta accepts either mu and sigma or alpha and beta, but not both. The default behaviour is to prefer alpha and beta over mu and sigma, so here you are wasting a lot of computational power inferring mu and sigma only for them to be ignored.

    Other Comments: You should not use the sd parameter in any distribution as of version 3.8, as it has been deprecated and will be removed in 3.9. Use sigma instead.

    Corrected Version:

    import numpy as np
    import theano
    import theano.tensor as tt
    import pymc3 as pm
    import matplotlib.pyplot as plt
    
    S1 = np.random.rand(10)
    
    ## Signal 1:
    with pm.Model() as model:
        # Parameters:
        # Prior Distributions:
        # BoundedNormal = pm.Bound(pm.Exponential, lower=0.0, upper=np.inf)
        # c = BoundedNormal('c', lam=10)
        # c = pm.Uniform('c', lower=0, upper=300)
        alpha = pm.HalfNormal('alpha', sigma=10)
        beta = pm.HalfNormal('beta', sigma=1)
    
        # Observed data is modelled with a Beta distribution:
        # Likelihood distributions:
        # bradford = pm.DensityDist('observed_data', logp=bradford_logp, observed=dict(value=S1, loc=mu, scale=sd, c=c))
        # observed_data = pm.Beta('observed_data', mu=mu, sd=sd, observed=S1)
        observed_data = pm.Beta('observed_data', alpha=alpha, beta=beta, observed=S1)
    
    with model:
        # obtain starting values via MAP
        startvals = pm.find_MAP(model=model)
    
        # instantiate sampler
        # step = pm.Metropolis()
        step = pm.HamiltonianMC()
        # step = pm.NUTS()
    
        # draw 1000 posterior samples
        trace = pm.sample(start=startvals, draws=1000, step=step, tune=500, chains=4, cores=1, discard_tuned_samples=True)
    
        # Obtaining Posterior Predictive Sampling:
        post_pred = pm.sample_posterior_predictive(trace, samples=500)
        print(post_pred['observed_data'].shape)
    
    plt.title('Trace Plot of Signal 1')
    pm.traceplot(trace, var_names=['alpha', 'beta'], divergences=None, combined=True)
    plt.show(block=False)
    plt.pause(5)  # Pauses the program for 5 seconds
    plt.close('all')
    
    pm.plot_posterior(trace, var_names=['alpha', 'beta'])
    plt.title('Posterior Plot of Signal 1')
    plt.show(block=False)
    plt.pause(5)  # Pauses the program for 5 seconds
    plt.close('all')
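    As a sketch of the pm.Bound alternative mentioned under the first mistake (keeping your original Normal hyperparameters, which are otherwise arbitrary; this needs to go inside the model context):

        # Alternative positivity fix: truncate the Normal priors at zero
        # instead of switching to HalfNormal. pm.Bound returns a new
        # distribution class that is then called like any other.
        BoundedNormal = pm.Bound(pm.Normal, lower=0.0)
        alpha = BoundedNormal('alpha', mu=0, sigma=10)
        beta = BoundedNormal('beta', mu=0, sigma=1)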