python, scikit-learn, gaussian-process, uncertainty

How to incorporate individual measurement uncertainties into a Gaussian Process?


I have a set of observations, f_i = f(x_i), and I want to construct a probabilistic surrogate, f(x) ~ N[mu(x), sigma(x)], where N is a normal distribution. Each observed output, f_i, carries a measurement uncertainty, sigma_i (i.e., f_i = f(x_i) + epsilon_i with epsilon_i ~ N(0, sigma_i^2)). I would like to incorporate these measurement uncertainties into the surrogate, so that mu(x_i) predicts the observations, f_i, and the predicted standard deviation, sigma(x_i), envelops the measurement uncertainty, sigma_i.

The only way I can think of to accomplish this is through a combination of Monte Carlo sampling and Gaussian Process modeling. It would be ideal to accomplish this with a single Gaussian Process, without Monte Carlo samples, but I cannot make that work.

I show three attempts below. The first two avoid Monte Carlo sampling, but they neither predict the observed outputs on average nor produce uncertainty bands that envelop the measurement errors. The third approach uses Monte Carlo sampling and accomplishes what I want.

Is there a way to create a single Gaussian Process whose mean predicts the observed outputs, and whose predicted uncertainty envelops the uncertainty in the observed outputs, without using this Monte Carlo approach?

import numpy as np
import matplotlib.pyplot as plt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# given a set of inputs, x_i, and corresponding outputs, f_i, I want to make a surrogate f(x). 
# each f_i is measured with a different instrument that has a different uncertainty.

# measured inputs
xs = np.array([44, 77, 125])

# measured outputs
fs = [8.64, 10.73, 12.13]

# uncertainty in measured outputs
errs = np.array([0.1, 0.2, 0.3])

# inputs to predict
finex = np.linspace(20, 200, 200)


#############
### Approach 1: uncertainty in the kernel
# - the kernel hyperparameters are constant and cannot change as a function of the input
# - measurement uncertainty can be incorporated through a WhiteKernel
# - the white noise level is a single value, so it is set to the average observation error

# RBF + whitenoise kernel
kernel = 1 * RBF(length_scale=9, length_scale_bounds=(10, 1e3)) + WhiteKernel(errs.mean(), noise_level_bounds=(errs.mean() - 1e-8, errs.mean() + 1e-8))
gaussian_process = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9, normalize_y=True)
gaussian_process.fit((np.atleast_2d(xs).T), (fs))
mu, std = gaussian_process.predict((np.atleast_2d(finex).T), return_std=True)
plt.scatter(xs, fs, zorder=3, s=30)
plt.fill_between(finex, (mu - std), (mu + std), facecolor='grey')
plt.plot(finex, mu, c='w')
plt.errorbar(xs, fs, yerr=errs, ls='none')
plt.xlabel('input')
plt.ylabel('output')
plt.title('White Noise Kernel - assumes uniform sensor error')
plt.savefig('gp_whitenoise')
plt.clf()


####################
### Approach 2: incorporate measurement uncertainty in the likelihood function
# - the likelihood function can be altered through the alpha parameter (added to the kernel diagonal)
# - this assumes Gaussian uncertainty in the measured output
kernel = 1 * RBF(length_scale=9, length_scale_bounds=(10, 1e3))
gaussian_process = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9, normalize_y=True, alpha=errs)
gaussian_process.fit((np.atleast_2d(xs).T), (fs))
mu, std = gaussian_process.predict((np.atleast_2d(finex).T), return_std=True)
plt.scatter(xs, fs, zorder=3, s=30)
plt.fill_between(finex, (mu - std), (mu + std), facecolor='grey')
plt.plot(finex, mu, c='w')
plt.errorbar(xs, fs, yerr=errs, ls='none')
plt.xlabel('input')
plt.ylabel('output')
plt.title('uncertainty in likelihood - assumes measurements may be inaccurate')
plt.savefig('gp_alpha')
plt.clf()

####################
### Approach 3: Monte Carlo over measurement uncertainty + GP
# - the Gaussian Process represents the uncertainty in constructing the surrogate f(x)
# - the uncertainty in the observed outputs is propagated by Monte Carlo resampling
# - downside: less computationally efficient, no analytic solution for the mean or uncertainty
kernel = 1 * RBF(length_scale=9, length_scale_bounds=(10, 1e3))
posterior_history = np.zeros((finex.size, 100 * 50))
for sample in range(100):
    simulatedSamples = fs + np.random.normal(0, errs)
    gaussian_process = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9, normalize_y=True)
    gaussian_process.fit((np.atleast_2d(xs).T), (simulatedSamples))
    posterior_sample = gaussian_process.sample_y((np.atleast_2d(finex).T), 50)
    plt.plot(finex, posterior_sample, c='orange', alpha=0.005)
    posterior_history[:, sample * 50 : (sample + 1) * 50] = posterior_sample
plt.plot(finex, posterior_history.mean(1), c='w')
plt.fill_between(finex, posterior_history.mean(1) - posterior_history.std(1), posterior_history.mean(1) + posterior_history.std(1), facecolor='grey', alpha=1, zorder=5)
plt.scatter(xs, fs, zorder=6, s=30)
plt.errorbar(xs, fs, yerr=errs, ls='none', zorder=6)
plt.xlabel('input')
plt.ylabel('output')
plt.title('Monte Carlo + RBF Gaussian Process. Accurate but expensive.')
plt.savefig('gp_monteCarlo')
plt.clf()

[Figure 1: White Noise Kernel - assumes uniform sensor error]

[Figure 2: uncertainty in likelihood - assumes measurements may be inaccurate]

[Figure 3: Monte Carlo + RBF Gaussian Process. Accurate but expensive.]


Solution

  • Using your second approach, only slightly changing alpha: pass the variances, errs**2, instead of the standard deviations, since alpha is added to the diagonal of the kernel matrix and therefore expects variances.

    kernel = 1 * RBF(length_scale=9, length_scale_bounds=(10, 1e3))
    gaussian_process = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9, normalize_y=True, alpha=errs**2)
    gaussian_process.fit((np.atleast_2d(xs).T), (fs))
    mu, std = gaussian_process.predict((np.atleast_2d(finex).T), return_std=True)
    

    [Figure: Approach 2, refit with alpha = errs**2]
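
For completeness, here is a minimal sketch (not part of the original answer) of how the refit surrogate can be checked against the goal stated in the question: compare the predicted standard deviation at the training inputs with the measurement uncertainties, and reproduce the plot from the earlier approaches. It reuses the variables defined above; only the output file name gp_alpha_squared is new and arbitrary.

    # check the one-sigma band at the training inputs against the measurement errors
    mu_train, std_train = gaussian_process.predict(np.atleast_2d(xs).T, return_std=True)
    print("measurement sigma   :", errs)
    print("predicted sigma(x_i):", std_train)

    # plot the mean and one-sigma envelope, mirroring the earlier figures
    plt.scatter(xs, fs, zorder=3, s=30)
    plt.fill_between(finex, mu - std, mu + std, facecolor='grey')
    plt.plot(finex, mu, c='w')
    plt.errorbar(xs, fs, yerr=errs, ls='none')
    plt.xlabel('input')
    plt.ylabel('output')
    plt.title('alpha = errs**2 - per-observation noise variance')
    plt.savefig('gp_alpha_squared')
    plt.clf()

Printing std_train gives a direct check that the predicted one-sigma band at each x_i is comparable to the corresponding measurement error.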