For my model there are two different types of data, say data of type X1 and data of type X2.
Is it possible to use a different kernel for each data type, so that kernel X1 acts on the X1 data and kernel X2 acts on the X2 data? (I use the VGP model.)
The background is that I know the hyperparameters for one of the two data types but not for the other.
Thanks for your help!
You will need to collect all data in a single array `X`, but you can define kernels that act on different dimensions of this array using the `active_dims` argument of the kernel constructors, as demonstrated in one of the notebook tutorials. You will have to think about what covariance you want between the different kernels: the additive kernel in that notebook example is a very strong constraint, so you might want to multiply them instead:
```python
import numpy as np
import gpflow

N, D1 = X1.shape
N, D2 = X2.shape
X = np.concatenate([X1, X2], axis=-1)
assert X.shape == (N, D1 + D2)

# k1 only sees the first D1 columns of X, k2 the remaining D2 columns:
k1 = gpflow.kernels.Matern32(lengthscales=np.ones(D1), active_dims=slice(0, D1))
k2 = gpflow.kernels.SquaredExponential(lengthscales=np.ones(D2), active_dims=slice(D1, D1 + D2))
kernel = k1 * k2
```
You can pass either a slice object, as above, or an explicit list of dimensions (e.g. `[0, 1, 2, 8, 9]`) to the `active_dims` argument. The active dims of different sub-kernels may overlap.
If what you actually want is the same kernel for all dimensions, but with some of the lengthscales fixed, have a look at "How to fix some dimensions of a kernel lengthscale in gpflow?"