Context:
I am implementing a Gaussian-Bernoulli RBM. It is like the standard (Bernoulli-Bernoulli) RBM, but with real-valued visible units.
It is true that the procedure for sampling the hidden values, $p(h=1 \mid v)$, has essentially the same form for both, i.e.

$$p(h_j = 1 \mid v) = \operatorname{sigmoid}\Bigl(c_j + \sum_i w_{ij}\,\frac{v_i}{\sigma_i}\Bigr),$$

which reduces to the usual Bernoulli RBM expression when $\sigma_i = 1$.
Problem:
My problem is with coding (in Python) the sampling of the visible units from $p(v \mid h)$, which is

$$p(v_i \mid h) = \mathcal{N}\!\left(b_i + \sigma_i \sum_j w_{ij} h_j,\; \sigma_i^2\right).$$
I am a little confused about how $\mathcal{N}(\cdot)$ works here. Do I simply add Gaussian noise, using the data's standard deviation, to `b + sigma * W.dot(h)`?
Thank you in advance.
The notation $X \sim \mathcal{N}(\mu, \sigma^2)$ means that $X$ is normally distributed with mean $\mu$ and variance $\sigma^2$, so in the RBM training routine $v$ should be sampled from such a distribution: each $v_i$ has mean $b_i + \sigma_i \sum_j w_{ij} h_j$ and standard deviation $\sigma_i$ (standard deviation, not variance). In NumPy terms, that is
v = sigma * np.random.randn(v_size) + b + sigma * W.dot(h)
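If it helps, here is a minimal, self-contained sketch of that step. The dimensions and parameter values below are made-up placeholders; only the final line is the actual sampling formula.

```python
import numpy as np

# Placeholder sizes and parameters, chosen arbitrarily for illustration.
v_size, h_size = 6, 4
b = np.zeros(v_size)                               # visible biases
W = 0.01 * np.random.randn(v_size, h_size)         # weights, one row per visible unit
sigma = np.ones(v_size)                            # per-visible-unit standard deviations
h = (np.random.rand(h_size) < 0.5).astype(float)   # a sampled binary hidden vector

# p(v | h) is Gaussian with mean b + sigma * W.dot(h) and standard deviation sigma,
# so a sample is that mean plus sigma-scaled standard-normal noise.
v = b + sigma * W.dot(h) + sigma * np.random.randn(v_size)
```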
Or use `scipy.stats.norm` for more readable code.
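As a sketch (reusing the variables from the snippet above), the same draw with `scipy.stats.norm` would look something like this; `loc` is the Gaussian mean and `scale` its standard deviation:

```python
from scipy.stats import norm

# Same sampling step: loc is the mean, scale the standard deviation.
v = norm.rvs(loc=b + sigma * W.dot(h), scale=sigma, size=v_size)
```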