(11th-December-2020)
Traditional neural networks implement a deterministic transformation of some input variables x. When developing generative models, we often wish to extend neural networks to implement stochastic transformations of x. One straightforward way to do this is to augment the neural network with extra inputs z that are sampled from some simple probability distribution, such as a uniform or Gaussian distribution. The neural network can then continue to perform deterministic computation internally, but the function f(x, z) will appear stochastic to an observer who does not have access to z. Provided that f is continuous and differentiable, we can then compute the gradients necessary for training using back-propagation as usual.
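A minimal sketch of this idea, assuming PyTorch and an illustrative MLP (the class name, layer sizes, and loss are made up for the example): the network is purely deterministic, but concatenating a freshly sampled Gaussian z to the input x makes its output look stochastic to anyone who only observes x, and gradients still flow through f by ordinary back-propagation.

```python
import torch
import torch.nn as nn

class StochasticMLP(nn.Module):
    """Deterministic network that becomes a stochastic function of x
    by concatenating a noise vector z to its input."""
    def __init__(self, x_dim, z_dim, hidden_dim, out_dim):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x, z=None):
        # z is drawn from a simple fixed distribution (standard Gaussian here);
        # given (x, z) the computation below is entirely deterministic.
        if z is None:
            z = torch.randn(x.shape[0], self.z_dim, device=x.device)
        return self.net(torch.cat([x, z], dim=-1))

# Usage: calling the model twice with the same x gives different outputs
# because z is resampled, yet training proceeds with back-propagation as usual.
model = StochasticMLP(x_dim=4, z_dim=2, hidden_dim=16, out_dim=1)
x = torch.randn(8, 4)
y1, y2 = model(x), model(x)        # stochastic from the observer's point of view
loss = ((y1 - 1.0) ** 2).mean()    # toy loss, purely illustrative
loss.backward()                    # gradients w.r.t. the weights as usual
```

Because f is continuous and differentiable in both the inputs and the parameters, the sampled z acts like just another input during the backward pass, so no special machinery is needed to train such a model.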