Instead of creating a sample and then rejecting it, it is possible to mix sampling with inference to reason about the probability that a sample would be rejected. In importance sampling, each sample has a weight, and the sample average is computed as the weighted average of the samples. The weights of the samples come from two sources:

- The samples do not have to be selected in proportion to their probability; they can instead be selected according to some other distribution, called the proposal distribution.
- Evidence is used to update the weights and to compute the probability that a sample would be rejected.
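As a concrete illustration, here is a minimal Python sketch of importance sampling for the fire-alarm network used in this chapter, with the priors as the proposal distribution (likelihood weighting). The variable names and probability values follow the Tampering/Fire/Alarm example; treat the particular numbers and function names as illustrative choices, not as code from the text:

```python
import random

# Priors for the root variables of the fire-alarm belief network
# (P(tampering) = 0.02, P(fire) = 0.01 in the example).
P_TAMPERING = 0.02
P_FIRE = 0.01

# P(alarm=true | Tampering, Fire): conditional probability table.
P_ALARM = {
    (True, True): 0.5,
    (True, False): 0.85,
    (False, True): 0.99,
    (False, False): 0.0001,
}

def likelihood_weighting(n_samples, seed=0):
    """Estimate P(tampering=true | alarm=true) by importance sampling,
    using the prior as the proposal distribution."""
    rng = random.Random(seed)
    total_weight = 0.0
    tampering_weight = 0.0
    for _ in range(n_samples):
        tampering = rng.random() < P_TAMPERING
        fire = rng.random() < P_FIRE
        # Rather than sampling Alarm and rejecting samples that
        # disagree with the evidence, weight the sample by the
        # probability that the evidence would have been generated.
        weight = P_ALARM[(tampering, fire)]
        total_weight += weight
        tampering_weight += weight if tampering else 0.0
    return tampering_weight / total_weight

print(likelihood_weighting(100_000))
```

Each sample's weight is the probability of the observed evidence given the sampled values of its parents, so samples that would almost certainly have been rejected contribute almost nothing to the weighted average.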
Particle Filtering
Importance sampling enumerates the samples one at a time and, for each sample, assigns a value to each variable. It is also possible to start with all of the samples and, for each variable, generate a value for that variable for each of the samples. For example, for the data of Figure 6.10, the same data could be generated by generating all of the values for Tampering before generating the values for Fire. The particle filtering algorithm generates all the samples for one variable before moving to the next variable. It does one sweep through the variables, and for each variable it does a sweep through all of the samples. This algorithm has an advantage when variables are generated dynamically and there can be unboundedly many variables. It also allows for a new operation of resampling.
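The sweep order can be pictured with a short sketch: rather than completing one sample at a time, all of the particles are given a value for one variable before the next variable is considered. This sketch assumes only the two root variables of the fire-alarm example and their priors:

```python
import random

rng = random.Random(0)
n = 10_000

# Start with n empty particles; every variable is filled in for all
# of the particles before the next variable is considered.
particles = [{} for _ in range(n)]

# Priors for the root variables of the fire-alarm example.
priors = {"tampering": 0.02, "fire": 0.01}

# One sweep through the variables; for each variable, one sweep
# through all of the particles.
for var, prob in priors.items():
    for particle in particles:
        particle[var] = rng.random() < prob
```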
Given a set of samples on some of the variables, resampling consists of taking n samples, each with its own weight, and generating a new set of n samples, each with the same weight. Resampling can be implemented in the same way that random samples for a single random variable are generated, except that whole samples, rather than values, are selected. Some of the samples are selected multiple times and some are not selected at all.
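Resampling itself is only a few lines; here is a minimal sketch using Python's standard library (the function name resample and its interface are choices made for this sketch, not from the text):

```python
import random

def resample(particles, weights, rng=random):
    """Draw len(particles) particles with replacement, in proportion
    to their weights, yielding a new set of equally weighted particles."""
    # Selecting with replacement means some particles are chosen
    # multiple times and some are not chosen at all.
    return rng.choices(particles, weights=weights, k=len(particles))

# Example: the heavily weighted particle dominates the resampled set.
particles = [{"fire": True}, {"fire": False}, {"fire": False}]
weights = [0.9, 0.05, 0.05]
print(resample(particles, weights))
```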