1. Variable elimination: Pointwise product, Summing out a variable

2. Sampling: Prior Sampling; Rejection Sampling (discard samples inconsistent with the evidence); Likelihood Weighting (fix the evidence variables, sample only the non-evidence variables, and weight each sample by $\prod_i P(e_i|Parents(E_i))$ over the evidence variables); Gibbs Sampling (1. fix the evidence variables, 2. randomly initialize all other variables, 3. choose a non-evidence variable $X$ and resample it from $P(X|MarkovBlanket(X))$; repeat step 3).
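Likelihood weighting can be sketched on a minimal two-node network, Rain → WetGrass, with evidence WetGrass = true; all probabilities here are made-up illustration numbers:

```python
import random

# Toy 2-node net Rain -> WetGrass (numbers are hypothetical).
P_RAIN = 0.3
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.2}   # P(WetGrass=true | Rain)

def likelihood_weighting(n_samples, seed=0):
    """Estimate P(Rain=true | WetGrass=true) by likelihood weighting."""
    rng = random.Random(seed)
    weighted = {True: 0.0, False: 0.0}
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN   # sample the non-evidence variable from P(Rain)
        # Weight the sample by P(e | Parents(E)) for the evidence WetGrass=true.
        weighted[rain] += P_WET_GIVEN_RAIN[rain]
    return weighted[True] / (weighted[True] + weighted[False])

estimate = likelihood_weighting(100_000)
# Exact answer by Bayes' rule: 0.3*0.9 / (0.3*0.9 + 0.7*0.2) ≈ 0.659
```

Unlike rejection sampling, no sample is discarded; samples that poorly explain the evidence simply contribute less weight.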

3. HMM Filtering algorithm: $P(X_{t+1}|e_{1:t+1}) = \alpha P(e_{t+1}|X_{t+1}) \sum_{x_t}P(x_t|e_{1:t})P(X_{t+1}|x_t)$

4. Define the following belief distribution

• $B(W_t) = P(W_t|O_1, ..., O_t)$ : Belief about state $W_t$ given all the observations up until (and including) timestep $t$.
• $B'(W_t) = P(W_t|O_1, ..., O_{t−1})$ : Belief about state $W_t$ given all the observations up until (but not including) timestep $t$.

Forward Algorithm:

• Prediction update: $B'(W_{t+1}) = \sum_{w_t} P(W_{t+1}|w_t)B(w_t)$
• Observation update: $B(W_{t+1}) ∝ P(O_{t+1}|W_{t+1})B'(W_{t+1})$
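The two updates above can be sketched for a two-state umbrella-world HMM; the transition and sensor numbers follow the classic umbrella example and are illustrative only:

```python
# Forward algorithm for a 2-state HMM; states: 0 = sunny, 1 = rainy.
T = [[0.7, 0.3],          # T[w][w'] = P(W_{t+1}=w' | W_t=w)
     [0.3, 0.7]]
P_UMBRELLA = [0.2, 0.9]   # P(O=umbrella | W) for sunny, rainy

def forward_step(belief, umbrella_seen):
    # Prediction update: B'(W_{t+1}) = sum_w P(W_{t+1}|w) B(w)
    predicted = [sum(T[w][w2] * belief[w] for w in range(2)) for w2 in range(2)]
    # Observation update: B(W_{t+1}) ∝ P(O_{t+1}|W_{t+1}) B'(W_{t+1})
    like = [P_UMBRELLA[w] if umbrella_seen else 1 - P_UMBRELLA[w] for w in range(2)]
    unnorm = [like[w] * predicted[w] for w in range(2)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

belief = [0.5, 0.5]            # uniform initial belief
for obs in (True, True):       # umbrella seen on days 1 and 2
    belief = forward_step(belief, obs)
# belief[1] ≈ 0.883: P(rainy on day 2 | umbrella seen both days)
```

The normalization constant $z$ plays the role of $\alpha$ in the filtering equation.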
5. Particle Filtering: Instead of storing a full probability table mapping each state to its belief probability, we store a list of $n$ particles, where each particle is in one of the $d$ possible states in the domain of our time-dependent random variable. Once we’ve sampled an initial list of particles, the simulation takes on a similar form to the forward algorithm, with a prediction (time elapse) update followed by an observation update at each timestep:

• Prediction update - Update the value of each particle according to the transition model. For a particle in state $w_t$ , sample the updated value from the probability distribution given by $Pr(W_{t+1}|w_t)$ .
• Observation update - During the observation update for particle filtering, we use the sensor model $Pr(O_t|W_t)$ to weight each particle according to the probability dictated by the observed evidence and the particle’s state. Specifically, for a particle in state $w_t$ with sensor reading $o_t$ , assign a weight of $Pr(o_t|w_t)$ . The algorithm for the observation update is as follows:

1. Calculate the weights of all particles as described above.
2. Calculate the total weight for each state.
3. If the sum of all weights across all states is 0, reinitialize all particles.
4. Else, normalize the distribution of total weights over states and resample your list of particles from this distribution.
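The full timestep (prediction, weighting, and the resample/reinitialize branch) can be sketched as follows; the two-state transition and sensor tables are hypothetical toy numbers:

```python
import random

def particle_filter_step(particles, obs, transition, sensor, rng):
    """One timestep: prediction update, then observation update with resampling."""
    n_states = len(transition)
    # Prediction update: pass each particle through the transition model.
    particles = [rng.choices(range(n_states), weights=transition[p])[0]
                 for p in particles]
    # Observation update: weight each particle by P(obs | state).
    weights = [sensor[p][obs] for p in particles]
    if sum(weights) == 0:
        # All particles are inconsistent with the evidence: reinitialize.
        return [rng.randrange(n_states) for _ in particles]
    # Resample the particle list from the normalized weight distribution.
    return rng.choices(particles, weights=weights, k=len(particles))

# Toy 2-state umbrella world (hypothetical numbers); states: 0=sunny, 1=rainy.
transition = [[0.7, 0.3], [0.3, 0.7]]   # transition[w][w'] = P(w'|w)
sensor = [[0.8, 0.2], [0.1, 0.9]]       # sensor[w][o] = P(o|w), o in {0, 1}
rng = random.Random(0)
particles = [rng.randrange(2) for _ in range(5000)]
for obs in (1, 1):                       # umbrella observed twice
    particles = particle_filter_step(particles, obs, transition, sensor, rng)
frac_rainy = particles.count(1) / len(particles)
```

With enough particles, `frac_rainy` approximates the exact forward-algorithm posterior for the same model.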
6. Value of Information: $VPI(E'|e) = \left(\sum_{e'} P(e'|e)\,MEU(e, e')\right) - MEU(e) = MEU(e, E') - MEU(e)$ . Properties: nonnegative ($\forall E', e: VPI(E'|e) \geq 0$); not additive in general ($VPI(E_j, E_k|e) \neq VPI(E_j|e) + VPI(E_k|e)$); order-independent ($VPI(E_j, E_k|e) = VPI(E_j|e) + VPI(E_k|e, e_j) = VPI(E_k|e) + VPI(E_j|e, e_k)$).

7. Data: Training set; Held out set; Test set

8. Naïve Bayes model: $P(Y, F_1, ..., F_n) = P(Y)\prod_iP(F_i|Y)$

9. Maximum Likelihood Estimation: $\widehat{\theta}_{MLE} = P_{ML}(x) = \frac{count(x)}{\text{total samples}}$

10. Laplace Smoothing: $P_{LAP, k}(x) = \frac{c(x) + k}{N + k|X|}$ , $P_{LAP, k}(x|y) = \frac{c(x, y) + k}{c(y) + k|X|}$
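The Naïve Bayes model and Laplace smoothing combine into a short classifier; the tiny spam dataset and its feature names below are made-up illustration data:

```python
from collections import Counter
from math import log

def train_naive_bayes(examples, k=1):
    """Fit a binary-feature Naive Bayes classifier with Laplace-k smoothing.
    examples: list of (label, feature_tuple) pairs with features in {0, 1}."""
    label_counts = Counter(y for y, _ in examples)
    n = len(examples)
    n_features = len(examples[0][1])
    # counts[y][i][v] = c(F_i = v, Y = y)
    counts = {y: [Counter() for _ in range(n_features)] for y in label_counts}
    for y, f in examples:
        for i, v in enumerate(f):
            counts[y][i][v] += 1

    def predict(f):
        best_label, best_logp = None, float("-inf")
        for y in label_counts:
            # log P(y) + sum_i log P_LAP,k(f_i | y); |X| = 2 for binary features
            logp = log(label_counts[y] / n)
            for i, v in enumerate(f):
                logp += log((counts[y][i][v] + k) / (label_counts[y] + k * 2))
            if logp > best_logp:
                best_label, best_logp = y, logp
        return best_label

    return predict

# Toy spam data (hypothetical): features = (contains "free", contains "click")
examples = [("spam", (1, 1)), ("spam", (1, 0)),
            ("ham", (0, 0)), ("ham", (0, 1)), ("ham", (0, 0))]
classify = train_naive_bayes(examples, k=1)
```

Working in log space avoids underflow when the product $\prod_i P(F_i|Y)$ has many small factors; with $k > 0$, no conditional probability is ever exactly zero.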

11. Binary Perceptron:

• Classify: $y = \begin{cases} +1 & \text{if } w \cdot f(x) \geq 0 \\ -1 & \text{if } w \cdot f(x) < 0 \end{cases}$

• If correct, no change.

• If wrong: $w = w + y^* \cdot f(x)$
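The update rule above converges on linearly separable data; a minimal sketch, using a made-up toy dataset whose last feature acts as a bias term:

```python
def perceptron_train(data, n_epochs=10):
    """Binary perceptron. data: list of (feature_vector, label), label in {+1, -1}."""
    w = [0.0] * len(data[0][0])
    for _ in range(n_epochs):
        for f, y_star in data:
            # Classify: y = +1 if w·f(x) >= 0 else -1
            y = 1 if sum(wi * fi for wi, fi in zip(w, f)) >= 0 else -1
            # If wrong, nudge the weights toward the true label: w = w + y*·f(x)
            if y != y_star:
                w = [wi + y_star * fi for wi, fi in zip(w, f)]
    return w

# Toy linearly separable data (hypothetical numbers; last feature = bias).
data = [((1, 2, 1), 1), ((2, 1, 1), 1), ((-1, -2, 1), -1), ((-2, -1, 1), -1)]
w = perceptron_train(data)
```

After training, $w \cdot f(x)$ has the correct sign on every training example.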

12. Multiclass perceptron:

• Predict: $y = \arg\max_{y}\ w_y \cdot f(x)$
• If wrong: $w_y = w_y - f(x)$ , $w_{y^*} = w_{y^*} + f(x)$
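The multiclass rule keeps one weight vector per class, lowering the score of the wrongly predicted class and raising the score of the true class. A minimal sketch, with a made-up three-class one-hot dataset:

```python
def multiclass_perceptron_train(data, n_classes, n_epochs=10):
    """data: list of (feature_vector, true_class y*) with classes 0..n_classes-1."""
    dim = len(data[0][0])
    w = [[0.0] * dim for _ in range(n_classes)]
    for _ in range(n_epochs):
        for f, y_star in data:
            # Predict: y = argmax_y w_y · f(x)  (ties go to the lowest class index)
            y = max(range(n_classes),
                    key=lambda c: sum(wc * fi for wc, fi in zip(w[c], f)))
            if y != y_star:
                # Punish the predicted class, reward the true class.
                w[y] = [wi - fi for wi, fi in zip(w[y], f)]
                w[y_star] = [wi + fi for wi, fi in zip(w[y_star], f)]
    return w

# Toy 3-class data with one-hot features (hypothetical).
data = [((1, 0, 0), 0), ((0, 1, 0), 1), ((0, 0, 1), 2)]
w = multiclass_perceptron_train(data, n_classes=3)
```

The binary perceptron is the special case $n_{classes} = 2$ with $w_{+1} = -w_{-1}$.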