diff --git a/slides/talks/2017-1-EDBT-Inference/index.html b/slides/talks/2017-1-EDBT-Inference/index.html
index e7fdeb71..9e9e645c 100644
--- a/slides/talks/2017-1-EDBT-Inference/index.html
+++ b/slides/talks/2017-1-EDBT-Inference/index.html
@@ -71,11 +71,11 @@
Ying could not be here today. If you like her ideas, get in touch with her; she's on the job market. (If you don't, blame my presentation.)

Disclaimer
Joint probability distributions are expensive to store
$$p(D, I, G, S, J)$$
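A quick sketch of why the joint is expensive: with the arities of the classic "student" network this slide follows (D, I, S, J binary; G taking three grades — an assumption, since the slide does not state them), the explicit table needs one entry per assignment.

```python
from itertools import product

# Assumed arities (D, I, S, J binary; G ternary) -- illustrative,
# not stated on the slide.
arities = {"D": 2, "I": 2, "G": 3, "S": 2, "J": 2}

# An explicit joint table p(D, I, G, S, J) stores one number per assignment.
n_entries = 1
for k in arities.values():
    n_entries *= k

assignments = list(product(*(range(k) for k in arities.values())))
print(n_entries, len(assignments))  # 48 48
```

The count is multiplicative, so every extra variable multiplies the table size by its arity.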
The chain rule of probability lets us break apart the distribution
$$= p(D, I, G, S) \cdot p(J | D, I, G, S)$$
And conditional independence lets us further simplify
$$= p(D, I, G, S) \cdot p(J | G, S)$$
This is the basis for a type of graphical model called a "Bayes Net"
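A rough storage comparison makes the payoff concrete. Counting entries for the two factors in the slide's factorization versus the full joint (arities are the same assumption as before: D, I, S, J binary; G ternary):

```python
# p(D, I, G, S, J) = p(D, I, G, S) * p(J | G, S)
# Entry counts under the assumed arities; fully factoring the
# network along its graph would shrink this further.
joint_entries = 2 * 2 * 3 * 2 * 2                  # full joint
factored_entries = (2 * 2 * 3 * 2) + (3 * 2 * 2)   # p(D,I,G,S) + p(J|G,S)
print(joint_entries, factored_entries)  # 48 36
```

Dropping D and I from the conditioning set of J is exactly what the conditional-independence step buys.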
@@ -136,6 +136,7 @@
 $\cdot\;0.95$ $\cdot\;0.25$ $\cdot\;0.8$
+$=\;0.0665$

$G$ | # | $\sum p_{\psi_2}$ |
---|---|---|
1 | 3 | 0.348 |
2 | 4 | 0.288 |
3 | 4 | 0.350 |
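The hunk's added total "$=\;0.0665$" can be sanity-checked: the visible factors 0.95, 0.25, 0.8 alone give 0.19, so the elided leading factor appears to be 0.35, matching the G = 3 entry in the table. (This is an assumption — the start of the product lies outside this hunk.)

```python
# Visible factors from the hunk, plus the inferred leading 0.35
# (assumed to be the G = 3 entry above; the hunk truncates the product).
visible = 0.95 * 0.25 * 0.8
total = 0.35 * visible
print(round(visible, 4), round(total, 4))  # 0.19 0.0665
```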
$D$ | $G$ | # | $\sum p_{\psi_1}$ |
---|---|---|---|
0 | 1 | 1 | 0.126 |
1 | 1 | 2 | 0.222 |
0 | 2 | 2 | 0.238 |
1 | 2 | 2 | 0.050 |
0 | 3 | 2 | 0.322 |
1 | 3 | 2 | 0.028 |
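The $\sum p_{\psi_1}$ column above is one sum-out (marginalization) step of variable elimination. A minimal sketch, storing a factor as a dict keyed by assignment tuples; summing D out of the (D, G) rows reproduces the G table's column (0.348, 0.288, 0.350):

```python
from collections import defaultdict

def sum_out(factor, var, scope):
    """Marginalize `var` out of `factor`, whose keys are tuples over `scope`."""
    keep = [i for i, v in enumerate(scope) if v != var]
    out = defaultdict(float)
    for assignment, value in factor.items():
        out[tuple(assignment[i] for i in keep)] += value
    return dict(out), [scope[i] for i in keep]

# Rows of the (D, G) table above.
psi = {(0, 1): 0.126, (1, 1): 0.222, (0, 2): 0.238,
       (1, 2): 0.050, (0, 3): 0.322, (1, 3): 0.028}

# Summing out D leaves a factor over G alone:
# tau[(1,)] = 0.348, tau[(2,)] = 0.288, tau[(3,)] = 0.350.
tau, tau_scope = sum_out(psi, "D", ["D", "G"])
print(tau_scope)  # ['G']
```

Each elimination step groups rows on the surviving variables and adds, which is exactly what the # column (rows merged) is tracking.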
$I$ | $G$ | # | $\sum p_{\psi_1}$ |
---|---|---|---|
0 | 1 | 1 | 0.18 |
$I$ | $G$ | # | $\sum p_{\psi_1}$ |
---|---|---|---|
0 | 1 | 2 | 0.140 |
1 | 1 | 2 | 0.222 |
0 | 2 | 2 | 0.238 |