Hidden Markov Model, Three Fundamental Problems: Likelihood

Hidden Markov Models (HMMs) have an enormous range of uses, including communication, speech recognition, reinforcement learning, and bioinformatics. Wherever a problem involves a (discrete state) + (random process) combination, a Markov Model or Hidden Markov Model can be found.

An HMM has two ingredients: "hidden" and "Markov Model". The definition is as follows:
Let X_t and Y_t be discrete-time stochastic processes. The pair (X_t, Y_t) is a hidden Markov model if

  1. X_t is a Markov process and is NOT directly observable ("hidden")
  2. Y_t is observable and P(Y_t | X_1, X_2, ..., X_t) = P(Y_t | X_t) .

Markov Process (Chain)

First, X_t is a Markov process. The formal definition is \,P(X_{t+1} | X_1, X_2, ..., X_t) = P(X_{t+1} | X_t)
An intuitive example: to predict tomorrow's weather, knowing today's weather already gives enough information. Telling me yesterday's weather, or the weather of the past five days, provides no additional information for predicting tomorrow.
Note that this does not mean tomorrow's weather depends only on today and is unrelated to yesterday or the past. That is,
P(X_{t+1} | X_{t-1}) \ne P(X_{t+1})

The correct way to relate them:
P(X_{t+1}=i | X_{t-1}=j) = \sum_k P(X_{t+1}=i | X_{t}=k, X_{t-1}=j) P(X_{t}=k | X_{t-1}=j)
= \sum_k P(X_{t+1}=i | X_{t}=k) P(X_{t}=k | X_{t-1}=j)
That is, tomorrow's weather is related to both today's and yesterday's weather. But once we know today's weather, we no longer need yesterday's.

Transition Probability Matrix

One usually defines P_{i,j}(t) = P(X_{t+1}=j | X_{t}=i) , the probability of going from state i to state j, which gives the transition probability matrix,
$latex P=\left[\begin{array}{cccccc}
P_{1,1} & P_{1,2} & \ldots & P_{1, j} & \ldots & P_{1, S} \\
P_{2,1} & P_{2,2} & \ldots & P_{2, j} & \ldots & P_{2, S} \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
P_{i, 1} & P_{i, 2} & \ldots & P_{i, j} & \ldots & P_{i, S} \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
P_{S, 1} & P_{S, 2} & \ldots & P_{S, j} & \ldots & P_{S, S}
\end{array}\right] $
and
\sum_{j=1}^{S} P_{i, j}=1
[@wikiMarkovChain2020]

Time-Homogeneous Markov Chain (or Stationary Markov Chain)

  • P_{i,j}(t) = P(X_{t+1}=j | X_{t}=i) = P(X_{t}=j | X_{t-1}=i) = P_{i,j}(t-1) for all t.
  • That is, P_{i,j}(t) does not depend on t.

Combining the two properties above, we can derive:

  • P(X_{t+1}=j | X_{t-1}=i) = (P*P)_{i,j} = P^2_{i,j}
  • P(X_{t+1}=j | X_{t-2}=i) = (P*P*P)_{i,j} = P^3_{i,j}
  • … and so on; a quick numeric check follows below.
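
A minimal sketch of this check. The 2-state transition matrix below is made up purely for illustration (it is not the example used later in this post):

import numpy as np

# A made-up 2-state transition matrix (each row sums to 1)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Two-step transition by the law of total probability:
# P(X_{t+1}=j | X_{t-1}=i) = sum_k P[i,k] * P[k,j]
two_step = np.array([[sum(P[i, k] * P[k, j] for k in range(2)) for j in range(2)]
                     for i in range(2)])

print(np.allclose(two_step, P @ P))                          # True: it is just P squared
print(np.allclose(P @ P @ P, np.linalg.matrix_power(P, 3)))  # n-step transition is P^n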

In addition, P(X_{t+1}=j) = \sum_{k}P(X_{t+1}=j | X_{t}=k) P(X_{t}=k) = \sum_k P_{k,j}P(X_{t}=k) .
Define \pi_j(t) = P(X_{t}=j) and \pi(t) = [\pi_1(t), \pi_2(t), ..., \pi_S(t)] , with \sum_k \pi_k(t) = 1 . We obtain

  • \pi(t+1) = \pi(t) P = \pi(t-1) P^2 = \pi(t-2) P^3 = ... = \pi(1) P^{t}
  • Assuming P is diagonalizable, we can do the eigendecomposition P = U\Sigma U^{-1} , with eigenvalues 1=|\lambda_1| > |\lambda_2|\ge |\lambda_3|\ge ... \ge |\lambda_S|.
  • \pi(t+1) = \pi(1) P^{t} = \pi(1) U \Sigma^{t} U^{-1} where \pi(1) is the initial distribution.
  • Let \pi(1) = \sum_i^S a_i u_i where u_i is the i-th (left) eigenvector corresponding to \lambda_i .
  • \pi(t+1) = \pi(1) U \Sigma^{t} U^{-1} = \lambda_1^{t} \{ a_1 u_1 + a_2 (\frac{\lambda_2}{\lambda_1})^{t} u_2 + a_3 (\frac{\lambda_3}{\lambda_1})^{t} u_3 + ... \} with \lambda_1=1
    = a_1 u_1 + a_2 (\lambda_2)^{t} u_2 + a_3 (\lambda_3)^{t} u_3 + ...
    As t\to\infty , \pi(\infty) = a_1 u_1 , where a_1 is fixed by normalization.

Stationary Distribution and Eigenvectors

Markov chains fall into two broad classes:

  1. Markov chain with an absorbing state: the focus is on the transient behavior, i.e., how many steps it takes to reach the absorbing state from the initial state. Commonly used in reinforcement learning.

    • Assuming state i is the absorbing state (sink), then P_{i,j}=\delta_{ij} . This makes P^{\infty} = [ 0,..,0, 1, 0, ..0 ] : only the i-th column is an all-ones column vector, the rest are zero column vectors. The corresponding eigenvector with eigenvalue = 1 is concentrated on state i, e.g. [1, 0, ...] when i = 1.
    • What about a source state, i.e. P_{i,i} = 0 ? It simply becomes the INIT state (see the example below). If it is not the INIT state, this state can simply be ignored.
  2. Markov chain without an absorbing state: the focus is on the stationary distribution reached after the effect of the initial state has died out (not a steady state, since the chain keeps wandering among all states). The stationary distribution of a Markov chain is widely used in queueing theory: the number of people in the queue corresponds to the state of X_t and can be observed directly and exactly. This is the biggest difference from the Hidden Markov Model discussed later, where the state is invisible or hidden and must be inferred through the observable Y.

    • \pi = \pi P\, with \sum_k \pi_k = 1
    • \pi is the normalized (left) eigenvector of P corresponding to eigenvalue = 1

Hidden Markov Model

The Hidden Markov Model lifts the Markov chain up a whole level. Mathematically it is simple: X_t becomes hidden or invisible, but through the observable Y_t we can obtain information about X_t , i.e. P(Y_t | X_t) = P(Y | X) assuming time homogeneity. This probability is called the emission probability and is usually written as a B matrix, shown below, where P(Y=y_j | X=x_i) = b_{ij} . It follows directly that P(Y) = \sum_X P(Y|X)P(X) ; in matrix form, \pi(t) B is the probability distribution of Y_t . [@wikiHiddenMarkov2020]
Another commonly used quantity is the backward (posterior) probability, using the Bayes formula:

P( X=x_i | Y=y_j) = P(X=x_i, Y=y_j) / P(Y=y_j) = P(Y=y_j | X=x_i) P(X=x_i) / P(Y=y_j)

= b_{ij} P(X=x_i) / \Sigma_i b_{ij} P(X=x_i) = b_{ij} \pi_i(t) / \Sigma_i b_{ij} \pi_i(t)

Let us look at a concrete example: [@burlandoHiddenMarkov2019]

It can be described with Python code:

observations = ('PAINT', 'CLEAN', 'SHOP', 'BIKE')
 
start_probability = {'Rainy': 0.4, 'Sunny': 0.6}
 
transition_probability = {
   'Rainy' : {'Rainy': 0.6, 'Sunny': 0.4},
   'Sunny' : {'Rainy': 0.2, 'Sunny': 0.8},
   }
 
emission_probability = {
   'Rainy' : {'PAINT': 0.3, 'CLEAN': 0.45, 'SHOP': 0.2, 'BIKE': 0.05},
   'Sunny' : {'PAINT': 0.4, 'CLEAN': 0.1, 'SHOP': 0.2, 'BIKE': 0.3},
   }

$latex P=\left[\begin{array}{cccccc}
0.6 & 0.4 \\
0.2 & 0.8
\end{array}\right]\quad $ and $latex \quad B=\left[\begin{array}{cccccc}
0.3 & 0.45 & 0.2 & 0.05 \\
0.4 & 0.1 & 0.2 & 0.3
\end{array}\right] $

First look at the Markov chain part: from P we can obtain the stationary distribution from the eigenvectors.
$latex P=\left[\begin{array}{cccccc}
-0.894 & -0.707 \\
0.447 & -0.707
\end{array}\right]
\left[\begin{array}{cccccc}
0.4 & 0 \\
0 & 1
\end{array}\right]
\left[\begin{array}{cccccc}
-0.745 & 0.745 \\
-0.471 & -0.943
\end{array}\right] $

then
$latex \left[\begin{array}{cccccc}
-0.745 & 0.745 \\
-0.471 & -0.943
\end{array}\right]
P =
\left[\begin{array}{cccccc}
0.4 & 0 \\
0 & 1
\end{array}\right]
\left[\begin{array}{cccccc}
-0.745 & 0.745 \\
-0.471 & -0.943
\end{array}\right] $

then
$latex \left[\begin{array}{cccccc}
-0.745 & 0.745
\end{array}\right]
P = 0.4
\left[\begin{array}{cccccc}
-0.745 & 0.745
\end{array}\right] $ eigenvector with eigenvalue = 0.4 (this controls how fast the effect of the initial state dies out)
$latex \left[\begin{array}{cccccc}
-0.471 & -0.943
\end{array}\right]
P =
\left[\begin{array}{cccccc}
-0.471 & -0.943
\end{array}\right] $ eigenvector with eigenvalue = 1 (used for the stationary distribution)
After normalization
$latex \left[\begin{array}{cccccc}
1/3 & 2/3
\end{array}\right]
P =
\left[\begin{array}{cccccc}
1/3 & 2/3
\end{array}\right] $

The initial distribution: \pi(1) = [0.4 \,\, 0.6] .

The stationary distribution after a long time: \pi(t) \to [1/3 \,\, 2/3] , independent of the initial distribution.
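
These numbers are easy to verify numerically; below is a minimal NumPy sketch (the variable names are mine) that extracts the eigenvalue-1 left eigenvector of P and checks the convergence:

import numpy as np

# Transition matrix of the example, state order [Rainy, Sunny]
P = np.array([[0.6, 0.4],
              [0.2, 0.8]])

# Left eigenvectors of P are right eigenvectors of P transposed
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))        # pick the eigenvalue closest to 1
pi_stat = np.real(eigvecs[:, idx])
pi_stat = pi_stat / pi_stat.sum()             # normalize so the components sum to 1
print(pi_stat)                                # ~[0.3333, 0.6667] = [1/3, 2/3]

# The effect of the initial distribution dies out at rate |lambda_2| = 0.4
pi1 = np.array([0.4, 0.6])                    # initial distribution [Rainy, Sunny]
print(pi1 @ np.linalg.matrix_power(P, 20))    # ~[0.3333, 0.6667], independent of pi1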

Next, look at how to infer the state from the output; this is important because in an HMM the state can only be inferred from the output (a short code sketch follows after the list of posteriors).
P( X=x_i | Y=y_j) = b_{ij} \pi_i(t) / \Sigma_i b_{ij} \pi_i(t)

P(X=r | O=P) = 0.3 \pi_r / (0.3 \pi_r + 0.4 \pi_s)
P(X=s | O=P) = 0.4 \pi_s / (0.3 \pi_r + 0.4 \pi_s)
P(X=r | O=C) = 0.45 \pi_r / (0.45 \pi_r + 0.1 \pi_s)
P(X=s | O=C) = 0.1 \pi_s / (0.45 \pi_r + 0.1 \pi_s)
P(X=r | O=S) = 0.2 \pi_r / (0.2 \pi_r + 0.2 \pi_s)
P(X=s | O=S) = 0.2 \pi_s / (0.2 \pi_r + 0.2 \pi_s)
P(X=r | O=B) = 0.05 \pi_r / (0.05 \pi_r + 0.3 \pi_s)
P(X=s | O=B) = 0.3 \pi_s / (0.05 \pi_r + 0.3 \pi_s)
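
These eight posteriors can be tabulated in a few lines; a minimal sketch (the variable names are mine), here evaluated at the stationary \pi = [1/3, 2/3] :

import numpy as np

states = ['Rainy', 'Sunny']
obs = ['PAINT', 'CLEAN', 'SHOP', 'BIKE']
B = np.array([[0.3, 0.45, 0.2, 0.05],   # emission matrix, rows follow the state order above
              [0.4, 0.1,  0.2, 0.3 ]])
pi = np.array([1/3, 2/3])                # stationary distribution [Rainy, Sunny]

# Bayes rule per observation (column-wise): P(X=x_i | Y=y_j) = b_ij * pi_i / sum_i b_ij * pi_i
joint = B * pi[:, None]
posterior = joint / joint.sum(axis=0)
for j, o in enumerate(obs):
    print(o, dict(zip(states, np.round(posterior[:, j], 3))))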

The key point is how this connects to real problems. There are three application scenarios: [@jurafskyHIddenMarkov2019]

  • The Likelihood problem (forward and backward algorithm)
  • The Decoding problem (Viterbi decoding algorithm)
  • The Learning problem (EM algorithm)

Likelihood Problem

Suppose the observation sequence is O = { PAINT, CLEAN, SHOP, BIKE }. What is the likelihood that this observation sequence O is generated by the HMM \lambda above, i.e. P(O|\lambda) ? What exactly is \lambda : an initial state, a known state sequence, or a partial state, e.g. the first day is Sunny?
None of the above. This is the difference between a likelihood and a (conditional) probability: \lambda is the parameter in the likelihood problem. Here it simply means the \lambda = (P, B) matrices.

Wrong Forward algorithm

In any case, we can use the forward algorithm to compute the likelihood.

The following is the wrong interpretation! One should use recursion instead of this direct calculation.

  • \pi(t) = \pi(t-1) P = \pi(t-2) P^2 = ... = \pi(1) P^{t-1} , and
  • \pi(t) B = \pi(1) P^{t-1} B is the marginal distribution of the observable at time t.

Take O = { PAINT, CLEAN, SHOP, BIKE } as an example: first compute \pi(1) , \pi(2)=\pi(1) P , \pi(3)=\pi(1) P^2 , \pi(4)=\pi(1) P^3 ,

\pi(1) = [\pi_r(1), \pi_s(1)] = [0.4, 0.6] \to P({Paint}(t=1)) = 0.3 \pi_r (1) + 0.4 \pi_s (1) = 0.3*0.4+0.4*0.6=0.36
\pi(2) = [\pi_r(2), \pi_s(2)] = [0.36, 0.64] \to P({Clean}(t=2)) = 0.45 \pi_r (2) + 0.1 \pi_s (2) = 0.226
\pi(3) = [\pi_r(3), \pi_s(3)] = [0.344, 0.656] \to P({Shop}(t=3)) = 0.2 \pi_r (3) + 0.2 \pi_s (3) =0.2
\pi(4) = [\pi_r(4), \pi_s(4)] = [0.3376, 0.6624] \to P({Bike}(t=4)) = 0.05 \pi_r (4) + 0.3 \pi_s (4) = 0.2156

From \pi(1), \pi(2), \pi(3), \pi(4) , we obtain
P(Y_1)=\pi(1)B, P(Y_2)=\pi(2)B, P(Y_3)=\pi(3)B, P(Y_4)=\pi(4)B

But this is not what we want!
We want P(O) = P(Y_1=O_1, Y_2=O_2, Y_3=O_3, Y_4=O_4)

where O_1 : PAINT, O_2 :CLEAN, O_3 :SHOP, O_4 :BIKE.
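
A quick check: simply multiplying these per-time marginals, 0.36 * 0.226 * 0.2 * 0.2156 ≈ 0.00351, implicitly assumes the Y_t are independent across time, and it does not equal the joint likelihood 0.003156 that the forward algorithm below will give.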

The correct method must use recursion!!

Correct Forward algorithm

The overall logic: P(O_1)\to P(O_1, O_2) \to ...\to P(O_1, O_2, ..., O_{t-1}, O_t)
It has three steps [@burlandoHiddenMarkov2019]:

  • Initialization: P(O_1)
  • Recursion: P(O_1,...,O_{t})\to P(O_1,..., O_t, O_{t+1})
  • Termination

Initialization

We decompose P(O_1) = P(O_1,X_1=s)+P(O_1,X_1=r)=\alpha_1(s)+\alpha_1(r)
That is, \alpha_1(i) = P(O_1,X_1=i) where i \in \{sunny, rainy\}
The expression above can be expanded using conditional probability:
\alpha_1(i) = P(O_1,X_1=i) = P(O_1 | X_1=i)P(X_1=i) = b_{i O_1} \pi_i(1)
P(O_1) = \sum_{i\in\{s,r\}}\alpha_1(i) = \sum_{i\in\{s,r\}} b_{i O_1} \pi_i(1)

In our example: \alpha_1(i) = P(O_1,X_1=i)

  • i = "sunny": \alpha_1(s) = \pi_s(1)*b_{s O_1}=0.6*0.4=0.24
  • i = "rainy": \alpha_1(r) = \pi_r(1)*b_{r O_1}=0.4*0.3=0.12
  • P(O_1) = \alpha_1(s) + \alpha_1(r) = 0.36

Why break P(O_1) into \alpha_1(s)+\alpha_1(r) ? Because \alpha_t(i) can be computed recursively!

Recursion

This is completely different from what I had in mind before. The correct approach: start from P(O_1, O_2) .

From O_1 to O_2 :
P(O_1, O_2) = P(O_1,O_2,X_2=s)+P(O_1,O_2,X_2=r) =\alpha_2(s)+\alpha_2(r)

Why break P(O_1, O_2) into \alpha_2(s)+\alpha_2(r) ? Because \alpha_t(i) can be computed recursively!

\alpha_2(j) = P(O_1,O_2,X_2=j)\quad where j \in \{s(unny), r(ainy)\}
= P(O_2|O_1,X_2=j)P(O_1,X_2=j)\quad the first factor can drop O_1
= P(O_2|X_2=j)P(O_1,X_2=j)=b_{j O_2}P(O_1,X_2=j)
= \sum_{i \in \{s,r\}} b_{j O_2}P(O_1,X_1=i,X_2=j)\quad aiming to construct a recursion!
= \sum_{i \in \{s,r\}} b_{j O_2}P(X_2=j|O_1,X_1=i)P(O_1,X_1=i)
= \sum_{i \in \{s,r\}} b_{j O_2}P(X_2=j|O_1,X_1=i)\alpha_1(i)\quad the middle factor can drop O_1
= \sum_{i \in \{s,r\}} b_{j O_2}p_{ij}\alpha_1(i)

This is exactly the result we want: \alpha_2(j) = f_j(\alpha_1(s), \alpha_1(r))\quad where j \in \{s, r\}
Can we go one step further and get P(O_1, O_2) = f( P(O_1) ) ? NO!!!

P(O_1, O_2) = \sum_j \alpha_2(j) = \sum_{j,i \in \{s,r\}} b_{j O_2}p_{ij}\alpha_1(i) , which cannot be simplified into f(\sum_i \alpha_1(i))=f( P(O_1) )

Back to the example:
\alpha_2(sunny) = P(O_1,O_2,X_2=sunny) = \alpha_1(sunny)*p_{ss}*b_{sO_2} + \alpha_1(rainy)*p_{rs}*b_{sO_2}
=0.24*0.8*0.1 + 0.12*0.4*0.1 = 0.024
Note \alpha_1(sunny) = P(O_1, X_1=sunny)\to it contains both the O_1 and the X_1=sunny information.

The other branch:
\alpha_2(rainy) = P(O_1,O_2,X_2=rainy) = \alpha_1(sunny)*p_{sr}*b_{rO_2} + \alpha_1(rainy)*p_{rr}*b_{rO_2}
=0.24*0.2*0.45 + 0.12*0.6*0.45 = 0.054
P(O_1, O_2) = \alpha_2(sunny) + \alpha_2(rainy) = 0.078

From O_{t} to O_{t+1} :
P(O_1,...,O_{t+1}) = P(O_1,...,O_{t+1},X_{t+1}=s)+P(O_1,...,O_{t+1},X_{t+1}=r) =\alpha_{t+1}(s)+\alpha_{t+1}(r)

\alpha_{t+1}(j) = P(O_1,...,O_t,O_{t+1},X_{t+1}=j)\quad where j \in \{s(unny), r(ainy)\}
= P(O_{t+1}|O_1,...,O_t,X_{t+1}=j)P(O_1,...,O_t,X_{t+1}=j)\quad the first factor can drop O_1, ..., O_t
= P(O_{t+1}|X_{t+1}=j)P(O_1,...,O_t,X_{t+1}=j)=b_{j O_{t+1}}P(O_1,...,O_t,X_{t+1}=j)
= \sum_{i \in \{s,r\}} b_{j O_{t+1}}P(O_1,...,O_t,X_t=i,X_{t+1}=j)\quad aiming to construct a recursion!
= \sum_{i \in \{s,r\}} b_{j O_{t+1}}P(X_{t+1}=j|O_1,...,O_t,X_t =i)P(O_1,...,O_t,X_t =i)
= \sum_{i \in \{s,r\}} b_{j O_{t+1}}P(X_{t+1}=j|O_1,...,O_t,X_t =i)\alpha_t(i)\quad the middle factor can drop O_1,...,O_t
= \sum_{i \in \{s,r\}} b_{j O_{t+1}}p_{ij}\alpha_t(i)

The recursion is summarized below:

  • P(O_1, O_2, ..., O_t, O_{t+1}) = \sum_j \alpha_{t+1}(j) . Why decompose it into \sum_j \alpha_{t+1}(j) ?
  • Because P(O_1, O_2, ..., O_t, O_{t+1}) itself cannot be made recursive, but \alpha_{t+1}(j) can!
  • \alpha_{t+1}(j)=\sum_i \alpha_{t}(i) p_{i j} b_{j O_{t+1}} \to P(O_1, O_2, ..., O_t, O_{t+1}) = \sum_j \sum_i \alpha_{t}(i) p_{i j} b_{j O_{t+1}}
  • Forward algorithm, case closed! So P(O) = P(O_1,O_2,O_3,O_4) = 0.0028512 + 0.0003048 = 0.003156; a short code sketch follows below.
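
The recursion above translates directly into code. Below is a minimal NumPy sketch (the function and variable names are mine, not from the original post) that reproduces the numbers of this example:

import numpy as np

P = np.array([[0.6, 0.4],                 # transition matrix, state order [Rainy, Sunny]
              [0.2, 0.8]])
B = np.array([[0.3, 0.45, 0.2, 0.05],     # emission matrix, observation order [PAINT, CLEAN, SHOP, BIKE]
              [0.4, 0.1,  0.2, 0.3 ]])
pi1 = np.array([0.4, 0.6])                # initial distribution [Rainy, Sunny]
O = [0, 1, 2, 3]                          # observed sequence PAINT, CLEAN, SHOP, BIKE

def forward(pi1, P, B, O):
    # alpha[t, j] = P(O_1, ..., O_{t+1}, X_{t+1}=j)   (0-based index t)
    alpha = np.zeros((len(O), len(pi1)))
    alpha[0] = pi1 * B[:, O[0]]                        # initialization: alpha_1(j) = pi_j(1) * b_{j,O_1}
    for t in range(1, len(O)):
        alpha[t] = (alpha[t - 1] @ P) * B[:, O[t]]     # recursion: alpha_{t+1}(j) = sum_i alpha_t(i) p_ij b_{j,O_{t+1}}
    return alpha

alpha = forward(pi1, P, B, O)
print(alpha[0])              # [0.12, 0.24] = [alpha_1(rainy), alpha_1(sunny)]
print(alpha[-1])             # [0.0003048, 0.0028512]
print(alpha[-1].sum())       # P(O) = 0.003156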

Backward algorithm

Usually, to solve the Likelihood problem we don't really need the Backward Algorithm. However, working through it is a good cross-check that the Forward algorithm works correctly, and understanding it now prepares us for the third problem, Learning, where it will be needed.
The Backward Algorithm is similar to the Forward algorithm, but like the name suggests it goes backward in time. There’s again an Initialization, a Recursion and a Termination.

Initialization

What this simple equation tells us is that at time T (at the end of the observation sequence) the backward variable of every state equals 1. Simple as that.
\beta_T(i) = 1

In our example T = 4 and there is no observation after time 4, so by convention

\beta_4(s) = 1
\beta_4(r) = 1

That is, \beta_T(i) = 1 for every state i \in \{sunny, rainy\} : the backward variable at the final time conditions on an empty set of future observations.

Recursion

From O_4 to O_3

\beta_3(s) = P(O_4 | X_3=s) = P(O_4, X_4=s | X_3=s)+ P(O_4, X_4=r | X_3=s)

= P(O_4 | X_3=s, X_4=s) P(X_4=s | X_3=s) + P(O_4 | X_3=s, X_4=r) P(X_4=r | X_3=s)\quad given X_4 , we can drop X_3 w.r.t. O_4

= p_{ss} b_{s O_4} + p_{sr} b_{r O_4} = 0.8*0.3 + 0.2*0.05 = 0.25

\beta_3(r) = P(O_4 | X_3=r) = P(O_4, X_4=s | X_3=r)+ P(O_4, X_4=r | X_3=r)

= P(O_4 | X_3=r, X_4=s) P(X_4=s | X_3=r) + P(O_4 | X_3=r, X_4=r) P(X_4=r | X_3=r)

= p_{rs} b_{s O_4} + p_{rr} b_{r O_4} = 0.4*0.3 + 0.6*0.05 = 0.15

From O_{t+1} to O_{t}

\beta_t(i) = P(O_{t+1},...O_T | X_t=i)
= \sum_j P(O_{t+1},...O_T, X_{t+1}=j | X_t=i)
= \sum_j P(O_{t+1},...O_T | X_{t+1}=j, X_t=i) P(X_{t+1}=j | X_t=i)\quad drop X_t=i in the first factor

= \sum_j P(O_{t+1}, O_{t+2}, ...O_T | X_{t+1}=j) p_{ij}\quad

= \sum_j P(O_{t+2}, ...O_T | X_{t+1}=j, O_{t+1}) P( O_{t+1} | X_{t+1}=j) p_{ij}\quad drop O_{t+1} in the first factor

= \sum_j P(O_{t+2}, ...O_T | X_{t+1}=j) b_{j O_{t+1}} p_{ij} = \sum_j \beta_{t+1}(j) p_{ij} b_{j O_{t+1}}

Termination

\beta_{t}(j) is a conditional probability: P(O_{t+1},...,O_T)=\sum_j \beta_{t}(j) P(X_t=j) .

P(O)=P(O_1,...,O_T) = P(O_1,...,O_T | X_0)\quad conditioning on a dummy start state X_0 changes nothing
= \sum_i P(O_1, ..., O_T, X_1=i | X_0)
= \sum_i P(O_1, ..., O_T | X_1=i, X_0) P(X_1=i | X_0)
= \sum_i P(O_1, ..., O_T | X_1=i) P(X_1=i | X_0)
= \sum_i P(O_2, ..., O_T | O_1, X_1=i) P(O_1 | X_1=i) P(X_1=i | X_0)
= \sum_i \beta_1(i) b_{i O_1} P(X_1=i | X_0)

For example:

  • \beta_1(s) = P(O_2, O_3, O_4 | X_1=s)= 0.0071 and \beta_1(r)= P(O_2, O_3, O_4 | X_1=r)=0.0121
  • P(O)=0.0071*0.4*0.6 + 0.0121*0.3*0.4 = 0.003156, consistent with the previous result; a code sketch of the backward pass follows below.
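
The backward recursion and the termination step can be coded the same way; a minimal NumPy sketch (the function and variable names are mine) that reproduces the numbers above:

import numpy as np

P = np.array([[0.6, 0.4],                 # transition matrix, state order [Rainy, Sunny]
              [0.2, 0.8]])
B = np.array([[0.3, 0.45, 0.2, 0.05],     # emission matrix, observation order [PAINT, CLEAN, SHOP, BIKE]
              [0.4, 0.1,  0.2, 0.3 ]])
pi1 = np.array([0.4, 0.6])                # initial distribution [Rainy, Sunny]
O = [0, 1, 2, 3]                          # PAINT, CLEAN, SHOP, BIKE

def backward(P, B, O):
    # beta[t, i] = P(O_{t+2}, ..., O_T | X_{t+1}=i)   (0-based index t)
    beta = np.ones((len(O), P.shape[0]))               # initialization: beta_T(i) = 1
    for t in range(len(O) - 2, -1, -1):
        beta[t] = P @ (B[:, O[t + 1]] * beta[t + 1])   # recursion: beta_t(i) = sum_j p_ij b_{j,O_{t+1}} beta_{t+1}(j)
    return beta

beta = backward(P, B, O)
print(beta[2])                                 # [0.15, 0.25] = [beta_3(rainy), beta_3(sunny)]
print(beta[0])                                 # [0.0121, 0.0071] = [beta_1(rainy), beta_1(sunny)]
print((pi1 * B[:, O[0]] * beta[0]).sum())      # termination: P(O) = 0.003156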

Conclusion

Comparing the forward \alpha_{t}(j) = P(O_1,...,O_t,X_t=j) and the backward \beta_t(j) = P(O_{t+1},...O_T | X_t=j) , where j \in \{sunny, rainy\} ,
there are a few differences:

  • Both \alpha and \beta have recursive formulas.
    \alpha_{t+1}(j)=\sum_i \alpha_{t}(i) p_{i j} b_{j O_{t+1}}

    \beta_{t}(i)=\sum_j \beta_{t+1}(j) p_{i j} b_{j O_{t+1}}

  • \alpha_{t}(j) is a joint probability: P(O_1,...,O_t)=\sum_j \alpha_{t}(j)

  • \beta_{t}(j) is a conditional probability: P(O_{t+1},...,O_T)=\sum_j \beta_{t}(j) P(X_t=j) . Note this only applies for t = 1, 2, 3, ..., T-1, i.e., at most up to P(O_2, ..., O_T) .

  • The last step of the backward algorithm is a bit more roundabout: P(O)=P(O_1,...,O_T) = \sum_i \beta_1(i) b_{i O_1} P(X_1=i | X_0)

  • How do we compute P(X_t=j) ? It can be obtained from \pi(t) = \pi(1) P^{t-1} as before. For instance, P(O_T) = \sum_j \beta_{T-1}(j) P(X_{T-1}=j)

  • For example, P(O_4)=\sum_j \beta_{3}(j) P(X_3=j) = 0.25*0.656 + 0.15*0.344 = 0.2156, consistent with the previously computed results below:

  • \pi(4) = [\pi_r(4), \pi_s(4)] = [0.3376, 0.6624] \to P({Bike}(t=4)) = 0.05 \pi_r (4) + 0.3 \pi_s (4) = 0.2156

  • \pi(3) = [\pi_r(3), \pi_s(3)] = [0.344, 0.656] \to P({Shop}(t=3)) = 0.2 \pi_r (3) + 0.2 \pi_s (3) =0.2

One More Thing

The \alpha_t(j) and \beta_t(j) above can both be computed by recursion, which is an excellent result. Each time a new observable O_{t+1} arrives, we only need to add an incremental extra term and can reuse the previous results instead of recomputing from scratch. This saves both computation and storage.

This is only the appetizer. More important is the later discussion of Viterbi decoding, a special case of dynamic programming, where recursion dramatically reduces the decoding complexity. This is absolutely key to the practicality of error-correcting-code decoding, speech recognition, etc.

If your thinking stops here, you simply lack the courage to imagine. The following should be a surprise!
P(O \mid \lambda)=\sum_{j} \alpha_{t}(j) \beta_{t}(j)
The usefulness of this formula is divide-and-conquer! The computation of P(O) = P(O_1, O_2, ..., O_T) can be cut into two segments: one computed from the head, P(O_1, O_2, .., O_t) , and one from the tail, P(O_T, O_{T-1}, ..., O_{t+1}) . The two segments can be computed independently and in parallel, unconstrained by the recursion, and are combined at the end with the formula above!

How to prove it? Use: \quad P(A, B, C)= P(A | B , C) P(B| C) P(C)
\alpha_{t}(j) = P(O_1,...,O_t,X_t=j) and \beta_t(j) = P(O_{t+1},...O_T | X_t=j)
Let A = \{O_1, ..., O_t\} , B = \{O_{t+1}, ..., O_T\} , and C =\{X_t=j\}

P(O) = \sum_j P(O_1, O_2, ..., O_t, O_{t+1}, ..., O_T, X_t=j) = \sum_j P(A, B, C) = \sum_j P(A | B,C) P(B | C) P(C)
= \sum_j P(O_1, .., O_t | X_t=j, O_{t+1}, ..., O_T) P(O_{t+1}, ..., O_T | X_t=j) P(X_t=j)
= \sum_j P(O_1, .., O_t | X_t=j) P(O_{t+1}, ..., O_T | X_t=j) P(X_t=j)
= \sum_j P(O_1, .., O_t, X_t=j) P(O_{t+1}, ..., O_T | X_t=j)
= \sum_j \alpha_t(j) \beta_t(j)

Back to the earlier example:

\sum_j \alpha_1(j)\beta_1(j) = 0.24*0.0071 + 0.12*0.0121 = 0.003156
\sum_j \alpha_2(j)\beta_2(j) = 0.024*0.046 + 0.054*0.038 = 0.003156
\sum_j \alpha_3(j)\beta_3(j) = 0.00816*0.25 + 0.00744*0.15 = 0.003156
\sum_j \alpha_4(j)\beta_4(j) = 0.0028512*1 + 0.0003048*1 = 0.003156
In practice one can compute \alpha_1 \to \alpha_2 and, at the same time, \beta_4=1 \to \beta_3 \to \beta_2 , then combine \alpha_2 and \beta_2 to obtain the likelihood of the complete HMM in the fastest way.
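
A minimal self-contained check of this identity, recomputing \alpha and \beta with the same recursions as in the sketches above:

import numpy as np

P = np.array([[0.6, 0.4], [0.2, 0.8]])
B = np.array([[0.3, 0.45, 0.2, 0.05], [0.4, 0.1, 0.2, 0.3]])
pi1 = np.array([0.4, 0.6])
O = [0, 1, 2, 3]                                   # PAINT, CLEAN, SHOP, BIKE
T, S = len(O), len(pi1)

alpha = np.zeros((T, S))
beta = np.ones((T, S))
alpha[0] = pi1 * B[:, O[0]]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ P) * B[:, O[t]]     # forward recursion
for t in range(T - 2, -1, -1):
    beta[t] = P @ (B[:, O[t + 1]] * beta[t + 1])   # backward recursion

# sum_j alpha_t(j) * beta_t(j) gives the same likelihood P(O | lambda) at every t
print([round((alpha[t] * beta[t]).sum(), 6) for t in range(T)])   # [0.003156, 0.003156, 0.003156, 0.003156]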

New Exploration (To Be Complete)

Q1: Can the HMM likelihood be split into more segments? e.g. A = \{O_1, ..., O_{t_1}\} , B = \{O_{t_1 +1}, ..., O_{t_2}\} , C=\{O_{t_2 +1}, ..., O_{T}\}

A1: Let E =\{X_{t_1}=i\} and D =\{X_{t_2}=j\}

P(O) = \sum_i \sum_j P(A, B, C, D, E)
= \sum_i \sum_j P(A | B,C, D, E ) P(B| C, D, E )P(C|D, E)P(D|E) P(E)
= \sum_i \sum_j P(A|E)P(E) P(B|D,E) P(C|D) P(D|E)
= \sum_i \sum_j \alpha_{t_1}(i)\, \gamma^{t_2}_{t_1 +1}(i, j)\, P(X_{t_2}=j | X_{t_1}=i)\, \beta_{t_2}(j)\quad where \gamma^{t_2}_{t_1 +1}(i, j) = P(B|D,E) = P(O_{t_1 +1}, ..., O_{t_2} | X_{t_1}=i, X_{t_2}=j) and P(X_{t_2}=j | X_{t_1}=i) = (P^{t_2 - t_1})_{i,j}

Q2: How do we compute a middle segment of the HMM likelihood, P(O_{t_1 + 1}, ..., O_{t_2}) ?
A2: P(O_{t_1 + 1}, ..., O_{t_2}) = \sum_i \sum_j P(O_{t_1 + 1}, ..., O_{t_2} | X_{t_1}=i, X_{t_2}=j ) P(X_{t_1}=i, X_{t_2}=j)
= \sum_i \sum_j \gamma^{t_2}_{t_1 +1}(i, j) P(X_{t_1}=i, X_{t_2}=j)

This is the forward path:
\gamma^{t_2 +1}_{t_1 +1}(i,j)=\sum_i \gamma^{t_2}_{t_1 + 1}(i, j) p_{i j} b_{j O_{t+1}}\quad

This is the backward path:
\gamma^{t_2 }_{t_1 +1}(i,j)=\sum_j \gamma^{t_2}_{t_1 }(i, j) p_{i j} b_{j O_{t+1}}\quad

The forward path for the (middle) segment restarts the recursion at t_1 : P(O_{t_1}, ..., O_{t_2}) = \sum_j \alpha_{t_2}(j) , with
\alpha_{t_1}(j) = P(O_{t_1}, X_{t_1}=j) = P(O_{t_1}|X_{t_1}=j)P(X_{t_1}=j)=b_{j O_{t_1}} \pi_j(t_1) where the vector \pi(t_1) = \pi(1) P^{t_1-1}
\quad\quad\quad\alpha_{t+1}(j)=\sum_i \alpha_{t}(i) p_{i j} b_{j O_{t+1}}\quad\quad t from t_1 to t_2 - 1

Q3: Is using radix-4 better than splitting into multiple segments when the number of states is small?

Q4: Kalman filter

Q5: Dynamic programming!!

Reference

Burlando, Maria. n.d. “Hidden Markov Models 1: The Likelihood Problem.”
Medium. Accessed September 17, 2020.
https://medium.com/@Ayra_Lux/hidden-markov-models-part-1-the-likelihood-problem-8dd1066a784e.

Wiki. 2020a. “Hidden Markov Model.” Wikipedia.
https://en.wikipedia.org/w/index.php?title=Hidden_Markov_model&oldid=977869964.

———. 2020b. “Markov Chain.” Wikipedia.
https://en.wikipedia.org/w/index.php?title=Markov_chain&oldid=978157011.
