Recently I needed to build a Hidden Markov Model (HMM). I had played with HMMs before, but it was a while ago, so I needed to brush up on the underlying concepts; for that, the Wikipedia article is actually quite effective. My objective was to take an off-the-shelf HMM implementation, train it, and use it to predict (i.e., treat the HMM algorithm itself as a black box).
Scikit-Learn, an open-source Python machine learning library, has several HMM implementations. The documentation is somewhat light, though, so I wanted to see if I could implement the Bob-Alice example from the Wikipedia article (there is a similar example in the Wikipedia article on the Viterbi algorithm), and whether the resulting HMM returned believable results.
The Bob-Alice example is described here. Here is the corresponding implementation using Python and scikit-learn.
from __future__ import division
import numpy as np
from sklearn import hmm
states = ["Rainy", "Sunny"]
n_states = len(states)
observations = ["walk", "shop", "clean"]
n_observations = len(observations)
# probability of starting in each hidden state
start_probability = np.array([0.6, 0.4])
# transition probabilities between hidden states (rows sum to 1)
transition_probability = np.array([
[0.7, 0.3],
[0.4, 0.6]
])
# probability of each observation given the hidden state
emission_probability = np.array([
[0.1, 0.4, 0.5],
[0.6, 0.3, 0.1]
])
# build the HMM and plug in the known parameters (no training needed here)
model = hmm.MultinomialHMM(n_components=n_states)
model._set_startprob(start_probability)
model._set_transmat(transition_probability)
model._set_emissionprob(emission_probability)
# predict a sequence of hidden states based on visible states
bob_says = [0, 2, 1, 1, 2, 0]
logprob, alice_hears = model.decode(bob_says, algorithm="viterbi")
print "Bob says:", ", ".join(map(lambda x: observations[x], bob_says))
print "Alice hears:", ", ".join(map(lambda x: states[x], alice_hears))
The output of this code is shown below. As you can see, it looks quite reasonable given the constraints in the example.
Bob says: walk, clean, shop, shop, clean, walk
Alice hears: Sunny, Rainy, Rainy, Rainy, Rainy, Sunny
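To convince myself that decode() was doing what I thought it was, I also ran a quick brute-force sanity check (not part of the original script above). The sequence is only six observations long over two states, so we can enumerate all 64 candidate hidden state sequences with plain NumPy and itertools and keep the one with the highest joint probability; it should agree with the Viterbi path and log-probability reported above. The snippet is meant to be appended to the script, since it reuses the variables defined there.
import itertools
best_prob, best_path = 0.0, None
for path in itertools.product(range(n_states), repeat=len(bob_says)):
    # joint probability P(states, observations) for this candidate path
    prob = start_probability[path[0]] * emission_probability[path[0], bob_says[0]]
    for prev, cur, obs in zip(path, path[1:], bob_says[1:]):
        prob *= transition_probability[prev, cur] * emission_probability[cur, obs]
    if prob > best_prob:
        best_prob, best_path = prob, path
print "Best path:", ", ".join(states[i] for i in best_path)
print "Log probability:", np.log(best_prob)  # should match logprob from decode()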
Even though it's a silly little example, it helped me understand how to model a Named Entity Recognizer as an HMM for a Coursera class I am taking. Hopefully it helps you with something as well (or at least you found it interesting :-)).
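For what it's worth, here is a rough sketch of what I mean by that, using the same API as above. The hidden states become entity tags, the observations become word indices, and the probability tables (which in practice you would estimate from a tagged corpus) are just made-up numbers to show the shape of the model, so don't read anything into them.
# purely illustrative: a toy NER model with hand-picked (made up) probabilities
ner_states = ["O", "PERSON", "LOCATION"]       # hidden states are entity tags
ner_vocab = ["john", "lives", "in", "london"]  # observations are words
ner_model = hmm.MultinomialHMM(n_components=len(ner_states))
ner_model._set_startprob(np.array([0.6, 0.3, 0.1]))
ner_model._set_transmat(np.array([
    [0.7, 0.2, 0.1],  # O        -> O / PERSON / LOCATION
    [0.6, 0.3, 0.1],  # PERSON   -> ...
    [0.8, 0.1, 0.1]   # LOCATION -> ...
]))
ner_model._set_emissionprob(np.array([
    [0.05, 0.55, 0.35, 0.05],  # words emitted from O
    [0.80, 0.05, 0.05, 0.10],  # words emitted from PERSON
    [0.10, 0.05, 0.05, 0.80]   # words emitted from LOCATION
]))
sentence = [0, 1, 2, 3]  # "john lives in london" as word indices
_, tags = ner_model.decode(sentence, algorithm="viterbi")
print " ".join("%s/%s" % (ner_vocab[w], ner_states[t]) for w, t in zip(sentence, tags))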