Dynamic programming turns up in many machine learning algorithms, perhaps because it excels at solving problems involving "non-local" information. I explore one machine learning technique, Hidden Markov Models, and the role dynamic programming plays in applying it. Then, I'll show a few real-world examples where Hidden Markov Models are used.
A Hidden Markov Model deals with inferring the state of a system given some unreliable or ambiguous observations of that system. One important characteristic of such a system is that its state evolves over time, producing a sequence of observations along the way. By incorporating some domain-specific knowledge, it's possible to take the observations and work backwards to a maximally plausible ground truth.
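To make this concrete, a Hidden Markov Model can be written down as a handful of probability tables. The sketch below uses a toy weather example; the state names, observation names, and probabilities are all illustrative assumptions, not part of the talk itself.

```python
# A minimal, hypothetical HMM: the hidden state is the weather,
# and the observations are activities we can actually see.

states = ["rainy", "sunny"]               # hidden states of the system
observations = ["walk", "shop", "clean"]  # ambiguous observations

# P(first state): where the system starts
initial = {"rainy": 0.6, "sunny": 0.4}

# P(next state | current state): how the hidden state evolves over time
transition = {
    "rainy": {"rainy": 0.7, "sunny": 0.3},
    "sunny": {"rainy": 0.4, "sunny": 0.6},
}

# P(observation | state): the unreliable signal each state emits
emission = {
    "rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
    "sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
}
```

The domain-specific knowledge mentioned above lives entirely in these tables: the transition table encodes how the state evolves, and the emission table encodes how observations relate to states.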
This talk explores Hidden Markov Models in three steps:
- First, I define Hidden Markov Models and how they apply to machine learning problems.
- Next, I build up an understanding of the Viterbi algorithm, used to infer the state of the system given a sequence of observations. This involves some basic math, but the goal is to form an intuition for the algorithm. Some sample Python code is presented to demonstrate how simple the algorithm is.
- Finally, I introduce several real-world applications of Hidden Markov Models in machine learning. In this section, real-world considerations like feature extraction and training are discussed.
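As a preview of the second step, the Viterbi algorithm itself fits in a short function. This is a minimal sketch, assuming an HMM expressed as dictionaries of initial, transition, and emission probabilities (the parameter names here are my own, not the talk's): the dynamic programming table `V[t][s]` holds the probability of the best state sequence ending in state `s` after the first `t + 1` observations.

```python
def viterbi(obs, states, initial, transition, emission):
    """Return the most plausible hidden-state sequence for `obs`."""
    # Base case: probability of each state given the first observation.
    V = [{s: initial[s] * emission[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}

    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Dynamic programming step: the best path ending in `s`
            # extends the best path ending in some previous state `p`.
            prob, prev = max(
                (V[t - 1][p] * transition[p][s] * emission[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path

    # The answer is the path to the most probable final state.
    best = max(states, key=lambda s: V[-1][s])
    return path[best]
```

Each step only needs the previous column of the table, which is what makes the algorithm run in O(T·S²) time for T observations and S states rather than enumerating all Sᵀ state sequences.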
Only basic math knowledge is expected: the ability to express concepts as equations and an understanding of Big-O notation. Basic Python knowledge is also expected, as code samples will be presented. The goal is to build up intuition.