Stochastic models have a variety of applications, including Google’s PageRank algorithm and reinforcement learning algorithms. These models can be thought of as mathematical processes or systems which can be described using a graph of states that are interconnected in some way. In short, any process or system that has an element of randomness to it is considered stochastic.
In mathematics, there are two types of processes:
- a deterministic process, where the next state is fully determined by present and past states (i.e. it’s pre-determined)
- and a stochastic process, where the next state is determined by a probability distribution
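The contrast between the two is easy to see in code. Here is a minimal sketch (the function names and the choice of a counter and a simple random walk are my own illustrations, not from the text):

```python
import random

# Deterministic process: the next state is fully determined
# by the current state, so repeated calls always agree.
def deterministic_step(state):
    return state + 1  # e.g. a simple counter

# Stochastic process: the next state is drawn from a
# probability distribution over possible moves.
def stochastic_step(state):
    return state + random.choice([-1, 1])  # a simple random walk

state = 0
print(deterministic_step(state))  # always 1
print(stochastic_step(state))     # -1 or 1, each with probability 1/2
```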
I really like Noah Burger’s layman’s explanation of stochastic processes, in which he states:
> A stochastic process can be thought of as a description of the movement of an object over time. At every step in time, the object could assume one of many possible states, and each state has a probability associated with it. So while we can’t determine the exact path the object will take, we can make inferences about the path it might take based on those probabilities.
In this series of posts, we will be focusing on Markov processes, a type of stochastic process that’s memoryless: its predictions for the future depend only on its present state, not on how it got there. More specifically, we’ll be looking at Markov chains, a type of Markov process with finite or countably infinite states evolving over discrete time.
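The memorylessness property can be sketched as follows: the step function receives only the current state, never the history. The two-state chain and its probabilities here are an illustration of mine, not an example from the text:

```python
import random

# Hypothetical two-state Markov chain. P[i][j] is the probability
# of moving from state i to state j; each row sums to 1.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(state):
    # The next state depends only on the current state (memorylessness).
    return 0 if random.random() < P[state][0] else 1

state = 0
path = [state]
for _ in range(10):
    state = step(state)
    path.append(state)
print(path)  # e.g. [0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1]
```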
Now, in the context of linear algebra, we’re used to writing vectors as columns and multiplying them by matrices from the right. When dealing with stochastic models, however, it’s more convenient to represent vectors as row vectors and multiply from the left. This makes transition matrices easier to interpret later on: the entry Pij can be read as the probability of moving from state i to state j.
Lastly, we can introduce a few basic definitions before jumping in. We say that a square matrix is stochastic if each of its columns sums to 1. In most of our examples, we’ll be considering a row stochastic matrix, where each row sums to 1. A Markov chain can be represented by a matrix, typically called a transition matrix, where the entry at row i and column j is the probability of moving from state i to state j. In a Markov chain, we say that two states, i and j, communicate if there exist m, n ≥ 0 such that the probability of moving from state i to state j in m steps is greater than 0, and the probability of moving from state j to state i in n steps is also greater than 0. Trivially, every state communicates with itself (take m = n = 0).
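These definitions translate directly into checks on the matrix: row-stochasticity is a row-sum test, and communication can be checked by looking at powers of P, since the (i, j) entry of P^m is the m-step transition probability. A sketch under those definitions, with the three-state matrix being my own example:

```python
import numpy as np

# Example row-stochastic transition matrix over three states.
P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])

def is_row_stochastic(M):
    # Nonnegative entries, and every row sums to 1.
    return bool(np.all(M >= 0) and np.allclose(M.sum(axis=1), 1.0))

def communicates(M, i, j, max_steps=None):
    # States i and j communicate if P^m[i, j] > 0 and P^n[j, i] > 0
    # for some m, n >= 0. Checking powers up to the number of states
    # suffices for reachability in a finite chain.
    n_states = M.shape[0]
    max_steps = max_steps if max_steps is not None else n_states
    Pk = np.eye(n_states)          # P^0 = I, so i always reaches itself
    reach_ij = reach_ji = False
    for _ in range(max_steps + 1):
        reach_ij = reach_ij or Pk[i, j] > 0
        reach_ji = reach_ji or Pk[j, i] > 0
        Pk = Pk @ M
    return reach_ij and reach_ji

print(is_row_stochastic(P))   # True
print(communicates(P, 0, 2))  # True: 0 reaches 2 via 1, and vice versa
```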