A few weeks ago I wrote a tutorial on Markov Chains, where the example I gave was a model that generated text. I enjoyed writing that, but if you’re interested in using Markov Chains for a more practical purpose, that tutorial only introduced the idea. So now I want to walk through an example that gives you an easier entry point into Markov Chain programming.
Markov Chain – a random process where we assume the current state holds all the predictive power over the next one.
Car Rental Example
Suppose a car rental agency has three locations in Ottawa: a Downtown location, an East end location, and a West end location. The agency has a group of delivery drivers to serve all three locations. The agency’s statistician has determined the following:
- Of the calls to the Downtown location, 30% are delivered in the Downtown area, 30% are delivered in the East end, and 40% are delivered in the West end.
- Of the calls to the East end location, 40% are delivered in the Downtown area, 40% are delivered in the East end, and 20% are delivered in the West end.
- Of the calls to the West end location, 50% are delivered in the Downtown area, 30% are delivered in the East end, and 20% are delivered in the West end.
After making a delivery, a driver goes to the nearest location to make the next delivery. This way, the location of a specific driver is determined only by his or her previous location.
This is a Markov chain because where we are currently has predictive power over where we go next. In this case, the state space is Downtown, East, and West.
Note – the code is on my github, and if you actually want to try things out for yourself, I highly recommend viewing it here instead of through WordPress.
We can model the markov chain by a transition matrix.
There is a package in R that will make our lives easier. It’s called ‘markovchain’. Let’s define our model as a markovchain object. Calling the object is equivalent to calling its show() method.
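Here’s a sketch of how that setup might look (the variable names besides mcRental are my own choices; the exact code lives in the GitHub repo linked above):

```r
library(markovchain)

rentalStates <- c("Downtown", "East", "West")

# Rows are the current location, columns the next delivery location
rentalMatrix <- matrix(c(0.3, 0.3, 0.4,   # from Downtown
                         0.4, 0.4, 0.2,   # from East
                         0.5, 0.3, 0.2),  # from West
                       nrow = 3, byrow = TRUE,
                       dimnames = list(rentalStates, rentalStates))

mcRental <- new("markovchain",
                states = rentalStates,
                transitionMatrix = rentalMatrix,
                name = "Rental Cars")

mcRental  # calling the object invokes its show() method
```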
We can plot it using plot(mcRental).
Ok, let’s ask some more statistical questions that might pop up if this were a real situation. Given that we are downtown, what is the probability being back downtown in exactly 2 trips?
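You can answer this by hand by summing over the three possible intermediate stops: each two-trip path Downtown → k → Downtown contributes P(Downtown, k) · P(k, Downtown).

```r
# P(back Downtown in exactly 2 trips), one term per intermediate stop
0.3 * 0.3 +  # via Downtown
0.3 * 0.4 +  # via East
0.4 * 0.5    # via West
# = 0.41
```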
It turns out, though, that you can get the same result by raising the transition matrix to the power n, where n is the number of trips.
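With the markovchain package this is a one-liner, since the ^ operator is overloaded for markovchain objects (a sketch, reusing the mcRental object from above):

```r
mcRental ^ 2          # the two-trip transition matrix

(mcRental ^ 2)[1, 1]  # P(Downtown -> Downtown in 2 trips): 0.41
```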
The matrix shown above gives the probability of going from one location to another in exactly 2 trips.
We can do this for any number of trips. It makes logical sense that as the number of trips n increases, where you started has less and less predictive power over where you end up. We can demonstrate this fact by raising the transition matrix to a sufficiently large power.
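For instance, raising mcRental to the 100th power should show every row converging to (approximately) the same probabilities:

```r
mcRental ^ 100
# every row is approximately (0.389, 0.333, 0.278):
# where you started no longer matters
```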
This limiting distribution, for which the starting location is completely irrelevant, is known as the stationary distribution. It can be calculated with some linear algebra fanciness, but the markovchain package has a method called steadyStates(). Here is a question to make use of the method.
At the rental car company there are 70 drivers, and each driver is required to make 30 trips each day. How many drivers should we expect to end their day at the West end location?
First we need to verify that 30 is sufficiently large, which we can do by raising the transition matrix to the 30th power and checking that the rows are identical regardless of starting location.
After that, we can simply multiply the stationary distribution by the scalar 70, which gives the vector of expected driver counts at each location.
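A sketch of both steps, again reusing mcRental (output values are rounded):

```r
mcRental ^ 30  # the rows should be identical to several decimal places

70 * steadyStates(mcRental)
# Downtown ~ 27.2, East ~ 23.3, West ~ 19.4
```

So we’d expect roughly 19 of the 70 drivers to finish their day at the West end location.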
There are several more awesome functions in the markovchain package, two of which I want to highlight.
The conditional distribution just pulls out a row of the transition matrix. I love that there is a method with this name, because it highlights the fact that conditional probability is at the core of Markov chains.
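For example (a sketch; conditionalDistribution() takes the chain and the state you’re conditioning on):

```r
conditionalDistribution(mcRental, "Downtown")
# Downtown     East     West
#      0.3      0.3      0.4
```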
The summary function provides a more comprehensive view of the object. The concepts it reports (transient classes, irreducibility) are beyond the scope of this tutorial, but they’re worth noting.
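For completeness:

```r
summary(mcRental)
# reports the chain's communicating/transient classes
# and whether it is irreducible
```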