Monads for Markov chains

This is my first literate Haskell post, actually my first post on Haskell at all, and my first attempt at literate programming. I spent (wasted?) a lot of time trying to make the code segments render nicely on WordPress, with a process that was only partially automated and very tedious. I think I will cook up some script to automate it completely, if I ever get to write another such post.

Suppose you need to model a finite Markov chain in code. There are essentially two ways to do that: one is to simply run a simulation of the Markov chain, using a random number generator to obtain dice rolls and random cards from the decks; the other is to build a stochastic matrix containing the transition probabilities for each pair of states.
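To make the second option concrete, here is a tiny sketch of my own (the weather example and its numbers are not part of the original post) of a two-state chain represented as a stochastic matrix, using the same Data.Array machinery that appears later on:

import Data.Array

-- Toy two-state chain: state 0 = sunny, 1 = rainy.
-- Entry (i, j) holds the probability of moving from state i to state j,
-- so each row sums to 1.
weather :: Array (Int, Int) Double
weather = listArray ((0, 0), (1, 1)) [0.9, 0.1, 0.5, 0.5]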

In this post I will show how a single monadic description of the Markov chain dynamics can be used to obtain both a simulator and the transition matrix.

> {-# LANGUAGE MultiParamTypeClasses,
>     FunctionalDependencies,
>     FlexibleInstances,
>     GeneralizedNewtypeDeriving #-}
>
> import Control.Arrow
> import Control.Monad
> import Control.Monad.State.Strict
> import Data.Array
> import System.Random

Let’s start with an example of a Markov chain and how we would like to implement it in Haskell. Consider a simplified version of the familiar Monopoly game: there are 40 squares (numbered 0 to 39), you throw two 6-sided dice each turn, some special squares have particular effects (see below), and if you roll a double three times in a row, you go to jail. The special squares are:

30: go to jail
2, 17, 33: Community Chest
7, 22, 36: Chance

Community Chest (CC) and Chance (CH) make you take a card from a deck and move somewhere else depending on what’s written on the card. You will find the details in the code, so I won’t explain them here.

This is of course a Markov chain, where the states can be represented by:

> type Square = Int
> data GameState = GS {
>       position :: Square,
>       doubles :: Int } deriving (Eq, Ord, Show)

and a description of the game can be given in a monadic style like this:

> sGO :: Square
> sGO = 0
> 
> sJAIL :: Square
> sJAIL = 10
>
> finalize :: Square -> Game Square
> finalize n
>     | n == 2 || n == 17 || n == 33 = cc n
>     | n == 7 || n == 22 || n == 36 = ch n
>     | n == 30 = return sJAIL
>     | otherwise = return n
> 
> cc :: Square -> Game Square
> cc n = do i <- choose (1 :: Int, 16)
>           return $ case i of
>                      1 -> sGO
>                      2 -> sJAIL
>                      _ -> n
> 
> ch :: Square -> Game Square
> ch n = do i <- choose (1 :: Int, 16)
>           return $ case i of
>                      1 -> sGO
>                      2 -> sJAIL
>                      3 -> 11
>                      4 -> 24
>                      5 -> 39
>                      6 -> 5
>                      7 -> nextR n
>                      8 -> nextR n
>                      9 -> nextU n
>                      10 -> n - 3
>                      _ -> n
>     where
>       nextR n = let n' = n + 5
>                 in n' - (n' `mod` 5)
>       nextU n
>           | n >= 12 && n < 28 = 28
>           | otherwise = 12
> 
> roll :: Game (Int, Int)
> roll = let r1 = choose (1, 6)
>        in liftM2 (,) r1 r1
> 
> markDouble :: Bool -> Game ()
> markDouble True  = modify $ \s -> s { doubles = doubles s + 1 }
> markDouble False = modify $ \s -> s { doubles = 0 }
>
> goTo :: Square -> Game ()
> goTo n = let n' = n `mod` 40
>          in modify $ \s -> s { position = n' }
> 
> game :: Game ()
> game = do n <- liftM position get
>           (a, b) <- roll
>           markDouble (a == b)
>           d <- liftM doubles get
>           if d == 3
>            then do markDouble False
>                    goTo sJAIL
>            else do let n' = n + a + b
>                    n'' <- finalize n'
>                    goTo n''
> 

As you can see, Game is a state monad, with an additional function choose that gives us a random element of a range:

> class MonadState s m => MonadMC s m | m -> s where
>     choose :: (Enum a) => (a, a) -> m a

This can be implemented very easily using the (strict) state monad and a random generator:

> newtype MCSim s a = MCSim (State ([s], StdGen) a)
>     deriving (Functor, Applicative, Monad)
> 
> instance MonadState s (MCSim s) where
>     get = MCSim $ liftM (head . fst) get
>     put x = MCSim . modify $ \(xs, g) -> (x : xs, g)
> 
> instance MonadMC s (MCSim s) where
>     choose (a, b) = MCSim $
>                     do (xs, g) <- get
>                        let bnds = (fromEnum a, fromEnum b)
>                        let (y, g') = randomR bnds g
>                        put (xs, g')
>                        return . toEnum $ y
> 
> -- type Game a = MCSim GameState a
>
> runSim :: StdGen -> Int -> s -> MCSim s () -> [s]
> runSim g n start m = fst $ execState m' ([start], g)
>     where
>       (MCSim m') = foldr (>>) (return ()) $ replicate n m

The runSim function runs n steps of the simulation from the given starting state and returns the list of visited states, most recent first (put conses each new state onto the accumulating list).
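As a usage sketch of my own (not part of the original post: it assumes the commented-out MCSim-based definition of Game above is the one in effect, and the name simulate is made up):

-- only valid with:  type Game a = MCSim GameState a
simulate :: IO ()
simulate = do g <- getStdGen
              let visited = runSim g 1000 (GS 0 0) game
              print (take 10 visited)

This is already quite nice, but the best part is that the same code can be used to create the transition matrix, just by swapping in a new implementation of the Game type alias: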

> newtype MC s a = MC (s -> [(s, Double, a)])
> 
> instance Functor (MC s) where
>     fmap = liftM
> 
> instance Applicative (MC s) where
>     pure x = MC $ \s -> [(s, 1.0, x)]
>     (<*>) = ap
> 
> instance Monad (MC s) where
>     (MC m) >>= f = MC $ \s ->
>                    do (s', p, x) <- m s
>                       let (MC m') = f x
>                       (s'', q, y) <- m' s'
>                       return (s'', p * q, y)
> 
> instance MonadState s (MC s) where
>     get = MC $ \s -> return (s, 1.0, s)
>     put x = MC $ \s -> return (x, 1.0, ())
> 
> instance MonadMC s (MC s) where
>     choose (a, b) = let r = [a..b]
>                         p = recip . fromIntegral . length $ r
>                     in MC $ \s -> map (\x -> (s, p, x)) r
> 
> type Game a = MC GameState a

The idea is that we keep track of all possible destination states for a given state, with associated conditional probabilities. For those familiar with Eric Kidd’s series on probability monads, this is basically:

type MC s a = StateT s (PerhapsT []) a
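Spelling out the correspondence (my own unfolding, assuming the standard StateT and Eric Kidd’s PerhapsT, neither of which is defined in this post):

StateT s (PerhapsT []) a       -- unfold StateT:
  ~  s -> PerhapsT [] (a, s)   -- unfold PerhapsT:
  ~  s -> [Perhaps (a, s)]

and since a Perhaps is just a value tagged with a probability, each branch carries a final state, a weight and a result, which matches s -> [(s, Double, a)] up to the order of the fields.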

Now, how to get a transition matrix from such a monad? Of course, we have to require that the states are indexable:

> markov :: Ix s =>
>           MC s () -> (s, s) -> Array (s, s) Double
> markov (MC m) r = accumArray (+) 0.0 (double r) $
>                   range r >>= transitions
>     where
>       mkAssoc s (s', p, _) = ((s, s'), p)
>       transitions s = map (mkAssoc s) $ m s
>       double (a, b) = ((a, a), (b, b))

So we iterate over all states and use the probability values contained in the monad to fill in the array cells corresponding to each pair of states; accumArray takes care of summing the probabilities of branches that end up in the same destination state.
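As a quick illustration (a toy example of mine, not in the original post), here is the same machinery applied to a two-state chain over Bool, which already has Enum and Ix instances:

-- a fair coin flip, regardless of the current state
coin :: MC Bool ()
coin = do b <- choose (False, True)
          put b

-- markov coin (False, True) is then a 2x2 array whose entries are all 0.5:
-- from either state we move to False or True with probability 1/2.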

To actually apply this to our Monopoly example, we need to make GameState indexable:

> nextState :: GameState -> GameState
> nextState (GS p d) = if d == 2
>                      then GS (p + 1) 0
>                      else GS p (d + 1)
> 
> instance Ix GameState where
>     range (s1, s2) = takeWhile (<= s2) .
>                      iterate nextState $ s1
>     index (s1, s2) s =
>         let poss = (position s1, position s2)
>         in index poss (position s) * 3 +
>            doubles s - doubles s1
>     inRange (s1, s2) s = s1 <= s && s <= s2
>     rangeSize (s1, s2) = index (s1, s2) s2 + 1
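Two quick sanity checks of my own for this instance (the numbers are easy to verify by hand): the bounds used below give 40 * 3 = 120 states, and GS 10 1 lands at offset 10 * 3 + 1:

rangeSize (GS 0 0, GS 39 2)        -- 120
index (GS 0 0, GS 39 2) (GS 10 1)  -- 31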

Then, finally, we can try it out:

> monopoly :: (GameState, GameState)
> monopoly = (GS 0 0, GS 39 2)
> 
> initialState :: Array GameState Double
> initialState = let n = rangeSize monopoly
>                    p = recip $ fromIntegral n
>                in listArray monopoly $ replicate n p
> 
> statDistr :: Int -> [(GameState, Double)]
> statDistr n = let mat = markov game monopoly
>                   distributions = iterate (.* mat) initialState
>                   st = distributions !! n
>               in assocs st

where .* is a simple vector-matrix multiplication function:

> infixl 5 .*
> (.*) :: (Ix i, Num a) =>
>            Array i a -> Array (i, i) a -> Array i a
> (.*) x y = array resultBounds
>               [(i, sum [x!k * y!(k,i) | k <- range (l,u)])
>                | i <- range (l'',u'') ]
>         where (l, u) = bounds x
>               ((l', l''), (u', u'')) = bounds y
>               resultBounds
>                 | (l,u)==(l',u') = (l'', u'')
>                 | otherwise = error ".*: incompatible bounds"
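For instance (a worked example of mine, using the toy weather matrix introduced near the top of this post, which is my addition rather than the Monopoly chain):

listArray (0, 1) [0.5, 0.5] .* weather
    = listArray (0, 1) [ 0.5 * 0.9 + 0.5 * 0.5
                       , 0.5 * 0.1 + 0.5 * 0.5 ]
    = listArray (0, 1) [0.7, 0.3]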

Calling statDistr 100 returns an association list of states with the corresponding probabilities in an approximation of the stationary distribution, computed by applying the power method to the transition matrix: each iteration multiplies the current distribution by the matrix. The number 100 is a pure guess; I don’t know how to estimate the number of iterations necessary for convergence, but that is outside the scope of this post anyway.
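As a final sketch of mine (main is just an illustrative name, not part of the original program), printing part of the result:

main :: IO ()
main = mapM_ print (take 10 (statDistr 100))

Each printed pair is a state together with its approximate long-run probability.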



4 Responses to “Monads for Markov chains”

  1. Arnar Birgisson Says:

    Good post. Just wanted to chime in and tell you that I have no problems with highlighting on my WP blog. I use the WP-Syntax plug-in and it pretty much worked on first try. I had to tweak the CSS a bit to get pleasant colours.

    Feel free to email me if you want more information or are having problems.

  2. Avertedd Says:

    You can’t even find anything to nitpick!

  3. Best Blenders Says:

    Do you have a downloadable version of the source code?

  4. pcapriotti Says:

    @Best Blenders: You can copy-paste the blog text into an lhs file and compile it with ghc.
