# Sheldon Ross 10: Example 4.04 (Transforming a Process into a Markov Chain)

Question: Suppose that whether or not it rains today depends on previous weather conditions through the last two days. Specifically, suppose that if it has rained for the past two days, then it will
rain tomorrow with probability 0.7; if it rained today but not yesterday, then it will rain tomorrow with probability 0.5; if it rained yesterday but not today, then it will rain tomorrow with probability 0.4; if it has not rained in the past two days, then it will rain tomorrow with probability 0.2. If we let the state at time n depend only on whether or not it is raining at time n, then the preceding model is not a Markov chain (why not?). However, we can transform this model into a Markov chain by saying that the state at any time is determined by the weather conditions during both that day and the previous day. In other words, we can say that the process is in:

• state 0 if it rained both today and yesterday
• state 1 if it rained today but not yesterday
• state 2 if it rained yesterday but not today
• state 3 if it did not rain either yesterday or today

Construct a Markov chain transition matrix for this process.

## Analytical Solution

It can be rather tricky to construct this matrix directly, so we will enumerate the transitions as a flowchart. (The two-letter labels list yesterday's weather first, then today's, so RN means rain yesterday, no rain today.)

• When the system is in state 0 (RR)
  • it stays in state 0 (RR) with probability 0.7
  • it goes to state 2 (RN) with probability 0.3
• When the system is in state 1 (NR)
  • it goes to state 0 (RR) with probability 0.5
  • it goes to state 2 (RN) with probability 0.5
• When the system is in state 2 (RN)
  • it goes to state 1 (NR) with probability 0.4
  • it goes to state 3 (NN) with probability 0.6
• When the system is in state 3 (NN)
  • it goes to state 1 (NR) with probability 0.2
  • it goes to state 3 (NN) with probability 0.8
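The transition rules above can also be derived mechanically from the four rain probabilities. Here is a short Python sketch (an illustration, not part of the original post): tomorrow's state is fully determined by whether it rained today plus whether it rains tomorrow.

```python
# States (yesterday-today): 0 = RR, 1 = NR, 2 = RN, 3 = NN.
# P(rain tomorrow | current state), from the problem statement.
rain_prob = {0: 0.7, 1: 0.5, 2: 0.4, 3: 0.2}

def next_state(state, rains_tomorrow):
    """Tomorrow's pair is (today's weather, tomorrow's weather)."""
    rained_today = state in (0, 1)  # states 0 (RR) and 1 (NR) mean rain today
    if rains_tomorrow:
        return 0 if rained_today else 1  # new pair RR or NR
    return 2 if rained_today else 3      # new pair RN or NN

# Fill the 4x4 transition matrix row by row.
P = [[0.0] * 4 for _ in range(4)]
for s, p in rain_prob.items():
    P[s][next_state(s, True)] = p
    P[s][next_state(s, False)] = 1 - p

for row in P:
    print([round(x, 2) for x in row])
```

The printed rows match the flowchart: each row holds the outgoing probabilities of one state and sums to 1.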

Writing this in the form of a matrix (rows index the current state, columns the next state), we get the following.

|            | 0 (RR) | 1 (NR) | 2 (RN) | 3 (NN) |
|------------|--------|--------|--------|--------|
| **0 (RR)** | 0.7    | 0      | 0.3    | 0      |
| **1 (NR)** | 0.5    | 0      | 0.5    | 0      |
| **2 (RN)** | 0      | 0.4    | 0      | 0.6    |
| **3 (NN)** | 0      | 0.2    | 0      | 0.8    |
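As a quick check on this matrix, repeatedly multiplying a distribution by it gives the long-run occupancy of each state. A pure-Python sketch (not part of the original post):

```python
# Transition matrix from the analytical solution above.
P = [[0.7, 0.0, 0.3, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.4, 0.0, 0.6],
     [0.0, 0.2, 0.0, 0.8]]

# Start from a uniform distribution and iterate pi <- pi P; the chain is
# irreducible and aperiodic, so this converges to the stationary distribution.
pi = [0.25] * 4
for _ in range(1000):
    pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]

print([round(x, 4) for x in pi])  # -> [0.25, 0.15, 0.15, 0.45]
```

Since states 0 (RR) and 1 (NR) both mean it rained today, the long-run probability of rain on any given day is 0.25 + 0.15 = 0.4.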

## Simulation Solution

Running the simulation to see which states are occupied in the long run yields the following result. What differs from chart to chart in the image below is the initial state.

## Code

```mathematica
Module[{iterations = 100000},
 Table[
  Module[{matrix = {{0.7, 0, 0.3, 0}, {0.5, 0, 0.5, 0},
      {0, 0.4, 0, 0.6}, {0, 0.2, 0, 0.8}},
    data, \[ScriptCapitalP], happinessAssociation},
   (* state labels list yesterday's weather first, then today's *)
   happinessAssociation = <|1 -> "Rain Rain", 2 -> "No-Rain Rain",
     3 -> "Rain No-Rain", 4 -> "No-Rain No-Rain"|>;
   \[ScriptCapitalP] = DiscreteMarkovProcess[initial, matrix];
   (* count how often each state is visited over the run *)
   data = KeySort@Counts[
      RandomFunction[\[ScriptCapitalP], {0, iterations}][[2, 1, 1]]];
   Overlay[{
     BarChart[data, Frame -> True, ImageSize -> 394, AspectRatio -> 1,
      PlotRange -> {Automatic, {0, 0.7 iterations}},
      ChartLabels -> Placed[{ToString /@ Values@data,
         Rotate[#, Pi/2] & /@ (happinessAssociation /@ Keys@data)},
        {Above, Bottom}],
      Epilog -> Text[Style["Initial State = " <>
          ToString[happinessAssociation[initial]], 12, Blue],
        {2.5, 0.675 iterations}]],
     Labeled[Grid[matrix, Frame -> All],
      Style["  Markov Chain \nTransition Matrix", Darker@Green], Top]},
    Alignment -> {-0.5, 0.5}]],
  {initial, Range[4]}] //
  Labeled[Grid[Partition[#, 2]],
    Style["Number of iterations per Markov Chain: " <>
      ToString@iterations, 14, Red], Top] &]
```
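For readers without Mathematica, here is a rough Python analogue of the simulation (an approximation for illustration, not the author's code): it walks the chain for 100,000 steps and reports the fraction of time spent in each state.

```python
import random

random.seed(0)  # reproducible run

# Transition matrix and state labels as in the Mathematica code above.
P = [[0.7, 0.0, 0.3, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.4, 0.0, 0.6],
     [0.0, 0.2, 0.0, 0.8]]
labels = ["Rain Rain", "No-Rain Rain", "Rain No-Rain", "No-Rain No-Rain"]

iterations = 100_000
counts = [0] * 4
state = 0  # initial state; the original code varies this per chart
for _ in range(iterations):
    state = random.choices(range(4), weights=P[state])[0]
    counts[state] += 1

# The occupancy fractions approach the stationary distribution
# regardless of the initial state.
for label, c in zip(labels, counts):
    print(f"{label}: {c / iterations:.3f}")
```

With this many steps the fractions land close to 0.25, 0.15, 0.15, and 0.45, matching the long-run behavior seen in the charts.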

End of the post
