python, numpy, scipy, markov

Steady State Probabilities (Markov Chain) Python Implementation


Hi, I am trying to generate the steady-state probabilities for a transition probability matrix. Here is the code I am using:

import numpy as np

one_step_transition = np.array([[0.125     , 0.42857143, 0.75      ],
                                [0.75      , 0.14285714, 0.25      ],
                                [0.125     , 0.42857143, 0.        ]])


def steady_state_prop(p):
    # p is assumed to be row-stochastic (each row sums to 1).
    # Solve pi @ (p - I) = 0 together with pi.sum() == 1 by appending the
    # normalisation constraint as an extra column and solving the normal equations.
    dim = p.shape[0]
    q = (p - np.eye(dim))
    ones = np.ones(dim)
    q = np.c_[q, ones]                 # q has shape (dim, dim + 1)
    QTQ = np.dot(q, q.T)               # normal-equation matrix q @ q.T
    bQT = np.ones(dim)                 # (0, ..., 0, 1) @ q.T reduces to a vector of ones
    return np.linalg.solve(QTQ, bQT)

# The columns of one_step_transition sum to 1, so the transpose is the
# row-stochastic matrix the function expects.
steady_state_matrix = steady_state_prop(one_step_transition.transpose())

print (steady_state_matrix)

# result:
# array([0.38268793, 0.39863326, 0.21867882])

# expected result: (0.4, 0.4, 0.2)

My question is: why is the outcome slightly different from the exact answer?


Solution

  • The expected result is wrong. For a steady state, the product of the transition matrix and the steady-state vector must be the steady-state vector again; since the columns of one_step_transition sum to 1, the check here is np.dot(one_step_transition, pi) == pi.

    tobe = np.array((0.4, 0.4, 0.2))
    print(tobe)
    print(np.dot(one_step_transition, tobe))
    print()
    
    result = steady_state_prop(one_step_transition.T)
    print(result)
    print(np.dot(one_step_transition, result))
    print()
    

    Output is

    [0.4 0.4 0.2]
    [0.37142857 0.40714286 0.22142857]
    
    [0.38268793 0.39863326 0.21867882]
    [0.38268793 0.39863326 0.21867882]
    

    So your function seems to be correct; the result you expect is not.
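
    If you want an independent cross-check, here is a minimal sketch (not part of the original answer, using only numpy): because the columns of one_step_transition sum to 1, the stationary vector is the eigenvector of one_step_transition for eigenvalue 1, normalised so its entries sum to 1.

    # Sketch: stationary vector from the eigendecomposition.
    eigvals, eigvecs = np.linalg.eig(one_step_transition)
    idx = np.argmin(np.abs(eigvals - 1.0))   # eigenvalue closest to 1
    pi = np.real(eigvecs[:, idx])
    pi = pi / pi.sum()                       # normalise to a probability vector
    print(pi)                                # approximately [0.38268793 0.39863326 0.21867882]

    If you prefer scipy (it is in the tags), scipy.linalg.eig returns the same eigenpairs, and the normalised eigenvector agrees with the result of your function above.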