Limiting probabilities

The probability that a continuous-time Markov chain will be in state n at time t often converges to a limiting value that is independent of the initial state. We call this value P_n, where P_n is equal to:

$$P_n = \frac{\lambda_0\lambda_1\lambda_2\cdots\lambda_{n-1}}{\mu_1\mu_2\cdots\mu_n\left(1+\sum_{k=1}^\infty \frac{\lambda_0\lambda_1\lambda_2\cdots\lambda_{k-1}}{\mu_1\mu_2\cdots\mu_k}\right)}, \qquad n \geq 1.$$
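As a quick numerical illustration of the formula, the sketch below truncates the infinite sum at a finite N (an assumption; the rates and N are hypothetical choices, not from the text) and uses constant birth and death rates, which corresponds to an M/M/1 queue:

```python
# Sketch: evaluate the limiting-probability formula numerically,
# assuming constant rates lam_n = lam and mu_n = mu (an M/M/1 queue)
# and truncating the infinite sum at N terms (hypothetical cutoff).

def limiting_probs(lam, mu, N=200):
    """Return [P_0, ..., P_N] from the product/sum formula."""
    # terms[n] = (lam_0 * ... * lam_{n-1}) / (mu_1 * ... * mu_n); terms[0] = 1
    terms = [1.0]
    for n in range(1, N + 1):
        terms.append(terms[-1] * lam / mu)
    total = sum(terms)  # this is 1 + the infinite sum, truncated
    return [t / total for t in terms]

P = limiting_probs(lam=1.0, mu=2.0)
rho = 1.0 / 2.0
# For constant rates the closed form is P_n = (1 - rho) * rho**n
print(P[0])  # should be close to 1 - rho = 0.5
print(P[3])  # should be close to (1 - rho) * rho**3 = 0.0625
```

Truncating at N = 200 is harmless here because the products shrink geometrically; for rates where the sum diverges, no limiting distribution exists, as the condition below states.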

For the limiting probabilities to exist, it is necessary that

$$\sum_{n=1}^\infty \frac{\lambda_0\lambda_1\lambda_2\cdots\lambda_{n-1}}{\mu_1\mu_2\cdots\mu_n} < \infty.$$

This condition may be shown to be sufficient.

We can also determine the limiting probabilities of a birth and death process directly from its balance equations: for each state, equate the rate at which the process leaves that state with the rate at which it enters it.
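The balance condition can be checked numerically: the rate out of state n is (λ_n + μ_n)P_n, and the rate in is λ_{n-1}P_{n-1} + μ_{n+1}P_{n+1}. A minimal sketch, assuming hypothetical constant rates and reusing the truncated product formula above:

```python
# Sketch: verify the balance equations for a birth and death process.
# Rates below are a hypothetical example (lam_n = 1, mu_n = 2 for all n);
# the infinite state space is truncated at N (assumption).

def lam(n):
    return 1.0  # birth rate in state n (assumed constant)

def mu(n):
    return 2.0  # death rate in state n (assumed constant)

N = 200
terms = [1.0]
for n in range(1, N + 1):
    terms.append(terms[-1] * lam(n - 1) / mu(n))
total = sum(terms)
P = [t / total for t in terms]

# For each interior state: rate out == rate in.
for n in range(1, 50):
    rate_out = (lam(n) + mu(n)) * P[n]
    rate_in = lam(n - 1) * P[n - 1] + mu(n + 1) * P[n + 1]
    assert abs(rate_out - rate_in) < 1e-12
print("balance equations hold")
```

For a birth and death process these balance equations reduce to the detailed-balance relations λ_n P_n = μ_{n+1} P_{n+1}, from which the product formula above follows by induction.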