Uniformization (probability theory)

In probability theory, the uniformization method (also known as Jensen's method[1] or the randomization method[2]) is a technique for computing transient solutions of finite-state continuous-time Markov chains by approximating the process with a discrete-time Markov chain.[2] The original chain is scaled by the fastest transition rate γ, so that transitions occur at the same rate in every state, hence the name uniformization. The method is simple to program and efficiently calculates an approximation to the transient distribution at a single point in time (near zero).[1] It was first introduced by Winfried Grassmann in 1977.[3][4][5]

Method description

For a continuous-time Markov chain with transition rate matrix Q = (q_{ij}), the uniformized discrete-time Markov chain has probability transition matrix P = (p_{ij}), which is defined by[1][6][7]

\[ p_{ij} = \frac{q_{ij}}{\gamma} \quad \text{for } i \neq j, \qquad p_{ii} = 1 - \sum_{j \neq i} \frac{q_{ij}}{\gamma}, \]

with γ, the uniform rate parameter, chosen such that

\[ \gamma \geq \max_i |q_{ii}|. \]

In matrix notation:

\[ P = I + \frac{1}{\gamma} Q. \]
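For concreteness, here is a small worked example; the 2-state generator is an illustrative assumption, not taken from the article. With

\[ Q = \begin{pmatrix} -3 & 3 \\ 1 & -1 \end{pmatrix}, \qquad \gamma = \max_i |q_{ii}| = 3, \qquad P = I + \tfrac{1}{3} Q = \begin{pmatrix} 0 & 1 \\ \tfrac{1}{3} & \tfrac{2}{3} \end{pmatrix}, \]

each row of P sums to one, so P is a valid transition matrix of a discrete-time chain.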

For a starting distribution π(0), the distribution at time t, π(t), is computed by[1]

\[ \pi(t) = \sum_{n=0}^{\infty} \pi(0) P^n \, e^{-\gamma t} \frac{(\gamma t)^n}{n!}. \]

This representation shows that the continuous-time Markov chain can be described by a discrete-time Markov chain with transition matrix P as defined above, where jumps occur according to a Poisson process with intensity γ, so that the number of jumps made by time t is Poisson distributed with mean γt.
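The following sketch illustrates this interpretation by simulating the chain through a Poisson-distributed number of jumps of the uniformized chain. It reuses the hypothetical 2-state generator from the worked example above and checks the result against SciPy's matrix exponential; these choices are illustrative assumptions, not part of the method as described in the sources.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the interpretation above: the CTMC state at time t has the same
# distribution as the uniformized DTMC after N ~ Poisson(gamma * t) jumps.
# The generator Q, the time point t and the Monte Carlo check are illustrative.

rng = np.random.default_rng(0)

Q = np.array([[-3.0,  3.0],
              [ 1.0, -1.0]])          # hypothetical transition rate matrix
gamma = max(-np.diag(Q))              # uniform rate parameter, gamma >= max_i |q_ii|
P = np.eye(2) + Q / gamma             # uniformized DTMC transition matrix

def sample_state(start, t):
    """Sample X(t) by taking a Poisson(gamma*t) number of jumps of the DTMC."""
    state = start
    for _ in range(rng.poisson(gamma * t)):
        state = rng.choice(2, p=P[state])
    return state

t = 0.5
samples = np.array([sample_state(0, t) for _ in range(100_000)])
print("empirical P(X(t)=1 | X(0)=0):", samples.mean())
print("via expm  P(X(t)=1 | X(0)=0):", expm(Q * t)[0, 1])
```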

In practice the series is truncated after finitely many terms; the truncation point can be chosen so that the neglected Poisson weights sum to less than a prescribed error tolerance.
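A minimal sketch of the truncated series computation, under the same illustrative assumptions as above (a hypothetical 2-state generator, SciPy for the Poisson weights): the truncation point N is chosen so that the Poisson tail mass beyond N, and hence the 1-norm error of the dropped terms, is below a tolerance eps.

```python
import numpy as np
from scipy.stats import poisson

def uniformization(Q, pi0, t, eps=1e-10):
    """Approximate pi(t) = pi(0) exp(Qt) by the truncated uniformization series."""
    Q = np.asarray(Q, dtype=float)
    gamma = max(-np.diag(Q))                   # uniform rate parameter
    P = np.eye(Q.shape[0]) + Q / gamma         # uniformized DTMC transition matrix

    # Smallest N with P(Poisson(gamma*t) > N) <= eps; the dropped terms then
    # contribute at most eps to the result in the 1-norm.
    N = int(poisson.ppf(1.0 - eps, gamma * t)) + 1
    weights = poisson.pmf(np.arange(N + 1), gamma * t)

    v = np.asarray(pi0, dtype=float)           # holds pi(0) P^n
    result = weights[0] * v
    for n in range(1, N + 1):
        v = v @ P
        result = result + weights[n] * v
    return result

# Usage with the same hypothetical generator as in the previous sketch.
Q = [[-3.0, 3.0],
     [1.0, -1.0]]
print(uniformization(Q, pi0=[1.0, 0.0], t=0.5))  # approximates [1, 0] expm(0.5 Q)
```

For larger models the matrix-vector products would typically use a sparse representation of P, so that only vectors of the size of the state space need to be stored densely.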

Implementation

Pseudocode for the algorithm is included in Appendix A of Reibman and Trivedi's 1988 paper.[8] Using a parallel version of the algorithm, chains with state spaces of more than 10⁷ states have been analysed.[9]

Limitations

Reibman and Trivedi state that "uniformization is the method of choice for typical problems," though they note that for stiff problems some tailored algorithms are likely to perform better.[8]

Notes

  1. Stewart, William J. (2009). Probability, Markov chains, queues, and simulation: the mathematical basis of performance modeling. Princeton University Press. p. 361. ISBN 0-691-14062-6.
  2. Ibe, Oliver C. (2009). Markov processes for stochastic modeling. Academic Press. p. 98. ISBN 0-12-374451-2.
  3. Gross, D.; Miller, D. R. (1984). "The Randomization Technique as a Modeling Tool and Solution Procedure for Transient Markov Processes". Operations Research. 32 (2): 343–361. doi:10.1287/opre.32.2.343.
  4. Grassmann, W. K. (1977). "Transient solutions in Markovian queueing systems". Computers & Operations Research. 4 (1): 47–53. doi:10.1016/0305-0548(77)90007-7.
  5. Grassmann, W. K. (1977). "Transient solutions in Markovian queues". European Journal of Operational Research. 1 (6): 396–402. doi:10.1016/0377-2217(77)90049-2.
  6. Cassandras, Christos G.; Lafortune, Stéphane (2008). Introduction to discrete event systems. Springer. ISBN 0-387-33332-0.
  7. Ross, Sheldon M. (2007). Introduction to probability models. Academic Press. ISBN 0-12-598062-0.
  8. Reibman, A.; Trivedi, K. (1988). "Numerical transient analysis of Markov models" (PDF). Computers & Operations Research. 15: 19. doi:10.1016/0305-0548(88)90026-3.
  9. Dingle, N.; Harrison, P. G.; Knottenbelt, W. J. (2004). "Uniformization and hypergraph partitioning for the distributed computation of response time densities in very large Markov models". Journal of Parallel and Distributed Computing. 64 (8): 908–920. doi:10.1016/j.jpdc.2004.03.017.