Computing the Steady-State Vector of a Markov Chain

This worksheet demonstrates the use of Maple to investigate Markov-chain models.

A Markov chain is a weighted digraph representing a discrete-time system that can be in any one of a number of discrete states. The nodes of the digraph represent the states, and the weight of the directed edge from state a to state b represents the probability (called the transition probability from a to b) that the system will move to state b in the next time period, given that it is currently in state a. The sum of the transition probabilities out of any node is, by definition, 1. The transition probabilities are stored in a transition matrix P, whose (i, j) entry is the transition probability from state i to state j. Consequently, each row of P sums to 1.

A well-known theorem of Markov chains states that the probability of the system being in state j after k time periods, given that it begins in state i, is the (i, j) entry of P^k.

A common question arising in Markov-chain models is: what is the long-term probability that the system will be in each state? The vector containing these long-term probabilities, denoted Pi, is called the steady-state vector of the Markov chain. This Maple application creates a procedure for answering this question. As a case study, we analyze a two-server computer network whose servers have known probabilities of going down or being fixed in any given hour. The goal is to compute the long-term probability that at least one server is working.

Algorithm for Computing the Steady-State Vector

We create a Maple procedure called steadyStateVector that takes as input the transition matrix of a Markov chain and returns the steady-state vector, which contains the long-term probabilities of the system being in each state. The input transition matrix may be in symbolic or numeric form.

The procedure steadyStateVector implements the following algorithm. Given an n x n transition matrix P, let I be the n x n identity matrix and Q = P - I. Let e be the n-vector of all 1's, and b the (n+1)-vector with a 1 in position n+1 and 0 elsewhere. To compute the steady-state vector, solve the linear system

    Transpose(<Q | e>) . Pi = b

for Pi, the steady-state vector of the Markov chain. Appending e to Q, and a final 1 to the end of the zero vector on the right-hand side, ensures that the components of the solution vector Pi sum to 1.

The input is a transition matrix P, and the output is the steady-state vector Pi, reflecting the long-term probability of the system being in each state. The key steps, recorded in the procedure's comments, are:

#MAKE THE PROCEDURE SELF CONTAINED BY LOADING REQUIRED PACKAGES INSIDE THE PROCEDURE
#EXTRACT THE DIMENSION OF THE TRANSITION MATRIX P
#APPEND VECTOR e TO Q AND TRANSPOSE THE RESULT
#b IS THE UNIT VECTOR WITH 1 IN POSITION n+1

Example Application: Reliability of a Two-Server Network

The problem is to estimate the long-term probability that at least one server in a two-server computer network is working during any given hour. We model this problem as a Markov chain as follows. Assume the network can be in one of three states: both servers working, exactly one server working, or neither server working. Let λ1 be the probability that a server fails when both were okay an hour ago.

[The state-transition diagram and the remaining parameter definitions appeared as figures in the original worksheet.]
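The body of the steadyStateVector procedure did not survive extraction, only its comments. Based on the algorithm and comments above, a Maple sketch might look like the following; the fully qualified LinearAlgebra:- names are one way (an assumption, not necessarily the original's) to keep the procedure self-contained without a global with(LinearAlgebra) call:

```
steadyStateVector := proc(P::Matrix)
    local n, Q, e, A, b;
    # make the procedure self-contained by calling package routines
    # through their fully qualified LinearAlgebra names
    n := LinearAlgebra:-RowDimension(P);       # extract the dimension of the transition matrix P
    Q := P - LinearAlgebra:-IdentityMatrix(n); # Q = P - I
    e := Vector(n, fill = 1);                  # e is the n-vector of all 1's
    A := LinearAlgebra:-Transpose(<Q | e>);    # append vector e to Q and transpose the result
    b := Vector(n + 1);                        # b is the unit vector with 1 in position n+1
    b[n + 1] := 1;
    LinearAlgebra:-LinearSolve(A, b);          # solve Transpose(<Q|e>) . Pi = b for Pi
end proc:
```

Because LinearSolve accepts both exact and floating-point matrices, a procedure along these lines handles symbolic as well as numeric transition matrices, as the worksheet notes.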
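To show how the two-server example might be run numerically, here is a hypothetical call; the transition probabilities below are invented for illustration and are not the worksheet's values, which appeared in the lost figures:

```
# Illustrative (invented) hourly transition matrix; rows and columns
# are ordered: state 1 = both servers up, 2 = one up, 3 = none up
P := Matrix([[0.90, 0.09, 0.01],
             [0.60, 0.30, 0.10],
             [0.10, 0.60, 0.30]]):

pi := steadyStateVector(P):

# long-term probability that at least one server is working
atLeastOneUp := pi[1] + pi[2];
```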