Kolmogorov's equations for the probabilities of system states. Limit probabilities of states
Consider a technical system with discrete states in which a Markov random process with continuous time takes place. Assume that all the intensities of the event flows that transfer the system from state to state are constant, i.e. that all event flows are simplest (stationary Poisson) flows.
Let us pose the following question: what happens to the system as t → ∞? If the functions P_i(t) tend to certain limits, we will call these limits the limiting probabilities of the states.
We can prove the following general proposition.
If the number of states of the system is finite and from each state any other can be reached in a finite number of steps (a closed system, Fig. 2.8 a), then the limiting probabilities of the states exist, and they depend neither on time nor on the initial state of the system.
In this case, of course, the normalization condition ΣP_i = 1 (2.85) is preserved.
Fig. 2.8 a) Graph of a closed system
Fig. 2.8 b) Graph of an open system
Thus, as t → ∞, a limiting stationary mode is established in the system: the system still changes its states at random, but the probability of each state no longer depends on time; each state is realized with a certain constant probability P_i.
In this case the limiting probability P_i is the average relative time the system spends in the given i-th state; i.e., after the system reaches steady-state operation, it stays in state S_i a fraction of the time equal to P_i.
For example, if the system has states S₀, S₁, S₂ with limiting probabilities 0.4, 0.1, 0.5, then after reaching the steady state the system will spend 40% of the time in state S₀, 10% in state S₁, and 50% in state S₂.
To calculate the limiting probabilities, the left-hand sides of the Kolmogorov system of differential equations are set equal to zero (they are derivatives of constants, since the state probabilities no longer depend on time). The original system of differential equations then turns into a system of linear algebraic equations whose solution, together with the normalization condition (2.85), determines the limiting probabilities P_i.
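This reduction to a linear system is easy to sketch numerically. The generator matrix below is purely illustrative (its rates are assumed, not taken from the text); the sketch replaces one stationary equation with the normalization condition (2.85) and solves the rest:

```python
import numpy as np

# Hypothetical transition-rate (generator) matrix Q for a 3-state system:
# Q[i, j] (i != j) is the intensity lambda_ij; each diagonal entry is
# minus the sum of the other entries in its row.
Q = np.array([
    [-3.0,  2.0,  1.0],
    [ 1.0, -1.0,  0.0],
    [ 2.0,  0.0, -2.0],
])

# Stationary equations: p @ Q = 0. One of them is redundant, so we
# replace the last row with the normalization condition sum(p) = 1.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0

p = np.linalg.solve(A, b)
print(p, p.sum())
```

The same recipe works for any finite irreducible state graph: only the matrix Q changes.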
The labeled graph of a closed system has the following form.
Fig. 2.9. Labeled graph of a closed system.
Kolmogorov's system of differential equations:
The corresponding system of linear algebraic equations:
The solution of this system gives the values of the limiting probabilities.
In accordance with the well-known theorem of A. A. Markov, asymptotic estimates can be obtained for Markov chains with the ergodic property.
Definition 1. If the number of states of a system is finite and one can go from each state to any other in an arbitrary number of steps, then such a system is said to have the ergodic property.
Definition 2. Let a Markov process be characterized by the probabilities p_ij(t) (0 ≤ i ≤ n; 0 ≤ j ≤ n) of transition from state i to state j in time t.
A process is called transitive if there exists t > 0 such that p_ij(t) > 0 for all 0 ≤ i ≤ n, 0 ≤ j ≤ n. It follows from Definitions 1 and 2 that processes in Markov chains with the ergodic property are transitive.
Markov's theorem. For any transitive Markov process the limit lim (t → ∞) p_ij(t) = p_j exists and does not depend on the initial state i.
This means that as t → ∞ a certain limiting stationary regime is established in the system, characterized by a constant, time-independent probability of each of the states of the system. This probability is the average relative time the system stays in the given state. For instance, if the operating time of the whole system is 100 hours and the probability of state S₁ is p₁ = 0.15, the system will spend on average 15 hours in state S₁.
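The interpretation of a limiting probability as average relative occupancy time can be checked with a small simulation. The sketch below assumes a hypothetical two-state system (working/failed) with made-up failure and repair intensities; the fraction of time spent in the working state approaches the analytic limiting probability µ/(λ + µ):

```python
import random

random.seed(1)
lam, mu = 0.3, 0.7          # assumed failure / repair intensities
T = 200_000.0               # total simulated operating time

t, state, time_in = 0.0, 0, [0.0, 0.0]
while t < T:
    rate = lam if state == 0 else mu
    dwell = random.expovariate(rate)   # exponential sojourn time
    time_in[state] += min(dwell, T - t)
    t += dwell
    state = 1 - state                  # alternate working <-> failed

p0_hat = time_in[0] / T
print(p0_hat, mu / (lam + mu))   # empirical vs. limiting probability
```

Over a long horizon the empirical occupancy fraction and the limiting probability agree to a few decimal places.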
The limits to which the probabilities of the states of a Markov chain with the ergodic property tend as t → ∞ are called the limiting probabilities. When considering queueing systems (QS) we will deal only with ergodic Markov chains. Let V be some subset of the state set of the system S, and V′ its complement in S. If the set V has the ergodic property and no state of the set V′ can be reached from any state of V, then V is called a closed, or ergodic, set. Ergodic systems consist of a single ergodic set (S = V, V′ = ∅) and are therefore called indecomposable. If the system S contains an ergodic set V with V′ ≠ ∅, or if several ergodic sets can be distinguished in it, S = V₁ ∪ V₂ ∪ … ∪ Vₙ, then the system is called decomposable. Examples of such systems are shown in Fig. 1.3.
Figure 1.3a shows a system with two ergodic sets, V₁ = {S₂, S₃, S₄} and V₂ = {S₅, S₆}. In Figure 1.3b the ergodic set consists of a single state, {S₄}. If the ergodic set consists of only one state, this state is called absorbing: once the process enters it, it remains there forever. A characteristic feature of the state graph of an indecomposable ergodic Markov system is that every vertex of the graph is incident with arcs of both positive and negative incidence (i.e., every vertex has arcs directed both towards it and away from it; see, for example, Figs. 1.1 and 1.2).
The calculation of the limiting probabilities of the states of such systems is simplified: since all these probabilities are constants, their time derivatives are zero (dp_i/dt = 0 for all i). The left-hand sides of the Kolmogorov system of equations (1.7) are therefore set equal to zero, and it turns into a system of linear algebraic equations.
A nontrivial solution of system (1.8) exists only if the matrix Λ is singular; it was proved above that the transition probability density matrix Λ is singular. System (1.8), with one of its equations removed, is supplemented by the normalization condition
Relations (1.8) and (1.9) make it possible to determine the limiting probabilities of the states. Since the terms corresponding to arcs of negative incidence are positive, and those corresponding to arcs of positive incidence are negative, each equation of system (1.8) can be composed by the following mnemonic rule: for each state, the sum of the terms corresponding to incoming arcs equals the sum of the terms corresponding to outgoing arcs.
Example. For the system shown in Fig. 1.2, the Kolmogorov equations (1.7) give
(λ₁₂ + λ₁₃)p₁ = λ₄₁p₄,    (λ₄₁ + λ₄₅)p₄ = λ₃₄p₃,
λ₂₅p₂ = λ₁₂p₁ + λ₃₂p₃,    λ₅₃p₅ = λ₂₅p₂ + λ₄₅p₄,
(λ₃₂ + λ₃₄)p₃ = λ₁₃p₁ + λ₅₃p₅.    (1.10)
To solve (1.10), any one of the five equations must be discarded (for example, the fifth, as it contains the largest number of terms) and the normalization condition (1.9) used in its place.
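The mnemonic rule lets a system like (1.10) be assembled and solved mechanically from the arc list of the graph. In the sketch below the numeric intensities are illustrative assumptions, and the arc set mirrors equations (1.10):

```python
import numpy as np

# Arcs (i, j) of the graph in Fig. 1.2 with assumed intensities lambda_ij.
arcs = {(1, 2): 2.0, (1, 3): 1.0, (2, 5): 3.0, (3, 2): 1.0,
        (3, 4): 2.0, (4, 1): 1.0, (4, 5): 1.0, (5, 3): 2.0}

n = 5
A = np.zeros((n, n))
for (i, j), lam in arcs.items():
    A[j - 1, i - 1] += lam      # incoming term for state j
    A[i - 1, i - 1] -= lam      # outgoing term for state i

# Discard one balance equation in favor of the normalization condition (1.9).
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
p = np.linalg.solve(A, b)
print(p)
```

Each row of A before the replacement expresses the rule "incoming flow equals outgoing flow" for one state.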
The limiting probabilities of the states are used in queueing theory much more often than the solutions of the Kolmogorov equations themselves. Nevertheless, knowing the solution of the Kolmogorov system, one can determine when the transient process of the state probabilities ends, i.e. compute the time interval, measured from system start-up, after which the state probabilities reach their limiting values and estimates based on those values become valid. To conclude this section, we consider one particular but practically very important class of Markov processes widely used in the study of queueing systems: "birth and death" processes. These are Markov chains whose labeled graph consists of an elongated chain of states, as shown in Fig. 1.4.
The transition probability density matrix of such a system is a Jacobi (tridiagonal) matrix:
Considering the initial state S 0 , we obtain in accordance with (1.8)
λ₀₁p₀ = λ₁₀p₁ (1.11)
For the state S 1 we have
λ₀₁p₀ + λ₂₁p₂ = λ₁₀p₁ + λ₁₂p₁ (1.12)
Subtracting equality (1.11) from (1.12), we obtain
λ₂₁p₂ = λ₁₂p₁ (1.13)
Continuing this process up to the n-th state inclusive, we obtain
λₙ,ₙ₋₁ pₙ = λₙ₋₁,ₙ pₙ₋₁
From (1.11) we can now express p 1 in terms of p 0:
p₁ = p₀ (λ₀₁ / λ₁₀) (1.14)
Substituting (1.14) into (1.13), we obtain
p₂ = p₀ (λ₀₁ λ₁₂ / (λ₁₀ λ₂₁))
It is obvious that for an arbitrary k (1 ≤ k ≤ n) the following expression holds:
pₖ = p₀ (λ₀₁ λ₁₂ … λₖ₋₁,ₖ) / (λ₁₀ λ₂₁ … λₖ,ₖ₋₁) (1.15)
In accordance with (1.15) and the labeled state graph shown in Fig. 1.4, we can formulate a rule for expressing the limiting probabilities of the states of the birth-and-death process in terms of the probability p₀ of the initial state. The rule reads: the probability of an arbitrary state pₖ (1 ≤ k ≤ n) equals the probability of the initial state p₀ multiplied by a fraction whose numerator is the product of the transition probability densities along the arcs leading from left to right, and whose denominator is the product of the transition probability densities along the arcs leading from right to left, taken from the initial state to the k-th state inclusive.
The probability p₀ is found from the normalization condition and expressions (1.15) as follows:
p₀ = [1 + Σₖ₌₁ⁿ (λ₀₁ λ₁₂ … λₖ₋₁,ₖ) / (λ₁₀ λ₂₁ … λₖ,ₖ₋₁)]⁻¹ (1.16)
Expressions (1.15) and (1.16) completely determine the limiting probabilities of the birth-and-death process.
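Formulas (1.15) and (1.16) translate directly into code. The sketch below takes the left-to-right intensities λₖ₋₁,ₖ and right-to-left intensities λₖ,ₖ₋₁ as lists; the numeric values at the end are assumed purely for illustration:

```python
def birth_death_limits(lam, mu):
    """Limiting probabilities of a birth-and-death chain.

    lam[k] = intensity lambda_{k,k+1} (left-to-right arc),
    mu[k]  = intensity lambda_{k+1,k} (right-to-left arc), k = 0..n-1.
    """
    n = len(lam)
    # (1.15): p_k / p_0 is the running product of lam over mu.
    ratios = [1.0]
    for k in range(n):
        ratios.append(ratios[-1] * lam[k] / mu[k])
    # (1.16): p_0 from the normalization condition.
    p0 = 1.0 / sum(ratios)
    return [p0 * r for r in ratios]

# Assumed intensities for a 3-state chain S0 <-> S1 <-> S2.
p = birth_death_limits([2.0, 1.0], [1.0, 2.0])
print(p, sum(p))
```

The returned list is (p₀, p₁, …, pₙ) and always sums to one by construction.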
When considering Markov processes with discrete states and continuous time, it is convenient to imagine that all transitions of the system S from state to state occur under the influence of certain flows of events (a flow of calls, a flow of failures, a flow of repairs, etc.). If all the event flows that transfer the system S from state to state are simplest flows, then the process occurring in the system is Markovian (the simplest character of the flows is a sufficient but not necessary condition for a Markov process, since a simplest flow has no aftereffect: in it, the "future" does not depend on the "past").
If the system S is in some state S_i from which there is a direct transition to another state S_j, imagine that while the system is in state S_i it is acted upon by a simplest flow of events that moves it along the arrow. As soon as the first event of this flow occurs, the system jumps from S_i to S_j. For clarity, on the state graph we mark each arrow with the intensity of the event flow that moves the system along it. Denote by λ_ij the intensity of the flow of events that transfers the system from state S_i to S_j. Such a graph is called labeled (Fig. 4.8). (Let us return to the example of a technical device consisting of two nodes.)
Recall the states of the system:
S 0 - both nodes are operational;
S 1 - the first node is under repair, the second is serviceable;
S 2 - the second node is under repair, the first is serviceable;
S 3 - both units are under repair.
We will calculate the intensities of event streams that transfer the system from state to state, assuming that the average node repair time does not depend on whether one node is being repaired or both at once. This will be the case if a separate specialist is engaged in the repair of each node.
Let us find all the intensities of the event flows that transfer the system from state to state. Let the system be in state S₀. What flow of events takes it to state S₁? Obviously, the flow of failures of the first node. Its intensity λ₁ equals one divided by the mean time between failures of the first node. The flow of events that returns the system from S₁ to S₀ is the flow of repair completions of the first node. Its intensity µ₁ equals one divided by the mean repair time of the first node. The intensities of the event flows along all the other arrows of the graph in Fig. 4.9 are calculated similarly.
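A minimal sketch of this construction for the two-node device: the failure intensities λ₁, λ₂ and repair intensities µ₁, µ₂ below are assumed values, and the limiting probabilities obtained from the balance equations are cross-checked against the product form that holds when each node has its own repair specialist:

```python
import numpy as np

# Assumed intensities: lam_i = failure rate of node i, mu_i = repair rate.
lam1, lam2, mu1, mu2 = 1.0, 2.0, 2.0, 3.0

# States: 0 = both up, 1 = node 1 down, 2 = node 2 down, 3 = both down.
arcs = {(0, 1): lam1, (0, 2): lam2, (1, 0): mu1, (1, 3): lam2,
        (2, 0): mu2, (2, 3): lam1, (3, 1): mu2, (3, 2): mu1}

Q = np.zeros((4, 4))
for (i, j), r in arcs.items():
    Q[i, j] = r
np.fill_diagonal(Q, -Q.sum(axis=1))   # diagonal = minus total outflow

# Stationary equations p @ Q = 0 plus normalization.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 0.0, 1.0])
p = np.linalg.solve(A, b)

# Independent repairmen give a product form as a cross-check.
a1, a2 = mu1 / (lam1 + mu1), mu2 / (lam2 + mu2)
print(p)
print([a1 * a2, (1 - a1) * a2, a1 * (1 - a2), (1 - a1) * (1 - a2)])
```

The agreement of the two printed vectors reflects the fact that, with a separate specialist per node, the two nodes behave as independent two-state chains.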
Having a labeled system state graph, it is possible to construct a mathematical model of this process.
Let a system S be considered that has n possible states S₁, S₂, …, Sₙ. Let us call the probability of the i-th state the probability P_i(t) that at moment t the system is in state S_i. Obviously, for any moment the sum of all the state probabilities equals one:
P₁(t) + P₂(t) + … + Pₙ(t) = 1. (4.5)
Having a labeled state graph at our disposal, we can find all the state probabilities P_i(t) as functions of time. To do this, the Kolmogorov equations are composed and solved: a special kind of differential equations in which the unknown functions are the state probabilities.
Let us look at an example of how these equations are composed. Let the system S have four states S₁, S₂, S₃, S₄, whose labeled graph is shown in Fig. 4.10. Consider one of the state probabilities, for example P₁(t), the probability that at moment t the system is in state S₁. Give t a small increment Δt and find P₁(t + Δt), the probability that at moment t + Δt the system is in state S₁. How can this happen? Obviously, in two ways:
at moment t the system was already in state S₁, and during the time Δt it did not leave it; or
at moment t the system was in state S₂, and during the time Δt it passed from it to S₁.
Let us find the probability of the first option. The probability that at moment t the system was in state S₁ is P₁(t). This probability must be multiplied by the probability that, being in state S₁ at moment t, the system during the time Δt passes neither to S₂ nor to S₃. The total flow of events taking the system out of state S₁ is also a simplest flow, with intensity λ₁₂ + λ₁₃ (when two simplest flows are superimposed, their superposition is again a simplest flow, since stationarity, ordinariness, and absence of aftereffect are preserved). Hence the probability that during Δt the system leaves state S₁ is (λ₁₂ + λ₁₃)Δt, and the probability that it does not leave is 1 - (λ₁₂ + λ₁₃)Δt. The probability of the first option is therefore P₁(t)[1 - (λ₁₂ + λ₁₃)Δt].
Let us find the probability of the second option. It equals the probability that at moment t the system is in state S₂ and during the time Δt passes from it to state S₁, i.e. it equals P₂(t) λ₂₁ Δt.
Adding the probabilities of both options (by the rule of addition of probabilities), we get: P₁(t + Δt) = P₁(t)[1 - (λ₁₂ + λ₁₃)Δt] + P₂(t) λ₂₁ Δt.
Let us open the square brackets, move P₁(t) to the left-hand side, and divide both sides by Δt:
[P₁(t + Δt) - P₁(t)] / Δt = λ₂₁P₂(t) - (λ₁₂ + λ₁₃)P₁(t).
Let Δt tend to zero; on the left we obtain in the limit the derivative of the function P₁(t). Thus we write the differential equation for P₁(t):
dP₁(t)/dt = λ₂₁P₂(t) - (λ₁₂ + λ₁₃)P₁(t), or, discarding the argument t of the functions P₁, P₂:
dP₁/dt = λ₂₁P₂ - (λ₁₂ + λ₁₃)P₁. (4.6)
Arguing similarly for all other states, we write three more differential equations. As a result, we obtain a system of differential equations for state probabilities:
(4.7)
This is a system of four linear differential equations with four unknown functions P₁, P₂, P₃, P₄. One of them (any one) can be discarded by using the fact that P₁ + P₂ + P₃ + P₄ = 1: express any one of the probabilities P_i in terms of the others, substitute this expression into (4.7), and discard the corresponding equation with the derivative.
Let us now formulate the general rule for composing the Kolmogorov equations. On the left-hand side of each equation is the derivative of the probability of some (i-th) state. On the right-hand side is the sum, over all states from which arrows lead into the given state, of the products of their probabilities by the intensities of the corresponding event flows, minus the total intensity of all the flows leading out of the given state multiplied by the probability of the given (i-th) state.
Using this rule, we write the Kolmogorov equations for the system S (Fig. 4.9):
(4.8)
To solve the Kolmogorov equations and find the state probabilities, initial conditions must be specified. If the initial state S_i of the system is known exactly, then P_i(0) = 1 and all other initial probabilities are zero. Thus it is natural to solve equations (4.8) under the initial conditions P₀(0) = 1, P₁(0) = P₂(0) = P₃(0) = 0 (both nodes serviceable at the initial moment). When there are more than two or three equations, they are usually solved numerically on a computer.
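Such a numerical solution can be sketched with a simple explicit Euler scheme. The intensities below are assumed values, and the state numbering follows S₀-S₃ of the two-node example:

```python
import numpy as np

# Assumed intensities for the two-node system of Fig. 4.9.
lam1, lam2, mu1, mu2 = 1.0, 2.0, 2.0, 3.0
arcs = {(0, 1): lam1, (0, 2): lam2, (1, 0): mu1, (1, 3): lam2,
        (2, 0): mu2, (2, 3): lam1, (3, 1): mu2, (3, 2): mu1}
Q = np.zeros((4, 4))
for (i, j), r in arcs.items():
    Q[i, j] = r
np.fill_diagonal(Q, -Q.sum(axis=1))

# Kolmogorov equations dP/dt = P @ Q with initial state S0: P(0) = (1,0,0,0).
P = np.array([1.0, 0.0, 0.0, 0.0])
dt, T = 0.001, 10.0
for _ in range(int(T / dt)):
    P = P + dt * (P @ Q)     # explicit Euler step

print(P)   # by t = 10 the probabilities are close to their limiting values
```

Because each row of Q sums to zero, the Euler step preserves the normalization P₁ + … + Pₙ = 1 exactly.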
Thus, the Kolmogorov equations make it possible to find all the probabilities of states as a function of time.
Considering Markov processes with discrete states and continuous time, it will be convenient for us to imagine that all transitions of the system from state to state occur under the influence of some event flows (call flow, failure flow, recovery flow, etc.). If all the flows of events that transfer the system S from state to state are the simplest, then the process occurring in the system will be Markovian. This is natural, since the simplest flow does not have an aftereffect: in it the “future” does not depend on the “past”.
If the system S is in some state S_i from which there is a direct transition to another state S_j (an arrow leading from S_i to S_j on the state graph), then we imagine this as if the system, while it is in state S_i, is acted upon by a simplest flow of events that moves it along the arrow. As soon as the first event of this flow occurs, the system "jumps" from S_i to S_j.
For clarity, it is very convenient to mark each arrow of the state graph with the intensity of the event flow that moves the system along that arrow. Let us denote by λ_ij the intensity of the flow of events that transfers the system from state S_i to state S_j.
In Fig. 17.1 a state graph is given with the intensities marked on the arrows (we will call such a graph labeled).
Let us construct a labeled state graph for the example given in § 15 (a technical device of two nodes). Recall the states of the system:
S₀ - both nodes are serviceable;
S₁ - the first node is being repaired, the second is serviceable;
S₂ - the second node is being repaired, the first is serviceable;
S₃ - both nodes are being repaired.
We will calculate the intensities of event streams that transfer the system from state to state, assuming that the average node repair time does not depend on whether one node is being repaired or both at once.
This will be exactly the case if a separate specialist repairs each node. Let us find all the intensities of the event flows that transfer the system from state to state. Let the system be in state S₀. What flow of events takes it to state S₁? Obviously, the flow of failures of the first node. Its intensity λ₁ equals one divided by the mean time between failures of the first node. What flow of events returns the system from S₁ to S₀? Obviously, the flow of "repair completions" of the first node. Its intensity µ₁ equals one divided by the mean repair time of the first node. The intensities of the event flows along all the other arrows of the graph in Fig. 17.2 are calculated similarly.
Having at its disposal a labeled graph of system states, it is easy to build a mathematical model of this process.
Indeed, let a system S be considered that has n possible states S₁, S₂, …, Sₙ. Let us call the probability of the i-th state the probability P_i(t) that at moment t the system is in state S_i. Obviously, for any moment the sum of all the state probabilities equals one:
P₁(t) + P₂(t) + … + Pₙ(t) = 1. (17.1)
With a labeled state graph at your disposal, you can find all state probabilities as a function of time. For this, the so-called Kolmogorov equations are compiled and solved - a special type of differential equations in which the unknown functions are the probabilities of states.
We will show on a specific example how these equations are composed. Let the system S have four states S₁, S₂, S₃, S₄, whose labeled graph is shown in Fig. 17.3. Consider one of the state probabilities, for example P₁(t), the probability that at moment t the system is in state S₁. Give t a small increment Δt and find P₁(t + Δt), the probability that at moment t + Δt the system is in state S₁. How can this happen? Obviously, in two ways: either 1) at moment t the system was already in state S₁, and during the time Δt it did not leave it; or 2) at moment t the system was in state S₂, and during the time Δt it passed from it to S₁.
Let us find the probability of the first option. The probability that at moment t the system was in state S₁ is P₁(t). This probability must be multiplied by the probability that, being in state S₁ at moment t, the system during the time Δt passes neither to S₂ nor to S₃. The total flow of events taking the system out of state S₁ is also a simplest flow, with intensity λ₁₂ + λ₁₃ (when two simplest flows are superimposed, their superposition is again a simplest flow, since stationarity, ordinariness, and absence of aftereffect are preserved).
This means that the probability that the system leaves state S₁ during Δt is (λ₁₂ + λ₁₃)Δt, and the probability that it does not leave is 1 - (λ₁₂ + λ₁₃)Δt. Hence the probability of the first option is P₁(t)[1 - (λ₁₂ + λ₁₃)Δt].
Let us find the probability of the second option. It equals the probability that at moment t the system is in state S₂ and during the time Δt passes from it to state S₁, i.e. it equals P₂(t) λ₂₁ Δt.
Adding the probabilities of both options (by the rule of addition of probabilities), we get:
P₁(t + Δt) = P₁(t)[1 - (λ₁₂ + λ₁₃)Δt] + P₂(t) λ₂₁ Δt.
Let us open the square brackets, move P₁(t) to the left-hand side, and divide both sides by Δt:
[P₁(t + Δt) - P₁(t)] / Δt = λ₂₁P₂(t) - (λ₁₂ + λ₁₃)P₁(t).
Let Δt tend, as usual in such cases, to zero; on the left we obtain in the limit the derivative of the function P₁(t). Thus we write the differential equation for P₁(t):
dP₁(t)/dt = λ₂₁P₂(t) - (λ₁₂ + λ₁₃)P₁(t),
or, for short, discarding the argument t of the functions (we no longer need it):
dP₁/dt = λ₂₁P₂ - (λ₁₂ + λ₁₃)P₁. (17.2)
Arguing similarly for all other states, we write three more differential equations. Adding equation (17.2) to them, we obtain a system of differential equations for the state probabilities:
This is a system of four linear differential equations with four unknown functions. Note that one of them (any one) can be discarded: any of the probabilities can be expressed in terms of the others using the fact that their sum equals one, this expression substituted into (17.3), and the corresponding equation with the derivative discarded.
Let us now formulate a general rule for compiling the Kolmogorov equations. On the left side of each of them is the derivative of the probability of some state. On the right side - the sum of the products of the probabilities of all states, from which the arrows go to this state, by the intensity of the corresponding event flows, minus the total intensity of all flows that bring the system out of this state, multiplied by the probability of this state.
Using this rule, we write the Kolmogorov equations for the system S whose labeled state graph is given in Fig. 17.2:
To solve the Kolmogorov equations and find the state probabilities, one must first specify the initial conditions. If the initial state of the system is known exactly, say S_i, then at the initial moment (at t = 0) P_i(0) = 1 and all the other initial probabilities are zero. Thus, for example, it is natural to solve equations (17.4) under the initial conditions P₀(0) = 1, P₁(0) = P₂(0) = P₃(0) = 0 (at the initial moment both nodes are serviceable).
How to solve such equations? Generally speaking, linear differential equations with constant coefficients can be solved analytically, but this is convenient only when the number of equations does not exceed two (sometimes three).
If there are more equations, they are usually solved numerically - manually or on a computer.
Thus, the Kolmogorov equations make it possible to find all the probabilities of states as a function of time.
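For a small system the time dependence can also be obtained in closed form through the eigendecomposition of the intensity matrix. The two-state system below (one node with assumed failure and repair rates) is a minimal sketch:

```python
import numpy as np

# Transient solution P(t) = exp(Q^T t) P(0) via eigendecomposition,
# for an assumed two-state system (failure rate lam, repair rate mu).
lam, mu = 0.3, 0.7
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

w, V = np.linalg.eig(Q.T)          # eigenvalues / eigenvectors of Q^T
P0 = np.array([1.0, 0.0])          # system starts in the working state

def P(t):
    # exp(Q^T t) assembled from the eigendecomposition, applied to P(0)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)) @ P0

print(P(0.0), P(5.0), P(50.0))     # tends to the final probabilities
```

As t grows, the component with the negative eigenvalue dies out and P(t) approaches the final probabilities (µ/(λ + µ), λ/(λ + µ)).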
Let us now pose the question: what happens to the state probabilities as t → ∞? Will they tend to some limits? If these limits exist and do not depend on the initial state of the system, they are called the final state probabilities. In the theory of random processes it is proved that if the number of states of the system is finite and each state can be reached from any other (in a finite number of steps), then the final probabilities exist.
Assume that this condition is met and the final probabilities exist:
We will denote the final probabilities by the same letters as the state probabilities themselves, meaning by them, however, not functions of time but constant numbers. Obviously, they also sum to one: P₁ + P₂ + … + Pₙ = 1.
How should these final probabilities be understood? As t → ∞ a limiting stationary regime is established in the system S: the system changes its states at random, but the probabilities of these states no longer depend on time. The final probability of a state can be interpreted as the average relative time the system spends in that state. For example, if the system S has three states S₁, S₂, S₃ with final probabilities 0.2, 0.3, and 0.5, this means that in the limiting stationary mode the system spends on average two tenths of the time in state S₁, three tenths in state S₂, and half the time in state S₃.
How are the final probabilities calculated? Very simply. Since the probabilities are constant, their derivatives are zero. Hence, to find the final probabilities, all the left-hand sides of the Kolmogorov equations are set equal to zero, and the resulting system of linear algebraic (rather than differential) equations is solved. One may even skip writing the Kolmogorov equations and write the system of linear algebraic equations directly from the state graph: moving the negative term of each equation from the right-hand side to the left, we immediately obtain a system in which, on the left, the final probability of the given state is multiplied by the total intensity of all the flows leading out of that state, and, on the right, stands the sum of the products of the intensities of all the flows entering the state by the probabilities of the states from which those flows originate.