
Markov processes are mathematical processes in which, given the present state of the process, the future is independent of the past. They are named after the Russian mathematician Andrei Markov (1856–1922), who provided the first theoretical results for this type of process. They offer a flexible and tractable framework for medical modeling and are typically used to analyze processes that evolve over time. They can be used to aggregate information from different sources and to extrapolate short-term study results into the future.

A Simple Two-State Example

Markov processes can be used to model lifetime duration, for humans as well as devices. For example, of a group of hearing aids, some may fail early on, whereas others will last a long time before they eventually break down. If the probability of failure increases with time, then a graph of the failure times might look like a bell-shaped curve. However, for hearing aids, breakdowns will often be due to an accident, so a constant breakdown rate may be more realistic. In that case, in each period of time a certain proportion of the hearing aids will break down, and the distribution of the life duration follows an exponential distribution (Figure 1).

If a hearing aid has a constant breakdown rate that is equal to μ, then the mean life duration is 1/μ and the lifetime, T, of this hearing aid follows an exponential probability distribution:

Pr(T > t) = exp(−μt), for t ≥ 0

Figure 1 Graph of hearing aid failure times


For example, if the average life duration is half a year, then the annual breakdown rate is μ = 1/.5 = 2, and the probability that the hearing aid survives the first year is equal to exp(−2 × 1) ≈ 14%. Because of the constant breakdown rate, the lifetime duration is memoryless: If the aid hasn't broken down yet after a year, then the aid's breakdown rate is still the same constant rate, μ, so the probability that the aid survives one more year is again 14%. In other words, the remaining lifetime is independent of the time already spent; the future is independent of the past. Markov processes are basically the extension of this memoryless property to more complicated processes, by introducing a state space.
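The survival probability and the memoryless property can be checked numerically. A minimal sketch in Python, using the rate μ = 2 from the example above:

```python
import math

mu = 2.0  # annual breakdown rate (mean lifetime = 1/mu = 0.5 years)

def survival(t, rate=mu):
    """P(T > t) for an exponential lifetime with the given rate."""
    return math.exp(-rate * t)

# Probability of surviving the first year: exp(-2) ≈ 0.135, i.e. about 14%.
p1 = survival(1)

# Memorylessness: P(T > 2 | T > 1) = P(T > 2) / P(T > 1) equals P(T > 1).
p_cond = survival(2) / survival(1)
print(round(p1, 3), round(p_cond, 3))  # both ≈ 0.135
```

The conditional survival probability for the second year comes out identical to the unconditional one for the first year, which is exactly the memoryless property described above.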

The life of the hearing aid can be modeled as a Markov process with two states, indicating whether the aid has broken down or not, and a rate of transition, μ, from one state to the other (Figure 2).

Figure 2 Markov process with two states


In this description of the process, the hearing aid can break down at any point in time. Instead of constantly looking at the hearing aid, one could observe its state only at the beginning of every week. This changes the continuous-time Markov process to a discrete-time Markov process (Figure 3).
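The weekly observation scheme can be written as a two-state transition matrix and iterated one week at a time. A minimal sketch, where the weekly breakdown probability p = 0.04 is an illustrative value (any 0 ≤ p ≤ 1 would do):

```python
# States: 0 = working, 1 = broken down (absorbing).
p = 0.04  # illustrative weekly breakdown probability
P = [[1 - p, p],    # from "working": stay or break down
     [0.0,   1.0]]  # from "broken": stay broken

def step(dist, P):
    """One week of the chain: multiply the state distribution by P."""
    return [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

dist = [1.0, 0.0]  # start with a working hearing aid
for _ in range(52):
    dist = step(dist, P)
# After 52 weeks, the probability of still working is (1 - p) ** 52.
```

Because the "broken down" state is absorbing, the probability mass flows in one direction only, and surviving a year simply means not breaking down in any of the 52 weeks.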

From one week to the next, the hearing aid breaks down with probability p. The continuous-time and discrete-time models describe the same process, so their parameters μ and p are related. If the mean life duration of the hearing aid is half a year (μ = 1/.5 = 2), then the probability that the hearing aid breaks down during any particular week is equal to p = 1 − exp(−μ/52) = 1 − exp(−2/52) ≈ 3.8%.
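The relation between the continuous rate μ and the weekly probability p can be checked numerically: a week is 1/52 of a year, so surviving 52 weeks at weekly probability 1 − p must match surviving one year at rate μ. A short sketch:

```python
import math

mu = 2.0        # annual breakdown rate
dt = 1 / 52     # one week, as a fraction of a year

# Weekly breakdown probability implied by the continuous-time model.
p = 1 - math.exp(-mu * dt)

# Consistency check: 52 weeks of weekly survival equals one year of
# continuous-time survival, (1 - p) ** 52 == exp(-mu).
print(round(p, 4), round((1 - p) ** 52, 4), round(math.exp(-mu), 4))
```

Note that p is slightly less than μ/52 = 0.0385, because the simple proportion ignores the (tiny) chance of the aid breaking down twice in one week, which the exponential form accounts for.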
