Add more Procesos Estocásticos notes
camargomau committed Apr 5, 2024
1 parent 1cd292f commit 2ebfe90
Showing 14 changed files with 280 additions and 4 deletions.
21 changes: 21 additions & 0 deletions MAC/6th Semester/Procesos Estocásticos.md
@@ -85,3 +85,24 @@ A [[Transitory Set|transitory set]] is basically the opposite of an ergodic set:

The states in an ergodic set, absorbing set and transitory set are called ergodic, absorbing and transitory, respectively.

## Ergodic, Regular and Cyclical Markov Chains

An [[Ergodic Markov Chain|ergodic Markov chain]] is a Markov chain that has a single class of states, which can only be an ergodic set.

In an ergodic Markov chain, if every state can only be entered at fixed periodic intervals, then the chain is [[Cyclical Markov Chain|cyclical]]; otherwise, it's [[Regular Markov Chain|regular]].

## Steady State Probabilities and Stationary Distribution

In a regular Markov chain, the probability of being in a given state approaches a constant as the number of steps grows, regardless of the initial state. As such, regular Markov chains are characterised by a [[Steady State Probability|stationary distribution (vector)]], which contains these constant steady state probabilities.

## First-Passage and Recurrence Times

In a Markov chain, the [[First-Passage Time|first-passage time]] from a state $i$ to another state $j$ is the number of steps it takes to reach $j$ from $i$ _for the first time_.

Similarly, the [[Recurrence Time|recurrence time]] of a state $i$ is the number of steps it takes to _return_ to $i$ _for the first time_.

We can calculate the [[Average First-Passage Time|average first-passage time]] from a given state $i$ to another state $j$.

## Absorption Probability

When a Markov chain has absorbing states, the probability of reaching one of them from any other is called the [[Probability of Absorption|probability of absorption]].
6 changes: 6 additions & 0 deletions Mathematics/2024-03-20, Probabilidades de Absorción (PE).md
@@ -0,0 +1,6 @@
---
date: 2024-03-20
type: 📚
---

![[Probability of Absorption]]
5 changes: 3 additions & 2 deletions Mathematics/Absorbing Set.md
@@ -11,10 +11,11 @@ tags:

_**(definition)**_

An [[Ergodic Set|ergodic set]] with a single element is called an **absorbing set**.
This single element of an absorbing set is called an **absorbing state**.

It is called so because once we reach such a state, it _absorbs_ the process, which will then remain there forever.

A [[Markov Chain|Markov chain]] whose every non-[[Transitory Set|transitory state]] is absorbing is also said to be [[Absorbing Markov Chain|absorbing]].

_**(observation)**_
30 changes: 30 additions & 0 deletions Mathematics/Average First-Passage Time.md
@@ -0,0 +1,30 @@
---
date: 2024-04-04
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[First-Passage Time]]

---

_**(theorem)**_

In a [[Markov Chain|Markov chain]], we can calculate the **average [[First-Passage Time|first-passage time]]** for the state $j$ when starting at $i$, denoted $\mu_{ij}$, with:

$$
\mu_{ij} = 1 + \sum_{k\neq j} p_{ik}\ \mu_{kj}
$$

> [!tip]- Explanation
> Here, we basically consider that there are only two possibilities for the first transition:
>
> 1. Reaching $j$
> 2. Reaching any other $k \neq j$
>
> If we reach $j$ during the first transition, then it only took us 1 step to reach it.
>
> If we first pass through some $k \neq j$ before eventually reaching $j$, then we have to take 1 step (to reach $k$) and then, on average, $\mu_{kj}$ more steps to reach $j$ from $k$.
>
> Weighting each case by the probability of the corresponding first transition and adding everything up, we obtain the average first-passage time from $i$ to $j$.
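
The linear system above can also be solved numerically. Below is a minimal Python sketch (not part of the original note; NumPy and the transition matrix are assumed purely for illustration) that computes $\mu_{ij}$ for a fixed target state $j$:

```python
import numpy as np

# Made-up 3-state transition matrix (rows sum to 1), for illustration only
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])

def average_first_passage_times(P, j):
    """Solve mu_ij = 1 + sum_{k != j} p_ik mu_kj for every state i != j.

    Assumes j is reachable from every other state, so the system is non-singular.
    """
    m = P.shape[0]
    others = [i for i in range(m) if i != j]
    Q = P[np.ix_(others, others)]                  # transitions that avoid j
    mu = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    return dict(zip(others, mu))                   # {i: mu_ij}

print(average_first_passage_times(P, j=0))
```

Restricting $P$ to the states other than $j$ gives a matrix $Q$, so the equations can be written as $(I - Q)\mu = \mathbf{1}$ and solved directly.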
2 changes: 1 addition & 1 deletion Mathematics/Cyclical Markov Chain.md
@@ -11,6 +11,6 @@ tags:

_**(definition)**_

A **cyclical Markov chain** is an [[Ergodic Markov Chain|ergodic Markov Chain]] where we can only enter a given state in _fixed_ periodic intervals.
A **cyclical Markov chain** is an [[Ergodic Markov Chain|ergodic Markov Chain]] where every state can only be entered at _fixed_ periodic intervals.

An ergodic Markov chain that is not cyclical is [[Regular Markov Chain|regular]].
18 changes: 18 additions & 0 deletions Mathematics/First-Passage Time.md
@@ -0,0 +1,18 @@
---
date: 2024-03-15
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[Markov Chain]] - [[Stochastic Process]]

---

_**(definition)**_

In the context of a [[Markov Chain|Markov chain]], the **first-passage time** is the minimum number of transitions needed to go from one [[State Set|state]] to another _for the first time_.

Compare to the [[Recurrence Time|recurrence time]], which is the minimum number of steps needed to _return_ to a given state (for the first time, too).

We can obtain the [[Average First-Passage Time|average first-passage time]] for a given state when starting at another given state.
2 changes: 1 addition & 1 deletion Mathematics/Independent Trial Process.md
@@ -11,6 +11,6 @@ tags:

_**(definition)**_

An **independent trial process** is a [[Stochastic Process|stochastic process]] of a discrete [[Parameter Set|parameter set]] and consists of [[Event Independence|independent]] [[Random Variable|random variables]].
An **independent trial process** is a [[Stochastic Process|stochastic process]] of a discrete [[Parameter Set|parameter set]] that consists of [[Event Independence|independent]] [[Random Variable|random variables]].

This implies that the result of the process in any given moment is independent from any other past or future result in the process.
26 changes: 26 additions & 0 deletions Mathematics/Initial Probability Vector.md
@@ -0,0 +1,26 @@
---
date: 2024-03-04
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[Stochastic Process]]

---

_**(definition)**_

Given a [[Stochastic Process|stochastic process]] with [[State Set|possible states]] $s_{0}, s_{1}, \dots, s_{m}$, we can define an **initial probability vector** $a$:

$$
a = (a_{0}, a_{1}, \dots, a_{m})
$$

…where $a_{i}$ denotes the probability of the initial state being $s_{i}$.

> [!example]-
> For instance, let's say we have the stochastic process of tossing a coin and noting whether we get heads or tails. This process has a simple state set of $S = \left\{ \text{heads}, \text{tails} \right\}$.
>
> The initial probability vector of this process is $a = \left( \frac{1}{2}, \frac{1}{2} \right)$, since the probability of getting _either_ heads or tails is $\frac{1}{2}$.
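
As a small illustrative sketch (assuming NumPy; not part of the original note), we can represent this vector and sample an initial state from it:

```python
import numpy as np

states = ["heads", "tails"]   # state set from the example above
a = np.array([0.5, 0.5])      # initial probability vector

rng = np.random.default_rng(0)
initial_state = rng.choice(states, p=a)   # draw the initial state according to a
print(initial_state)
```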
18 changes: 18 additions & 0 deletions Mathematics/Probability Space.md
@@ -0,0 +1,18 @@
---
date: 2024-02-07
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[Probability]]

---

_**(definition)**_

A **probability space** is a [[Tuple (Mathematics)|tuple]] $(\Omega, \alpha, \mathbb{P})$, where:

- $\Omega$ is the [[Sample Space|sample space]]
- $\alpha$ is the [[Power Set|power set]] of $\Omega$
- $\mathbb{P}$ is the [[Probability Measure|probability measure]]
40 changes: 40 additions & 0 deletions Mathematics/Probability of Absorption.md
@@ -0,0 +1,40 @@
---
date: 2024-03-20
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[Markov Chain]] - [[Absorbing Set]]

---

_**(definition)**_

In a [[Markov Chain|Markov chain]], given an [[Absorbing Set|absorbing state]] $k$, the probability of reaching $k$ when we start from another state $i$ is called the **probability of absorption to $k$**. This probability is denoted by $f_{ik}$.

_**(theorem)**_

The probability of absorption to $k$ when starting at $i$ is given by:

$$
f_{ik} = \sum_{j=0}^{m} p_{ij} f_{jk}
$$

…where $f_{kk} = 1$.

_**(theorem)**_

Let $k_{1}, k_{2}, \dots, k_{s}$ be all absorbing states in a Markov chain. Then, for a fixed state $i$, it follows that:

$$
\sum_{c=1}^{s} f_{ik_{c}} = 1
$$

In other words, the result of adding up all possible absorption probabilities when starting at a given state $i$ is simply $1$.

_**(observation)**_

Note that we'll have $f_{ik}=0$ if the state $i$ is absorbing and different from $k$.
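
The first theorem's system can be solved numerically by restricting it to the non-absorbing states. Here's a minimal Python sketch (not part of the original note; NumPy and the 4-state transition matrix are assumed purely for illustration):

```python
import numpy as np

# Made-up 4-state chain where states 2 and 3 are absorbing, for illustration only
P = np.array([
    [0.4, 0.3, 0.2, 0.1],
    [0.2, 0.3, 0.1, 0.4],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

def absorption_probabilities(P, k):
    """Solve f_ik = sum_j p_ij f_jk, with f_kk = 1 and f_jk = 0 for other absorbing j."""
    m = P.shape[0]
    absorbing = [i for i in range(m) if P[i, i] == 1.0]
    transient = [i for i in range(m) if i not in absorbing]
    Q = P[np.ix_(transient, transient)]   # one-step transitions among non-absorbing states
    b = P[transient, k]                   # one-step probabilities of landing directly in k
    f = np.linalg.solve(np.eye(len(transient)) - Q, b)
    return dict(zip(transient, f))        # {i: f_ik} for each non-absorbing state i

print(absorption_probabilities(P, k=2))
print(absorption_probabilities(P, k=3))  # for each i, the two values add up to 1
```

The last two prints illustrate the second theorem: for each starting state $i$, the absorption probabilities over all absorbing states sum to $1$.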


24 changes: 24 additions & 0 deletions Mathematics/Recurrence Time.md
@@ -0,0 +1,24 @@
---
date: 2024-03-15
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[Markov Chain]] - [[Stochastic Process]]

---

_**(definition)**_

In the context of a [[Markov Chain|Markov chain]], the **recurrence time** is the minimum number of steps needed to _return_ to a given [[State Set|state]] _for the first time_.

Compare to the [[First-Passage Time|first-passage time]], which refers to the minimum number of steps needed to get to a state from another (for the first time, too).

The average recurrence time of a state $i$, denoted $\mu_{ii}$ (analogously to the average first-passage time), is given by:

$$
\mu_{ii} = \frac{1}{\pi_{i}}
$$

…where $\pi_{i}$ is the [[Steady State Probability|steady state probability]] of $i$.
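
As a quick illustrative check (with an assumed value, not from the original note): if the steady state probability of a state $i$ is $\pi_{i} = \frac{1}{4}$, then $\mu_{ii} = \frac{1}{1/4} = 4$; that is, on average the chain returns to $i$ every $4$ steps.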
2 changes: 2 additions & 0 deletions Mathematics/Regular Markov Chain.md
@@ -12,3 +12,5 @@ tags:
_**(definition)**_

A **regular Markov chain** is an [[Ergodic Markov Chain|ergodic Markov chain]] that is not [[Cyclical Markov Chain|cyclical]].

In a regular Markov chain, the probability of being in a given state approaches a constant as the number of steps grows, regardless of the initial state. As such, regular Markov chains are characterised by a [[Steady State Probability|stationary distribution (vector)]], which contains these constant steady state probabilities.
68 changes: 68 additions & 0 deletions Mathematics/Steady State Probability.md
@@ -0,0 +1,68 @@
---
date: 2024-04-04
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[Regular Markov Chain]]

---

In a regular Markov chain, regardless of the initial state, the probability of being in a given state approaches a _constant_ as the number of steps grows.

_**(fundamental theorem)**_

For every regular Markov chain, the following limit of the [[n Step Transition Probability|n step transition probabilities]] exists and is the same regardless of $i$:

$$
\pi_{j} = \lim_{ n \to \infty } p_{ij}^{(n)}
$$

Furthermore, we have that $\pi_{j} > 0$ and the following equations are satisfied:

1. $\pi_{j} = \sum_{i=0}^{m} \pi_{i} p_{ij}$, for $j = 0, 1, 2, \dots, m$
2. $\sum_{j=0}^{m} \pi_{j} = 1$

The values $\pi_{j}$ are called **steady state probabilities**, while the vector that they form is called the **stationary distribution vector**.

In other words, the probability of being in state $j$ after $n$ steps tends to $\pi_{j}$ as $n \to \infty$, regardless of the initial state.

# Calculation of Steady State Probabilities

_**(observation)**_

We can calculate all of the $\pi_{j}$ in a given chain by formulating all of the corresponding equations and then solving the resulting system of equations.

Notice that this system has $m+2$ equations _but_ only $m+1$ unknowns; since it has a unique solution, one of the equations must be redundant.

Thus, we may find it easier to solve the system that consists of all equations _but one_ (the redundant one). Note that the equation that establishes that all $\pi_{j}$ must add up to $1$ is never redundant, so it can never be discarded.
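
Below is a minimal Python sketch of this procedure (not part of the original note; NumPy and the transition matrix are assumed purely for illustration), replacing one redundant balance equation with the normalisation condition:

```python
import numpy as np

# Made-up regular 3-state chain, for illustration only
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])

def stationary_distribution(P):
    """Solve pi = pi P together with sum(pi) = 1.

    The balance equations pi_j = sum_i pi_i p_ij are linearly dependent,
    so one of them (here, the last) is replaced by the normalisation condition.
    """
    m = P.shape[0]
    A = (P - np.eye(m)).T   # rows are the balance equations written as A pi = 0
    A[-1, :] = 1.0          # replace the last (redundant) equation with sum(pi) = 1
    b = np.zeros(m)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

pi = stationary_distribution(P)
print(pi, pi @ P)  # pi and pi @ P should coincide (up to rounding)
```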

# Transitory States Tend to 0

_**(corollary)**_

If $j$ is a [[Transitory Set|transitory state]], then:

$$
\lim_{ n \to \infty } p_{ij}^{(n)} = \pi_{j} = 0
$$

…since, the more steps we take, the more likely we are to have left its containing (transitory) set and thus to no longer be able to return to it.

# Limit of the Transition Matrix

_**(corollary)**_

From the previous theorem, it's also possible to observe that, given the [[Transition Matrix|n step transition matrix]] $P^{(n)}$ of the regular Markov chain:

$$
\lim_{ n \to \infty } P^{(n)} =
\begin{pmatrix}
\pi_{0} & \pi_{1} & \dots & \pi_{m} \\
\pi_{0} & \pi_{1} & \dots & \pi_{m} \\
\vdots & \vdots & \ddots & \vdots \\
\pi_{0} & \pi_{1} & \dots & \pi_{m} \\
\end{pmatrix}
$$
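
As a quick numerical check (illustrative only, assuming NumPy and a made-up regular chain), raising the one-step matrix to a high power shows every row approaching the stationary distribution:

```python
import numpy as np

# Made-up regular 3-state chain, for illustration only
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])

# Every row of P^50 should be (approximately) the stationary distribution vector
print(np.linalg.matrix_power(P, 50))
```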

22 changes: 22 additions & 0 deletions Mathematics/Transitory Set.md
@@ -0,0 +1,22 @@
---
date: 2024-03-11
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[State Set]] - [[Stochastic Process]]

---

_**(definition)**_

Let $T$ be a subset of the [[State Set|state set]] $S$ and let $T'$ be its [[Complement Set|complement]] in $S$.

If each state in $T$ can be reached from any other state in $T$, and it's possible to move from (at least) one state in $T$ to a state in $T'$, then we call $T$ a **transitory set**.

A **transitory state** is an element of a transitory set. A state that is not transitory is [[Ergodic Set|ergodic]].

In simpler terms, a transitory state is a state that we can leave with some probability and then never return to (i.e. by reaching an ergodic set).

A [[Markov Chain|Markov chain]] may have no transitory states at all; in contrast, a Markov chain must always have at least one ergodic set.
