Commit 2ebfe90: Add more Procesos Estocásticos notes
1 parent: 1cd292f
Showing 14 changed files with 280 additions and 4 deletions.
@@ -0,0 +1,6 @@
---
date: 2024-03-20
type: 📚
---

![[Probability of Absorption]]
@@ -0,0 +1,30 @@
---
date: 2024-04-04
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[First-Passage Time]]

---

_**(theorem)**_

In a [[Markov Chain|Markov chain]], we can calculate the **average [[First-Passage Time|first-passage time]]** for the state $j$ when starting at $i$, denoted $\mu_{ij}$, with:

$$
\mu_{ij} = 1 + \sum_{k\neq j} p_{ik}\ \mu_{kj}
$$

> [!tip]- Explanation
> Here, we are considering that there are only two possibilities for the first transition:
>
> 1. Reaching $j$
> 2. Reaching any other state $k \neq j$
>
> If we reach $j$ during the first transition, then it only took us 1 step to reach it.
>
> If we first pass through some $k \neq j$ before eventually reaching $j$, then we have to take 1 step (to reach $k$) and then, on average, $\mu_{kj}$ more steps to reach $j$ from $k$.
>
> Weighting each case by the probability of its first transition and adding everything up, we obtain the average first-passage time from $i$ to $j$.
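
To make the theorem concrete, here's a minimal NumPy sketch (the 3-state matrix `P` is made up for illustration). For a fixed target $j$, the equations above form a linear system in the unknowns $\mu_{ij}$, which we can solve directly:

```python
import numpy as np

# Hypothetical 3-state chain, purely for illustration.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

def mean_first_passage_times(P, j):
    """Average first-passage times mu_ij into state j, for every i != j.

    Rearranging mu_ij = 1 + sum_{k != j} p_ik mu_kj gives the linear
    system (I - Q) mu = 1, where Q is P with row and column j removed.
    """
    states = [i for i in range(P.shape[0]) if i != j]
    Q = P[np.ix_(states, states)]   # transitions among non-target states
    mu = np.linalg.solve(np.eye(len(states)) - Q, np.ones(len(states)))
    return dict(zip(states, mu))

print(mean_first_passage_times(P, j=2))   # {0: mu_02, 1: mu_12}
```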
@@ -0,0 +1,18 @@
---
date: 2024-03-15
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[Markov Chain]] - [[Stochastic Process]]

---

_**(definition)**_

In the context of a [[Markov Chain|Markov chain]], the **first-passage time** is the minimum number of transitions needed to go from one [[State Set|state]] to another _for the first time_.

Compare to the [[Recurrence Time|recurrence time]], which is the minimum number of steps needed to _return_ to a given state (for the first time, too).

We can obtain the [[Average First-Passage Time|average first-passage time]] for a given state when starting at another given state.
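
As a quick illustration, first-passage times can also be sampled by simulation. A minimal sketch (assuming NumPy; the 2-state chain is made up), whose sample mean estimates the average first-passage time mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_time(P, i, j, rng):
    """Sample the number of transitions to reach j from i for the first time."""
    state, steps = i, 0
    while True:
        state = rng.choice(P.shape[0], p=P[state])   # take one transition
        steps += 1
        if state == j:
            return steps

# Hypothetical 2-state chain, purely for illustration.
P = np.array([[0.5, 0.5],
              [0.4, 0.6]])

samples = [first_passage_time(P, 0, 1, rng) for _ in range(10_000)]
print(np.mean(samples))   # estimates the average first-passage time mu_01
```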
@@ -0,0 +1,26 @@
---
date: 2024-03-04
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[Stochastic Process]]

---

_**(definition)**_

Given a [[Stochastic Process|stochastic process]] with [[State Set|possible states]] $s_{0}, s_{1}, \dots, s_{m}$, we can define an **initial probability vector** $a$:

$$
a = (a_{0}, a_{1}, \dots, a_{m})
$$

…where $a_{i}$ denotes the probability of the initial state being $s_{i}$.

> [!example]-
> For instance, let's say we have the stochastic process of tossing a coin and noting whether we get heads or tails. This process has a simple state set of $S = \left\{ \text{heads}, \text{tails} \right\}$.
>
> The initial probability vector of this process is $a = \left( \frac{1}{2}, \frac{1}{2} \right)$, since the probability of getting _either_ heads or tails is $\frac{1}{2}$.
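
A tiny sketch of the coin example (assuming NumPy; the names are illustrative): drawing the initial state of the process amounts to sampling from $a$.

```python
import numpy as np

rng = np.random.default_rng(42)

states = ["heads", "tails"]   # state set S
a = np.array([0.5, 0.5])      # initial probability vector

initial_state = rng.choice(states, p=a)   # sample the initial state from a
print(initial_state)
```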
@@ -0,0 +1,18 @@
---
date: 2024-02-07
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[Probability]]

---

_**(definition)**_

A **probability space** is a [[Tuple (Mathematics)|tuple]] $(\Omega, \alpha, \mathbb{P})$, where:

- $\Omega$ is the [[Sample Space|sample space]]
- $\alpha$ is the [[Power Set|power set]] of $\Omega$ (more generally, a σ-algebra on $\Omega$; the power set works when $\Omega$ is finite or countable)
- $\mathbb{P}$ is the [[Probability Measure|probability measure]]
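
For a finite $\Omega$, the definition can be illustrated directly in code. A sketch (a fair die roll; all names are hypothetical) that builds $\Omega$, takes its power set as $\alpha$, and uses the uniform measure as $\mathbb{P}$:

```python
from fractions import Fraction
from itertools import combinations

omega = {1, 2, 3, 4, 5, 6}   # sample space: one fair die roll

def power_set(s):
    """All subsets of s, playing the role of the event collection alpha."""
    items = list(s)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

alpha = power_set(omega)

def prob(event):
    """Uniform probability measure: |A| / |Omega|."""
    return Fraction(len(event), len(omega))

print(len(alpha))               # 2^6 = 64 events
print(prob(frozenset({2, 4, 6})))   # probability of an even roll: 1/2
```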
@@ -0,0 +1,40 @@
---
date: 2024-03-20
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[Markov Chain]] - [[Absorbing Set]]

---

_**(definition)**_

In a [[Markov Chain|Markov chain]], given an [[Absorbing Set|absorbing state]] $k$, the probability of reaching $k$ when we start from another state $i$ is called the **probability of absorption to $k$**. This probability is denoted by $f_{ik}$.

_**(theorem)**_

The probability of absorption to $k$ when starting at $i$ is given by:

$$
f_{ik} = \sum_{j=0}^{m} p_{ij} f_{jk}
$$

…where $f_{kk} = 1$.

_**(theorem)**_

Let $k_{1}, k_{2}, \dots, k_{s}$ be all the absorbing states in a Markov chain. Then, for a fixed state $i$, it follows that:

$$
\sum_{c=1}^{s} f_{ik_{c}} = 1
$$

In other words, adding up all possible absorption probabilities when starting at a given state $i$ yields simply $1$.

_**(observation)**_

Note that we'll have $f_{ik}=0$ if the state $i$ is absorbing and different from $k$.
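
The first theorem can be solved as a linear system. A sketch (assuming NumPy; the gambler's-ruin-style chain with absorbing states 0 and 3 is made up): restricting the theorem to transient $i$ gives $F = QF + R$, where $Q$ holds the transitions among transient states and $R$ the transitions into absorbing ones.

```python
import numpy as np

# Hypothetical gambler's-ruin-style chain: states 0 and 3 are absorbing.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

absorbing = [s for s in range(P.shape[0]) if P[s, s] == 1.0]
transient = [s for s in range(P.shape[0]) if s not in absorbing]

Q = P[np.ix_(transient, transient)]   # transient-to-transient transitions
R = P[np.ix_(transient, absorbing)]   # transient-to-absorbing transitions

# Solve (I - Q) F = R; row i of F holds the f_ik for each absorbing k.
F = np.linalg.solve(np.eye(len(transient)) - Q, R)

for i, row in zip(transient, F):
    print(f"from state {i}:", dict(zip(absorbing, row.round(3))))
```

Each printed row adds up to $1$, matching the second theorem above.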
|
||
|
@@ -0,0 +1,24 @@
---
date: 2024-03-15
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[Markov Chain]] - [[Stochastic Process]]

---

_**(definition)**_

In the context of a [[Markov Chain|Markov chain]], the **recurrence time** is the minimum number of steps needed to _go back_ to a given [[State Set|state]] _for the first time_.

Compare to the [[First-Passage Time|first-passage time]], which refers to the minimum number of steps needed to get to a state (for the first time, too).

The average recurrence time of a state $i$, denoted $\mu_{ii}$ (mirroring the notation for average first-passage times), is given by:

$$
\mu_{ii} = \frac{1}{\pi_{i}}
$$

…where $\pi_{i}$ is the [[Steady State Probability|steady state probability]] of $i$.
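
A short sketch (assuming NumPy; the 2-state regular chain is made up): obtain $\pi$ as the left eigenvector of the transition matrix for eigenvalue $1$, then take entrywise reciprocals to get the average recurrence times.

```python
import numpy as np

# Hypothetical 2-state regular chain.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# pi is the left eigenvector of P for eigenvalue 1 (i.e. pi P = pi),
# normalized so that its entries add up to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

print(pi)        # steady state probabilities pi_i
print(1.0 / pi)  # average recurrence times mu_ii = 1 / pi_i
```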
@@ -0,0 +1,68 @@
---
date: 2024-04-04
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[Regular Markov Chain]]

---

In a regular Markov chain, regardless of the initial state, the probability of being in a given state approaches a _constant_ when the number of steps is sufficiently high.

_**(fundamental theorem)**_

For every regular Markov chain, the following limit of the [[n Step Transition Probability|n step transition probabilities]] exists and is the same regardless of $i$:

$$
\pi_{j} = \lim_{ n \to \infty } p_{ij}^{(n)}
$$

Furthermore, we have that $\pi_{j} > 0$ and the following equations are satisfied:

1. $\pi_{j} = \sum_{i=0}^{m} \pi_{i} p_{ij}$, for $j = 0, 1, 2, \dots, m$
2. $\sum_{j=0}^{m} \pi_{j} = 1$

The values $\pi_{j}$ are called **steady state probabilities**, while the vector that they form is called the **stationary distribution vector**.

In other words, the probability of being in state $j$ after $n$ steps tends to $\pi_{j}$ as $n \to \infty$, regardless of the initial state.

# Calculation of Steady State Probabilities

_**(observation)**_

We can calculate all of the $\pi_{j}$ in a given chain by formulating all corresponding equations and then solving the resulting equation system.

Notice that this resulting system has $m+2$ equations _but_ only $m+1$ unknowns. Since the system has a unique solution, one of these equations must be redundant.

Thus, we may find it easier to solve the system that consists of all equations _but one_ (the redundant one). Note that the equation establishing that all $\pi_{j}$ must add up to $1$ is never redundant, so it can never be discarded.
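
A minimal sketch of this procedure (assuming NumPy; the 3-state regular chain is made up): drop one of the balance equations as redundant, keep the normalization equation, and solve the resulting square system.

```python
import numpy as np

# Hypothetical 3-state regular chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
m = P.shape[0]

# Rows of (P.T - I) encode pi_j = sum_i pi_i p_ij; they sum to zero,
# so any one of them is redundant and can be dropped.
A = (P.T - np.eye(m))[:-1]
A = np.vstack([A, np.ones(m)])   # append the normalization sum(pi) = 1
b = np.zeros(m)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)   # steady state probabilities pi_j
```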
# Transitory States Tend to 0

_**(corollary)**_

If $j$ is a [[Transitory Set|transitory state]], then:

$$
\lim_{ n \to \infty } p_{ij}^{(n)} = \pi_{j} = 0
$$

…since the more steps we take, the more likely we are to have stepped out of its containing (transitory) set and thus lost the ability to return to it.

# Limit of the Transition Matrix

_**(corollary)**_

From the previous theorem, it's also possible to observe that, given the [[Transition Matrix|n step transition matrix]] $P^{(n)}$ of a regular Markov chain:

$$
\lim_{ n \to \infty } P^{(n)} =
\begin{pmatrix}
\pi_{0} & \pi_{1} & \dots & \pi_{m} \\
\pi_{0} & \pi_{1} & \dots & \pi_{m} \\
\vdots & \vdots & \ddots & \vdots \\
\pi_{0} & \pi_{1} & \dots & \pi_{m} \\
\end{pmatrix}
$$
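
This is easy to verify numerically: raising a regular transition matrix to a high power yields (approximately) identical rows. A sketch reusing the hypothetical chain from the previous snippet:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Every row of a high power of P approximates the stationary
# distribution vector (pi_0, pi_1, pi_2).
print(np.linalg.matrix_power(P, 64))
```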
@@ -0,0 +1,22 @@
---
date: 2024-03-11
type: 🧠
tags:
- MAC/6/PE
---

**Topics:** [[State Set]] - [[Stochastic Process]]

---

_**(definition)**_

Let $T$ be a subset of a [[State Set|state set]] $S$ and let $T'$ be its [[Complement Set|complement]] in $S$.

If each state in $T$ can be reached from any other state in $T$, and it's possible to move from (at least) one state in $T$ to another in $T'$, then we call $T$ a **transitory set**.

A **transitory state** is an element of a transitory set. A state that is not transitory is [[Ergodic Set|ergodic]].

In simpler terms, a transitory state is one that we can leave with some probability and then never come back to (i.e. when reaching an ergodic set).

A [[Markov Chain|Markov chain]] may have no transitory states at all; by contrast, it must always have at least one ergodic set.
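
As an illustration of the definition, states can be classified programmatically via the strongly connected components of the transition graph: a component with an edge leaving it is a transitory set, while one with no way out is ergodic. A sketch assuming the third-party `networkx` library (the 3-state chain is made up):

```python
import networkx as nx
import numpy as np

# Hypothetical chain: states 0 and 1 communicate; state 2 is absorbing.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.0, 1.0]])

n = P.shape[0]
G = nx.DiGraph((i, j) for i in range(n) for j in range(n) if P[i, j] > 0)

# The condensation collapses each strongly connected component into a
# single node; a component with outgoing edges is a transitory set,
# while a component with none is ergodic.
C = nx.condensation(G)
for scc, data in C.nodes(data=True):
    kind = "transitory" if C.out_degree(scc) > 0 else "ergodic"
    print(sorted(data["members"]), kind)
```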