# Markov Chain

A Markov chain is a relatively simple [stochastic](stochastic.md) (i.e. working with probability) mathematical model for predicting or generating sequences of symbols. It can be used to describe some processes happening in the [real world](real_world.md), such as the behavior of some animals, Brownian motion or the structure of a language. In the world of programming Markov chains are pretty often used to generate texts that look like some template text whose structure the Markov chain has learned (Markov chains are one possible model used in [machine learning](machine_learning.md)). Chatbots are just one example.

There are different types of Markov chains. Here we will be focusing on discrete-time Markov chains with a finite state space, as these are the ones practically always used in programming. They are also the simplest ones.

Such a Markov chain consists of a finite number of states *S0*, *S1*, ..., *Sn*. Each state *Si* has a certain probability of transitioning to another state (including transitioning back to itself), i.e. *P(Si,S0)*, *P(Si,S1)*, ..., *P(Si,Sn)*; these probabilities have to, of course, add up to 1, and some of them may be 0. These probabilities can conveniently be written as a square matrix (the transition matrix) with one row and one column per state.

Basically a Markov chain is like a [finite state automaton](finite_state_automaton.md) which, instead of input symbols, has probabilities on its transition arrows.
## Example
Let's say we want to create a simple [AI](ai.md) for an NPC in a video [game](game.md). At any time this NPC is in one of these states:
- **Taking cover** (state A):
  - 50% chance to stay in cover
  - 50% chance to start looking for a target
- **Searching for a target** (state B):
  - 50% chance to remain searching for a target
  - 25% chance to start shooting at what it's looking at
  - 25% chance to throw a grenade at what it's looking at
- **Shooting a bullet at the target** (state C):
  - 70% chance to remain shooting
  - 10% chance to throw a grenade
  - 10% chance to start looking for another target
  - 10% chance to take cover
- **Throwing a grenade at the target** (state D):
  - 50% chance to shoot a bullet
  - 25% chance to start looking for another target
  - 25% chance to take cover

Now it's pretty clear this description gets a bit tedious; it's better, especially with even more states, to write the probabilities as a matrix (rows represent the current state, columns the next state):

|     |  A   |  B   |  C   |  D   |
| --- | ---- | ---- | ---- | ---- |
| A   | 0.5  | 0.5  | 0    | 0    |
| B   | 0    | 0.5  | 0.25 | 0.25 |
| C   | 0.1  | 0.1  | 0.7  | 0.1  |
| D   | 0.25 | 0.25 | 0.5  | 0    |

We can see a few things: the NPC can't immediately attack from cover, it has to search for a target first. It also can't throw two grenades in succession etc. Let's note that this model will now be yielding random sequences of actions such as [*cover*, *search*, *shoot*, *shoot*, *cover*] or [*cover*, *search*, *search*, *grenade*, *shoot*], but some of them will be less likely than others (for example the sub-sequence *shoot*, *shoot*, *shoot* requires staying in state C twice in a row, which has probability 0.7 * 0.7 = 0.49) and some downright impossible (e.g. two grenades in a row, as *P(D,D) = 0*). Notice a similarity to for example natural language: some words are more likely to be followed by certain words than others (e.g. the word "number" is more likely to be followed by "one" than for example "cat").