Miloslav Ciz 2024-10-09 14:00:06 +02:00
parent 695e83f707
commit 28d52eba80
11 changed files with 1874 additions and 1834 deletions


@@ -92,9 +92,9 @@ The core of chess programming is writing the [AI](ai.md). Everything else, i.e.
The AI itself traditionally works on the following principle: firstly we implement a so called static **evaluation function** -- a function that takes a chess position and outputs its evaluation number saying how good the position is for white vs black (positive number favoring white, negative black, zero meaning equal, units usually being pawns, i.e. for example -3.5 means black has an advantage equivalent to having 3 and a half extra pawns; to avoid fractions we sometimes rather use centipawns, i.e. -350 in this case). This function considers a number of factors such as total material of both players, pawn structure, king safety, mobility of men and so on. Traditionally this function was written by hand, nowadays it is being replaced by a learned [neural network](neural_network.md) ([NNUE](nnue.md)) which turned out to give superior results (though e.g. Stockfish still offers both options); for starters you probably want to write a simple evaluation function manually.
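To get a concrete idea, below is a minimal sketch of such a hand-written evaluation function in C: it only counts material in centipawns and uses a made up board representation (an array of 64 chars, uppercase letters for white men, lowercase for black, anything else meaning an empty square); a real function would of course also score pawn structure, king safety, mobility and so on.

```c
int pieceValue(char piece) /* material value in centipawns */
{
  switch (piece)
  {
    case 'P': case 'p': return 100; /* pawn                               */
    case 'N': case 'n': return 300; /* knight                             */
    case 'B': case 'b': return 300; /* bishop (often valued a bit higher) */
    case 'R': case 'r': return 500; /* rook                               */
    case 'Q': case 'q': return 900; /* queen                              */
    default:            return 0;   /* empty square or king               */
  }
}

int evaluate(const char board[64])
{
  int score = 0; /* positive favors white, negative favors black */

  for (int i = 0; i < 64; ++i)
    if (board[i] >= 'A' && board[i] <= 'Z')      /* white man */
      score += pieceValue(board[i]);
    else if (board[i] >= 'a' && board[i] <= 'z') /* black man */
      score -= pieceValue(board[i]);

  /* a real function would here also add bonuses/penalties for pawn
     structure, king safety, mobility, piece placement etc. */

  return score;
}
```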
Note: if you could make a perfect evaluation function that would completely accurately state a given position's true evaluation (considering all possible combinations of moves until the end of the game), you'd basically be done right there as your AI could just always make the move leading to the position which your evaluation function rates best, which would result in perfect play while searching just to depth 1. Though neural networks have gotten a lot closer to this ideal than we once were, as far as we can foresee ANY evaluation function will always be just an approximation, an estimate, a heuristic, many times far from the perfect evaluation, so we cannot stop at this. We have to program yet something more. However some more relaxed engines that don't aim to be among the best can already work in this lazy way and be pretty good opponents -- see for example the Maia engine.
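Just to show how simple such a lazy engine can be, here is a small C sketch of the depth 1 idea; the types and helper functions (Position, Move, generateMoves, makeMove, unmakeMove) are only assumed placeholders (move generation and board representation aren't shown), and evaluate is supposed to be a static evaluation returning a white-positive score as above.

```c
/* Hypothetical helpers assumed to exist elsewhere in the engine: */
typedef struct Position Position;         /* whole game state            */
typedef int Move;                         /* encoded move                */
int  generateMoves(Position *p, Move *m); /* fills array, returns count  */
void makeMove(Position *p, Move m);
void unmakeMove(Position *p, Move m);
int  evaluate(const Position *p);         /* static eval, + favors white */

/* Lazy depth 1 engine: plays whatever move the static evaluation rates
   best; with a truly perfect evaluation function this alone would play
   perfectly. */
Move pickMoveDepth1(Position *pos, int whiteToMove)
{
  Move moves[256];
  int count = generateMoves(pos, moves);

  if (count == 0)
    return 0; /* no legal moves (mate or stalemate), nothing to pick */

  Move best = moves[0];
  int bestScore = whiteToMove ? -1000000 : 1000000;

  for (int i = 0; i < count; ++i)
  {
    makeMove(pos, moves[i]);
    int score = evaluate(pos);
    unmakeMove(pos, moves[i]);

    if (whiteToMove ? (score > bestScore) : (score < bestScore))
    {
      bestScore = score;
      best = moves[i];
    }
  }

  return best;
}
```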
So secondly we need to implement a so called **search** algorithm -- typically some modification of the [minimax](minimax.md) algorithm, e.g. with alpha-beta pruning -- that recursively searches the game tree and looks for a move that will lead to the best result in the future, i.e. to a position for which the evaluation function gives the best value. This basic principle, especially the search part, can get very complex as there are many possible weaknesses and optimizations. For example (somewhat counterintuitively) it turns out to be a good idea to do iterative deepening, i.e. first searching to depth 1, then to depth 2, then to depth 3 etc., rather than searching to depth N right away. But again, this is all too complicated to expand on here. Just note for now that doing the search kind of improves on the basic static evaluation function by making it [dynamic](dynamic.md) and so greatly increases its accuracy (of course at the price of CPU time spent on searching).
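A rough sketch of what the search may look like follows, using the common negamax formulation of minimax with alpha-beta pruning plus an iterative deepening loop at the root; once again the types and helper functions are only assumed placeholders, not any standard API, and here evaluate is taken relative to the side to move (the white-positive value from the earlier sketch would simply be negated when black is to move), as negamax requires.

```c
/* Hypothetical helpers assumed to exist elsewhere in the engine: */
typedef struct Position Position;
typedef int Move;
int  generateMoves(Position *p, Move *m); /* fills array, returns count     */
void makeMove(Position *p, Move m);
void unmakeMove(Position *p, Move m);
int  evaluate(const Position *p);         /* static eval, side to move's
                                             point of view                  */

#define INF_SCORE 1000000

/* Negamax form of minimax with alpha-beta pruning: returns the score of
   the position from the point of view of the side to move. */
int search(Position *pos, int depth, int alpha, int beta)
{
  if (depth == 0)
    return evaluate(pos); /* leaf node: fall back to static evaluation */

  Move moves[256];
  int count = generateMoves(pos, moves);

  if (count == 0)
    return -INF_SCORE; /* simplification: no legal moves = mate (ignores stalemate) */

  for (int i = 0; i < count; ++i)
  {
    makeMove(pos, moves[i]);
    int score = -search(pos, depth - 1, -beta, -alpha); /* flip sides: negate score, swap window */
    unmakeMove(pos, moves[i]);

    if (score >= beta)
      return beta;   /* beta cutoff: the opponent would never allow this line */

    if (score > alpha)
      alpha = score; /* new best score so far */
  }

  return alpha;
}

/* Iterative deepening at the root: search to depth 1, then 2, then 3, ...
   (a real engine would stop when it runs out of time and reuse results of
   the shallower searches e.g. for move ordering). */
Move searchRoot(Position *pos, int maxDepth)
{
  Move moves[256], best = 0;
  int count = generateMoves(pos, moves);

  for (int depth = 1; depth <= maxDepth; ++depth)
  {
    int alpha = -INF_SCORE;

    for (int i = 0; i < count; ++i)
    {
      makeMove(pos, moves[i]);
      int score = -search(pos, depth - 1, -INF_SCORE, -alpha);
      unmakeMove(pos, moves[i]);

      if (score > alpha)
      {
        alpha = score;
        best = moves[i];
      }
    }
  }

  return best;
}
```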
Exhaustively searching the tree to great depths is not possible even with the most powerful hardware due to the astronomical number of possible move combinations, so the engine has to limit the depth quite greatly and use various [hacks](hacking.md), [approximations](approximation.md), [heuristics](heuristic.md) etc. Normally it will search all moves to a small depth (e.g. 2 or 3 half moves or *plies*) and then extend the search for interesting moves such as exchanges or checks. Maybe the greatest danger of searching algorithms is the so called **horizon effect** which has to be addressed somehow (e.g. by detecting quiet positions, so called *quiescence*). If not addressed, the horizon effect will make an engine misevaluate certain moves by stopping the evaluation at a certain depth even though the situation would play out further and lead to a vastly different result (imagine e.g. a queen taking a pawn which is guarded by another pawn; if the engine stops evaluating right after the pawn capture, it will think it has won a pawn, when in fact it has lost a queen). There are also many techniques for reducing the number of searched tree nodes and speeding up the search, for example pruning methods such as **alpha-beta** (which in turn works best when the searched moves are ordered well), or **transposition tables** (remembering already evaluated positions so that they don't have to be evaluated again when reached via a different path in the tree).
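To illustrate the quiescence idea, below is a sketch of how the depth 0 leaves of the search may be handled with a small capture-only search instead of a plain call to the static evaluation (the main search from the previous sketch would call quiescence() instead of evaluate() when it reaches depth 0); the helper generateCaptures, which generates only capture moves, is again just an assumed placeholder.

```c
/* Hypothetical helpers as before, plus capture-only move generation: */
typedef struct Position Position;
typedef int Move;
int  generateCaptures(Position *p, Move *m); /* capture moves only, returns count */
void makeMove(Position *p, Move m);
void unmakeMove(Position *p, Move m);
int  evaluate(const Position *p);            /* static eval, side to move's view  */

/* Quiescence search: keeps searching captures until the position "quiets
   down", so that the evaluation is never cut off in the middle of an
   exchange (the horizon effect). */
int quiescence(Position *pos, int alpha, int beta)
{
  int standPat = evaluate(pos); /* score if we simply stop capturing now */

  if (standPat >= beta)
    return beta;

  if (standPat > alpha)
    alpha = standPat;

  Move moves[256];
  int count = generateCaptures(pos, moves);

  for (int i = 0; i < count; ++i)
  {
    makeMove(pos, moves[i]);
    int score = -quiescence(pos, -beta, -alpha);
    unmakeMove(pos, moves[i]);

    if (score >= beta)
      return beta;

    if (score > alpha)
      alpha = score;
  }

  return alpha;
}
```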