==Measures of game complexity==

=== State-space complexity ===
The ''state-space complexity'' of a game is the number of legal game positions reachable from the initial position of the game.<ref name="Allis1994"/>

When this is too hard to calculate, an [[upper bound]] can often be computed by also counting (some) illegal positions (positions that can never arise in the course of a game).

=== Game tree size ===
The ''game tree size'' is the total number of possible games that can be played. This is the number of [[Leaf nodes#Terminology|leaf nodes]] in the [[game tree]] rooted at the game's initial position.

The game tree is typically vastly larger than the state space, because the same position can occur in many games when the moves are made in a different order (for example, a [[tic-tac-toe]] position with two Xs and one O on the board could have been reached in two different ways, depending on where the first X was placed). An upper bound for the size of the game tree can sometimes be computed by simplifying the game in a way that only increases the size of the game tree (for example, by allowing illegal moves) until it becomes tractable.

For games where the number of moves is not limited (for example, by the size of the board or by a rule about repetition of position), the game tree is generally infinite.

=== Decision trees ===
A [[decision tree]] is a subtree of the game tree, with each position labelled "player A wins", "player B wins", or "draw" if that position can be proved to have that value (assuming best play by both sides) by examining only other positions in the graph. Terminal positions can be labelled directly. With player A to move, a position can be labelled "player A wins" if any successor position is a win for A; "player B wins" if all successor positions are wins for B; or "draw" if all successor positions are either drawn or wins for B. (With player B to move, corresponding positions are labelled similarly.)

The following two methods of measuring game complexity use decision trees:

==== Decision complexity ====
The ''decision complexity'' of a game is the number of leaf nodes in the smallest decision tree that establishes the value of the initial position.

==== Game-tree complexity ====
The ''game-tree complexity'' of a game is the number of leaf nodes in the smallest ''full-width'' decision tree that establishes the value of the initial position.<ref name="Allis1994"/> A full-width tree includes all nodes at each depth.

This is an estimate of the number of positions one would have to evaluate in a [[minimax]] search to determine the value of the initial position.

It is hard even to estimate the game-tree complexity, but for some games an approximation can be given by <math>GTC \geq b^d</math>, where {{Mvar|b}} is the game's average [[branching factor]] and {{Mvar|d}} is the number of [[Ply (chess)|plies]] in an average game.
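For a game as small as [[tic-tac-toe]], the first two measures can be obtained by exhaustive enumeration. The following Python sketch is illustrative only (it is not taken from the cited sources, and the board representation and helper names are arbitrary choices): it walks the game tree from the empty board, recording every distinct legal position it meets (the state-space complexity, without symmetry reduction) and counting every completed game (the game-tree size).

<syntaxhighlight lang="python">
# Illustrative sketch only: exhaustive enumeration of tic-tac-toe.
# Boards are 9-character strings ('X', 'O' or ' '), indices 0-8 row by row.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board, player, seen):
    """Record every position reached and return the number of completed games."""
    seen.add(board)
    if winner(board) is not None or ' ' not in board:
        return 1                                 # terminal position: one finished game
    total = 0
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + player + board[i + 1:]
            total += count_games(child, 'O' if player == 'X' else 'X', seen)
    return total

positions = set()
games = count_games(' ' * 9, 'X', positions)
print(len(positions))   # 5478 legal positions (state-space complexity, no symmetry reduction)
print(games)            # 255168 distinct complete games (game-tree size)
</syntaxhighlight>

The same traversal could also record the average branching factor {{Mvar|b}} and the average game length {{Mvar|d}} to form the rough estimate <math>b^d</math> described above.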
=== Computational complexity ===
The [[Computational complexity theory|''computational complexity'']] of a game describes the [[Asymptotic analysis|asymptotic]] difficulty of a game as it grows arbitrarily large, expressed in [[big O notation]] or as membership in a [[complexity class]]. This concept does not apply to particular games, but rather to games that have been [[generalized game|generalized]] so they can be made arbitrarily large, typically by playing them on an ''n''-by-''n'' board. (From the point of view of computational complexity, a game on a board of fixed size is a finite problem that can be solved in O(1), for example by a look-up table from positions to the best move in each position.)

The asymptotic complexity is defined by the most efficient algorithm for solving the game (in terms of whatever [[computational resource]] one is considering). The most common complexity measure, [[computation time]], is always lower-bounded by the logarithm of the asymptotic state-space complexity, since a solution algorithm must work for every possible state of the game. It will be upper-bounded by the complexity of any particular algorithm that works for the family of games. Similar remarks apply to the second-most commonly used complexity measure, the amount of [[DSPACE|space]] or [[computer memory]] used by the computation. It is not obvious that there is any lower bound on the space complexity for a typical game, because the algorithm need not store game states; however, many games of interest are known to be [[PSPACE-hard]], and it follows that their space complexity will be lower-bounded by the logarithm of the asymptotic state-space complexity as well (technically the bound is only a polynomial in this quantity, but it is usually known to be linear).

* The [[depth-first search|depth-first]] [[minimax strategy]] will use computation time proportional to the game's tree-complexity (since it must explore the whole tree) and an amount of memory polynomial in the logarithm of the tree-complexity (since the algorithm must always store one node of the tree at each possible move-depth, and the number of nodes at the highest move-depth is precisely the tree-complexity); see the first sketch below.
* [[Backward induction]] will use both memory and time proportional to the state-space complexity, as it must compute and record the correct move for each possible position; see the second sketch below.
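The first point can be illustrated with the same tic-tac-toe setting. The Python sketch below (again illustrative only, reusing the <code>winner</code> helper and 9-character board strings from the earlier sketch) performs a depth-first minimax search: it visits every leaf of the game tree, so its running time tracks the game-tree complexity, while only the current line of play, one position per ply, is held in memory.

<syntaxhighlight lang="python">
# Illustrative sketch only: depth-first minimax search.
# Reuses winner() and the 9-character board strings from the earlier sketch.

def minimax(board, player):
    """Return +1 if X can force a win, -1 if O can, and 0 for a draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if ' ' not in board:                        # full board with no winner: draw
        return 0
    values = []
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + player + board[i + 1:]
            values.append(minimax(child, 'O' if player == 'X' else 'X'))
    return max(values) if player == 'X' else min(values)

print(minimax(' ' * 9, 'X'))                    # 0: tic-tac-toe is a draw under best play
</syntaxhighlight>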
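The second point can be sketched in the same setting, under the same assumptions (this is not an implementation from the cited sources). The reachable positions are first grouped by the number of marks on the board and then valued from the deepest level back to the initial position, so one value is computed and stored per reachable position, making both time and memory proportional to the state-space complexity.

<syntaxhighlight lang="python">
# Illustrative sketch only: backward induction over the tic-tac-toe state space.
# Reuses winner() and the 9-character board strings from the earlier sketches.

def positions_by_depth():
    """Group every reachable position by the number of marks on the board."""
    levels = [set() for _ in range(10)]
    levels[0].add(' ' * 9)
    stack = [' ' * 9]
    while stack:
        board = stack.pop()
        if winner(board) is not None or ' ' not in board:
            continue                                    # no moves from terminal positions
        player = 'X' if board.count(' ') % 2 == 1 else 'O'
        for i, cell in enumerate(board):
            if cell == ' ':
                child = board[:i] + player + board[i + 1:]
                level = 9 - child.count(' ')
                if child not in levels[level]:
                    levels[level].add(child)
                    stack.append(child)
    return levels

levels = positions_by_depth()
value = {}
for marks in range(9, -1, -1):                          # deepest positions first
    for board in levels[marks]:
        w = winner(board)
        if w is not None:
            value[board] = 1 if w == 'X' else -1
        elif ' ' not in board:
            value[board] = 0                            # drawn terminal position
        else:
            player = 'X' if marks % 2 == 0 else 'O'
            children = [board[:i] + player + board[i + 1:]
                        for i, cell in enumerate(board) if cell == ' ']
            child_values = [value[c] for c in children]
            value[board] = max(child_values) if player == 'X' else min(child_values)

print(len(value), value[' ' * 9])                       # one stored value per position; 0 = draw
</syntaxhighlight>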