==== History ====
The [[Claude Shannon#Shannon's computer chess program|first paper]] on chess search was by [[Claude Shannon]] in 1950.<ref name="wheland197810">{{cite news | url=https://archive.org/stream/byte-magazine-1978-10/1978_10_BYTE_03-10_Chess_for_the_Microcomputer#page/n167/mode/2up | title=A Computer Chess Tutorial | work=BYTE | date=October 1978 | access-date=17 October 2013 | author=Wheland, Norman D. | page=168}}</ref> He predicted the two main search strategies which would be used, which he labeled "Type A" and "Type B",<ref name="shannon">{{Harvcol|Shannon|1950}}</ref> before anyone had programmed a computer to play chess.

Type A programs would use a "[[brute-force search|brute force]]" approach, examining every possible position for a fixed number of moves using a naive [[minimax|minimax algorithm]]. Shannon believed this would be impractical for two reasons.

First, with approximately thirty moves possible in a typical real-life position, he expected that searching the approximately 10<sup>9</sup> positions involved in looking three moves ahead for both sides (six [[Ply (game theory)|plies]]) would take about sixteen minutes, even in the "very optimistic" case that the chess computer evaluated a million positions every second. (It took about forty years to achieve this speed.) A later search algorithm called [[alpha–beta pruning]], which maintains upper and lower bounds on a node's possible score and prunes any line that cannot affect the final result, reduced the effective branching factor of the game tree to roughly its square root in the best case, but even this was not enough for chess programs of the time to overcome the exponential explosion of the tree.

Second, it ignored the problem of quiescence: a position should only be evaluated at the end of an [[exchange (chess)|exchange]] of pieces or other important sequence of moves ('lines'), because a static evaluation taken in the middle of one is misleading. He expected that adapting Type A to cope with this would greatly increase the number of positions needing to be looked at and slow the program down still further.

This led naturally to what is referred to as "selective search" or "Type B search", using chess knowledge (heuristics) to select a few presumably good moves from each position to search, and prune away the others without searching. Instead of wasting processing power examining bad or trivial moves, Shannon suggested that Type B programs would use two improvements, sketched in the code example below:
# Employ a [[quiescence search]].
# Employ forward pruning; i.e. only look at a few good moves for each position.
This would enable them to look further ahead ('deeper') at the most significant lines in a reasonable time.

However, early attempts at selective search often resulted in the best move or moves being pruned away. As a result, little or no progress was made over the next 25 years, which were dominated by this first iteration of the selective search paradigm. The best program produced in this early period was Mac Hack VI in 1967; it played at about the same level as the average amateur (C class on the United States Chess Federation rating scale).

Meanwhile, hardware continued to improve, and in 1974, brute force searching was implemented for the first time in the Northwestern University Chess 4.0 program. In this approach, all alternative moves at a node are searched, and none are pruned away.
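The following is a minimal, illustrative sketch of fixed-depth search with [[alpha–beta pruning]] plus a [[quiescence search]] that resolves captures before evaluating. It is not Shannon's own formulation or any real engine's code: the helpers <code>evaluate</code>, <code>legal_moves</code>, <code>capture_moves</code>, and <code>make_move</code> are hypothetical placeholders, and mate and stalemate handling is omitted.

<syntaxhighlight lang="python">
# Illustrative sketch only: evaluate(), legal_moves(), capture_moves() and
# make_move() are hypothetical helpers, not part of any real engine's API.
# Scores follow the negamax convention: evaluate() returns the score from
# the point of view of the side to move.

def quiescence(pos, alpha, beta):
    """Search only captures so that evaluation happens on 'quiet' positions."""
    stand_pat = evaluate(pos)        # static score if we stop searching here
    if stand_pat >= beta:
        return beta                  # already too good: opponent avoids this line
    alpha = max(alpha, stand_pat)
    for move in capture_moves(pos):  # extend only forcing (capture) moves
        score = -quiescence(make_move(pos, move), -beta, -alpha)
        if score >= beta:
            return beta              # beta cutoff: prune the remaining captures
        alpha = max(alpha, score)
    return alpha

def alphabeta(pos, depth, alpha, beta):
    """Fixed-depth negamax search with alpha-beta pruning."""
    if depth == 0:
        return quiescence(pos, alpha, beta)  # resolve exchanges before scoring
    for move in legal_moves(pos):
        score = -alphabeta(make_move(pos, move), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta              # beta cutoff: this node is refuted
        alpha = max(alpha, score)
    return alpha
</syntaxhighlight>

Forward pruning, by contrast, would replace the loop over <code>legal_moves(pos)</code> with a loop over only the few moves a heuristic rates as promising; as noted above, early heuristics too often pruned the best move away.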
The developers of Chess 4.0 discovered that the time required to simply search all the moves was much less than the time required to apply knowledge-intensive heuristics that selected just a few of them, and the benefit of not prematurely or inadvertently pruning away good moves resulted in substantially stronger play.

In the 1980s and 1990s, progress was finally made in the selective search paradigm, with the development of [[quiescence search]], null move pruning, and other modern selective search heuristics. These heuristics made far fewer mistakes than earlier ones, and the deeper search they made possible proved to be worth the time they cost, so they were widely adopted by many engines.

While many modern programs still use [[alpha-beta search]] as the substrate of their search algorithm, these additional selective search heuristics mean that such programs no longer perform a "brute force" search. Instead they rely heavily on the heuristics to extend lines the program considers good and to prune and reduce lines the program considers bad, to the point where most of the nodes in the search tree are pruned away, enabling modern programs to search very deep.

In 2006, [[Rémi Coulom]] created [[Monte Carlo tree search]], another kind of Type B selective search. In 2007, Levente Kocsis and Csaba Szepesvári created an adaptation of Monte Carlo tree search called Upper Confidence bounds applied to Trees (UCT). In 2011, Chris Rosin developed a variation of UCT called Predictor + Upper Confidence bounds applied to Trees (PUCT). PUCT was then used in [[AlphaZero]] in 2017, and later in [[Leela Chess Zero]] in 2018.
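As an illustration of the selection rule that distinguishes PUCT from plain UCT, the following sketch scores each child node by its mean result plus an exploration bonus weighted by a prior probability, in the style described in the AlphaZero paper. The move names, statistics, and the value of <code>c_puct</code> here are invented for illustration, not taken from any real engine.

<syntaxhighlight lang="python">
import math

# Illustrative sketch of PUCT child selection; all statistics are invented.

def puct_score(q, prior, child_visits, parent_visits, c_puct=1.5):
    """Mean value Q plus an exploration bonus U scaled by the policy prior."""
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u

def select_child(children, parent_visits):
    """Pick the child maximising Q + U; unvisited moves lean on their prior."""
    return max(children, key=lambda c: puct_score(
        c["q"], c["prior"], c["visits"], parent_visits))

# Hypothetical statistics for three candidate moves at one node:
children = [
    {"move": "e2e4", "q": 0.10, "prior": 0.50, "visits": 20},
    {"move": "d2d4", "q": 0.05, "prior": 0.30, "visits": 5},
    {"move": "g1f3", "q": 0.00, "prior": 0.20, "visits": 0},
]
print(select_child(children, parent_visits=25)["move"])  # prints "g1f3"
</syntaxhighlight>

Because the bonus shrinks as a move accumulates visits, the search concentrates on moves that either score well or that the prior rates highly, which is what lets programs such as [[AlphaZero]] and [[Leela Chess Zero]] search selectively rather than by brute force.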