{{Short description|Weakness of game-playing artificial intelligence}}
{{more footnotes|date=July 2012}}
The '''horizon effect''', also known as the '''horizon problem''', is a problem in [[artificial intelligence]] whereby, in many games, the number of possible states or positions is immense and computers can feasibly search only a small portion of them, typically a few [[Ply (game theory)|plies]] down the [[game tree]]. A computer searching only a fixed number of plies may therefore make a move that is poor in the long term: the move's drawbacks are not "visible" because the computer does not search deeply enough for its [[evaluation function]] to reveal the true evaluation of the line. The name is an analogy to looking out over the curved surface of the Earth: a threat that lies beyond the horizon cannot be seen.

When evaluating a large [[game tree]] using techniques such as [[minimax]] with [[alpha-beta pruning]], search depth is limited for feasibility reasons. Evaluating a partial tree, however, may give a misleading result: when a significant change lies just over the horizon of the search depth, the computational device falls victim to the horizon effect.

In 1973, [[Hans Berliner]] named this phenomenon, which he and other researchers had observed, the "Horizon Effect".<ref>{{cite journal|last1=Berliner|first1=Hans J.|title=Some Necessary Conditions for a Master Chess Program |journal=Proceedings of the 3rd International Joint Conference on Artificial Intelligence. Stanford, CA, USA, August 20–23, 1973 |date=1973|pages=77–85 |url=http://dblp.org/rec/conf/ijcai/Berliner73}}</ref> He split the effect into two: the negative horizon effect "results in creating diversions which ineffectively delay an unavoidable consequence or make an unachievable one appear achievable."
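The fixed-depth search that produces the effect can be illustrated with a minimal sketch. Everything here is hypothetical (the `moves`, `apply`, and `evaluate` helpers and the toy game tree are stand-ins, not any real engine's API): a move that looks good within the horizon hides a loss one ply beyond it.

```python
# Minimal depth-limited minimax sketch.  `moves`, `apply`, and
# `evaluate` are hypothetical stand-ins for a game interface.
# The fixed `depth` cutoff is the search "horizon": anything beyond
# it is judged only by the static evaluation.

def minimax(state, depth, maximizing, moves, apply, evaluate):
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # static score at the horizon
    scores = (minimax(apply(state, m), depth - 1, not maximizing,
                      moves, apply, evaluate) for m in legal)
    return max(scores) if maximizing else min(scores)

# Toy tree: move A looks best at depth 1, but hides a trap one ply
# beyond that horizon; move B is safely neutral.
children = {"root": ["A", "B"], "A": ["A-trap"], "B": ["B-quiet"]}
static = {"A": 5, "A-trap": -10, "B": 0, "B-quiet": 0}

moves = lambda s: children.get(s, [])
apply = lambda s, m: m
evaluate = lambda s: static[s]

shallow = minimax("root", 1, True, moves, apply, evaluate)  # 5: trap unseen
deep = minimax("root", 2, True, moves, apply, evaluate)     # 0: trap seen
```

At depth 1 the search prefers the trap move A (score 5); only at depth 2 does the loss behind it become visible and the neutral move B win out (score 0).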
For the "largely overlooked" positive horizon effect, "the program grabs much too soon at a consequence that can be imposed on an opponent at leisure, frequently in a more effective form."

The horizon effect can be somewhat mitigated by [[quiescence search]]. This technique extends the effort spent searching board states left in volatile positions and allocates less effort to easier-to-assess quiet ones. For example, "scoring" the worth of a chess position often involves a [[Chess piece relative value|material value count]], but this count is misleading if there are [[Hanging Piece|hanging piece]]s or an imminent checkmate. A board state in which the white queen has just captured a protected black knight appears, to a naive material count, advantageous to White, who is now up a knight, but it is probably disastrous, because the queen will be recaptured one ply later. A quiescence search tells the search algorithm to play out the [[capture (chess)|capture]]s and [[check (chess)|check]]s before scoring [[leaf node]]s in volatile positions.

== Examples ==
In [[chess]], suppose the computer searches the game tree only to six [[Ply (game theory)|plies]] and determines from the current position that the queen will be lost on the sixth ply; and suppose there is a move within the search depth that [[sacrifice (chess)|sacrifice]]s a rook, pushing the loss of the queen to the eighth ply. This is, of course, a worse move than sacrificing the queen, because it loses both the queen and a rook. But because the loss of the queen has been pushed over the horizon of search, the search does not discover or evaluate it. Losing the rook seems better than losing the queen, so the sacrifice is returned as the best option, when in fact delaying the loss of the queen has additionally weakened the computer's position.
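The quiescence idea can be sketched in negamax form. The helpers (`evaluate`, `noisy_moves`, `apply`) and the toy position values are hypothetical stand-ins, with scores relative to the side to move: a naive material count after the queen captures the protected knight looks good for White, while playing out the recapture reveals the loss.

```python
# Sketch of a quiescence search (negamax form).  `evaluate`,
# `noisy_moves`, and `apply` are hypothetical helpers, not a real
# engine API; scores are relative to the side to move.
INF = float("inf")

def quiesce(state, alpha, beta, evaluate, noisy_moves, apply):
    # Stand-pat: the side to move may decline further tactics.
    stand_pat = evaluate(state)
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    # Keep searching only "noisy" moves (captures/checks) past the
    # nominal horizon, so volatile positions settle before scoring.
    for move in noisy_moves(state):
        score = -quiesce(apply(state, move), -beta, -alpha,
                         evaluate, noisy_moves, apply)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# Toy position: the white queen has just taken a protected knight.
noisy = {"after_QxN": ["KxQ"], "after_KxQ": []}
static = {"after_QxN": -3,   # black to move: white is up a knight
          "after_KxQ": -6}   # white to move: queen traded for a knight

naive = -static["after_QxN"]                 # +3: "up a knight"
true_score = -quiesce("after_QxN", -INF, INF,
                      static.__getitem__, noisy.__getitem__,
                      lambda s, m: {"KxQ": "after_KxQ"}[m])  # -6
```

The static count scores the capture +3 for White, while the quiescence search plays out the recapture and returns −6, exposing the hanging queen.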
As another example, while some [[perpetual check]]s quickly trigger a threefold-repetition draw, others can involve a queen chasing the king around the board, varying the position each time, so that the actual forced draw lies many moves in the future. If no quiescence search plays out the checks, the engine may fail to detect the possibility and blunder a winning position into a drawn one.

In [[Go (board game)|Go]], the horizon effect is a major concern for writing an AI capable of even beginner-level play, and is part of why alpha-beta search was a weak approach to [[Computer Go]] compared with later machine-learning and pattern-recognition approaches. It is very common for certain stones to be "dead" yet to require many moves to actually capture if fought over. The horizon effect may cause a naive algorithm to assess the situation incorrectly and believe the stones are savable, by finding a line that keeps the doomed stones alive as of the move at which the search tree stops. While the death of the group can indeed be delayed, it cannot be prevented, and contesting it only allows more stones to be captured.
A classic example that beginners learn is the [[Ladder (Go)|Go ladder]], but the same general idea occurs even in situations that are not strictly ladders.<ref>{{cite web |url=https://staff.itee.uq.edu.au/janetw/Computer%20Go/go-vs-chess.pdf |title=The Challenge of Go as a Domain for AI Research: A Comparison Between Go and Chess |first1=Jay |last1=Burmeister |first2=Janet |last2=Wiles |date=1995 |pages=181–186 |doi=10.1109/ANZIIS.1995.705737}}</ref>

{| style="display:inline; display:inline-table;"
|-
| style="border: solid thin; padding: 3px;" | {{Goban 13x13
<!--A B C D E F G H J K L M N-->
| | | | | | | | | | c| b| b| b<!--13-->
| | | | | | | | | | | w| w| w<!--12-->
| | | | | | | | | | | | | <!--11-->
| | | | x| | | | | | x| | | <!--10-->
| | | | | | | | | | | | | <!--9-->
| | | | | | | | | | | | | <!--8-->
| | | | | | | | | | | | | <!--7-->
| | | | | | | | | | | | | <!--6-->
| | | | | | | | | | | | | <!--5-->
| | | | x| | | | | | x| | | <!--4-->
| | | | | | | | | | | | | <!--3-->
| | | | | | | | | | | | | <!--2-->
| | | | | | | | | | | | | <!--1-->
<!--A B C D E F G H J K L M N-->|14}}
|-
| style="text-align:center" | Black to play. Playing on the X spot<br/>gets the stones briefly out of [[Atari (go)|atari]],<br/>and thus appears a useful move<br/>to shallow searches...
|}
{| style="display:inline; display:inline-table;"
|-
| style="border: solid thin; padding: 3px;" | {{Goban 13x13
<!--A B C D E F G H J K L M N-->
| | b| b| b| b| b| b| b| b| b| b| b| b<!--13-->
| | w| w| w| w| w| w| w| w| w| w| w| w<!--12-->
| | | | | | | | | | | | | <!--11-->
| | | | x| | | | | | x| | | <!--10-->
| | | | | | | | | | | | | <!--9-->
| | | | | | | | | | | | | <!--8-->
| | | | | | | | | | | | | <!--7-->
| | | | | | | | | | | | | <!--6-->
| | | | | | | | | | | | | <!--5-->
| | | | x| | | | | | x| | | <!--4-->
| | | | | | | | | | | | | <!--3-->
| | | | | | | | | | | | | <!--2-->
| | | | | | | | | | | | | <!--1-->
<!--A B C D E F G H J K L M N-->|14}}
|-
| style="text-align:center" | ...but Black loses far more than<br/>three stones if the [[Ladder (Go)|ladder]] is<br/>foolishly played to completion.
|}

==See also==
* [[Fog of war]]
* [[Anti-computer tactics]]
* [[Monte Carlo tree search]]

== References ==
{{Reflist}}

== Further reading ==
* {{Russell Norvig 2003 | pages = 174}}

==External links==
* [https://www.chessprogramming.org/Horizon%20Effect Horizon Effect] at the Chess Programming Wiki (CPW)

[[Category:Game artificial intelligence]]
[[Category:Computer chess]]