=== Software players ===
Go long posed a daunting challenge to [[computer programmer]]s, putting forward "difficult decision-making tasks, an intractable search space, and an optimal solution so complex it appears infeasible to directly approximate using a policy or value function".<ref name="AlphaGo" /> Prior to 2015,<ref name="AlphaGo">{{Cite journal|title = Mastering the game of Go with deep neural networks and tree search|journal = [[Nature (journal)|Nature]]| issn= 0028-0836|pages = 484–489|volume = 529|issue = 7587|doi = 10.1038/nature16961|pmid = 26819042|first1 = David|last1 = Silver|author-link1=David Silver (programmer)|first2 = Aja|last2 = Huang|author-link2=Aja Huang|first3 = Chris J.|last3 = Maddison|first4 = Arthur|last4 = Guez|first5 = Laurent|last5 = Sifre|first6 = George van den|last6 = Driessche|first7 = Julian|last7 = Schrittwieser|first8 = Ioannis|last8 = Antonoglou|first9 = Veda|last9 = Panneershelvam|first10= Marc|last10= Lanctot|first11= Sander|last11= Dieleman|first12=Dominik|last12= Grewe|first13= John|last13= Nham|first14= Nal|last14= Kalchbrenner|first15= Ilya|last15= Sutskever|author-link15=Ilya Sutskever|first16= Timothy|last16= Lillicrap|first17= Madeleine|last17= Leach|first18= Koray|last18= Kavukcuoglu|first19= Thore|last19= Graepel|first20= Demis |last20=Hassabis|author-link20=Demis Hassabis|date= 28 January 2016|bibcode = 2016Natur.529..484S|s2cid = 515925}}{{closed access}}</ref> the best Go programs only managed to reach [[Go ranks and ratings#Kyu and dan ranks|amateur dan]] level.<ref name=humancomputermatchs>{{cite web|last=Wedd|first=Nick|title=Human-Computer Go Challenges|url=http://www.computer-go.info/h-c/index.html|work=computer-go.info|access-date=2011-10-28}}</ref> On smaller 9×9 and 13×13 boards, computer programs fared better and were able to play at a level comparable to professionals.
Many in the field of [[artificial intelligence]] consider Go to require more elements that mimic human thought than [[chess]].<ref>{{Citation| url=https://query.nytimes.com/gst/fullpage.html?res=9C04EFD6123AF93AA15754C0A961958260 | title=To Test a Powerful Computer, Play an Ancient Game | last=Johnson | first=George | work=[[The New York Times]]| date=1997-07-29 | access-date = 2008-06-16}}</ref> [[File:13 by 13 game finished.jpg|thumb|A finished beginner's game on a 13×13 board]] The reasons why computer programs had not played Go at the [[Go ranks and ratings#Kyu and dan ranks|professional dan]] level prior to 2016 include:<ref>{{Citation |url=http://www.intelligentgo.org/en/computer-go/overview.html |publisher=Intelligent Go Foundation |title=Overview of Computer Go |access-date=2008-06-16 |archive-url=https://web.archive.org/web/20080531072850/http://www.intelligentgo.org/en/computer-go/overview.html |archive-date=2008-05-31 |url-status=dead}}</ref>
* The number of spaces on the board is much larger (over five times the number of spaces on a chess board: 361 vs. 64). On most turns there are many more possible moves in Go than in chess. Throughout most of the game, the number of legal moves stays at around 150–250 per turn, and rarely falls below 100 (in chess, the average number of moves is 37).<ref>{{citation | title = How to beat your chess computer | first1 = Raymond | last1 = Keene | first2 = David | last2 = Levy | publisher = Batsford Books | year = 1991 | page = 85}}</ref> Because an [[exhaustive search|exhaustive computer program]] for Go must calculate and compare every possible legal move in each [[Ply (game theory)|ply]] (player turn), its ability to calculate the best plays is sharply reduced when there are a large number of possible moves. Most computer game algorithms, such as those for chess, compute several moves in advance.
Given an average of 200 available moves through most of the game, for a computer to calculate its next move by exhaustively anticipating the next four moves of each possible play (two of its own and two of its opponent's), it would have to consider more than 320 billion (3.2{{e|11}}) possible combinations. Exhaustively calculating the next eight moves would require computing 512 quintillion (5.12{{e|20}}) possible combinations. {{As of|2014|3}}, the most powerful supercomputer in the world, [[National University of Defense Technology|NUDT]]'s "[[Tianhe-2]]", can sustain 33.86 [[FLOPS|petaflops]].<ref>{{cite web |url=https://spectrum.ieee.org/tianhe2-caps-top-10-supercomputers |title=China's Tianhe-2 Caps Top 10 Supercomputers |access-date=2014-04-14 |author=Davey Alba |author-link=Davey Alba |date=2014-06-17 |publisher=IEEE Spectrum }}</ref> At this rate, even given an exceedingly low estimate of 10 operations required to assess the value of one play of a stone, Tianhe-2 would require roughly 42 hours to assess all possible combinations of the next eight moves in order to make a single play.
* The placement of a single stone in the initial phase can affect the play of the game a hundred or more moves later. A computer would have to predict this influence, and it would be unworkable to attempt to exhaustively analyze the next hundred moves.
* In capture-based games (such as chess), a position can often be evaluated relatively easily, such as by calculating who has a material advantage or more active pieces.{{efn|1=While chess position evaluation is simpler than Go position evaluation, it is still more complicated than simply calculating material advantage or piece activity; pawn structure and king safety matter, as do the possibilities in further play. The complexity of the algorithm differs per engine.<ref>{{citation|last=Shannon|first=Claude|year=1950|title=Programming a Computer for Playing Chess|publisher=Philosophical Magazine|series=Ser. 7|volume=41|issue=314|url=https://archive.computerhistory.org/projects/chess/related_materials/text/2-0%20and%202-1.Programming_a_computer_for_playing_chess.shannon/2-0%20and%202-1.Programming_a_computer_for_playing_chess.shannon.062303002.pdf|access-date=12 December 2021}}</ref><ref>{{citation|title=Learning to Play the Game of Chess|last=Thrun|first=Sebastian|year=1995|publisher=MIT Press|url=https://proceedings.neurips.cc/paper/1994/file/d7322ed717dedf1eb4e6e52a37ea7bcd-Paper.pdf|access-date=12 December 2021}}</ref><ref>{{citation|title=A Self-Learning, Pattern-Oriented Chess Program|last=Levinson|first=Robert|year=1989|publisher=ICCA Journal|volume=12|issue=4}}</ref>}} In Go, there is often no easy way to evaluate a position.<ref name=research.microsoft.com>{{cite web|last=Stern|first=David|title=Modelling Uncertainty in the Game of Go|url=http://www.cs.brown.edu/~ynm/Papers/AAAI06-312.pdf|work=[[Cornell University]]|access-date=15 May 2014|archive-url=https://web.archive.org/web/20130525131512/http://cs.brown.edu/~ynm/Papers/AAAI06-312.pdf|archive-date=25 May 2013}}</ref> However, a 6-kyu human can evaluate a position at a glance to see which player has more territory, and even beginners can estimate the score within 10 points, given time to count it. The number of stones on the board (material advantage) is only a weak indicator of the strength of a position, and a territorial advantage (more empty points surrounded) for one player might be compensated by the opponent's strong positions and influence all over the board. Normally a 3-dan can easily judge most of these positions.

It was not until August 2008 that a computer won a game against a professional-level player at a handicap of 9 stones, the greatest handicap normally given to a weaker opponent.
The first such victory was scored by the MoGo program, in an exhibition game played during the US Go Congress.<ref>{{cite web| title= Supercomputer with innovative software beats Go Professional| url= http://www.cs.unimaas.nl/g.chaslot/muyungwan-mogo/| access-date= 2008-12-19| url-status= dead| archive-url= https://web.archive.org/web/20090101023512/http://www.cs.unimaas.nl/g.chaslot/muyungwan-mogo/| archive-date= 2009-01-01}}</ref><ref>{{cite web | title= AGA News: Kim Prevails Again In Man Vs Machine Rematch | url=http://www.usgo.org/news/ | access-date = 2009-08-08}}</ref> By 2013, a win at the professional level of play was accomplished with a four-stone advantage.<ref>{{Cite magazine|url = https://www.wired.com/2014/05/the-world-of-computer-go/|title = The Mystery of Go, the Ancient Game That Computers Still Can't Win|last = Levinovitz|first = Alan|date = May 12, 2014|magazine = Wired|access-date = December 8, 2015|department = Business|at = The Electric Sage Battle}}</ref><ref>{{Cite magazine|url = https://www.wired.com/2015/12/google-and-facebook-race-to-solve-the-ancient-game-of-go|title = Google and Facebook Race To Solve the Ancient Game of Go With AI|last = Metz|first = Cade|date = December 7, 2015|magazine = Wired|access-date = December 8, 2015|department = Business}}</ref> In October 2015, [[Google DeepMind]]'s program [[AlphaGo]] beat [[Fan Hui]], the European Go champion and a [[Go ranks and ratings|2 dan]] (out of a possible 9 dan) professional, [[AlphaGo versus Fan Hui|five times out of five]] with no handicap on a full-size 19×19 board.<ref name="AlphaGo" /> AlphaGo used a fundamentally different paradigm from earlier Go programs: it included very little direct instruction, instead relying mostly on [[deep learning]], playing itself in hundreds of millions of games so that it could evaluate positions more intuitively.
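The self-play idea can be illustrated with a toy sketch. The code below is not AlphaGo's architecture: it learns a value function for a trivial take-away game purely from games played against itself, with a lookup table standing in for AlphaGo's neural networks; the game, names, and parameters are all invented for the example.

```python
import random

# Toy sketch of learning an evaluation purely from self-play (a tabular
# stand-in for AlphaGo's value network). Game: players alternately take
# 1 or 2 stones from a pile; whoever takes the last stone wins. Piles
# that are multiples of 3 are lost for the player to move.
N_MAX = 12
V = {n: 0.5 for n in range(1, N_MAX + 1)}  # win-probability estimate for the mover
ALPHA, EPS = 0.1, 0.2                      # learning rate, exploration rate

def self_play_game():
    pile, player = N_MAX, 0
    visited = []                           # (pile, player-to-move) pairs
    while pile > 0:
        visited.append((pile, player))
        moves = [m for m in (1, 2) if m <= pile]
        if random.random() < EPS:
            move = random.choice(moves)    # explore
        else:                              # leave the opponent the worst pile
            move = min(moves, key=lambda m: V.get(pile - m, 0.0))
        pile -= move
        player ^= 1
    winner = player ^ 1                    # the player who took the last stone
    for p, mover in visited:               # nudge values toward the outcome
        target = 1.0 if mover == winner else 0.0
        V[p] += ALPHA * (target - V[p])

random.seed(0)
for _ in range(5000):
    self_play_game()

# Learned values reflect the game's structure without any hand-written
# evaluation rule: losing piles (3, 6) score low, winning piles (1, 2) high.
print(round(V[3], 2), round(V[1], 2))
```

No position-evaluation rule is ever written down; the table converges because good moves win more self-play games, which is the core of the approach the paragraph describes (scaled up enormously, with deep networks and tree search, in AlphaGo itself).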
In March 2016, Google next challenged [[Lee Sedol]], a 9 dan considered the top player in the world in the early 21st century,<ref>{{ cite web|title= History of Go Ratings |url= http://www.goratings.org/history/ |author=<!--Staff writer(s); no by-line.--> |website= goratings.org |access-date= 18 March 2016}}</ref> to a [[AlphaGo versus Lee Sedol|five-game match]]. Leading up to the match, Lee Sedol and other top professionals were confident that he would win;<ref>{{cite web|title= Lee Se-dol confident about beating AlphaGo |url= https://www.koreatimes.co.kr/www/news/tech/2016/03/325_199865.html |author=<!--Staff writer(s); no by-line.--> |website= [[The Korea Times]] |date= 8 March 2016 |access-date= 18 March 2016}}</ref> however, AlphaGo defeated Lee in four of the five games.<ref name="BBC News 12 March 2016">{{cite web | title= Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol |url= https://www.bbc.co.uk/news/technology-35785875| author=<!--Staff writer(s); no by-line.-->|date= 12 March 2016| website= [[BBC News Online]] | access-date= 12 March 2016}}</ref><ref>{{cite news|last1=Lawler|first1=Richard|title=Google DeepMind AI wins final Go match for 4-1 series win|url=https://www.engadget.com/2016/03/14/the-final-lee-sedol-vs-alphago-match-is-about-to-start/|access-date=15 March 2016}}</ref> Having already lost the series by the third game, Lee won the fourth game, describing his win as "invaluable".<ref name="BBC News 13 March 2016">{{cite web | title= Artificial intelligence: Go master Lee Se-dol wins against AlphaGo program |url= https://www.bbc.co.uk/news/technology-35797102| author=<!--Staff writer(s); no by-line.-->|date= 13 March 2016| website= [[BBC News Online]] | access-date= 13 March 2016}}</ref> In May 2017, AlphaGo beat [[Ke Jie]], who had continuously held the world No.
1 ranking for two years,<ref>{{Cite web|title=柯洁迎19岁生日 雄踞人类世界排名第一已两年|url=http://sports.sina.com.cn/go/2016-08-02/doc-ifxunyya3020238.shtml|language=zh|date=May 2017}}</ref><ref>{{Cite web|url=http://www.goratings.org/|title=World's Go Player Ratings|date=24 May 2017}}</ref> winning each game in a [[AlphaGo versus Ke Jie|three-game match]] during the [[Future of Go Summit]].<ref name="wuzhensecond">{{cite magazine|url=https://www.wired.com/2017/05/googles-alphago-continues-dominance-second-win-china/|title=Google's AlphaGo Continues Dominance With Second Win in China|magazine=Wired|date=2017-05-25}}</ref><ref>{{cite magazine|url=https://www.wired.com/2017/05/win-china-alphagos-designers-explore-new-ai/|title=After Win in China, AlphaGo's Designers Explore New AI|magazine=Wired|date=2017-05-27}}</ref> In October 2017, [[DeepMind]] announced a significantly stronger version called [[AlphaGo Zero]] which beat the previous version by 100 games to 0.<ref>{{cite journal |first1=David |last1=Silver|author-link1=David Silver (programmer)|first2= Julian|last2= Schrittwieser|first3= Karen|last3= Simonyan|first4= Ioannis|last4= Antonoglou|first5= Aja|last5= Huang|author-link5=Aja Huang|first6=Arthur|last6= Guez|first7= Thomas|last7= Hubert|first8= Lucas|last8= Baker|first9= Matthew|last9= Lai|first10= Adrian|last10= Bolton|first11= Yutian|last11= Chen|author-link11=Chen Yutian|first12= Timothy|last12= Lillicrap|first13=Hui|last13= Fan|author-link13=Fan Hui|first14= Laurent|last14= Sifre|first15= George van den|last15= Driessche|first16= Thore|last16= Graepel|first17= Demis|last17= Hassabis |author-link17=Demis Hassabis|title=Mastering the game of Go without human knowledge|journal=[[Nature (journal)|Nature]]|issn= 0028-0836|pages=354–359|volume =550|issue =7676|doi =10.1038/nature24270|pmid=29052630|date=19 October 
2017|bibcode=2017Natur.550..354S|s2cid=205261034|url=https://discovery.ucl.ac.uk/id/eprint/10045895/1/agz_unformatted_nature.pdf|archive-url=https://web.archive.org/web/20200102034116/https://discovery.ucl.ac.uk/id/eprint/10045895/1/agz_unformatted_nature.pdf|archive-date=2 January 2020|url-status=live}}{{closed access}}</ref>

In February 2023, Kellin Pelrine, an amateur American Go player, won 14 out of 15 games against a top-ranked AI system, a significant human victory over artificial intelligence. Pelrine took advantage of a previously unknown flaw in the Go program, which had been identified by another computer. He exploited this weakness by slowly encircling the opponent's stones while distracting the AI with moves in other parts of the board.<ref>{{Cite news |author1=Joshua Wolens |date=2023-02-20 |title=A human has beat an AI in possibly the most complex board game ever |language=en |work=PC Gamer |url=https://www.pcgamer.com/a-human-has-beat-an-ai-in-possibly-the-most-complex-board-game-ever/ |access-date=2023-02-21}}</ref><ref>{{Cite web |title=Human convincingly beats AI at Go with help from a bot |url=https://www.engadget.com/human-convincingly-beats-ai-at-go-with-help-from-a-bot-100903836.html |access-date=2023-02-21 |website=Engadget |date=20 February 2023 |language=en-US}}</ref><ref>{{Cite web |author=Financial Times |date=2023-02-19 |title=Man beats machine at Go in human victory over AI |url=https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/ |access-date=2023-02-21 |website=Ars Technica |language=en-us}}</ref>
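The search-space figures quoted earlier in this section can be reproduced with a few lines of arithmetic. The script below is only a back-of-the-envelope check, assuming a flat 200 legal moves per turn as the text does.

```python
# Back-of-the-envelope check of the search-space figures above,
# assuming a constant 200 legal moves per turn.
BRANCHING = 200

# Choosing among 200 plays while anticipating four further moves
# (two per side) multiplies out to 200^5 positions.
four_ahead = BRANCHING ** 5
print(f"{four_ahead:,}")          # 320,000,000,000 -> "more than 320 billion"

# Anticipating eight further moves gives 200^9 positions.
eight_ahead = BRANCHING ** 9
print(f"{eight_ahead:.2e}")       # 5.12e+20 -> "512 quintillion"

# Tianhe-2 at 33.86 petaflops, with 10 operations per assessment:
seconds = eight_ahead * 10 / 33.86e15
print(round(seconds / 3600))      # roughly 42 hours for a single move
```

Real branching factors vary over the game (150–250 per turn, per the text), so these are order-of-magnitude figures, not exact counts.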