A human player has comprehensively defeated a top-ranked AI system at the board game Go, in a surprising reversal of the 2016 computer victory that was seen as a landmark in the rise of artificial intelligence.
American player Kellin Pelrine, who is one level below the top amateur ranking, beat the machine by taking advantage of a previously unknown flaw identified by another computer. But the head-to-head matchup, in which he won 14 of 15 games, was accomplished without direct computer assistance.
The previously unreported victory highlighted a weakness in the best Go software shared by most of today’s widely used AI systems, including the ChatGPT chatbot created by San Francisco-based OpenAI.
The tactics that put a human back on top on the Go board were suggested by a computer program that probes AI systems for weaknesses. The suggested plan was then relentlessly executed by Pelrine.
“It was surprisingly easy for us to exploit this system,” said Adam Gleave, chief executive of FAR AI, the California research firm that designed the program. The program played more than 1 million games against KataGo, one of the best Go-playing systems, to find a “blind spot” that a human player could exploit, he added.
The winning strategy identified by the software is “not completely trivial, but it’s not super difficult” for a human to learn, and can be used by intermediate players to beat the machines, Pelrine said. He also used the method to win against another top Go system, Leela Zero.
The decisive victory, albeit with the help of computer-suggested tactics, comes seven years after artificial intelligence appeared to have an undisputed lead over humans in what is often considered the most complex of all board games.
The AlphaGo system, developed by the Google-owned research firm DeepMind, beat world Go champion Lee Sedol four games to one in 2016. Sedol attributed his retirement from Go three years later to the rise of AI, saying it was “an entity that cannot be defeated”. AlphaGo is not publicly available, but the systems Pelrine defeated are considered to be on a par with it.
In the game of Go, two players take turns placing black and white stones on a board marked with a 19×19 grid, seeking to surround their opponent’s stones and enclose the most territory. The sheer number of possible combinations means it is impossible for a computer to evaluate all potential future moves.
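To get a sense of that scale, a rough upper bound on the number of board configurations can be computed directly: each of the 361 grid points is empty, black, or white. This is a back-of-envelope sketch, not an exact count of legal positions (most of these configurations are illegal under Go's rules):

```python
# Back-of-envelope upper bound on Go board configurations.
# Each of the 19 x 19 = 361 points can be empty, black, or white,
# giving 3**361 raw configurations (an overcount of legal positions).
points = 19 * 19
upper_bound = 3 ** points
digits = len(str(upper_bound))
print(f"3^{points} is a number with {digits} digits")
```

The result is a number with 173 digits, vastly more than the roughly 10^80 atoms in the observable universe, which is why exhaustive search of the game tree is out of the question.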
A tactic used by Pelrine involved slowly stringing together a large “loop” of stones to encircle one of his opponent’s groups, while distracting the AI with moves in other corners of the board. The Go-playing bot did not notice its vulnerability even when the encirclement was nearly complete, Pelrine said.
“As a human, it would be pretty easy to spot,” he added.
The discovery of a weakness in some of the most advanced Go-playing machines points to a fundamental flaw in the deep learning systems that underpin today’s most advanced AI, said Stuart Russell, a computer science professor at the University of California, Berkeley.
Systems can only “understand” specific situations they’ve been exposed to in the past and can’t generalize in a way that humans find easy, he added.
“It shows once again that we’ve been too quick to attribute superhuman levels of intelligence to machines,” Russell said.
According to the researchers, the exact reason for the failure of Go-playing systems is a matter of speculation. One likely reason is that the tactics used by Pelrine are rarely used, meaning AI systems haven’t been trained enough on similar games to realize they’re vulnerable, Gleave said.
It is common to find flaws in AI systems when they are subjected to the kind of “adversarial attack” used against the Go-playing computers, he added. Despite this, “we see very big [AI] systems that are deployed at scale with few checks.”
Copyright The Financial Times Limited 2023. All rights reserved.