Glass Chess

A computer program called Giraffe has taught itself to play chess at International Master level after just 72 hours of training. The software learned from its own mistakes as it played against itself and can now beat most humans who have spent a lifetime studying the game.

The program plays against itself and refines its play by detecting earlier mistakes. It can also explore alternative move combinations for any given position.

To earn the title of “Grandmaster” in chess, a player needs a rating of more than 2,500. The world’s current number one, Magnus Carlsen, is rated 2,853.

It’s been nearly 20 years since IBM’s Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules. Since then, chess-playing computers have become considerably more powerful, leaving even the best humans with little chance against a modern chess engine running on a smartphone.

Garry Kasparov playing against Deep Blue, 1997. Source: Reuters

However, while computers have become faster, the way chess engines work has not changed. Their power rests on brute force: exhaustively searching through all possible future moves to find the best next one.
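
The scale of that brute force is easy to see even without a real engine. The sketch below is a minimal illustration (not any engine’s actual code) that uses the open-source python-chess library to count how quickly the number of positions grows with search depth from the starting position.

```python
# A rough sketch of the brute-force idea, using the python-chess library
# (installed as "chess"): enumerate every legal continuation to a fixed
# depth and count how many positions a full-width search must visit.
import chess

def count_positions(board: chess.Board, depth: int) -> int:
    """Count the leaf positions of an exhaustive search to `depth` plies."""
    if depth == 0:
        return 1
    total = 0
    for move in board.legal_moves:   # brute force: every legal move is explored
        board.push(move)
        total += count_positions(board, depth - 1)
        board.pop()
    return total

board = chess.Board()
for depth in range(1, 5):
    print(depth, count_positions(board, depth))   # 20, 400, 8902, 197281
```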

Of course, no human can match that or come anywhere close. While IBM’s Deep Blue was searching some 200 million positions per second, Kasparov was probably examining no more than five per second. And yet he played at essentially the same level. Clearly, humans have a trick up their sleeve that computers don’t (yet).

This trick is evaluating chess positions and narrowing the search down to the most promising lines. That dramatically simplifies the computation, because it prunes the tree of all possible moves to only a few branches.
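
Here is a hedged sketch of that pruning idea, again using python-chess: score each legal move with an evaluator and expand only the few most promising candidates. The toy material-count evaluator is my own stand-in; Giraffe’s actual evaluator is a trained neural network.

```python
# Sketch of evaluation-guided pruning: score each legal move with a (toy)
# evaluator, then search only the k best-looking candidates instead of all
# of them. The material count is a stand-in for a learned evaluation.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}

def evaluate(board: chess.Board) -> float:
    """Material balance from the side to move's point of view."""
    score = 0.0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, board.turn))
        score -= value * len(board.pieces(piece_type, not board.turn))
    return score

def pruned_search(board: chess.Board, depth: int, k: int = 3) -> float:
    """Negamax search that expands only the k highest-scoring moves."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)

    scored = []
    for move in board.legal_moves:
        board.push(move)
        scored.append((-evaluate(board), move))   # cheap one-ply look-ahead score
        board.pop()
    scored.sort(key=lambda item: item[0], reverse=True)

    best = -float("inf")
    for _, move in scored[:k]:                    # prune: ignore the other branches
        board.push(move)
        best = max(best, -pruned_search(board, depth - 1, k))
        board.pop()
    return best

print(pruned_search(chess.Board(), depth=4))
```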

Computers have never been good at this, but that is changing thanks to the work of Matthew Lai at Imperial College London. Lai has created an artificial intelligence program, which he calls Giraffe, that has taught itself to play chess by evaluating positions much more like a human, in a completely different way from conventional chess engines.

Lai generated his dataset by randomly choosing five million positions from a database of computer chess games. He then added variety by applying a random legal move to each position before using it for training. In total, he generated 175 million positions this way.
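
A sketch of how such a dataset might be assembled with python-chess is shown below. The PGN file name, the sampling details, and the FEN-based storage are illustrative assumptions, not details taken from Lai’s paper.

```python
# Illustrative sketch of the dataset step: sample positions from recorded
# games, then apply one random legal move to each sampled position to add
# variety. The file name and FEN storage are assumptions for illustration.
import random
import chess
import chess.pgn

def sample_positions(pgn_path: str, n_positions: int) -> list:
    """Pick random positions (as FEN strings) from games in a PGN file."""
    positions = []
    with open(pgn_path) as pgn:
        while len(positions) < n_positions:
            game = chess.pgn.read_game(pgn)
            if game is None:
                break                                  # no more games in the file
            moves = list(game.mainline_moves())
            if not moves:
                continue
            board = game.board()
            for move in moves[:random.randrange(len(moves))]:
                board.push(move)                       # stop at a random ply
            positions.append(board.fen())
    return positions

def perturb(fen: str) -> str:
    """Add one random legal move to diversify a training position."""
    board = chess.Board(fen)
    legal = list(board.legal_moves)
    if legal:
        board.push(random.choice(legal))
    return board.fen()

sampled = sample_positions("games.pgn", 1000)          # hypothetical PGN database
training_positions = [perturb(fen) for fen in sampled]
```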

The usual way of training such systems is to manually evaluate every position and use this information to teach the machine to recognize which positions are strong and which are weak.

But that would be an enormous task for 175 million positions. It could be done by another chess engine, but Lai’s objective was more ambitious: he wanted the machine to teach itself.

Instead, he used a bootstrapping approach in which Giraffe played against itself, with the goal of improving its prediction of its own evaluation of a future position. That works because there are fixed reference points that ultimately determine the value of a position: whether the game is eventually won, lost, or drawn.
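
In spirit, that bootstrapping resembles temporal-difference learning: each position’s predicted value is nudged toward the prediction for the position that follows it, with the final game result serving as the fixed anchor. The sketch below is a heavily simplified stand-in (a linear material evaluator updated from random self-play games); Giraffe itself trains a deep neural network with this kind of update.

```python
# Heavily simplified sketch of the bootstrapping idea: play a game against
# itself, then pull each position's predicted value toward the prediction for
# the next position, with the final result (+1 win / 0 draw / -1 loss for
# White) as the fixed anchor. The linear evaluator and random move choice
# are stand-ins for Giraffe's neural network and guided self-play.
import random
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}
weights = {pt: 0.0 for pt in PIECE_VALUES}        # learned feature weights

def features(board: chess.Board) -> dict:
    """Material-difference features from White's point of view."""
    return {pt: len(board.pieces(pt, chess.WHITE)) - len(board.pieces(pt, chess.BLACK))
            for pt in PIECE_VALUES}

def value(board: chess.Board) -> float:
    return sum(weights[pt] * f for pt, f in features(board).items())

def self_play_update(alpha: float = 0.001) -> None:
    board = chess.Board()
    history = []
    while not board.is_game_over() and board.fullmove_number < 80:
        history.append(board.copy())
        board.push(random.choice(list(board.legal_moves)))   # crude move choice
    result = {"1-0": 1.0, "0-1": -1.0}.get(board.result(), 0.0)

    target = result                               # fixed reference point
    for position in reversed(history):
        error = target - value(position)
        for pt, f in features(position).items():
            weights[pt] += alpha * error * f      # gradient step for linear model
        target = value(position)                  # bootstrap target for earlier ply

for _ in range(200):
    self_play_update()
print(weights)
```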

Lai says this probabilistic approach predicts the best move 46 percent of the time and places the best move in its top three ranking 70 percent of the time, so the engine can largely ignore the remaining moves during its search.
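
For context, top-1 and top-3 statistics like these are typically measured against test positions whose best move is already known. The sketch below shows the shape of such a measurement; `score_moves` is a hypothetical stand-in for the engine’s actual move-probability ranking.

```python
# Sketch of measuring top-1 and top-3 accuracy for a move-ranking model.
# `score_moves` is a hypothetical stand-in for the engine's move probabilities;
# the test set pairs a FEN string with the known best move in UCI notation.
import chess

def score_moves(board: chess.Board) -> list:
    """Return legal moves ordered from most to least promising (placeholder)."""
    return list(board.legal_moves)                 # no real ranking here

def top_k_accuracy(test_set: list, k: int) -> float:
    hits = 0
    for fen, best_uci in test_set:
        board = chess.Board(fen)
        if chess.Move.from_uci(best_uci) in score_moves(board)[:k]:
            hits += 1
    return hits / len(test_set)

test_set = [(chess.STARTING_FEN, "e2e4")]          # toy single-position test set
print(top_k_accuracy(test_set, 1), top_k_accuracy(test_set, 3))
```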

Robot playing chess. Source: https://www.lkessler.com/brutefor.shtml

In a report from the Daily Mail, Lai admitted that his software is not yet as strong as the best chess engines. “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

Source – Giraffe: Using Deep Reinforcement Learning to Play Chess 

 
