Google (NASDAQ: GOOG) has just mastered the ancient Chinese game of Go and beaten one of its best players.

Go was considered one of the greatest challenges for artificial intelligence (AI). The game has more possible positions than there are atoms in the universe, and its search space is more than a googol times larger than that of chess.
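To put those numbers in rough perspective, a back-of-the-envelope comparison of game-tree sizes (branching factor raised to a typical game length) makes the gap concrete. The branching factors and game lengths below are standard order-of-magnitude approximations, not figures from Google's post.

```python
# Back-of-the-envelope game-tree sizes, b ** d (Shannon-style estimates).
# Branching factors and game lengths are rough textbook figures.
chess_tree = 35 ** 80     # ~35 legal moves per turn, ~80 plies per game
go_tree = 250 ** 150      # ~250 legal moves per turn, ~150 plies per game

googol = 10 ** 100
atoms_in_universe = 10 ** 80   # common order-of-magnitude estimate

print(go_tree > atoms_in_universe)        # Go's tree dwarfs the atom count
print(go_tree // chess_tree > googol)     # and exceeds chess by over a googol
```

Python's arbitrary-precision integers make these huge exact comparisons trivial; both checks print `True`.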

Engineers often use games to test and improve AI, because the way a system solves them can resemble human problem-solving. “Traditional AI methods—which construct a search tree over all possible positions—don’t have a chance in Go,” Google wrote in its blog.

A couple playing Go (aka Weiqi) in Shanghai, China with the flat-sided yunzi style of stones. Photo: Brian Jefferly Beggerly/Wikipedia

The researchers took a different approach and built a new system called AlphaGo. The system combines an advanced tree search with deep neural networks, according to the post.

One neural network selects the next move to play, and the other predicts the winner of the game. The networks were trained on 30 million moves from games played by human experts, the company added. At that point the machine could predict expert moves, but the goal was to beat those players.
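The division of labor the post describes, a policy choosing promising moves and a value estimate scoring positions inside a tree search, can be sketched on a toy game tree. Everything here (the tree, the scores, the pruning rule) is invented for illustration; it is not AlphaGo's code or architecture.

```python
# Toy sketch of a tree search guided by two evaluators, in the spirit of
# the post's description (illustrative only, not AlphaGo's code).
# A tiny hand-built game tree stands in for Go.

TREE = {                       # position -> list of child positions
    "root": ["a", "b", "c"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
    "c": ["c1"],
}
# Stand-in "value network": estimated win probability for the player
# to move at each leaf position.
VALUE = {"a1": 0.2, "a2": 0.4, "b1": 0.9, "b2": 0.1, "c1": 0.6}

def policy(position):
    # Stand-in "policy network": rank the children (here, alphabetically)
    # and keep only the top two, pruning the rest of the tree. Note it
    # prunes "c" entirely; a better policy would rank moves by promise.
    return sorted(TREE.get(position, []))[:2]

def value(position):
    # Stand-in "value network": score a position without playing it out.
    return VALUE[position]

def search(position, depth):
    # Negamax over the pruned tree: my chance of winning here is one
    # minus my opponent's best chance after their reply.
    children = policy(position)
    if depth == 0 or not children:
        return value(position)
    return max(1.0 - search(child, depth - 1) for child in children)

best_move = max(policy("root"), key=lambda m: 1.0 - search(m, 1))
print(best_move)
```

The policy keeps the search narrow and the value function keeps it shallow, which is exactly why the combination scales where exhaustive search cannot.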

Google then had AlphaGo discover new strategies for itself by playing thousands of games between its neural networks, using a trial-and-error process known as reinforcement learning.
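Trial-and-error learning of this kind can be shown on a problem far simpler than Go: an agent that learns, purely from win/loss feedback and with no human examples, which of three moves wins most often. The moves and their hidden win probabilities below are made up for the demo; AlphaGo's actual training is vastly larger and more sophisticated.

```python
import random

# Minimal trial-and-error (reinforcement-style) learning demo: a bandit
# agent discovers the best of three moves from win/loss rewards alone.
random.seed(0)
win_prob = {"a": 0.2, "b": 0.5, "c": 0.8}    # hidden from the agent
value = {m: 0.0 for m in win_prob}           # learned value estimates
counts = {m: 0 for m in win_prob}

for game in range(5000):
    # Explore a random move 10% of the time, else exploit the best so far.
    if random.random() < 0.1:
        move = random.choice(list(win_prob))
    else:
        move = max(value, key=value.get)
    reward = 1.0 if random.random() < win_prob[move] else 0.0
    counts[move] += 1
    # Incremental mean update: pure trial and error, no human examples.
    value[move] += (reward - value[move]) / counts[move]

best = max(value, key=value.get)
print(best)
```

After a few thousand self-played games the agent's value estimates converge toward the true win rates, and it settles on the strongest move.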

After that, AlphaGo was able to beat the reigning three-time European Go champion Fan Hui, an expert who has played the game since he was 12. The algorithm won 5 games to 0, the first time a computer program has beaten a professional Go player.

The rules of the game are simple: the goal is to gain the most territory by placing and capturing black and white stones on the board. Unlike chess pieces, the stones all have the same value.

Source: Google Blog