Google’s A.I. Masters New Game

Google has got the artificial intelligence industry talking after its new program AlphaGo, built by the tech giant's A.I. development company DeepMind, beat the European champion, Fan Hui, at the board game Go.

Go is thought to be one of the most complex games known to mankind: it is played on a 19-by-19 grid – far larger than the 8-by-8 chessboard – meaning there is a vastly greater array of possible moves than in standard board games, as well as an immense strategic element.
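The scale difference can be made concrete with a quick back-of-the-envelope comparison (the figures below are standard rough estimates, not taken from the article):

```python
# A 19x19 Go board has 361 intersections, each a legal opening move;
# a chessboard has 64 squares and only 20 legal opening moves
# (16 pawn moves plus 4 knight moves).
go_points = 19 * 19      # 361
chess_squares = 8 * 8    # 64

print(go_points, chess_squares)  # → 361 64
```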

To cope with the volume of data required to win at this difficult game, the DeepMind program uses artificial neural networks, applying machine learning and data mining methods to emulate the inner workings of the human brain. This allows it to assess the possible moves played by top players and whittle its choices down to those most likely to result in a win. It was this complex processing that allowed AlphaGo to win all five games against Hui, a feat that even the Google team had not predicted.
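The "whittling down" idea can be sketched very loosely: a trained network scores each legal move, and the search only explores the highest-scoring candidates. This is a toy illustration only – AlphaGo's real policy network is a deep convolutional net trained on expert games, and every name below is hypothetical:

```python
# Toy sketch of policy-based move pruning (all names hypothetical).

def toy_policy_score(move):
    """Stand-in for a neural network's estimate of how promising a move is."""
    x, y = move
    return ((x * 31 + y * 17) % 97) / 97.0  # deterministic dummy score

def prune_moves(legal_moves, top_k=5):
    """Keep only the top-k moves by policy score, as a search would."""
    ranked = sorted(legal_moves, key=toy_policy_score, reverse=True)
    return ranked[:top_k]

# On an empty 19x19 board, all 361 intersections are legal first moves.
board_points = [(x, y) for x in range(19) for y in range(19)]
best = prune_moves(board_points)
print(len(board_points), len(best))  # → 361 5
```

The point is simply that the network turns an intractable set of candidates into a handful worth searching deeply.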

So what does this mean for the future of A.I.? Well, it is definitely a cause for excitement in the field. Go is a highly intuitive game, relying on more qualitative judgements than the games previously mastered by computers. As such, the tools used in creating AlphaGo could potentially be applied by the DeepMind team to other complex problems, making this a historic step towards advanced artificial intelligence.

The breakthrough was described in a study published in Nature last week.

Sarah Cowen-Rivers is studying for an MSc in Science Communication

Images: Close-up of board, Shutterstock; Finished game, Saran Poroong
