(Since I wrote Part 1 of this article, the ‘AlphaGo’ AI has won the 5th game in the series, completing a 4:1 victory over Lee Sedol, one of the top human players.)
We have already discussed why ‘Go’ is much more difficult for a computer to play than Chess – mainly because the number of possible moves per turn is so much bigger (and so the total ‘game space’ is even more vast), and because judging how ‘good’ a particular board position is proves far harder in ‘Go’.
First, let’s address one of the points the mainstream press have been making: no, the ‘artificial intelligence’ computers are not coming to get us and annihilate the human race (I’ve seen articles online that pretty much implied this was the obvious next step). Or at least, not because of this result. ‘Go’ is still a ‘full information, deterministic’ game, and these are things computers are good at, even if ‘Go’ is about as hard as such games get. This is very different from forming a good understanding of a ‘real world’ situation such as politics or business, or even ‘human’ reactions such as finding a joke funny or enjoying music.
But back to ‘Go’. With Chess, the number of possible moves per turn means that exhaustively examining every line of play beyond about 6 moves ahead is not a sensible approach. So, pre-programmed rules of thumb (‘heuristics’) are used to decide which moves can safely be ignored, and which need looking at more closely.
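To make this concrete, here is a minimal sketch of the classic technique Chess programs use: depth-limited minimax search with alpha-beta pruning, which skips branches that provably cannot affect the result. The ‘game’ below is just a hand-made tree of nested lists (leaves are scores from the maximising player’s point of view) standing in for a real move generator – it is an illustration of the idea, not any particular engine’s code.

```python
def alphabeta(node, depth, alpha, beta, maximising):
    # A leaf (or the depth limit) is scored directly.
    if depth == 0 or isinstance(node, (int, float)):
        return node
    if maximising:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # remaining siblings cannot change the outcome: prune
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Two plies deep: pick the branch whose worst reply is best for us.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # -> 6
```

Note that the third branch is abandoned after its first leaf: once a reply scoring 1 is found, that branch can never beat the 6 already guaranteed elsewhere. Pruning of exactly this kind is what makes a 6-move lookahead affordable in Chess – and what the sheer width of ‘Go’ defeats.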
With ‘Go’, even this is not possible, as no simple rules can be programmed. So, how did ‘AlphaGo’ tackle the problem?
The basic approach (searching the ‘game tree’) remained similar, but more sophisticated. Decisions about which parts of the tree to analyse in more detail (and which to ignore) were made by neural networks (of which more later).
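One published ingredient of this kind of search is a selection rule in which the ‘policy’ neural network supplies a prior probability for each candidate move, steering the search towards moves the network likes while still giving rarely-visited moves some exploration credit. The sketch below shows a PUCT-style scoring rule of that general shape; the move names, numbers, and the constant `c` are all invented for illustration.

```python
import math

def puct_score(value, prior, visits, parent_visits, c=1.0):
    # High observed value OR a high network prior (on a lightly-visited
    # move) both raise a move's priority in the search.
    return value + c * prior * math.sqrt(parent_visits) / (1 + visits)

# (average value so far, network prior, visit count) for three candidate moves
moves = {"A": (0.5, 0.6, 10), "B": (0.4, 0.3, 5), "C": (0.9, 0.1, 1)}
parent_visits = 16

best = max(moves, key=lambda m: puct_score(*moves[m], parent_visits))
print(best)
```

Here move ‘C’ wins the selection despite its low prior, because it has looked very promising in the single visit it has received – the formula balances what the network expects against what the search has actually seen.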
Similarly, the ‘evaluation function’, which tries to ‘score’ a given board position, had to be more sophisticated than for Chess. In Chess, the evaluation function is usually written (i.e. programmed into the software) by humans. Indeed, in the 1997 Kasparov match won by IBM’s Deep Blue, the evaluation function was even changed between games by a human Grandmaster – a cause of some controversy at the time: had the computer really won ‘alone’, or had the human operators helped out, albeit only between games?
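A hand-written Chess evaluation function, at its very simplest, looks something like the sketch below: the programmer chooses the features (here, just material, using the textbook piece values) and the weights. This is a deliberately toy version – real engines also score mobility, pawn structure, king safety and much more – but it shows why the approach fails for ‘Go’, where no such obvious human-chosen features exist.

```python
# Textbook material values: pawn, knight, bishop, rook, queen.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(white, black):
    """Score a position from piece counts: positive favours White."""
    score = 0
    for piece, value in PIECE_VALUES.items():
        score += value * (white.get(piece, 0) - black.get(piece, 0))
    return score

# White is up a rook for a knight ('the exchange'): +2 in White's favour.
print(evaluate({"R": 1}, {"N": 1}))  # -> 2
```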
In ‘AlphaGo’, another neural network (a ‘deep’ NN) was employed to analyse positions. And here lies the real difference. With AlphaGo, the software analysed a vast number of real games, and learned for itself which features characterise good board positions. Having done this, it then played against itself in millions more games, and in doing so was able to fine-tune this learning even further.
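The self-play idea can be demonstrated on a much smaller scale. The toy program below is emphatically not AlphaGo’s algorithm – it uses a plain lookup table rather than a deep network, and the game is Nim (take 1–3 stones; whoever takes the last stone wins) – but the loop is the same in spirit: play yourself, then nudge your position evaluations towards the results you actually observed.

```python
import random

random.seed(0)
N = 10                                   # starting pile size
value = {s: 0.0 for s in range(N + 1)}   # value of a pile, for the player to move
ALPHA, EPSILON = 0.1, 0.2                # learning rate, exploration rate

def best_move(stones):
    # A move is good for us if it leaves the opponent a low-value pile.
    return min(range(1, min(3, stones) + 1), key=lambda m: value[stones - m])

for _ in range(5000):
    stones, states = N, []
    while stones > 0:
        states.append(stones)
        if random.random() < EPSILON:          # occasionally explore
            m = random.randint(1, min(3, stones))
        else:                                  # otherwise play greedily
            m = best_move(stones)
        stones -= m
    # The player who moved last won; walking back, the result alternates.
    result = 1.0
    for s in reversed(states):
        value[s] += ALPHA * (result - value[s])
        result = -result

# After training, piles that are multiples of 4 should be judged losing
# for the player to move (which is the known optimal theory for this game).
print(value[4] < value[5])
```

Nobody told the program that multiples of 4 are losing positions; it discovered that regularity purely from the outcomes of its own games – a miniature of what AlphaGo’s self-play phase achieved at vastly greater scale.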
It learned how to play ‘Go’ well, rather than being programmed.
This ‘deep neural network’ approach is the hallmark of many modern ‘deep learning’ systems. ‘Deep’ is really just the latest buzzword, but the underlying concept is that the software was able to learn – and not just to learn specific features, like a traditional neural network, but to work out which features to choose in the first place, rather than having them hand-selected by a programmer.
We’ve probably got to the stage now where the perennial argument – are computers ‘really intelligent’, or just good at computing? – has become fairly irrelevant. AI systems are now able not only to learn a given set of features, but to choose those features themselves – much closer to how human (and other animal) brains work. This is undoubtedly a very powerful technique, and one that will guide the direction of AI for the next few years.