Martin Müller has posted two more example games with some commentary on his blog:
http://webdocs.cs.ualberta.ca/~mmueller ... twork.html
Game 2 shows a series of funny blunders back and forth by both computers.
Teaching a convolutional deep network to play go
- RBerenguel
- Gosei
- Posts: 1585
- Joined: Fri Nov 18, 2011 11:44 am
- Rank: KGS 5k
- GD Posts: 0
- KGS: RBerenguel
- Tygem: rberenguel
- Wbaduk: JohnKeats
- Kaya handle: RBerenguel
- Online playing schedule: I'm usually online on KGS on Saturdays, and can be available from 20:00-23:00 GMT+1 if needed
- Location: Barcelona, Spain (GMT+1)
- Has thanked: 576 times
- Been thanked: 298 times
Re: Teaching a convolutional deep network to play go
In the comp-go list there was a mention of a forthcoming paper from Google. Here it is.
It answers several questions we had (how fast the network evaluation is, for instance) and also integrates the network with MCTS.
It feels slightly less readable than the other paper, but it's still within reach of go players with some vague knowledge of neural networks. It also includes the kifu of a game the network played against pachi.
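To make the integration concrete: the usual way a move-prediction network is combined with MCTS is to use the network's output as a prior over moves during tree search, so that high-probability moves are explored first. Below is a minimal, hypothetical sketch of that idea (PUCT-style selection); the `Node` class and constants are illustrative, not taken from the paper.

```python
import math

class Node:
    """One MCTS node; each child carries a prior probability
    taken from the policy network's output for that move."""
    def __init__(self, prior):
        self.prior = prior        # P(move) from the network
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}        # move -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.0):
    """PUCT-style selection: balance the running value estimate
    against the network prior, discounted by visit count."""
    total = sum(ch.visits for ch in node.children.values())
    def score(item):
        _, ch = item
        u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        return ch.value() + u
    return max(node.children.items(), key=score)
```

With no visits yet, selection simply follows the network's prior; as visits accumulate, the exploration term shrinks and observed values take over, which is why the two approaches complement each other so well.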
Geek of all trades, master of none: the motto for my blog mostlymaths.net
- emeraldemon
- Gosei
- Posts: 1744
- Joined: Sun May 02, 2010 1:33 pm
- GD Posts: 0
- KGS: greendemon
- Tygem: greendemon
- DGS: smaragdaemon
- OGS: emeraldemon
- Has thanked: 697 times
- Been thanked: 287 times
Re: Teaching a convolutional deep network to play go
Interesting developments. I hope the researchers release the source at some point, so that maybe it can be integrated with pachi or fuego.
- Sennahoj
- Dies with sente
- Posts: 103
- Joined: Fri Jun 20, 2014 5:45 am
- Rank: Tygem 5d
- GD Posts: 0
- Has thanked: 3 times
- Been thanked: 37 times
Re: Teaching a convolutional deep network to play go
This really blows my mind. Somehow, it didn't feel at all counterintuitive to me when I first read about MCTS programs and how they managed to play good go, but I find it super difficult to understand how a neural net can reach such a high level!
It's really exciting that the two approaches seem to be so complementary!
- Mike Novack
- Lives in sente
- Posts: 1045
- Joined: Mon Aug 09, 2010 9:36 am
- GD Posts: 0
- Been thanked: 182 times
Re: Teaching a convolutional deep network to play go
Well, it hasn't reached all that high a level yet, but it's extremely impressive as the start of a new direction.
So far, note that training has been limited to predicting the move that an expert would make in an actual game, and note that (according to the paper just referenced) the network is weak at life & death.
Well now, how about specific training with the board constructed so that there is a life-and-death problem (with a known correct solution) and that is all that is relevant on the board? And yes, it would be possible to construct a "rest of the board" such that:
1) The score there (outside of the L&D problem) in terms of absolutely alive groups is equal.
2) There are lots of pairs of possible moves in this "rest of the board", but they are all dame and unable to affect the life-and-death problem.
Note that this would mean any one such problem would represent lots of input data. Think of all the combinations of a pair of dame plays; none of these should affect the (correct) output --- if one does, then the net needs "correction" in what it has learned.
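The proposal above amounts to a data-augmentation and invariance test: fill the irrelevant part of the board with pairs of dame plays and require the network's answer to the L&D problem to stay the same. A minimal sketch, with a hypothetical `predict` function standing in for the network:

```python
import itertools

def dame_variants(board, dame_points, n_pairs=1):
    """Yield copies of `board` with every combination of `n_pairs`
    pairs of dame plays (alternating Black/White) filled in.
    `board` maps point -> 'B'/'W'; `dame_points` are points whose
    occupation cannot affect the life-and-death problem."""
    for pts in itertools.combinations(dame_points, 2 * n_pairs):
        variant = dict(board)
        for i, p in enumerate(pts):
            variant[p] = 'B' if i % 2 == 0 else 'W'
        yield variant

def check_invariance(predict, board, dame_points, expected_move):
    """True iff the network's answer to the L&D problem is identical
    for the original board and every dame-filled variant."""
    boards = [board] + list(dame_variants(board, dame_points))
    return all(predict(b) == expected_move for b in boards)
```

Each problem position thus expands combinatorially into many training (or correction) examples, exactly as described above.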
As to surprise that something like this could work: you managed to learn to play go. Forget for a moment thinking about your brain as having consciousness, and consider that at some low level the learning was a matter of adjusting the connections in a network of neurons. That's why these things are called neural nets. Perhaps it's surprising that it doesn't require a larger net to be able to play go, but remember, it's only doing one thing (at a time). An animal brain is doing a huge number of things at once.
- Sennahoj
- Dies with sente
- Posts: 103
- Joined: Fri Jun 20, 2014 5:45 am
- Rank: Tygem 5d
- GD Posts: 0
- Has thanked: 3 times
- Been thanked: 37 times
Re: Teaching a convolutional deep network to play go
Mike, comparing a neural net to my brain doesn't really tell me all that much. Of course my brain is nothing but a meat computer, but the problem is that we know very little about how it actually works! And of course it must be physically possible to build artificial general intelligence (in the sense of e.g. this lovely article http://aeon.co/magazine/technology/davi ... elligence/), but that is not what the authors claim to have achieved.
I work with machine learning applications (albeit in a very different field), and I really find it impressive that they get any kind of results with a huge non-linear regression let loose on GoGoD...
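"Huge non-linear regression" is a fair description: the network maps board feature planes to a probability distribution over the 361 points, i.e. move prediction as classification. A toy sketch of just the output layer (the real networks stack many convolutional layers before this; shapes and the 3 feature planes here are illustrative assumptions):

```python
import numpy as np

BOARD = 19
POINTS = BOARD * BOARD   # 361 candidate moves

def softmax(z):
    """Numerically stable softmax over the move logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict_move(features, W, b):
    """Flatten the board feature planes, apply a linear map, and take
    a softmax over the 361 points. This is only the final layer of the
    'regression'; the non-linearity comes from the conv layers before it."""
    logits = W @ features.ravel() + b
    return softmax(logits)

# Hypothetical input: 3 binary planes (own stones, opponent stones, empty).
rng = np.random.default_rng(0)
features = rng.integers(0, 2, size=(3, BOARD, BOARD)).astype(float)
W = rng.normal(scale=0.01, size=(POINTS, 3 * BOARD * BOARD))
b = np.zeros(POINTS)
probs = predict_move(features, W, b)
```

Training then nudges `W` (and, in the real thing, the convolutional weights) to put probability mass on the moves professionals actually played in the GoGoD games.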