AlphaZero paper published in journal Science
-
Uberdude
- Judan
- Posts: 6727
- Joined: Thu Nov 24, 2011 11:35 am
- Rank: UK 4 dan
- GD Posts: 0
- KGS: Uberdude 4d
- OGS: Uberdude 7d
- Location: Cambridge, UK
- Has thanked: 436 times
- Been thanked: 3718 times
AlphaZero paper published in journal Science
About a year after the pre-print appeared on arXiv (AlphaZero L19 thread, not the same as the AlphaGo Zero L19 thread), the AlphaZero paper has finally passed peer review and is in the journal Science:
http://science.sciencemag.org/content/362/6419/1140
pdf: http://science.sciencemag.org/content/s ... 0.full.pdf
Focus seems to be on chess and shogi. There's a new match vs Stockfish, hopefully a better test than the last one. Chess media report: https://www.chess.com/news/view/updated ... game-match
DeepMind article and video:
The supplementary materials include some shogi games too, which is something that community was missing:
http://science.sciencemag.org/content/s ... ver-SM.pdf
-
pookpooi
- Lives in sente
- Posts: 727
- Joined: Sat Aug 21, 2010 12:26 pm
- GD Posts: 10
- Has thanked: 44 times
- Been thanked: 218 times
Re: AlphaZero paper published in journal Science
chessbase.com wrote:
So does this wrap up AlphaZero for good now? Hardly. As Demis Hassabis was so ready to point out recently, a new AlphaZero has been developed that is stronger than the one referenced in the paper. Be ready for new announcements!
-
Javaness2
- Gosei
- Posts: 1545
- Joined: Tue Jul 19, 2011 10:48 am
- GD Posts: 0
- Has thanked: 111 times
- Been thanked: 322 times
- Contact:
Re: AlphaZero paper published in journal Science
For a few moments I was convinced that their chess board was the wrong way around. Then I decided, no, they've just had a very active game.
-
mumps
- Dies with sente
- Posts: 112
- Joined: Thu Aug 12, 2010 1:11 am
- GD Posts: 0
- Has thanked: 9 times
- Been thanked: 23 times
Re: AlphaZero paper published in journal Science
Hmm
Looking at the graphs shows that komi is too large!
AlphaZero wins 68.9% of games as White against AlphaGo Zero and 53.7% as Black...
-
jonsa
- Beginner
- Posts: 16
- Joined: Sun Oct 14, 2018 10:55 am
- Rank: DDK
- GD Posts: 0
- KGS: 11 kyu
- IGS: 14 kyu
- OGS: 11 kyu
- Universal go server handle: jonsa
- Has thanked: 11 times
- Been thanked: 2 times
Re: AlphaZero paper published in journal Science
mumps wrote:
Hmm
Looking at the graphs shows that komi is too large!
AlphaZero wins 68.9% of games as White against AlphaGo Zero and 53.7% as Black...

Yeah, I was also thinking something along those lines. An "unusual" discrepancy.
-
Uberdude
- Judan
- Posts: 6727
- Joined: Thu Nov 24, 2011 11:35 am
- Rank: UK 4 dan
- GD Posts: 0
- KGS: Uberdude 4d
- OGS: Uberdude 7d
- Location: Cambridge, UK
- Has thanked: 436 times
- Been thanked: 3718 times
Re: AlphaZero paper published in journal Science
mumps wrote:
Hmm
Looking at the graphs shows that komi is too large!
AlphaZero wins 68.9% of games as White against AlphaGo Zero and 53.7% as Black...

Well yes, all the bots (except Elf v1) and quite a few top pros too (even before AI) think 7.5 komi gives White a slight advantage (53% according to AG Teach). But that's not exactly the same as saying it's too much, as in something else would be better, because maybe reducing it to 6.5 would give Black more of an advantage (e.g. 55%).
For the Go version of AlphaZero it's not immediately obvious, but after careful reading of the paper (see below) I'm pretty sure it's 'only' the fully-trained 20-block AlphaGo Zero which AlphaZero beat 61% overall (they also report it beating the weaker AlphaGo Lee version before that, though still taking longer than vs Stockfish/Elmo). So in not having any AlphaZero Go games we aren't missing games from some new bot even stronger than the ones we already have, though it would be nice to see another instance of a strong bot learning from scratch, to see whether it ended up playing a similar style to AlphaGo Zero, Leela Zero, ELF OpenGo etc.
So to beat AlphaGo Lee, which is pretty weak by Go bot standards these days, it still took longer to train than the chess and shogi versions (a training step for Go was obviously slower, presumably because it's a bigger board). Then:

Science paper wrote:
We trained separate instances of AlphaZero for chess, shogi, and Go. Training proceeded for 700,000 steps (in mini-batches of 4096 training positions). In chess, AlphaZero first outperformed Stockfish after just 4 hours (300,000 steps); in shogi, AlphaZero first outperformed Elmo after 2 hours (110,000 steps); and in Go, AlphaZero first outperformed AlphaGo Lee (9) after 30 hours (74,000 steps).
From the AlphaGo Zero paper, the 20-block version was trained for a total of 700k steps aka mini-batches (of 2048 positions, cf. AlphaZero's 4096) over a total of 4.9 million self-play games. They then made the 40-block version, which was trained from scratch over 3.1 million batches (of 2048 positions again) with 29 million games of self-play (Leela Zero is currently 40 blocks at 11 million self-play games, with bootstrapping of increasing network sizes). So my reading is that A0 beat the fully-trained 20-block version (which is stronger than AG Lee but weaker than AG Master), but not the 40-block version. Beating AG0 20-block by only 61%, which is around 4350 Elo on their graphs, means I think A0 is weaker than AG Master (4858) and AG0 40b (5185).

Science paper wrote:
The Go match was played against the previously published version of AlphaGo Zero [also trained for 700,000 steps (footnote 25: AlphaGo Zero was ultimately trained for 3.1 million steps over 40 days.)]. <snip> In Go, AlphaZero defeated AlphaGo Zero, winning 61% of games.
Science figure 2 caption wrote:
Tournament evaluation of AlphaZero in chess, shogi, and Go in matches against, respectively, Stockfish, Elmo, and the previously published version of AlphaGo Zero (AG0) that was trained for 3 days

Using the DeepMind Elo scale, which is an extension of goratings.org, we have:
Code:
Player                 Elo    Matches
Fan Hui               ~3000
AlphaGo Fan            3144   Beat Fan Hui 5-0
Lee Sedol / top human ~3600
AlphaGo Lee            3739   Beat Lee Sedol 4-1
AlphaGo Zero 20b       4350   Beat AG Lee 100-0
AlphaZero             ~4500   Beat AG0 20b 61% (over 1000 games?)
AlphaGo Master         4858   Beat top pros online 60-0
AlphaGo Zero 40b       5185   Beat AG Master 89-11
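As a sanity check on the 61% figure, the standard Elo logistic formula gives the implied gap. This is only a sketch: DeepMind's scale is anchored to goratings.org and need not follow the standard curve exactly, so the "~4500" in the table above comes from their graphs, not from this formula.

```python
import math

def elo_gap_from_winrate(p):
    """Elo difference implied by winrate p under the standard logistic model."""
    return 400 * math.log10(p / (1 - p))

# AlphaZero's 61% vs AG0 20-block (4350 on DeepMind's scale) implies
# a gap of roughly 78 Elo on a standard scale:
gap = elo_gap_from_winrate(0.61)
print(f"{gap:.0f} Elo -> AlphaZero around {4350 + gap:.0f}")
```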
-
John Fairbairn
- Oza
- Posts: 3724
- Joined: Wed Apr 21, 2010 3:09 am
- Has thanked: 20 times
- Been thanked: 4672 times
Re: AlphaZero paper published in journal Science
Top human 3600 to top AI 5185 seems like an enormous gap.
What would you say that means in handicap terms?
If we say the range from Fan Hui 2d at 3000 to Yi Se-tol (obviously more than 9d) at 3600 is close to 3 stones (maybe too generous, but I'd find it hard to believe it's not more than 2 stones), we get 1 pro dan = 200 Elo. So the latest AI should give the top human about 9 stones???? Even halving the figures to give a handicap of 4.5 stones seems a stretch, but I wouldn't rule that out.
Do the top bots still play so as to win by half a point rather than by as much as possible? If so, can that behaviour be easily modified so that the bot will try to maximise the score? That would give us a way to compare humans more directly (i.e. by playing only even human-AI games, telling the bot the komi is 7.5 and telling the human the real komi is 40 points or whatever).
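The Elo-to-stones arithmetic above can be sketched directly; note that the flat 200-Elo-per-stone conversion is an assumption of this post, not an established fact, and the Elo figures are from the table earlier in the thread.

```python
# Sketch of the arithmetic above, under the (debatable) assumption that
# one pro handicap stone is worth a flat 200 Elo at every level.
ELO_PER_STONE = 200  # from Fan Hui ~3000 to Lee Sedol ~3600 being ~3 stones

def stones_between(elo_low, elo_high):
    """Naive linear handicap estimate between two Elo ratings."""
    return (elo_high - elo_low) / ELO_PER_STONE

# Top human (~3600) vs AlphaGo Zero 40b (5185):
print(stones_between(3600, 5185))  # roughly 8 "stones" on this naive scale
```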
-
dfan
- Gosei
- Posts: 1598
- Joined: Wed Apr 21, 2010 8:49 am
- Rank: AGA 2k Fox 3d
- GD Posts: 61
- KGS: dfan
- Has thanked: 891 times
- Been thanked: 534 times
- Contact:
Re: AlphaZero paper published in journal Science
John Fairbairn wrote:
Do the top bots still play so as to win by half a point rather than by as much as possible?

They play so as to maximize the probability that they will win by at least half a point.

John Fairbairn wrote:
If so, can that behaviour be easily modified so that the bot will try to maximise the score?

People are still working on it. One problem is that at some point you have to make a tradeoff and say, for example, "I am willing for my chance of winning to go down from 98% to 97% in return for winning by 10.5 points instead of 0.5". Due to the nature of the playing system, there's no good way to say "I have a 100% chance of winning, and now I want to maximize my score while retaining that 100% chance", although of course that statement is logically meaningful.
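One way to make that tradeoff explicit (a sketch only, not how any actual bot is implemented) is a blended utility of winrate plus a small score term, where the weight encodes how much winrate you'll trade per extra point:

```python
# Sketch: blend winrate with expected winning margin, so "win big"
# can outrank "win barely" when the winrate cost is small enough.
def utility(win_prob, expected_margin, score_weight=0.01):
    # score_weight is the winrate we're willing to give up per extra point.
    return win_prob + score_weight * expected_margin

safe   = utility(0.98, 0.5)    # 98% chance of winning by 0.5
greedy = utility(0.97, 10.5)   # 97% chance of winning by 10.5
print(safe, greedy)
```

With this weight the 98%-to-97% trade for ten extra points is worth taking; setting score_weight to zero recovers pure winrate maximization.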
-
Uberdude
- Judan
- Posts: 6727
- Joined: Thu Nov 24, 2011 11:35 am
- Rank: UK 4 dan
- GD Posts: 0
- KGS: Uberdude 4d
- OGS: Uberdude 7d
- Location: Cambridge, UK
- Has thanked: 436 times
- Been thanked: 3718 times
Re: AlphaZero paper published in journal Science
I wouldn't try to convert those Elo differences to handicap, it's like converting apples to volts. To take the example of LeelaZero vs Haylee a while ago (a bit weaker than Fan Hui I suppose), it absolutely demolished her on even and 2 stones, in a manner that if a human (e.g. Lee Sedol) did that I'd expect her to lose on 3 stones too, but she won easily on 3 with LZ going silly.
- jlt
- Gosei
- Posts: 1786
- Joined: Wed Dec 14, 2016 3:59 am
- GD Posts: 0
- Has thanked: 185 times
- Been thanked: 495 times
Re: AlphaZero paper published in journal Science
Note that the Elo rating does not vary linearly with handicap stones. Elo ratings are calculated in terms of winrate. God's Elo rating is infinite (well, not exactly, but extremely high), but God cannot give 359 stones to a human.
Last edited by jlt on Wed Dec 12, 2018 7:40 am, edited 1 time in total.
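The point above can be illustrated with the standard Elo expected-score formula (a sketch; DeepMind's extended scale may differ in detail): winrate saturates toward 1 as the gap grows, so an enormous Elo lead only means near-certain wins, not an unbounded handicap.

```python
# Winrate saturates toward 1 as the Elo gap grows, so arbitrarily
# large Elo gaps do not translate into arbitrarily large handicaps.
def winrate_from_gap(gap):
    """Expected score of the stronger player under the standard Elo model."""
    return 1 / (1 + 10 ** (-gap / 400))

for gap in (200, 400, 800, 1600):
    print(gap, round(winrate_from_gap(gap), 4))
```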
-
mitsun
- Lives in gote
- Posts: 553
- Joined: Fri Apr 23, 2010 10:10 pm
- Rank: AGA 5 dan
- GD Posts: 0
- Has thanked: 61 times
- Been thanked: 250 times
Re: AlphaZero paper published in journal Science
I can think of one fairly simple way to gauge the strength of a computer program, relative to a human, expressed in meaningful units. Start playing an even game. The computer evaluates its winning chances after every move as usual. If and when the computer calculates that passing will still result in a likely win, the computer passes. At the end of the game, the computer probably wins by a small margin. The strength difference is the number of passes issued along the way. This scheme has the desirable feature that the computer is always playing the game it was trained to play, with no need to alter komi or introduce handicap stones.
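The scheme above can be sketched as a toy simulation. Everything here is an illustrative assumption, not a real engine API: the "engine" is a stand-in whose winrate is a logistic function of how many free moves it has already given away, and the 90% pass threshold is arbitrary.

```python
import math

def winrate(moves_ceded, strength_gap):
    # Toy model: the stronger side's win probability falls logistically
    # as it cedes free moves, starting from its raw strength advantage.
    return 1 / (1 + math.exp(-(strength_gap - moves_ceded)))

def count_passes(strength_gap, total_moves=100, threshold=0.9):
    """Pass whenever passing still leaves a comfortable winrate;
    the final pass count is the strength gap in free moves."""
    passes = 0
    for _ in range(total_moves):
        if winrate(passes + 1, strength_gap) > threshold:
            passes += 1
    return passes

print(count_passes(strength_gap=6.0))
```

As Bill notes in the reply, where in the game those passes happen matters, which a model this crude ignores.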
-
Bill Spight
- Honinbo
- Posts: 10905
- Joined: Wed Apr 21, 2010 1:24 pm
- Has thanked: 3651 times
- Been thanked: 3373 times
Re: AlphaZero paper published in journal Science
Interesting idea. 
One possible problem is that, as the temperature drops, the odds that a pass by the computer will not affect who wins increases, so that the computer will probably pass more often in the endgame than in the opening. It is passes in the opening that approximate handicap stones. The number of passes under this scheme is likely not only to be greater than the number of handicap stones, it is likely to be more variable. Still, an interesting idea.
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins
Visualize whirled peas.
Everything with love. Stay safe.
- ez4u
- Oza
- Posts: 2414
- Joined: Wed Feb 23, 2011 10:15 pm
- Rank: Jp 6 dan
- GD Posts: 0
- KGS: ez4u
- Location: Tokyo, Japan
- Has thanked: 2351 times
- Been thanked: 1332 times
Re: AlphaZero paper published in journal Science
dfan wrote:
..."I am willing for my chance of winning to go down from 98% to 97% in return for winning by 10.5 points instead of 0.5". Due to the nature of the playing system, there's no good way to say "I have a 100% chance of winning, and now I want to maximize my score while retaining that 100% chance", although of course that statement is logically meaningful.

The statements may be logically meaningful but they are trivial. Isn't the real challenge to make sense of a statement like, "I have a 51% chance of winning by 0.5 points by playing X and a 49% chance of winning by 1.5 points by playing Y. I want to maximize my score; which should I choose?"
Dave Sigaty
"Short-lived are both the praiser and the praised, and rememberer and the remembered..."
- Marcus Aurelius; Meditations, VIII 21
-
Bill Spight
- Honinbo
- Posts: 10905
- Joined: Wed Apr 21, 2010 1:24 pm
- Has thanked: 3651 times
- Been thanked: 3373 times
Re: AlphaZero paper published in journal Science
ez4u wrote:
Isn't the real challenge to make sense of a statement like, "I have a 51% chance of winning by 0.5 points by playing X and a 49% chance of winning by 1.5 points by playing Y. I want to maximize my score; which should I choose?"

The thing is, amateur dans play the late endgame almost perfectly; but even pros do not play the late endgame perfectly. Under those circumstances, if it's a close call in the late endgame between going for a ½ pt. win versus going for a 1½ pt. win, the extra point gives a margin of safety. At least for humans.
But most, if not all, modern top bots do not assume nearly perfect play when they calculate winrates. And they do not estimate the margin of safety by expected scores, but by percentages.* As far as I can tell, the endgame, particularly the late endgame, is one of the places where humans play better than bots; life and death, semeai, and ladders being others. In all of these places, local reading can give the right global results. Bots excel at global reading, humans still excel at local reading.
* Edit: That's not right, is it? Modern top bots do not actually estimate the margin of safety, do they?
Last edited by Bill Spight on Sat Dec 08, 2018 4:33 pm, edited 1 time in total.
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins
Visualize whirled peas.
Everything with love. Stay safe.
- ez4u
- Oza
- Posts: 2414
- Joined: Wed Feb 23, 2011 10:15 pm
- Rank: Jp 6 dan
- GD Posts: 0
- KGS: ez4u
- Location: Tokyo, Japan
- Has thanked: 2351 times
- Been thanked: 1332 times
Re: AlphaZero paper published in journal Science
Bill Spight wrote:
The thing is, amateur dans play the late endgame almost perfectly; but even pros do not play the late endgame perfectly. Under those circumstances, if it's a close call in the late endgame between going for a ½ pt. win versus going for a 1½ pt. win, the extra point gives a margin of safety. At least for humans.

If the discussion is about switching from a winrate strategy to a maximum point strategy, then the starting point is the fuseki, not the late endgame.
Dave Sigaty
"Short-lived are both the praiser and the praised, and rememberer and the remembered..."
- Marcus Aurelius; Meditations, VIII 21