Re: Engine Tournament
Pull requests are welcome.
Life in 19x19. Go, Weiqi, Baduk... That's the life.
https://www.lifein19x19.com/
I am confident that at some point somebody will show up and provide some help for Linux. Keep up the good work!

zakki wrote:
I usually use Windows, and Rn has no maintainer on Linux. Pull requests are welcome.
Code: Select all
1. AQ 2.0.3 12/16
2. Leela 0.11.0 Beta 11 4/16

Code: Select all
1. Leela 0.11.0 18/20
2. Rayon 4.6.0 15/20
3. Oakfoam 0.2.1 NG-06 12/20
4. Leela Zero 0.11 5773f44c 7/20
5. Hiratuka 10.37B (CPU) 6/20
6. DarkForest v2 MCTS 1.0 2/20
Code: Select all
1. Leela Zero 0.11 c83e1b6e 15/20
2. Pachi DCNN 11.99 13/20
3. DarkGo 1.0 12/20
4. Dream Go 0.5.0 11/20
5. Ray 9.0.1 7/20
6. Mogo 4.86 2/20

Code: Select all
1. MoGo 4.86 18/20
2. deltaGo 1.0.0 14/20
3. Fuego 1.1 13/20
4. Michi C-2 1.4.2 8/20
5. Orego 7.08 5/20
6. GNU Go 3.8 2/20

Code: Select all
1. GNU Go 3.8 25/28
2. Hara 0.9 18/28
3. Matilda 1.25 16/28
4. Indigo 2009 16/28
5. Dariush 3.1.5.7 15/28
6. Aya 6.34 13/28
7. Fudo Go 3.0 7/28
8. JrefBot 081016-2022 2/28

Code: Select all
1. JrefBot 081016-2022 16/20
2. Iomrascálaí 0.3.2 12/20
3. SimpleGo 0.4.3 11/20
4. Crazy Patterns 0008-13 7/20
5. Marcos Go 1.0 7/20
6. AmiGo 1.8 7/20

Code: Select all
1. AmiGo 1.8 19/20
2. Beancounter 0.1 15/20
3. Stop 0.9-005 10/20
4. GoTraxx 1.4.2 7/20
5. CopyBot 0.1 6/20
6. Brown 1.0 3/20

So it would be good if we rewrote your understanding of basic mathematical principles...

as0770 wrote:
Then we have to rewrite basic mathematical principles.

q30 wrote:
It depends on the randomness of the games, which changes with the time control...

as0770 wrote:
You have no idea what you are talking about. Standard deviation doesn't change with the time control.
Nice of you to try to understand my points. Of course the probability _can_ change _slightly_ with the time control. But the result of a 1h match and a 2h match will be more or less the same. What you claim is that the result of a 2h match will show the relative strength more accurately than a 1h match, and that is nonsense.

q30 wrote:
So it would be good if we rewrote your understanding of basic mathematical principles...
To begin with, the standard deviation is the square root of the variance, which can be determined as follows:
\sigma = \sqrt{\sum_{i} p_i (x_i - \mu)^2},
where p_i in our case depends, among other things, on the time control...
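For what it's worth, the two positions connect if you plug a single game result into that definition (a standard binomial identity, not something from the thread): the result of one game is x in {0, 1} with win probability p, so \mu = p and the formula reduces to

$$\sigma_{\text{game}} = \sqrt{p(1-p)}, \qquad \sigma_{n\text{-game match}} = \sqrt{n\,p(1-p)}.$$

Near p = 0.5 this barely moves when p shifts a little, so even if the time control nudges the win probability, the spread of match scores stays essentially the same.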
Although it's best not to take this heuristic too seriously, because a nontrivial change is possible. I haven't read it that closely, but my skim of the following thread https://github.com/gcp/leela-zero/issues/667 suggested that Leela Zero has sometimes gotten noticeably different results between very small numbers of playouts, like 5, and a larger number of playouts, like 1600, where the relative strength difference and even sometimes the ordering of strength would change between the neural nets.

as0770 wrote:
Nice of you to try to understand my points. Of course the probability _can_ change _slightly_ with the time control. But the result of a 1h match and a 2h match will be more or less the same. What you claim is that the result of a 2h match will show the relative strength more accurately than a 1h match, and that is nonsense.

q30 wrote:
So it would be good if we rewrote your understanding of basic mathematical principles...
To begin with, the standard deviation is the square root of the variance, which can be determined as follows:
\sigma = \sqrt{\sum_{i} p_i (x_i - \mu)^2},
where p_i in our case depends, among other things, on the time control...
Two engines of equal strength will have a 50% chance of a 1-1, a 25% chance of a 0-2, and a 25% chance of a 2-0. If you double the time control from 1h to 2h, the overall winning probability will _maybe_ change to 51:49%. Experience with engine matches in chess is that you get basically the same results at 1 min/game and 2 h/game as long as there is no significant bug. The difference between a 1h/game and a 2h/game match is not measurable. There is no reason why it should be different in Go. Even if the probability changes to 55:45%, you would need hundreds of games to prove the difference in strength. What I do is a tournament with 20 or 30 games. If I run the tournament twice I can get completely different results. This won't change with 2h/games or pondering on (League A is 2h on 4 threads, btw).
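One way to sanity-check these figures (a sketch of my own, using only the binomial model above; the 20-game and 55:45 numbers are the ones from the post):

Code: Select all
import math

# The score of an n-game match between engines with per-game win
# probability p is binomial: mean n*p, std. dev. sqrt(n*p*(1-p)).
def score_sd(n, p=0.5):
    return math.sqrt(n * p * (1 - p))

print(score_sd(20))  # ~2.24 wins: anything from 8-12 to 12-8 is within one sigma

# Games needed for a true 55% engine to sit two sigmas above 50%
# (normal approximation): need 0.05*n > 2*sqrt(n*0.55*0.45).
n = (2 / 0.05) ** 2 * 0.55 * 0.45
print(math.ceil(n))  # 396 -- "hundreds of games" indeed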
You are quite right if the same engine is sparring against itself. But even with two simple MC engines (which in self-play would demonstrate the chances you mention as time per move --> 0), there may be a difference in strength (i.e. in chances) that depends on the time control, because of differences in the best-move-choice algorithm (and all the more so for more complex engines with more complex algorithms).

as0770 wrote:
q30 wrote:
So it would be good if we rewrote your understanding of basic mathematical principles...
To begin with, the standard deviation is the square root of the variance, which can be determined as follows:
\sigma = \sqrt{\sum_{i} p_i (x_i - \mu)^2},
where p_i in our case depends, among other things, on the time control...
Nice of you to try to understand my points. Of course the probability _can_ change _slightly_ with the time control. But the result of a 1h match and a 2h match will be more or less the same. What you claim is that the result of a 2h match will show the relative strength more accurately than a 1h match, and that is nonsense.
Two engines of equal strength will have a 50% chance of a 1-1, a 25% chance of a 0-2, and a 25% chance of a 2-0. If you double the time control from 1h to 2h, the overall winning probability will _maybe_ change to 51:49%. Experience with engine matches in chess is that you get basically the same results at 1 min/game and 2 h/game as long as there is no significant bug. The difference between a 1h/game and a 2h/game match is not measurable. There is no reason why it should be different in Go. Even if the probability changes to 55:45%, you would need hundreds of games to prove the difference in strength. What I do is a tournament with 20 or 30 games. If I run the tournament twice I can get completely different results. This won't change with 2h/games or pondering on (League A is 2h on 4 threads, btw).
You don't get the point. The statistical fluctuation is way too high to measure little differences in strength. I won't play hundreds of games to prove you wrong.

q30 wrote:
You are quite right if the same engine is sparring against itself. But even with two simple MC engines (which in self-play would demonstrate the chances you mention as time per move --> 0), there may be a difference in strength (i.e. in chances) that depends on the time control, because of differences in the best-move-choice algorithm (and all the more so for more complex engines with more complex algorithms).
You can try comparing the results of two engines (of similar strength) under the time and thread settings you used for Leagues B-F with the results of the same two engines sparring at 2' per move on 4 threads...
This discussion doesn't make any sense. No more replies by me.

as0770 wrote:
Pachi vs. Hiratuka 8:8
Pachi vs. Hiratuka 2:14
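A way to put numbers on this swing (my own illustration; the 35% win rate is an arbitrary value chosen to show that both results can come from the same underlying strength):

Code: Select all
from math import comb

def pmf(k, n, p):
    """Binomial probability of exactly k wins in n games."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# If Pachi's true win rate vs. Hiratuka were, say, 35%, both observed
# 16-game results are unremarkable single-match outcomes:
print(pmf(8, 16, 0.35))  # ~0.092  (the 8:8 match)
print(pmf(2, 16, 0.35))  # ~0.035  (the 2:14 match)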
Thanks for running the tournament and sharing the results. It's nice also to have the list of internet links.

as0770 wrote:
Now in League A: Leela Zero 5773f44c (2018.01.26); it lost 5 games because of a ladder. Also, Leela is updated to v0.11.0.
Of course with 5 playouts there will be different results, but we are talking about 1h/game vs. 2h/game, which is 7,000 vs. 14,000 playouts on my system.

lightvector wrote:
Although it's best not to take this heuristic too seriously, because a nontrivial change is possible. I haven't read it that closely, but my skim of the following thread https://github.com/gcp/leela-zero/issues/667 suggested that Leela Zero has sometimes gotten noticeably different results between very small numbers of playouts, like 5, and a larger number of playouts, like 1600, where the relative strength difference and even sometimes the ordering of strength would change between the neural nets.
It's actually not surprising at all to me that Leela Zero in some cases could have quite a large difference in strength between tiny numbers of playouts and large numbers of playouts, enough to change the ordering between nets. For example, new candidate nets often appear to vary in strength on the order of multiple hundreds of Elo, so training is very noisy, and there's no reason to expect that the quality of the policy part of the neural net and the value part of the neural net always vary together in the same way. And thinking in those terms, it's pretty obvious that you're measuring something fairly different at 5 playouts vs. at 1600 playouts. With very few playouts you rely on the policy net more heavily.
I agree that if you're only running 20 or 30 games, then of course none of this matters; the noise in 20 to 30 games still dwarfs this.
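To put that "the noise dwarfs this" point in Elo terms (a sketch of my own, using the standard logistic Elo model):

Code: Select all
import math

# Elo difference implied by a win rate p under the logistic model.
def elo(p):
    return 400 * math.log10(p / (1 - p))

n = 20
sd = math.sqrt(n * 0.25)      # ~2.24 wins between equal engines
upper = (n / 2 + 2 * sd) / n  # win rate at the edge of the ~95% band
print(round(elo(upper)))      # ~167 Elo: the noise floor of a 20-game match

So a 20-game match can't resolve anything much smaller than a couple hundred Elo, which is the same order as the variation between candidate nets mentioned above.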
Code: Select all
1. Ray 9.0.1 29/32
2. Pachi DCNN 11.99 28/32
3. Leela Zero 0.9 (2018.01.01) 19/32
4. MoGo 4.86 18/32
5. deltaGo 1.0.0 17/32
6. Fuego 1.1 15/32
7. Michi C-2 1.4.2 8/32
8. Orego 7.08 8/32
9. GNU Go 3.8 2/32

Code: Select all
1. Leela Zero 0.11 c83e1b6e 15/20
2. Pachi DCNN 11.99 13/20
3. DarkGo 1.0 12/20
4. Dream Go 0.5.0 11/20
5. Ray 9.0.1 7/20
6. Mogo 4.86 2/20

Code: Select all
1. DreamGo 0.5.0 15/20
2. DarkForest v2 MCTS 1.0 12/20
3. Pachi DCNN 11.99 12/20
4. DarkGo 1.0 10/20
5. Ray 9.0.1 9/20
6. Mogo 4.86 2/20

Code: Select all
1. Leela 0.11.0 9/16
2. AQ 2.1.1 7/16

Code: Select all
1. Leela 0.11.0 18/20
2. Rayon 4.6.0 15/20
3. Oakfoam 0.2.1 NG-06 12/20
4. Hiratuka 10.37B (CPU) 7/20
5. Leela Zero 0.11 5773f44c 6/20
6. DreamGo 0.5.0 2/20
Code: Select all
1. DreamGo 0.5.0 15/20
2. DarkForest MCTS 1.0 12/20
3. Pachi 11.99 12/20
4. DarkGo 1.0 10/20
5. Ray 9.0.1 9/20
6. Mogo 4.86 2/20

Code: Select all
1. MoGo 4.86 18/20
2. deltaGo 1.0.0 14/20
3. Fuego 1.1 13/20
4. Michi C-2 1.4.2 8/20
5. Orego 7.08 5/20
6. GNU Go 3.8 2/20

Code: Select all
1. GNU Go 3.8 25/28
2. Hara 0.9 18/28
3. Matilda 1.25 16/28
4. Indigo 2009 16/28
5. Dariush 3.1.5.7 15/28
6. Aya 6.34 13/28
7. Fudo Go 3.0 7/28
8. JrefBot 081016-2022 2/28

Code: Select all
1. JrefBot 081016-2022 16/20
2. Iomrascálaí 0.3.2 12/20
3. SimpleGo 0.4.3 11/20
4. Crazy Patterns 0008-13 7/20
5. Marcos Go 1.0 7/20
6. AmiGo 1.8 7/20

Code: Select all
1. AmiGo 1.8 19/20
2. Beancounter 0.1 15/20
3. Stop 0.9-005 10/20
4. GoTraxx 1.4.2 7/20
5. CopyBot 0.1 6/20
6. Brown 1.0 3/20

This result only proves that the time control was very short for these engines (or one of them), so the games were very random...

as0770 wrote:
You don't get the point. The statistical fluctuation is way too high to measure little differences in strength. I won't play hundreds of games to prove you wrong.

q30 wrote:
You are quite right if the same engine is sparring against itself. But even with two simple MC engines (which in self-play would demonstrate the chances you mention as time per move --> 0), there may be a difference in strength (i.e. in chances) that depends on the time control, because of differences in the best-move-choice algorithm (and all the more so for more complex engines with more complex algorithms).
You can try comparing the results of two engines (of similar strength) under the time and thread settings you used for Leagues B-F with the results of the same two engines sparring at 2' per move on 4 threads...
Once again: these are two matches with the same engines and the same conditions. This discussion doesn't make any sense. No more replies by me.

as0770 wrote:
Pachi vs. Hiratuka 8:8
Pachi vs. Hiratuka 2:14
Code: Select all
1. Leela 0.11.0 9/16
2. AQ 2.1.1 7/16

Code: Select all
1. Leela 0.11.0 17/20
2. Leela Zero 0.11 cde9c8d4 13/20
3. Rayon 4.6.0 13/20
4. Oakfoam 0.2.1 NG-06 12/20
5. Hiratuka 10.37B (CPU) 4/20
6. DreamGo 0.5.0 1/20
Code: Select all
1. DreamGo 0.5.0 15/20
2. DarkForest MCTS 1.0 12/20
3. Pachi 11.99 12/20
4. DarkGo 1.0 10/20
5. Ray 9.0.1 9/20
6. Mogo 4.86 2/20

Code: Select all
1. MoGo 4.86 18/20
2. deltaGo 1.0.0 14/20
3. Fuego 1.1 13/20
4. Michi C-2 1.4.2 8/20
5. Orego 7.08 5/20
6. GNU Go 3.8 2/20

Code: Select all
1. GNU Go 3.8 25/28
2. Hara 0.9 18/28
3. Matilda 1.25 16/28
4. Indigo 2009 16/28
5. Dariush 3.1.5.7 15/28
6. Aya 6.34 13/28
7. Fudo Go 3.0 7/28
8. JrefBot 081016-2022 2/28

Code: Select all
1. JrefBot 081016-2022 16/20
2. Iomrascálaí 0.3.2 12/20
3. SimpleGo 0.4.3 11/20
4. Crazy Patterns 0008-13 7/20
5. Marcos Go 1.0 7/20
6. AmiGo 1.8 7/20

Code: Select all
1. AmiGo 1.8 19/20
2. Beancounter 0.1 15/20
3. Stop 0.9-005 10/20
4. GoTraxx 1.4.2 7/20
5. CopyBot 0.1 6/20
6. Brown 1.0 3/20

I've run games between AQ and Rayon or others; it works well with Sabaki. The problem is that Sabaki doesn't handle consecutive matches automatically. You have to run one game after another; I don't think you can tell Sabaki to automatically run 16 consecutive games between X and Y, save the games, and at the end report the score of the 16-game match. If someone knows how to do it, tell me, I'd be interested.

Unfortunately AQ doesn't work with Rayon and Oakfoam...
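The usual answer to this is GoGui's twogtp tool, which plays a set number of games between two GTP engines and tallies the results. For the record, here is a minimal sketch of the same idea in Python, assuming both engines speak standard GTP (boardsize/komi/clear_board/genmove/play/final_score); the engine command lines are placeholders, there is no error handling, and colors are not alternated between games:

Code: Select all
import subprocess

def start(cmd):
    return subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)

def gtp(engine, command):
    """Send one GTP command and return the response payload."""
    engine.stdin.write(command + "\n")
    engine.stdin.flush()
    reply = []
    while True:
        line = engine.stdout.readline()
        if line.strip() == "":      # GTP responses end with a blank line
            break
        reply.append(line.strip())
    return reply[0].lstrip("=? ")   # drop the status character

def play_game(black, white, size=19, komi=7.5):
    for e in (black, white):
        gtp(e, "boardsize %d" % size)
        gtp(e, "komi %s" % komi)
        gtp(e, "clear_board")
    sides = [("b", black, white), ("w", white, black)]
    passes, ply = 0, 0
    while passes < 2:               # stop after two consecutive passes
        color, mover, other = sides[ply % 2]
        move = gtp(mover, "genmove " + color)
        if move.lower() == "resign":
            return "W+Resign" if color == "b" else "B+Resign"
        passes = passes + 1 if move.lower() == "pass" else 0
        gtp(other, "play %s %s" % (color, move))
        ply += 1
    return gtp(black, "final_score")  # crude: trusts one engine's scoring

if __name__ == "__main__":
    black = start(["engine_x", "--gtp"])  # placeholder command lines
    white = start(["engine_y", "--gtp"])
    for i in range(16):
        print(i + 1, play_game(black, white))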
I'm gonna try Sabaki, but I think it is a GPU memory conflict. Even running both engines in a console makes one of them crash. I think I need to update my computer...

Vargo wrote:
I've run games between AQ and Rayon or others; it works well with Sabaki. The problem is that Sabaki doesn't handle consecutive matches automatically. You have to run one game after another.
Unfortunately AQ doesn't work with Rayon and Oakfoam...