Uberdude wrote: Something to note is LZ didn't anticipate Otake's good move 29 beforehand so its judgements preceding that are less trustworthy, but once it gets to that move it does find it after about 30,000 playouts.

This is definitely something to keep in mind. I did a similar exercise running old Go World commented games through LZ - there was one where a sequence was played that LZ heavily disliked, but that was because it didn't see that there was a ko left for later. Once the move that set up the ko was played, the winrate shot back up. So one has to keep in mind that the networks have these kinds of holes in their evaluations, and mentally correct for it if one manages to notice the problem.
Easy to set up Go AI?
-
bernds
- Lives with ko
- Posts: 259
- Joined: Sun Apr 30, 2017 11:18 pm
- Rank: 2d
- GD Posts: 0
- Has thanked: 46 times
- Been thanked: 116 times
Re: Easy to set up Go AI?
-
John Fairbairn
- Oza
- Posts: 3724
- Joined: Wed Apr 21, 2010 3:09 am
- Has thanked: 20 times
- Been thanked: 4672 times
Re: Easy to set up Go AI?
I looked with LZ at a famous old no-komi game (Genjo-Chitoku) which I chose for a variety of reasons: (i) it has many pro commentaries; (ii) it has been widely hailed as a masterpiece ("no wrong moves"); (iii) Takagawa disagreed with that assessment; (iv) it has a clutch of praised tesujis; (v) it has a fairly early (move 66) comment that the "game is now close" (it ended in jigo).
Quite a few of the early fuseki moves did not even appear on the LZ radar. The modern pros likewise were generally quick to say these were Edo-style and "modern players would ...". We must also remember, on LZ's behalf, that there was no komi.
For the most part, early in the game, the players' moves and the commentators' suggested alternatives lined up pretty well with the moves LZ considered. Furthermore, where the modern pros suggested a better move than the one played, LZ seemed to agree. This phase of the game was, however, largely tactical.
When strategic considerations took over, LZ differed sharply. It was a direction of play issue. It wanted to play in another part of the board that both players and commentators ignored completely. The moves played or suggested by humans were not wildly inferior (and were also often selected by LZ once play returned to that area later) but what was noticeable was precisely that the area chosen by LZ was a blind spot for humans at that stage of the game (they did go there later and play LZ-type moves, but there was a clear disagreement about timing).
LZ didn't like moyo-making moves. Humans did.
My interpretation of the last two points, at least in this game, is that LZ preferred solidity. Humans preferred flexibility. That surprised me. I'd have thought the AI program would prefer flexibility (they "see" farther) and humans tend to go for solidity as a way of reducing uncertainty (and especially so in no-komi games).
There was a tesuji that was universally praised by humans that LZ didn't even consider even when I let it run a longish time. It was also sniffy about some other moves praised by humans (with even Takagawa joining the chorus). Again it seemed to prefer solid moves, not fancy tricks.
At move 66, when commentators thought the game was now close, the winrate in favour of White had shot up from 54% to over 80%, with no obvious horrendous mistakes - just a slow accumulation of increments. This of course assumes 7.5 komi. I'm guessing that what this means, at least in part, is that as the game progresses so does the uncertainty and so the program can be ever more confident about its winrate. In other words, it does not necessarily mean Black's play deteriorated. Another 66 moves further on, again with no obvious blunders according to humans, the winrate had increased to 87% (and after a further tranche of 66 moves it was 99.8%).
In summary, the humans were not disgraced, though generational differences were confirmed and the label of "masterpiece" may be generous, but LZ liked stolid moves much more than they did. And, for me at least, it remains puzzling what winrate really means, but I infer it matters more in relation to individual moves rather than over a whole game.
-
dfan
- Gosei
- Posts: 1599
- Joined: Wed Apr 21, 2010 8:49 am
- Rank: AGA 2k Fox 3d
- GD Posts: 61
- KGS: dfan
- Has thanked: 891 times
- Been thanked: 534 times
- Contact:
Re: Easy to set up Go AI?
John Fairbairn wrote: LZ didn't like moyo-making moves. Humans did. My interpretation of the last two points, at least in this game, is that LZ preferred solidity. Humans preferred flexibility. That surprised me. I'd have thought the AI program would prefer flexibility (they "see" farther) and humans tend to go for solidity as a way of reducing uncertainty (and especially so in no-komi games).

I think that one reason that modern engines don't like moyos as much is that they are really good at reducing and invading them, so they don't think they are as valuable. I think in some of these cases a top human would also be able to invade and live/run if he really had to, but he'd really rather not stake the game on it, whereas LZ etc. are much more confident that they'll find a way.
John Fairbairn wrote: There was a tesuji that was universally praised by humans that LZ didn't even consider even when I let it run a longish time.

I'm curious whether it liked the move after it was played (what happened to the winrate?).
John Fairbairn wrote: At move 66, when commentators thought the game was now close, the winrate in favour of White had shot up from 54% to over 80%, with no obvious horrendous mistakes - just a slow accumulation of increments. This of course assumes 7.5 komi. I'm guessing that what this means, at least in part, is that as the game progresses so does the [un]certainty and so the program can be ever more confident about its winrate. In other words, it does not necessarily mean Black's play deteriorated. Another 66 moves further on, again with no obvious blunders according to humans, the winrate had increased to 87% (and after a further tranche of 66 moves it was 99.8%).

I put your "un" in brackets as I think it was a typo. Indeed, if Black is winning by 3 points early in the game, he's ahead but White still has a chance, while if Black is winning by 3 points in the endgame, White might as well resign.
John Fairbairn wrote: And, for me at least, it remains puzzling what winrate really means, but I infer it matters more in relation to individual moves rather than over a whole game.

Winrate is how probable LZ thinks it is that Black would win if it were allowed to take over the game and play both sides from here to the end. So certainly if move A has a higher winrate than move B, you can infer that LZ thinks that move A is better, without having to worry about exactly what each number means.
-
John Fairbairn
- Oza
- Posts: 3724
- Joined: Wed Apr 21, 2010 3:09 am
- Has thanked: 20 times
- Been thanked: 4672 times
Re: Easy to set up Go AI?
dfan wrote: I'm curious whether it liked the move after it was played (what happened to the winrate?).

I haven't got round to advanced thinking like that yet.
dfan wrote: I put your "un" in brackets as I think it was a typo.

It wasn't a typo, though of course I may have made a mistake in other ways. My thinking was that as the board fills up and there are fewer moves to the end of the game (i.e. there is more information), it is possible to make more moves that are reliable (even if not always the best).
dfan wrote: Winrate is how probable LZ thinks it is that Black would win if it were allowed to take over the game and play both sides from here to the end. So certainly if move A has a higher winrate than move B you can infer that LZ thinks that move A is better, without having to worry about exactly what each number means.

Yes, I understand that. But that is just when comparing A, B, C at the same move. What I was postulating was that, nearer the end of the game, the winrate for each of A, B and C is likely to be higher because of the "certainty" I already alluded to. That is, winrate at move 100 implies something a bit different from winrate at move 50. I suppose what I am really referring to is the winrate graph and am implying that its trend is not really telling us anything. The only parts that seem hugely significant are when the direction of the line changes drastically.
If anybody else fancies running tests at a different speed or on a different set-up, here is the sgf of the game I looked at.
(;SZ[19]FF[3]
PW[Honinbo Genjo]
WR[7d]
PB[Yasui Chitoku]
BR[7d]
EV[Castle Game]
DT[1806-12-26 (Bunka 3 XI 17)]
PC[Edo Castle]
KM[0]
RE[Jigo]
US[GoGoD95]
;B[qd];W[dc];B[cp];W[pq];B[eq];W[oc];B[ce];W[ci];B[lc];W[ld];B[kd];W[mc]
;B[kc];W[qg];B[pe];W[of];B[oe];W[nf];B[pc];W[ne];B[gc];W[ed];B[qo];W[qk]
;B[mq];W[op];B[jq];W[pn];B[ck];W[df];B[qm];W[qp];B[pm];W[ok];B[po];W[ic]
;B[id];W[hc];B[hb];W[hd];B[ib];W[fb];B[gb];W[he];B[kf];W[mp];B[nq];W[on]
;B[rp];W[rq];B[ol];W[nk];B[oo];W[np];B[nn];W[ro];B[rn];W[sp];B[nl];W[re]
;B[rd];W[qe];B[pf];W[pg];B[rb];W[cf];B[ek];W[fp];B[fq];W[ip];B[jp];W[in]
;B[io];W[ho];B[jo];W[hm];B[en];W[fn];B[fm];W[gq];B[fo];W[km];B[rk];W[rl]
;B[pk];W[pj];B[ql];W[qj];B[rj];W[ri];B[sl];W[sj];B[rm];W[fc];B[jn];W[jm]
;B[il];W[hl];B[mk];W[mi];B[ln];W[je];B[mb];W[nb];B[md];W[nd];B[nc];W[ep]
;B[dp];W[mc];B[ma];W[ke];B[le];W[me];B[nc];W[lf];B[mj];W[ni];B[ik];W[hk]
;B[ij];W[hj];B[kj];W[dj];B[dk];W[im];B[gn];W[hn];B[bj];W[bi];B[or];W[pb]
;B[qb];W[li];B[kk];W[ej];B[ih];W[ii];B[ji];W[hi];B[jg];W[hg];B[ir];W[gr]
;B[qr];W[pr];B[ps];W[rr];B[er];W[mc];B[gp];W[hp];B
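For anyone who wants to script over this record rather than load it into a GUI, here is a minimal sketch (standard-library Python only; the function name is mine). It handles only the flat ;B[xx]/;W[xx] move properties used above, not variations, escapes, or passes, for which a proper SGF library should be used:

```python
import re

def sgf_moves(sgf_text):
    # Extract (colour, coordinate) pairs from a simple SGF record.
    # Coordinates are SGF-style letter pairs, e.g. "qd" = column q, row d.
    return re.findall(r";([BW])\[([a-s]{2})\]", sgf_text)

record = "(;SZ[19]KM[0];B[qd];W[dc];B[cp];W[pq])"  # opening moves of the game above
print(sgf_moves(record))  # [('B', 'qd'), ('W', 'dc'), ('B', 'cp'), ('W', 'pq')]
```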
-
Bill Spight
- Honinbo
- Posts: 10905
- Joined: Wed Apr 21, 2010 1:24 pm
- Has thanked: 3651 times
- Been thanked: 3373 times
Re: Easy to set up Go AI?
John Fairbairn wrote: LZ didn't like moyo-making moves. Humans did. My interpretation of the last two points, at least in this game, is that LZ preferred solidity. Humans preferred flexibility. That surprised me. I'd have thought the AI program would prefer flexibility (they "see" farther) and humans tend to go for solidity as a way of reducing uncertainty (and especially so in no-komi games).

dfan wrote: I think that one reason that modern engines don't like moyos as much is that they are really good at reducing and invading them, so they don't think they are as valuable. I think in some of these cases a top human would also be able to invade and live/run if he really had to, but he'd really rather not stake the game on it, whereas LZ etc. are much more confident that they'll find a way.

My take, largely based upon AlphaGo, which may not hold for other bots, is that AlphaGo, and especially AlphaGo Zero, when playing itself, was extremely flexible. However, when playing humans, it often went for solidity. My guess is that when it felt itself sufficiently ahead, even though humans might not realize that, it went for solid play.
As for moyos, AlphaGo Zero seemed to me to have a relatively cosmic style, by comparison with current humans and Leela Zero and Elf. But the other two bots are quite capable of cosmic plays, as well. That said, my intuition about their center play is uncertain. They certainly will sometimes make plays that are too loose to make a moyo yet.
John Fairbairn wrote: At move 66, when commentators thought the game was now close, the winrate in favour of White had shot up from 54% to over 80%, with no obvious horrendous mistakes - just a slow accumulation of increments. This of course assumes 7.5 komi. I'm guessing that what this means, at least in part, is that as the game progresses so does the [un]certainty and so the program can be ever more confident about its winrate. In other words, it does not necessarily mean Black's play deteriorated. Another 66 moves further on, again with no obvious blunders according to humans, the winrate had increased to 87% (and after a further tranche of 66 moves it was 99.8%).

Pretty much what I would expect with good play, if Black were aiming to lose by 6.5 pts. or less, given 7.5 komi.
John Fairbairn wrote: And, for me at least, it remains puzzling what winrate really means, but I infer it matters more in relation to individual moves rather than over a whole game.

dfan wrote: Winrate is how probable LZ thinks it is that Black would win if it were allowed to take over the game and play both sides from here to the end. So certainly if move A has a higher winrate than move B you can infer that LZ thinks that move A is better, without having to worry about exactly what each number means.

It is my impression that the standard value networks are trained with 7.5 komi, so that komi is implied. My guess, however, is that komi must be explicitly supplied for the Monte Carlo playout results. IIUC, the winrates are the average of the two. What happens if the actual komi, zero in this case, is given to the Monte Carlo calculations?
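For what it's worth, Bill's recollection matches my understanding of the original AlphaGo (Fan/Lee) design: the leaf value was a weighted mix of the value network's estimate and the result of a fast rollout played to the end, with 0.5 the reported mixing constant. A toy sketch (the function name is mine; real engines work on batched positions, not scalars):

```python
def mixed_leaf_value(v_net, rollout_result, lam=0.5):
    # Fan/Lee-style AlphaGo leaf evaluation: (1 - lambda) * value-network
    # estimate + lambda * rollout outcome. Zero-style engines drop the
    # rollout term entirely, i.e. effectively lam = 0.
    return (1 - lam) * v_net + lam * rollout_result

print(mixed_leaf_value(0.6, 1.0))  # 0.8
```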
Last edited by Bill Spight on Sat Aug 18, 2018 10:17 am, edited 1 time in total.
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins
Visualize whirled peas.
Everything with love. Stay safe.
-
Uberdude
- Judan
- Posts: 6727
- Joined: Thu Nov 24, 2011 11:35 am
- Rank: UK 4 dan
- GD Posts: 0
- KGS: Uberdude 4d
- OGS: Uberdude 7d
- Location: Cambridge, UK
- Has thanked: 436 times
- Been thanked: 3718 times
Re: Easy to set up Go AI?
John Fairbairn wrote: Yes, I understand that. But that is just when comparing A, B, C at the same move. What I was postulating was that, nearer the end of the game, the winrate for each of A, B and C is likely to be higher because of the "certainty" I already alluded to. That is, winrate at move 100 implies something a bit different from winrate at move 50. I suppose what I am really referring to is the winrate graph and am implying that its trend is not really telling us anything. The only parts that seem hugely significant are when the direction of the line changes drastically.

Yes, I wouldn't attach too much meaning to the absolute value of the winrate. There have been various discussions about it in the computer go subforum, but I see this as more interesting from a mathematical or technical perspective than useful for reviewing go games. As lightvector said elsewhere, you need to get to know your bot/weight file to get a feel for how qualitatively big a 1% vs 5% vs 20% mistake is, or how much of a lead 80% is. And if the game is already at 97% (which could just mean a 5-point lead near the end) and you make a 50-point blunder, the worst it can rate it is -3%.
As an example, I analysed my recent game with Shi Yue 9p with LZ. As I had 4 stones handicap I started out with a 96% win, and we could call this something like a 50-point lead. Until I killed myself in late yose the win% basically wobbled around in the 90s the whole game, despite my being only about 5 points ahead before the death; he caught up throughout the game, but not quite quickly enough. So 96% at move 1 meant a 50-point lead, but 90% at move 250 meant a 5-point lead - both mean LZ is very confident of winning if it played that position against itself (it can't model one player being weaker, as in a handicap game). Of course it immediately identified my blunder and the win% plunged on that move.
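The mapping from point lead to winrate inside LZ is learned, not an explicit formula, but the effect described above - the same high winrate meaning a 50-point lead early or a 5-point lead late - can be mimicked with a toy logistic model. Everything here is illustrative, including the constant k:

```python
import math

def toy_winrate(point_lead, moves_remaining, k=30.0):
    # Toy model only: the fewer moves remain, the less time there is to
    # overturn a lead, so the same point lead maps to a more extreme
    # winrate as the game shortens.
    sharpness = k / max(moves_remaining, 1)
    return 1.0 / (1.0 + math.exp(-sharpness * point_lead))

print(round(toy_winrate(50, 250), 3))  # ~0.998: huge lead, long game ahead
print(round(toy_winrate(5, 20), 3))    # ~0.999: modest lead, little time left
```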
-
Tryss
- Lives in gote
- Posts: 502
- Joined: Tue May 24, 2011 1:07 pm
- Rank: KGS 2k
- GD Posts: 100
- KGS: Tryss
- Has thanked: 1 time
- Been thanked: 153 times
Re: Easy to set up Go AI?
Bill Spight wrote: It is my impression that the standard value networks are trained with 7.5 komi, so that komi is implied. My guess, however, is that komi must be explicitly supplied for the Monte Carlo playout results. IIUC, the winrates are the average of the two. What happens if the actual komi, zero in this case, is given to the Monte Carlo calculations?

There are no Monte Carlo random simulations; it's just the value network that estimates the position at each leaf.
-
Bill Spight
- Honinbo
- Posts: 10905
- Joined: Wed Apr 21, 2010 1:24 pm
- Has thanked: 3651 times
- Been thanked: 3373 times
Re: Easy to set up Go AI?
Tryss wrote: There are no Monte Carlo random simulations; it's just the value network that estimates the position at each leaf.

So that's different from AlphaGo Zero, IIRC?
And the value network doesn't do any actual counting, even if the end of play is reached? It just kind of "knows" who won? If so, that could explain the recent peculiar end of game behavior of Zen in one game.
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins
Visualize whirled peas.
Everything with love. Stay safe.
-
Tryss
- Lives in gote
- Posts: 502
- Joined: Tue May 24, 2011 1:07 pm
- Rank: KGS 2k
- GD Posts: 100
- KGS: Tryss
- Has thanked: 1 time
- Been thanked: 153 times
Re: Easy to set up Go AI?
Bill Spight wrote: So that's different from AlphaGo Zero, IIRC? And the value network doesn't do any actual counting, even if the end of play is reached? It just kind of "knows" who won? If so, that could explain the recent peculiar end of game behavior of Zen in one game.

No, it was the same as AlphaGo Zero.

Page 2 of the AG Zero paper wrote: Our program, AlphaGo Zero, differs from AlphaGo Fan and AlphaGo Lee in several important aspects. First and foremost, it is trained solely by self-play reinforcement learning, starting from random play, without any supervision or use of human data. Second, it only uses the black and white stones from the board as input features. Third, it uses a single neural network, rather than separate policy and value networks. Finally, it uses a simpler tree search that relies upon this single neural network to evaluate positions and sample moves, without performing any Monte Carlo rollouts.

And in practice, there is no counting with LZ: it usually resigns, and hasn't really learned to pass (the no-resign self-play games are usually over 500 moves long).
-
dfan
- Gosei
- Posts: 1599
- Joined: Wed Apr 21, 2010 8:49 am
- Rank: AGA 2k Fox 3d
- GD Posts: 61
- KGS: dfan
- Has thanked: 891 times
- Been thanked: 534 times
- Contact:
Re: Easy to set up Go AI?
Bill Spight wrote: So that's different from AlphaGo Zero, IIRC?

AlphaGo did actual Monte Carlo simulations to the end of the game. AlphaGo Zero and all its descendants (AlphaZero, Leela Zero, ELF OpenGo, etc.) do not, despite continuing to use the (now misleading) term "Monte Carlo Tree Search".
Bill Spight wrote: And the value network doesn't do any actual counting, even if the end of play is reached? It just kind of "knows" who won?

Terminal positions in the tree search are actually evaluated by the game rules. However, the position is not considered terminal until both players have passed. The engines do know how to pass, but I don't know how often it comes up in the tree search. Since they all use Chinese rules, there is no penalty for continuing to play on for a while after the dame are filled.
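The two points here - no rollouts, but exact scoring at genuinely terminal nodes - can be sketched roughly as follows. The class and function names are mine, not Leela Zero's, and a real engine also tracks the board, ko, priors, and PUCT visit statistics:

```python
class Position:
    # Minimal stand-in for a game state, just enough for the sketch.
    def __init__(self, passes=0, result=0.0):
        self.passes = passes    # consecutive passes so far
        self.result = result    # exact score (Black's view) if terminal

    def is_terminal(self):
        return self.passes >= 2  # both players have passed

def evaluate_leaf(position, value_net):
    # Zero-style leaf evaluation: no random rollouts. Terminal positions
    # are scored exactly by the rules (Chinese counting); everything
    # else gets the value network's learned estimate in [-1, 1].
    if position.is_terminal():
        return 1.0 if position.result > 0 else -1.0
    return value_net(position)

print(evaluate_leaf(Position(), lambda p: 0.3))             # 0.3: network estimate
print(evaluate_leaf(Position(passes=2, result=4.5), None))  # 1.0: exact Black win
```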
-
dfan
- Gosei
- Posts: 1599
- Joined: Wed Apr 21, 2010 8:49 am
- Rank: AGA 2k Fox 3d
- GD Posts: 61
- KGS: dfan
- Has thanked: 891 times
- Been thanked: 534 times
- Contact:
Re: Easy to set up Go AI?
John Fairbairn wrote: It wasn't a typo, though of course I may have made a mistake in other ways. My thinking was that as the board fills up and there are fewer moves to the end of the game (i.e. there is more information), it is possible to make more moves that are reliable (even if not always the best).

Agreed; that's why I would have expected you to say "as the game progresses so does the certainty" rather than "as the game progresses so does the uncertainty", but if we both agree then it doesn't matter.
John Fairbairn wrote: Yes, I understand that. But that is just when comparing A, B, C at the same move. What I was postulating was that, nearer the end of the game, the winrate for each of A, B and C is likely to be higher because of the "certainty" I already alluded to. That is, winrate at move 100 implies something a bit different from winrate at move 50. I suppose what I am really referring to is the winrate graph and am implying that its trend is not really telling us anything. The only parts that seem hugely significant are when the direction of the line changes drastically.

I guess there are two ways of looking at it. If a winrate of, say, 60% slowly climbs over the course of the game, it means both:
- No one made any big mistakes over the rest of the game
- Black's chance of converting his advantage kept growing (since the amount of time left to turn it around decreased)
These sorts of winrate graphs have become popular in sports analysis over the last 10+ years so I am used to seeing the same sort of shape when, for example, a baseball team takes an early one-run lead and nurses it to victory; there is definitely a much different feeling in the ninth inning of a 3-2 game than the third.
-
Tryss
- Lives in gote
- Posts: 502
- Joined: Tue May 24, 2011 1:07 pm
- Rank: KGS 2k
- GD Posts: 100
- KGS: Tryss
- Has thanked: 1 time
- Been thanked: 153 times
Re: Easy to set up Go AI?
dfan wrote: Terminal positions in the tree search are actually evaluated by the game rules. However, the position is not considered terminal until both players have passed. The engines do know how to pass but I don't know how often it comes up in the tree search. Since they all use Chinese rules there is no penalty for continuing to play on for a while after the dame are filled.

As an illustration, here is a recent self-play game of LZ (network #166) with resign disabled:
http://zero.sjeng.org/view/3ad105e1870e ... viewer=wgo
Black passes only because it has no legal moves remaining.
-
Bill Spight
- Honinbo
- Posts: 10905
- Joined: Wed Apr 21, 2010 1:24 pm
- Has thanked: 3651 times
- Been thanked: 3373 times
Re: Easy to set up Go AI?
dfan wrote: AlphaGo did actual Monte Carlo simulations to the end of the game. AlphaGo Zero and all its descendants (AlphaZero, Leela Zero, ELF OpenGo, etc.) do not, despite continuing to use the (now misleading) term "Monte Carlo Tree Search".

Thanks. But Leela still uses Monte Carlo playouts, right? I noticed "MCWR" in Bojanic's analysis of the Metta-Ben David game, and figured that stood for Monte Carlo win rate. That's one reason for my confusion on that issue.

dfan wrote: Terminal positions in the tree search are actually evaluated by the game rules. However, the position is not considered terminal until both players have passed. The engines do know how to pass but I don't know how often it comes up in the tree search. Since they all use Chinese rules there is no penalty for continuing to play on for a while after the dame are filled.

Right. So even though the game in question was played by territory rules with a 6.5 komi, Zen had no way of knowing that, and could not tell that making an unnecessary protective play would lose the game. It also produced a very peculiar win rate that, again, seemed to me like it was based upon semi-random playouts.
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins
Visualize whirled peas.
Everything with love. Stay safe.
-
bernds
- Lives with ko
- Posts: 259
- Joined: Sun Apr 30, 2017 11:18 pm
- Rank: 2d
- GD Posts: 0
- Has thanked: 46 times
- Been thanked: 116 times
Re: Easy to set up Go AI?
John Fairbairn wrote: My interpretation of the last two points, at least in this game, is that LZ preferred solidity.

For a different impression, you can try the first Go-Kitani game from Kamakura ("the one with the nosebleed"). I tried this one out with a much older version of LZ, so it may have changed - but at the time it really disliked Kitani's slow and solid moves in the beginning.
-
Uberdude
- Judan
- Posts: 6727
- Joined: Thu Nov 24, 2011 11:35 am
- Rank: UK 4 dan
- GD Posts: 0
- KGS: Uberdude 4d
- OGS: Uberdude 7d
- Location: Cambridge, UK
- Has thanked: 436 times
- Been thanked: 3718 times
Re: Easy to set up Go AI?
bernds wrote: For a different impression, you can try the first Go-Kitani game from Kamakura ("the one with the nosebleed"). I tried this one out with a much older version of LZ, so it may have changed - but at the time it really disliked Kitani's slow and solid moves in the beginning.

Seeing as Kitani didn't have to give komi, whereas LZ as Black does, it's possible Kitani's moves were good for a no-komi game if they weren't haemorrhaging points at too fast a rate, which would be on average (7.5 points / number of Black moves in the game) per move. But I guess they were, and checking with no-komi LZ157, it says White was winning by move 26 (the kosumi to enclose the bottom right). The ignored push on the 3 stones was the biggest mistake, a -11% (where Black starts at 65%). It also didn't like quite a few of Go's moves, basically anything that didn't follow the order: first empty corners, then shimaris/approaches to 3-4s, then do other things.