 Post subject: Re: Easy to set up Go AI?
Post #21 Posted: Fri Aug 17, 2018 5:47 am 
Gosei

Posts: 1348
Location: Finland
Liked others: 49
Was liked: 129
Rank: FGA 7k GoR 1297
With a weights file AND recompiled without OpenCL it works.

_________________
Offending ad removed

 Post subject: Re: Easy to set up Go AI?
Post #22 Posted: Fri Aug 17, 2018 8:47 am 
Oza

Posts: 2411
Location: Ghent, Belgium
Liked others: 359
Was liked: 1019
Rank: KGS 2d OGS 1d Fox 4d
KGS: Artevelde
OGS: Knotwilg
Online playing schedule: UTC 18:00 - 22:00
Since this is a free and responsive help desk, I thought I'd report on my issue here.

I downloaded both zip folders, CPU and GPU, not knowing the difference. Then I unzipped them and launched the application. In both cases a command-prompt-like window popped up and promptly disappeared.

OK, now what?

Thanks guys

 Post subject: Re: Easy to set up Go AI?
Post #23 Posted: Fri Aug 17, 2018 8:55 am 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
Quote:
launched the application

means you double-clicked leelaz.exe? You should double-click Lizzie.jar and, assuming you have .jar files associated correctly, that should open the Lizzie interface. If not, open a command prompt in that folder and run "java -jar Lizzie.jar".
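
For anyone who would rather script that last step than fiddle with file associations, here is a tiny sketch that just does the equivalent of opening a command prompt in the Lizzie folder and running the command above. The folder path is an assumption — point it at wherever you unzipped Lizzie.

Code:
import subprocess
from pathlib import Path

# Hypothetical location: the folder where you unzipped Lizzie, i.e. the one
# containing Lizzie.jar, leelaz.exe and the network weights file.
lizzie_dir = Path(r"C:\Users\me\Downloads\Lizzie")

# Equivalent of opening a command prompt in that folder and typing
# "java -jar Lizzie.jar".
subprocess.run(["java", "-jar", "Lizzie.jar"], cwd=lizzie_dir, check=True)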

 Post subject: Re: Easy to set up Go AI?
Post #24 Posted: Fri Aug 17, 2018 9:04 am 
Oza

Posts: 2411
Location: Ghent, Belgium
Liked others: 359
Was liked: 1019
Rank: KGS 2d OGS 1d Fox 4d
KGS: Artevelde
OGS: Knotwilg
Online playing schedule: UTC 18:00 - 22:00
Uberdude wrote:
Quote:
launched the application

means you double-clicked leelaz.exe? You should double-click Lizzie.jar and, assuming you have .jar files associated correctly, that should open the Lizzie interface. If not, open a command prompt in that folder and run "java -jar Lizzie.jar".


My saviour! :salute:

Yes, I was born and raised on IBM DOS, managed to get through to Windows, and by the time the Java era came around, my intuition had already stopped developing.

Edit: I analyzed my first game using Lizzie and I can feel signs of an upcoming addiction.

 Post subject: Re: Easy to set up Go AI?
Post #25 Posted: Fri Aug 17, 2018 10:45 am 
Oza

Posts: 3655
Liked others: 20
Was liked: 4630
First a comment. I have mentioned several times before that pros usually don't count the way we do. They can look at the board and see inefficiencies and simply count those (there aren't so many in pro games). Generally the side with the most inefficiencies is behind, but obviously some inefficiencies are worse than others and pros seem to have enough skill to assess how much each one is worth (I expect, though, some are significantly more skilled at this than others). Because they have this feel for the game, they stress efficiency of plays an awful lot.

I've understood this, and have even been able on occasion to simulate their behaviour, but mostly I have treated it as a little bit of a party trick by them. However, my first forays with Lizzie/LZ have astonished me because it spots inefficiencies very early in the game and marks them down heavily. For example, there was a joseki where a connection was needed and two were available. The pro played the solid connection but LZ preferred a hanging connection. It hardly seemed to matter because there was no immediate danger, no shortage of liberties, nothing special at all - purely a case of long-term efficiency. But LZ adjudged the pro's solid move a whopping 6 percentage points worse. That pattern seems to emerge elsewhere and so I now understand even better why pros care about efficiency of plays, but at the same time it seems they have not entirely mastered all the elements of good shape.

Second, another question. I have read that the AI programs don't work happily with different komis. But presumably this has been discussed. I'd love to know what the general feeling is as to how much less reliable LZ is if the game komi is zero, or more specifically if LZ starts with a winrate of say 46% for Black on an empty board in a no-komi game (i.e. it still believes komi of 7.5 applies), what is the general LZ-assisted human view as to the real winrate knowing that komi is actually 0 (and likewise for various handicaps)?

 Post subject: Re: Easy to set up Go AI?
Post #26 Posted: Fri Aug 17, 2018 11:19 am 
Gosei

Posts: 1590
Liked others: 886
Was liked: 528
Rank: AGA 3k Fox 3d
GD Posts: 61
KGS: dfan
John Fairbairn wrote:
Second, another question. I have read that the AI programs don't work happily with different komis. But presumably this has been discussed. I'd love to know what the general feeling is as to how much less reliable LZ is if the game komi is zero, or more specifically if LZ starts with a winrate of say 46% for Black on an empty board in a no-komi game (i.e. it still believes komi of 7.5 applies), what is the general LZ-assisted human view as to the real winrate knowing that komi is actually 0 (and likewise for various handicaps)?

Unfortunately, it depends. I would still happily follow LZ's judgment in the early game even though it was mistaken about the size of the komi; maybe it thinks that the winrate is 46% while it is really 60% or something, but it'll still make reasonable moves. On the other hand, at the very end of the game, if Black is ahead by 3 points on the board, say, it will think that Black's winrate is almost 0% when it really should be almost 100%. So I think its endgame evaluations in a close game (as will happen in most pro games) will be pretty useless.


This post by dfan was liked by: Bill Spight
 Post subject: Re: Easy to set up Go AI?
Post #27 Posted: Fri Aug 17, 2018 12:01 pm 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
John Fairbairn wrote:
First a comment. I have mentioned several times before that pros usually don't count the way we do. They can look at the board and see inefficiencies and simply count those (there aren't so many in pro games). Generally the side with the most inefficiencies is behind, but obviously some inefficiencies are worse than others and pros seem to have enough skill to assess how much each one is worth (I expect, though, some are significantly more skilled at this than others). Because they have this feel for the game, they stress efficiency of plays an awful lot.


There are other reasons to stress efficiency, I think. ;)

Books about positional judgement start with counting more or less sure territory, but non-territorial players, like Takemiya, have to rely upon other factors, such as efficiency. There is no getting away from the need to assess influence.

Quote:
I've understood this, and have even been able on occasion to simulate their behaviour, but mostly I have treated it as a little bit of a party trick by them.


There are tricks, such as the saying that ponnuki is worth 30 points. Even the idea that one handicap stone is worth 10 points is sort of a trick. But we now know, because of komi, that a handicap stone is worth around 14 points (to a good player). That's a 40% difference. :o

Quote:
However, my first forays with Lizzie/LZ have astonished me because it spots inefficiencies very early in the game and marks them down heavily. For example, there was a joseki where a connection was needed and two were available. The pro played the solid connection but LZ preferred a hanging connection. It hardly seemed to matter because there was no immediate danger, no shortage of liberties, nothing special at all - purely a case of long-term efficiency. But LZ adjudged the pro's solid move a whopping 6 percentage points worse. That pattern seems to emerge elsewhere and so I now understand even better why pros care about efficiency of plays, but at the same time it seems they have not entirely mastered all the elements of good shape.


It takes experience to develop judgement. Modern bots enjoy the experience of millions of games. No human can match that.

Quote:
Second, another question. I have read that the AI programs don't work happily with different komis. But presumably this has been discussed. I'd love to know what the general feeling is as to how much less reliable LZ is if the game komi is zero, or more specifically if LZ starts with a winrate of say 46% for Black on an empty board in a no-komi game (i.e. it still believes komi of 7.5 applies), what is the general LZ-assisted human view as to the real winrate knowing that komi is actually 0 (and likewise for various handicaps)?


In the 1970s I saw some Nihon Kiin statistics on 2800 pro games, 1400 played with 4.5 komi and 1400 played with 5.5 komi. It was partly on that basis that I predicted that Japanese komi would be 6.5 by the year 2000. (Almost. ;)) For both sets of statistics the median result was between 6 and 7 points on the board for Black. Also, IIRC, if you applied different komis around 6.5 to the data, a one-point difference in komi meant a difference in win rates of 2% plus. Leela Zero uses area scoring, where minimum score differences are nearly always at least 2 points. So a minimum score difference translates to a winrate difference of around 4-5%. So the pro's mistake (if Leela Zero is right) cost maybe 2-3 pts. on the board.
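
A back-of-the-envelope version of that arithmetic in code. The roughly-2%-per-point figure is just the early-game heuristic quoted above, nothing exact:

Code:
# Rough early-game heuristic from the statistics above: about a 2% winrate
# swing per point on the board (a rule of thumb only).
WINRATE_PER_POINT = 0.02

def points_lost(winrate_drop):
    """Crude translation of an early-game winrate drop into board points."""
    return winrate_drop / WINRATE_PER_POINT

print(points_lost(0.06))            # the pro's 6% connection mistake -> about 3 points
print(100 * 2 * WINRATE_PER_POINT)  # minimum 2-point step under area scoring -> about 4% winrate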

There are amateur dan players who scoff at 2 pt. mistakes in the opening. But Leela doesn't. :)

As for the reliability of Leela Zero with different komi, or no komi, I think that win rate differences aren't much affected early in the game. For winrate estimates themselves, maybe add 10-15% to Black's winrate estimate, early in the game. (???)

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: Easy to set up Go AI?
Post #28 Posted: Fri Aug 17, 2018 12:20 pm 
Lives in gote

Posts: 502
Liked others: 1
Was liked: 153
Rank: KGS 2k
GD Posts: 100
KGS: Tryss
There's an experimental 0 komi version of LZ available somewhere, but I never tried it. I think it can be used to analyse no komi games correctly.

 Post subject: Re: Easy to set up Go AI?
Post #29 Posted: Fri Aug 17, 2018 12:22 pm 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
John Fairbairn wrote:
Second, another question. I have read that the AI programs don't work happily with different komis. But presumably this has been discussed. I'd love to know what the general feeling is as to how much less reliable LZ is if the game komi is zero, or more specifically if LZ starts with a winrate of say 46% for Black on an empty board in a no-komi game (i.e. it still believes komi of 7.5 applies), what is the general LZ-assisted human view as to the real winrate knowing that komi is actually 0 (and likewise for various handicaps)?


2. There is an experimental version of Leela Zero that works for 0 and other komis (but it's not as easy to install). It basically does a trick of switching black and white and going for the middle, hoping the magic of the neural networks is basically linear, and it seems to work fairly well (though not being self-play trained in these conditions means it'll be less accurate). With network #153 (LZ's opinion does slowly evolve with new versions) it thinks Black starts off at 61.5% on the empty board with no komi (cf. 46.5% with 7.5 komi). I did a little analysis of Shusaku's famous ear-reddening move with it here.

Here are some win% figures for various handicaps (just pass in Lizzie with "p") with network #157 (the best 15-block network and the one I mostly use now) at 30k playouts (the numbers don't change much by then). Note that 157's estimates are higher than 153's; this probably happens as bots get stronger, and Elf is higher still (it thinks Black with 2 stones against 7.5 komi is 99%). A perfect bot would obviously say 100% for Black playing first with no komi, and presumably 0% with 7.5 komi if that komi is indeed too high.

Code:
Black starts, white 7.5 komi ("even game"):     46.7
Black starts, no komi                           65.5
Black starts with 2 stones, white 7.5 komi      80.7
Black 2 stones, no komi                         88.8
Black 3 stones, 7.5 komi                        92.6
Black 3 stones, no komi                         93.2
Black 4 stones, 7.5 komi                        96.5
Black 4 stones, no komi                         88.4  (yes, really, don't trust interpolation/extrapolation too much when you adventure to new lands)
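
For anyone who wants to reproduce this kind of table without clicking through Lizzie, here is a rough sketch of the same pass trick over plain GTP. The leelaz path and weights filename are assumptions, and the exact format of the lz-analyze output (which carries the winrate) varies between leelaz versions, so it is just printed raw here rather than parsed.

Code:
import subprocess

# Assumed paths; use your own leelaz binary and downloaded network file.
proc = subprocess.Popen(["./leelaz", "--gtp", "--weights", "network.gz"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

def gtp(cmd):
    """Send one GTP command and collect the reply (replies end with a blank line)."""
    proc.stdin.write(cmd + "\n")
    proc.stdin.flush()
    reply = []
    while True:
        line = proc.stdout.readline()
        if not line.strip():
            break
        reply.append(line.strip())
    return " ".join(reply)

gtp("boardsize 19")
gtp("komi 7.5")                      # standard LZ effectively assumes 7.5 anyway (see above)
for stone in ["Q16", "D4"]:          # a 2-stone "handicap" built from black moves...
    gtp("play black " + stone)
    gtp("play white pass")           # ...with white passing, as with "p" in Lizzie

proc.stdin.write("lz-analyze 100\n") # stream analysis info (interval is in centiseconds)
proc.stdin.flush()
for _ in range(20):                  # show the first few raw analysis lines, then stop
    print(proc.stdout.readline().rstrip())
proc.stdin.write("quit\n")
proc.stdin.flush()
proc.wait()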


This post by Uberdude was liked by 2 people: Bill Spight, pnprog
 Post subject: Re: Easy to set up Go AI?
Post #30 Posted: Fri Aug 17, 2018 1:32 pm 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
John Fairbairn wrote:
First a comment. I have mentioned several times before that pros usually don't count the way we do. They can look at the board and see inefficiencies and simply count those (there aren't so many in pro games). Generally the side with the most inefficiencies is behind, but obviously some inefficiencies are worse than others and pros seem to have enough skill to assess how much each one is worth (I expect, though, some are significantly more skilled at this than others). Because they have this feel for the game, they stress efficiency of plays an awful lot.

Well, I rarely count points in games, and usually use the "sum of how bad I think my mistakes were" vs "sum of how bad I think my opponent's mistakes were" as the basis for whether I think I'm leading or not. I did actually count a few times in my recent title match game: in the middlegame my count suggested that playing to defend my 3-3 could be OK, but there was just too much play left in other areas of the board for me to be confident playing a conservative gote move, and then things got out of hand and I never got the chance; and in the endgame I counted the game as super close, so I did some reading instead of being lazy and playing (just!) unnecessary defence. I have noticed that in his commentaries Myungwan Kim counts territory a lot (though he often takes the "White needs to make at least 15 in this hard-to-count area to be even" approach), whereas Michael Redmond rarely does.

John Fairbairn wrote:
I've understood this, and have even been able on occasion to simulate their behaviour, but mostly I have treated it as a little bit of a party trick by them. However, my first forays with Lizzie/LZ have astonished me because it spots inefficiencies very early in the game and marks them down heavily. For example, there was a joseki where a connection was needed and two were available. The pro played the solid connection but LZ preferred a hanging connection. It hardly seemed to matter because there was no immediate danger, no shortage of liberties, nothing special at all - purely a case of long-term efficiency. But LZ adjudged the pro's solid move a whopping 6 percentage points worse. That pattern seems to emerge elsewhere and so I now understand even better why pros care about efficiency of plays, but at the same time it seems they have not entirely mastered all the elements of good shape.

I think far too much judgement by pros of josekis (or at least what I've gleaned of it) is based on reference/tewari/comparison to other results they think (possibly mistakenly) are OK, where such comparisons can accumulate asymmetric errors, instead of trying to make more absolute judgements based on where the stones end up. "Well, we played these moves, which are probably okay, and it started even, so it ended even" isn't great. Robert Jasiek's territory/influence stone-counting approach is a nice idea in that direction, but just not good enough to be useful. Basically you need a massive function with gazillions of parameters finely tuned to judge a multitude of facets of a position. Hang on a minute, I just described a neural network!

6% is quite a big mistake for a pro, but not that unusual. If you want to see lots of big % drops to shatter the illusion that pros know what they are doing, then I suggest reviewing pro games with Elf :D . I was recently reviewing the 5th Kisei title match of Shuko vs Otake with the Go World commentary, and half the opening moves were 8-12 percentage point mistakes according to Elf (lots of splitting or extending moves on the sides, which had gone out of fashion as not active/efficient enough even before bots). LZ #157 was much more tame, rating them just -1 to -3%. Here's a graph from LZ (the horizontal dashed lines are at 25/50/75%).
Attachment: Kisei5g1LZ157.PNG (LZ #157 winrate graph for the game)


You can see the little zigzags as they play side moves in the opening. The big 10% drop on move 23 received no comment. Shuko did criticise a move of Otake's a few moves later (LZ agrees it was bad, -3%) and says "This move surprised me. I think it must have been a slip. Black would have had quite a good game if he had connected at 34 instead". LZ thinks the damage was already done and White would still be at 70%, though that is with 7.5 rather than 5.5 komi. By move 35 "... makes it clear how badly Black has done here. The general feeling was that he [Otake] had suffered a loss equivalent to the komi. Some professionals commented that the game had already been decided". LZ, with its 7.5 komi, says White was at 76%. Something to note is that LZ didn't anticipate Otake's good move 29 beforehand, so its judgements preceding that move are less trustworthy, but once it gets to that move it does find it after about 30,000 playouts (so non-GPU people will probably never find it). You can compare this graph to the one in my title match review to see the difference in sizes of mistakes between top pros of the 80s and two EGF 4 dans. (This is based on the assumption that LZ is stronger than the players, which is probably true but might not be in some complicated fights or ladders where it can have delusions, though these tend to disappear with sufficient playouts.)

Here's the graph from Elf v1. You can see the much bigger swings from opinionated Elf as they play side instead of corner moves. What was a -10% drop to 70% for White with LZ 157 is a -21% drop to 95% for White with Elf. White is at 99% by move 35. Elf thinks Shuko did make some mistakes in the middle game which took him down to only 80%, but that didn't last long.

Attachment: Kisei5g1Elfv1.PNG (Elf v1 winrate graph for the same game)


And another LZ 157 graph for comparison, a half-pointer from Park Junghwan vs Mi Yuting from a few days ago (at 7.5 komi). The deviations are generally smaller than in Shuko vs Otake, so LZ supports the belief that today's top pros are stronger than those of the past (though in the opening they can be playing bot josekis which LZ will approve of (unless they play them in bad situations, which does actually happen a lot), but maybe that's not a caveat if such josekis are actually better ;-) ). Btw, White's 40 was the biggest mistake of the opening at -9%; can you find the better move? It's totally logical and a beautiful tesuji that exploits Black's timing mistake. LZ is good at this sort of sharpness in fighting. That big spike at move 110 is for the f11/g11 exchange, which I couldn't understand and neither can LZ, which says it was bad. I'd really like to know what the players were thinking: did they see something LZ didn't, which LZ agrees was important once shown it (well done them); or did LZ not see it, and when shown it still think it bad and stick to its opinion; or did LZ see it and dismiss it as bad? Also, some of the big zigzags later are, I think, due to foibles in LZ's analysis: for example it wants to play the profitable sente exchanges from t13 for White quite early, so when White does something else it sometimes doesn't see (at 10k playouts) that White will be able to play them later too, and so thinks the other move is worse when really it's just a different ordering of chunks. Since it is a close game, big swings could just be 1-point mistakes that change the winner. Also it wastes playouts throughout the game wanting to play Black's a13 sente move, which humans can save for a ko threat with little thought and then stop wasting brainpower on. And generally I think the fighting might be too complicated for LZ on moderate playouts, so I wouldn't confidently say they must be making big mistakes there. With more time I can try to tease out the truth, but really a stronger player than myself is needed for a reliable interpretation (Elf can help).

Attachment: ParkvsMiLZ157.PNG (LZ 157 winrate graph for Park Junghwan vs Mi Yuting)


Elf v1 at only 1k playouts on the whole of Park vs Mi. It's noticeable that in the early game it prefers Black and generally sees a lot more going on. For starters, it thinks that after parallel 4-4s, White answering the knight's-move approach with a knight's move is a 7% mistake (Elf v0 thought it was fine). Has Elf v1 discovered how to make better use of Black's sente advantage? And it thinks Mi only won very late, when Park connected the ko, though I don't know if it can correctly read/intuit the deciding ko fight with few playouts. Update: playing out the ko fight as Elf wants sees the win gradually become White's, so it seems Elf was mistaken to think Black was winning before.

Attachment: ParkvsMiElf1.PNG (Elf v1 winrate graph for Park Junghwan vs Mi Yuting)


This post by Uberdude was liked by: Bill Spight
 Post subject: Re: Easy to set up Go AI?
Post #31 Posted: Fri Aug 17, 2018 1:56 pm 
Lives with ko

Posts: 259
Liked others: 46
Was liked: 116
Rank: 2d
Uberdude wrote:
Something to note is LZ didn't anticipate Otake's good move 29 beforehand so its judgements preceding that are less trustworthy, but once it gets to that move it does find it after about 30,000 playouts
This is definitely something to keep in mind. I did a similar exercise running old Go World commented games through LZ - there was one where a sequence was played that LZ heavily disliked, but that was because it didn't see that there was a ko left for later. Once the move that set up the ko was played, the winrate shot back up. So one has to keep in mind that the networks have these kinds of holes in their evaluations, and mentally correct for it if one manages to notice the problem.


This post by bernds was liked by: Bill Spight
 Post subject: Re: Easy to set up Go AI?
Post #32 Posted: Sat Aug 18, 2018 6:27 am 
Oza

Posts: 3655
Liked others: 20
Was liked: 4630
I looked with LZ at a famous old no-komi game (Genjo-Chitoku) which I chose for a variety of reasons: (i) it has many pro commentaries; (ii) it has been widely hailed as a masterpiece ("no wrong moves"); (iii) Takagawa disagreed with that assessment; (iv) it has a clutch of praised tesujis; (v) it has a fairly early (move 66) comment that the "game is now close" (it ended in jigo).

Quite a few of the early fuseki moves did not even appear on the LZ radar. The modern pros likewise were generally quick to say these were Edo-style and "modern players would ...". We must also remember, on LZ's behalf, that there was no komi.

For the most part, early in the game, the players' moves and the commentators' suggested alternatives lined up pretty well with the moves LZ considered. Furthermore, where the modern pros suggested a better move than the one played, LZ seemed to agree. This phase of the game was, however, largely tactical.

When strategic considerations took over, LZ differed sharply. It was a direction of play issue. It wanted to play in another part of the board that both players and commentators ignored completely. The moves played or suggested by humans were not wildly inferior (and were also often selected by LZ once play returned to that area later) but what was noticeable was precisely that the area chosen by LZ was a blind spot for humans at that stage of the game (they did go there later and play LZ-type moves, but there was a clear disagreement about timing).

LZ didn't like moyo-making moves. Humans did.

My interpretation of the last two points, at least in this game, is that LZ preferred solidity. Humans preferred flexibility. That surprised me. I'd have thought the AI program would prefer flexibility (they "see" farther) and humans would tend to go for solidity as a way of reducing uncertainty (and especially so in no-komi games).

There was a tesuji that was universally praised by humans that LZ didn't even consider even when I let it run a longish time. It was also sniffy about some other moves praised by humans (with even Takagawa joining the chorus). Again it seemed to prefer solid moves, not fancy tricks.

At move 66, when commentators thought the game was now close, the winrate in favour of White had shot up from 54% to over 80%, with no obvious horrendous mistakes - just a slow accumulation of increments. This of course assumes 7.5 komi. I'm guessing that what this means, at least in part, is that as the game progresses so does the uncertainty and so the program can be ever more confident about its winrate. In other words, it does not necessarily mean Black's play deteriorated. Another 66 moves further on, again with no obvious blunders according to humans, the winrate had increased to 87% (and after a further tranche of 66 moves it was 99.8%).

In summary, the humans were not disgraced, though generational differences were confirmed and the label of "masterpiece" may be generous, but LZ liked stolid moves much more than they did. And, for me at least, it remains puzzling what winrate really means, but I infer it matters more in relation to individual moves rather than over a whole game.

 Post subject: Re: Easy to set up Go AI?
Post #33 Posted: Sat Aug 18, 2018 6:43 am 
Gosei

Posts: 1590
Liked others: 886
Was liked: 528
Rank: AGA 3k Fox 3d
GD Posts: 61
KGS: dfan
John Fairbairn wrote:
LZ didn't like moyo-making moves. Humans did.

My interpretation of the last two points, at least in this game, is that LZ preferred solidity. Humans preferred flexibility. That surprised me. I'd have thought the AI program would prefer flexibility (they "see" farther) and humans would tend to go for solidity as a way of reducing uncertainty (and especially so in no-komi games).

I think that one reason that modern engines don't like moyos as much is that they are really good at reducing and invading them, so they don't think they are as valuable. I think in some of these cases a top human would also be able to invade and live/run if he really had to, but he'd really rather not stake the game on it, whereas LZ etc. are much more confident that they'll find a way.

Quote:
There was a tesuji that was universally praised by humans that LZ didn't even consider even when I let it run a longish time.

I'm curious whether it liked the move after it was played (what happened to the winrate).

Quote:
At move 66, when commentators thought the game was now close, the winrate in favour of White had shot up from 54% to over 80%, with no obvious horrendous mistakes - just a slow accumulation of increments. This of course assumes 7.5 komi. I'm guessing that what this means, at least in part, is that as the game progresses so does the [un]certainty and so the program can be ever more confident about its winrate. In other words, it does not necessarily mean Black's play deteriorated. Another 66 moves further on, again with no obvious blunders according to humans, the winrate had increased to 87% (and after a further tranche of 66 moves it was 99.8%).

I put your "un" in brackets as I think it was a typo. Indeed if Black is winning by 3 points early in the game, he's ahead but White still has a chance, while if Black is winning by 3 points in the endgame, White might as well resign.

Quote:
And, for me at least, it remains puzzling what winrate really means, but I infer it matters more in relation to individual moves rather than over a whole game.

Winrate is how probable LZ thinks it is that Black would win if it were allowed to take over the game and play both sides from here to the end. So certainly if move A has a higher winrate than move B you can infer that LZ thinks that move A is better, without having to worry about exactly what each number means.

 Post subject: Re: Easy to set up Go AI?
Post #34 Posted: Sat Aug 18, 2018 9:09 am 
Oza

Posts: 3655
Liked others: 20
Was liked: 4630
Quote:
I'm curious whether it liked the move after it was played (what happened to the winrate).


I haven't got round to advanced thinking like that yet :)

Quote:
put your "un" in brackets as I think it was a typo.


It wasn't a typo, though of course I may have made a mistake in other ways. My thinking was that as the board fills up and there are fewer moves to the end of the game (i.e. there is more information), it is possible to make more moves that are reliable (even if not always the best).

Quote:
Winrate is how probable LZ thinks it is that Black would win if it were allowed to take over the game and play both sides from here to the end. So certainly if move A has a higher winrate than move B you can infer that LZ thinks that move A is better, without having to worry about exactly what each number means.


Yes, I understand that. But that is just when comparing A, B, C at the same move. What I was postulating was that, nearer the end of the game, the winrate for each of A, B and C is likely to be higher because of the "certainty" I already alluded to. That is, winrate at move 100 implies something a bit different from winrate at move 50. I suppose what I am really referring to is the winrate graph and am implying that its trend is not really telling us anything. The only parts that seem hugely significant are when the direction of the line changes drastically.

If anybody else fancies running tests at a different speed or on a different set-up, here is the sgf of the game I looked at.

(;SZ[19]FF[3]
PW[Honinbo Genjo]
WR[7d]
PB[Yasui Chitoku]
BR[7d]
EV[Castle Game]
DT[1806-12-26 (Bunka 3 XI 17)]
PC[Edo Castle]
KM[0]
RE[Jigo]
US[GoGoD95]
;B[qd];W[dc];B[cp];W[pq];B[eq];W[oc];B[ce];W[ci];B[lc];W[ld];B[kd];W[mc]
;B[kc];W[qg];B[pe];W[of];B[oe];W[nf];B[pc];W[ne];B[gc];W[ed];B[qo];W[qk]
;B[mq];W[op];B[jq];W[pn];B[ck];W[df];B[qm];W[qp];B[pm];W[ok];B[po];W[ic]
;B[id];W[hc];B[hb];W[hd];B[ib];W[fb];B[gb];W[he];B[kf];W[mp];B[nq];W[on]
;B[rp];W[rq];B[ol];W[nk];B[oo];W[np];B[nn];W[ro];B[rn];W[sp];B[nl];W[re]
;B[rd];W[qe];B[pf];W[pg];B[rb];W[cf];B[ek];W[fp];B[fq];W[ip];B[jp];W[in]
;B[io];W[ho];B[jo];W[hm];B[en];W[fn];B[fm];W[gq];B[fo];W[km];B[rk];W[rl]
;B[pk];W[pj];B[ql];W[qj];B[rj];W[ri];B[sl];W[sj];B[rm];W[fc];B[jn];W[jm]
;B[il];W[hl];B[mk];W[mi];B[ln];W[je];B[mb];W[nb];B[md];W[nd];B[nc];W[ep]
;B[dp];W[mc];B[ma];W[ke];B[le];W[me];B[nc];W[lf];B[mj];W[ni];B[ik];W[hk]
;B[ij];W[hj];B[kj];W[dj];B[dk];W[im];B[gn];W[hn];B[bj];W[bi];B[or];W[pb]
;B[qb];W[li];B[kk];W[ej];B[ih];W[ii];B[ji];W[hi];B[jg];W[hg];B[ir];W[gr]
;B[qr];W[pr];B[ps];W[rr];B[er];W[mc];B[gp];W[hp];B[go];W[ld];B[kb];W[iq]
;B[lq];W[qf];B[pd];W[fk];B[rs];W[sr];B[gd];W[fl];B[el];W[ge];B[ai];W[ah]
;B[aj];W[bh];B[sg];W[rg];B[qi];W[pi];B[fa];W[ea];B[ga];W[eb];B[kg];W[lg]
;B[se];W[sf];B[rf];W[cj];B[bk];W[sf];B[sd];W[mm];B[lm];W[mn];B[nm];W[mo]
;B[rf];W[oq];B[mr];W[sf];B[lp];W[hr];B[jr];W[sh];B[if];W[ig];B[jf];W[ki]
;B[gm];W[hh];B[jh];W[ko];B[gl];W[gk];B[fd];W[fe];B[nj];W[oj];B[pa];W[jc]
;B[jb];W[ob];B[kp];W[ie];B[jd];W[ml];B[ll];W[no];B[pl];W[hf];B[sn];W[so]
;B[kh];W[lh];B[jj];W[fr];B[fs];W[is];B[js];W[hs];B[jl];W[lj];B[lk];W[od]
)
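
If you'd rather script the replay than load the file by hand, here is a rough sketch that feeds this game into Leela Zero over GTP. It assumes the third-party sgfmill library for parsing (and its bottom-left coordinate convention), plus a local leelaz binary and network file; adjust names and paths to your own setup.

Code:
import subprocess
from sgfmill import sgf   # third-party SGF parser (assumption: it is installed)

# The SGF text above, saved to a file with an arbitrary name.
with open("genjo-chitoku-1806.sgf", "rb") as f:
    game = sgf.Sgf_game.from_bytes(f.read())

# Assumed paths; use your own leelaz binary and downloaded network file.
proc = subprocess.Popen(["./leelaz", "--gtp", "--weights", "network.gz"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

def gtp(cmd):
    """Send one GTP command and collect the reply (replies end with a blank line)."""
    proc.stdin.write(cmd + "\n")
    proc.stdin.flush()
    reply = []
    while True:
        line = proc.stdout.readline()
        if not line.strip():
            break
        reply.append(line.strip())
    return " ".join(reply)

LETTERS = "ABCDEFGHJKLMNOPQRST"   # GTP column letters skip "I"

gtp("boardsize %d" % game.get_size())
gtp("komi 0")   # as in the game record; standard LZ will still judge as if komi were 7.5
for node in game.get_main_sequence():
    colour, move = node.get_move()
    if colour is None:
        continue                  # the root node carries no move
    if move is None:
        gtp("play %s pass" % colour)
        continue
    row, col = move               # sgfmill counts rows from the bottom
    gtp("play %s %s" % (colour, LETTERS[col] + str(row + 1)))
# From here you can step back with "undo", or start "lz-analyze" at any
# position, which is what Lizzie does behind the scenes.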


This post by John Fairbairn was liked by: Bill Spight
 Post subject: Re: Easy to set up Go AI?
Post #35 Posted: Sat Aug 18, 2018 10:14 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
dfan wrote:
John Fairbairn wrote:
LZ didn't like moyo-making moves. Humans did.

My interpretation of the last two points, at least in this game, is that LZ preferred solidity. Humans preferred flexibility. That surprised me. I'd have thought the AI program would prefer flexibility (they "see" farther) and humans would tend to go for solidity as a way of reducing uncertainty (and especially so in no-komi games).

I think that one reason that modern engines don't like moyos as much is that they are really good at reducing and invading them, so they don't think they are as valuable. I think in some of these cases a top human would also be able to invade and live/run if he really had to, but he'd really rather not stake the game on it, whereas LZ etc. are much more confident that they'll find a way.


My take, largely based upon AlphaGo, which may not hold for other bots, is that AlphaGo, and especially AlphaGo Zero, when playing itself, was extremely flexible. However, when playing humans, it often went for solidity. My guess is that when it felt itself sufficiently ahead, even though humans might not realize that, it went for solid play.

As for moyos, AlphaGo Zero seemed to me to have a relatively cosmic style, by comparison with current humans and Leela Zero and Elf. But the other two bots are quite capable of cosmic plays, as well. That said, my intuition about their center play is uncertain. They certainly will sometimes make plays that are too loose to make a moyo yet.

John Fairbairn wrote:
At move 66, when commentators thought the game was now close, the winrate in favour of White had shot up from 54% to over 80%, with no obvious horrendous mistakes - just a slow accumulation of increments. This of course assumes 7.5 komi. I'm guessing that what this means, at least in part, is that as the game progresses so does the [un]certainty and so the program can be ever more confident about its winrate. In other words, it does not necessarily mean Black's play deteriorated. Another 66 moves further on, again with no obvious blunders according to humans, the winrate had increased to 87% (and after a further tranche of 66 moves it was 99.8%).

Pretty much what I would expect with good play, if Black were aiming to lose by 6.5 pts. or less, given 7.5 komi. ;)

John Fairbairn wrote:
And, for me at least, it remains puzzling what winrate really means, but I infer it matters more in relation to individual moves rather than over a whole game.

dfan wrote:
Winrate is how probable LZ thinks it is that Black would win if it were allowed to take over the game and play both sides from here to the end. So certainly if move A has a higher winrate than move B you can infer that LZ thinks that move A is better, without having to worry about exactly what each number means.


It is my impression that the standard value networks are trained with 7.5 komi, so that komi is implied. My guess, however, is that komi must be explicitly supplied for the Monte Carlo playout results. IIUC, the winrates are the average of the two. What happens if the actual komi, zero in this case, is given to the Monte Carlo calculations?

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.


Last edited by Bill Spight on Sat Aug 18, 2018 10:17 am, edited 1 time in total.
 Post subject: Re: Easy to set up Go AI?
Post #36 Posted: Sat Aug 18, 2018 10:16 am 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
John Fairbairn wrote:
Yes, I understand that. But that is just when comparing A, B, C at the same move. What I was postulating was that, nearer the end of the game, the winrate for each of A, B and C is likely to be higher because of the "certainty" I already alluded to. That is, winrate at move 100 implies something a bit different from winrate at move 50. I suppose what I am really referring to is the winrate graph and am implying that its trend is not really telling us anything. The only parts that seem hugely significant are when the direction of the line changes drastically.


Yes, I wouldn't attach too much meaning to the absolute value of the winrate. There have been various discussions about it in the computer go subforum, but I see this as more interesting from a mathematical or technical perspective than useful for reviewing go games. As lightvector said elsewhere, you need to get to know your bot/weight file to get a feel for how qualitatively big a 1% vs 5% vs 20% mistake is, or how much of a lead 80% is. And if the game is already at 97% (which could just mean a 5-point lead near the end) and you make a 50-point blunder, the worst it can rate it is -3%.

As an example, I analysed my recent game with Shi Yue 9p with LZ. As I had a 4-stone handicap I started out at 96% to win, and we could call this something like a 50-point lead. Until I killed myself in late yose the win% basically wobbled around in the 90s the whole game, despite my being only about 5 points ahead before the death; so he caught up throughout the game, but not quite quickly enough. So 96% at move 1 meant a 50-point lead, but 90% at move 250 meant a 5-point lead; both mean LZ is very confident of winning if it played that position against itself (it can't model one player being weaker, as in a handicap game). Of course it immediately identified my blunder and the win% plunged on that move.

 Post subject: Re: Easy to set up Go AI?
Post #37 Posted: Sat Aug 18, 2018 11:08 am 
Lives in gote

Posts: 502
Liked others: 1
Was liked: 153
Rank: KGS 2k
GD Posts: 100
KGS: Tryss
Bill Spight wrote:
It is my impression that the standard value networks are trained with 7.5 komi, so that komi is implied. My guess, however, is that komi must be explicitly supplied for the Monte Carlo playout results. IIUC, the winrates are the average of the two. What happens if the actual komi, zero in this case, is given to the Monte Carlo calculations?

There are no Monte Carlo random simulations; it's just the value network that estimates the position at each leaf.

 Post subject: Re: Easy to set up Go AI?
Post #38 Posted: Sat Aug 18, 2018 11:26 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
Tryss wrote:
Bill Spight wrote:
It is my impression that the standard value networks are trained with 7.5 komi, so that komi is implied. My guess, however, is that komi must be explicitly supplied for the Monte Carlo playout results. IIUC, the winrates are the average of the two. What happens if the actual komi, zero in this case, is given to the Monte Carlo calculations?

There are no Monte Carlo random simulations; it's just the value network that estimates the position at each leaf.


So that's different from AlphaGo Zero, IIRC?

And the value network doesn't do any actual counting, even if the end of play is reached? It just kind of "knows" who won? If so, that could explain the recent peculiar end of game behavior of Zen in one game.

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: Easy to set up Go AI?
Post #39 Posted: Sat Aug 18, 2018 11:36 am 
Lives in gote

Posts: 502
Liked others: 1
Was liked: 153
Rank: KGS 2k
GD Posts: 100
KGS: Tryss
Bill Spight wrote:
So that's different from AlphaGo Zero, IIRC?

And the value network doesn't do any actual counting, even if the end of play is reached? It just kind of "knows" who won? If so, that could explain the recent peculiar end of game behavior of Zen in one game.
No, it was the same as AlphaGo Zero

page 2 of the AG Zero paper wrote:
Our program, AlphaGo Zero, differs from AlphaGo Fan and AlphaGo Lee [12] in several important aspects. First and foremost, it is trained solely by self-play reinforcement learning, starting from random play, without any supervision or use of human data. Second, it only uses the black and white stones from the board as input features. Third, it uses a single neural network, rather than separate policy and value networks. Finally, it uses a simpler tree search that relies upon this single neural network to evaluate positions and sample moves, without performing any Monte Carlo rollouts.
source


And in practice there is no counting with LZ: it usually resigns, and it never really learned to pass (the no-resign self-play games are usually over 500 moves long).


This post by Tryss was liked by: Bill Spight
 Post subject: Re: Easy to set up Go AI?
Post #40 Posted: Sat Aug 18, 2018 11:41 am 
Gosei

Posts: 1590
Liked others: 886
Was liked: 528
Rank: AGA 3k Fox 3d
GD Posts: 61
KGS: dfan
Bill Spight wrote:
Tryss wrote:
Bill Spight wrote:
It is my impression that the standard value networks are trained with 7.5 komi, so that komi is implied. My guess, however, is that komi must be explicitly supplied for the Monte Carlo playout results. IIUC, the winrates are the average of the two. What happens if the actual komi, zero in this case, is given to the Monte Carlo calculations?

There are no Monte Carlo random simulations; it's just the value network that estimates the position at each leaf.


So that's different from AlphaGo Zero, IIRC?

AlphaGo did actual Monte Carlo simulations to the end of the game. AlphaGo Zero and all its descendants (AlphaZero, Leela Zero, ELF OpenGo, etc.) do not, despite continuing to use the (now misleading) term "Monte Carlo Tree Search".

Quote:
And the value network doesn't do any actual counting, even if the end of play is reached? It just kind of "knows" who won? If so, that could explain the recent peculiar end of game behavior of Zen in one game.

Terminal positions in the tree search are actually evaluated by the game rules. However, the position is not considered terminal until both players have passed. The engines do know how to pass but I don't know how often it comes up in the tree search. Since they all use Chinese rules there is no penalty for continuing to play on for a while after the dame are filled.
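
In rough pseudo-Python, the search being described looks something like the sketch below. It's only an illustration of the general AlphaGo Zero / Leela Zero scheme, not Leela Zero's actual code: net() and the position object are hypothetical stand-ins, and refinements like virtual loss, Dirichlet noise and batched evaluation are left out.

Code:
import math

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(move) from the policy head
        self.visits = 0
        self.value_sum = 0.0      # sum of evaluations, from the side to move at this node
        self.children = {}        # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # PUCT: balance the child's average value against its prior and visit count.
    # child.q() is from the opponent's point of view, hence the minus sign.
    def score(child):
        u = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return -child.q() + u
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def simulate(node, position, net):
    """One tree descent; returns a value from the point of view of the side to move."""
    if position.is_terminal():
        # Only reached after both players pass: score by the rules (+1 win, -1 loss).
        value = position.score_by_rules()
    elif not node.children:
        # Leaf: the value head replaces a random rollout; the policy head sets priors.
        policy, value = net(position)
        node.children = {move: Node(p) for move, p in policy.items()}
    else:
        move, child = select_child(node)
        value = -simulate(child, position.play(move), net)   # flip sign for the opponent
    node.visits += 1
    node.value_sum += value
    return value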
