Trying to Modernize Sensei's Library

General conversations about Go belong here.
Kirby
Honinbo
Posts: 9553
Joined: Wed Feb 24, 2010 6:04 pm
GD Posts: 0
KGS: Kirby
Tygem: 커비라고해
Has thanked: 1583 times
Been thanked: 1707 times

Re: Trying to Modernize Sensei's Library

Post by Kirby »

yuzukitea wrote: I'm certainly not knowledgeable to any degree (and I'm probably wrong), but I get the impression that the AI doesn't believe in life-death uncertainty in that way, in the sense that it believes groups are alive if it thinks they're alive, or dead if it thinks they're dead. AI are just that much better at calculating endgame than human professionals, for that matter.

A 50% uncertainty that a group is alive = miai for life (kind of) (or one side has objectively more ko threats).

Or well, there are no tsumego problems with an "uncertain" answer, in a sense.

The fluctuation in score, at least to me, seems to happen largely when both players repeatedly miss the vital point for their giant dragons. The vital point might be obvious to AI, but not necessarily obvious to humans. The AI score in this sense isn't a prediction of whether it thinks Black (human) or White (human) would win -- but rather a score of whom it favors assuming that AI took over and started playing itself, Black (AI) vs. White (AI), at that precise moment.
I don't think AI sees the concept of life and death the same way as humans, either. Here, I'm more interested in trying to explain the difference between saying "Black will be ahead by 2 points at the end of the game" and "if White is gifted 2 points, it will end in a tie". I don't think it matters whether we are talking about AI or not here.

But even in the case of AI, in theory, I believe there should exist board positions in which 50% of playouts end up with a group dying, and 50% of playouts end up with a group not dying; you'd just need to make the board complex enough. Sure, humans make a lot more mistakes than AI, and the life/death status of groups may unintentionally fluctuate a lot. But AI is just making estimates based on imperfect playouts, too.
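The "estimates based on imperfect playouts" point can be sketched as a toy Monte Carlo estimate. The `random_playout` function below is a hypothetical stand-in for an engine rollout, not any real engine's code; it models a maximally uncertain position as a coin flip:

```python
import random

def random_playout(seed):
    # Stand-in for one imperfect playout: returns True if the group
    # survives in this particular playout. Here it's just a 50% coin
    # flip, modeling a maximally uncertain life-and-death position.
    rng = random.Random(seed)
    return rng.random() < 0.5

def estimate_life_probability(n_playouts):
    # The engine's "belief" that the group lives is just the fraction
    # of playouts in which it survived -- an estimate, not a verdict.
    survived = sum(random_playout(i) for i in range(n_playouts))
    return survived / n_playouts

print(estimate_life_probability(10_000))  # close to 0.5, but not exactly
```

The estimate hovers near 50% but never settles on "alive" or "dead", which is the sense in which a sufficiently complex position could leave even a strong bot genuinely uncertain.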
be immersed
User avatar
Knotwilg
Oza
Posts: 2432
Joined: Fri Jan 14, 2011 6:53 am
Rank: KGS 2d OGS 1d Fox 4d
GD Posts: 0
KGS: Artevelde
OGS: Knotwilg
Online playing schedule: UTC 18:00 - 22:00
Location: Ghent, Belgium
Has thanked: 360 times
Been thanked: 1021 times
Contact:

Re: Trying to Modernize Sensei's Library

Post by Knotwilg »

dfan wrote:I think there's a difference between
  • If both sides keep trying to win the game, Black will be ahead by two points at the end, and
  • If White is gifted two points, and then both sides keep trying to win the game, it will end in a tie,
because play may be different in the two cases. This is probably more clear if we imagine a much bigger lead, like twenty points.
I believe the statement Lightvector infers is still different:

"If Black gives 2 points to white, then the probability of winning is 50-50"

Except in the late endgame, neither AI today nor pros before claimed the game would end in a 2-point victory given sharp play by both. Maybe 2 points on average, but even that is a stretch. I really think pros meant "I prefer Black, and if you give White two points, I'm indifferent between Black and White", which seems to be what AI "says" when marking a 2-point lead for Black.
lightvector
Lives in sente
Posts: 759
Joined: Sat Jun 19, 2010 10:11 pm
Rank: maybe 2d
GD Posts: 0
Has thanked: 114 times
Been thanked: 916 times

Re: Trying to Modernize Sensei's Library

Post by lightvector »

The two statements:
"AI says Black is about 2 points ahead"
"AI would be unsure which side it prefers if Black were given a penalty of about 2 points"

...can mean the same thing if you interpret them one way, but they can also mean different things with a different interpretation. The whole point of the second phrasing is to make it easy for people to gravitate to the most accurate interpretation. (Let's brush under the rug for the moment that, due to how MCTS works, even the second statement isn't exactly accurate -- for example, you can get winrates and scores with mismatching signs. The second statement is a good enough approximation for now.)
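One way to see how the two statements can coincide or diverge: many engines relate a score lead to a win probability through a sigmoid-like curve. The sketch below uses a plain logistic with an arbitrary `scale` parameter; it illustrates the idea only, and is not any engine's actual formula:

```python
import math

def winrate_from_lead(score_lead, scale=10.0):
    # Toy mapping: a logistic curve turns a point lead into a win
    # probability. `scale` (chosen arbitrarily here) controls how
    # decisive a given lead is; real engines learn this relationship.
    return 1.0 / (1.0 + math.exp(-score_lead / scale))

# Black ahead by 2: modestly above 50%.
print(round(winrate_from_lead(2.0), 3))
# Give White those 2 points back: exactly 50%,
# i.e. "unsure which side it prefers".
print(round(winrate_from_lead(2.0 - 2.0), 3))
```

Under this toy mapping the two phrasings agree by construction; with a real engine's search and learned value head, nothing forces the reported score and the 50%-handicap point to line up exactly, which is the mismatch being brushed under the rug.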

The first statement sounds like a claim of an objective fact about Black's lead. But in what sense? Is it a claim about the game-theoretic optimal value of the position? Not really; in general we have no idea what the optimal value of an arbitrary 19x19 game position is, and AI is likely far enough from optimal that even if we knew, it wouldn't necessarily be useful. For example, it's quite plausible that there are positions where two equally-matched top bots would win more as White, achieve a positive average score as White, and prefer White (at their level), but where the game-theoretic optimal score is positive for Black. The absoluteness of the statement also makes it easy for the listener to forget that different players may value a position differently, and leaves it ambiguous with respect to what standard the score is being measured against.

The absoluteness of the statement also immediately leads to the question of how certain that statement is. What is the chance it is "wrong", or what is the range of uncertainty? It leads one to also wonder why bots mostly don't provide meaningful confidence ranges on these scores, and hides the fact that asking for confidence ranges in the first place is sort of a category error (and the natural question being a category error and thereby nonsensical is partly why they are hard to provide).

The second statement is much harder to misinterpret:

* It's immediately more clear that "2 points" is not an objective claim about the position itself, it's a claim about the bot's subjective preference in that position if 2 points were lost.

* It's immediately more clear what it's not: It's not the game-theoretic value. It might not be the average score that would result if you *actually* took the bot and played a million full games with itself from that position (the bot's preference may or may not match up with this kind of rollout). It's not necessarily how you should value the position, but it might be; you can be smart about it (e.g. are you talking about pro-level play or amateur-level play from that position? Is that score predicated on an inhumanly precise sequence of play, or does anything work? Does the bot seem to have any blind spots in upcoming tactics? Etc.).

* It is clearer now why asking for a confidence range on that particular specific score output "Black +2" is sort of a category error. How likely is it to be correct? It's correct. This is the number that reflects that bot's preference when it was run with those settings on that hardware for that amount of time with that random seed. It's correct because it was intended to be a measure of the bot's preference in that instance, and it was indeed a measure of the bot's preference in that instance. What is the confidence range on it? 0, I guess. Either that, or else the question is categorically nonsensical. Or else if we want to talk about average error with respect to something else (e.g. the average score difference that a random 1 dan human amateur would achieve a 50% win chance at, or perhaps the amount such a player would win by on average, or the bot's own new judgment after 10 self-play turns), we need to specify what that something else is.
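The last point above -- that "error" only becomes meaningful once you specify what you are measuring against -- can be made concrete with a small sketch. All numbers here are invented for illustration:

```python
def mean_error(bot_scores, reference_scores):
    # "Error" is only defined relative to a chosen reference: e.g. the
    # bot's own re-evaluation after deeper search, or the score a
    # specified class of player would achieve on average.
    diffs = [b - r for b, r in zip(bot_scores, reference_scores)]
    return sum(diffs) / len(diffs)

# Hypothetical numbers: the same bot outputs compared against two
# different references give two different "errors" -- and neither one
# is *the* uncertainty of the score itself.
bot = [2.0, 2.0, 2.0]
deeper_search = [1.5, 2.5, 3.0]   # bot's own deeper re-evaluations
human_1d = [0.0, 4.0, -1.0]       # what a hypothetical 1d achieves
print(mean_error(bot, deeper_search))
print(mean_error(bot, human_1d))
```

The single score output has no error bar of its own; only once a reference like `deeper_search` or `human_1d` is named does a confidence range become a well-posed question.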

I hope that made sense. I guess this is all off topic though for the thread. :)
Kirby
Honinbo
Posts: 9553
Joined: Wed Feb 24, 2010 6:04 pm
GD Posts: 0
KGS: Kirby
Tygem: 커비라고해
Has thanked: 1583 times
Been thanked: 1707 times

Re: Trying to Modernize Sensei's Library

Post by Kirby »

I want to say that I appreciate that you take the time to give clear explanations like this, lightvector. I also get the sense that you are pretty precise with the phrasing you use - something I could learn from.
be immersed
yuzukitea
Beginner
Posts: 11
Joined: Mon Aug 09, 2021 6:55 am
Rank: OGS 4 kyu
GD Posts: 0
OGS: yuzukitea
Been thanked: 6 times

Re: Trying to Modernize Sensei's Library

Post by yuzukitea »

Another page updated - The first move of the 4-4 slide joseki - 4-4 Point Low Approach Low Extension, Slide

I think my focus for now is on DDK joseki.

I bumped all of the old forum-style discussion onto a "Discussion" section of the page (or moved it to the most relevant subpage where the move is discussed).

I think I also realized that SL is probably most effective for providing real-board examples, which aren't present in most joseki dictionaries. The idea for now is to select professional games where a less popular joseki variation was played and explain why the professional chose that particular variation.
yuzukitea
Beginner
Posts: 11
Joined: Mon Aug 09, 2021 6:55 am
Rank: OGS 4 kyu
GD Posts: 0
OGS: yuzukitea
Been thanked: 6 times

Re: Trying to Modernize Sensei's Library

Post by yuzukitea »

I wrote this page: 4-4 Point Traditional Slide Joseki, 2-8 Invasion

I have mixed feelings about how AI joseki should be presented. I agree with others that it's not very helpful to present an AI score of a joseki played out on an empty board (or even an invented arbitrary board position), and it can produce a misleading understanding about what AI thinks about a certain variation.

I think it's definitely better to assess positions in professional games with AI. I've been doing this with a few professional games for every variation, and I feel fairly comfortable saying "AI doesn't like this traditional variation" in a hand-wavy sense (i.e. AI views the extend joseki linked above as -1 points in the context of the professional games I checked), but I wouldn't really feel comfortable putting a number on it without being able to run statistics over many more professional games through AI.
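The "I'd want statistics over many more games" hesitation can be quantified. Given per-game AI score deltas for a joseki (the numbers below are hypothetical), the mean and its standard error show how loosely a handful of games pins down a single figure:

```python
import math

def mean_and_stderr(deltas):
    # Mean AI score delta for the joseki across sampled games, plus the
    # standard error of that mean -- a rough gauge of how much more data
    # would be needed before confidently quoting a single number.
    n = len(deltas)
    mean = sum(deltas) / n
    var = sum((d - mean) ** 2 for d in deltas) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

# Hypothetical per-game deltas (points lost vs. AI's preferred line):
deltas = [-0.8, -1.4, -0.6, -1.9, -0.3]
m, se = mean_and_stderr(deltas)
print(f"mean {m:.2f} +/- {se:.2f}")
```

With only five games the standard error is a sizable fraction of the mean, which is exactly why "about -1 point, hand-wavily" is as precise a claim as this sample supports.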

Even then, I don't really consider myself authoritative enough to say that "X joseki is bad". There's sort of a catch-22: professionals have stopped playing this joseki since 2016 (so there are no hits in Waltheri after 2016), but at the same time I don't have a documented source from any professional saying "X joseki is bad" or explaining why. It's ultimately just inference and guesswork based on the absence of professional play.

Meanwhile, there are strong amateurs like Badukdoctor saying the extend variation is "good", and there are likely a great number of older joseki books still saying that the extend is "good".

I just don't think I'm qualified enough to provide a good personal opinion, so the best I can do is present both.
bugcat
Dies with sente
Posts: 115
Joined: Tue Nov 21, 2017 6:41 pm
Rank: OGS 8k
GD Posts: 0
DGS: bugcat
Has thanked: 1 time
Been thanked: 24 times

Re: Trying to Modernize Sensei's Library

Post by bugcat »

I wonder whether anyone would like to help update Sumire's article, which hasn't been edited since May 2019.

https://senseis.xmp.net/?NakamuraSumire
https://senseis.xmp.net/?topic=11193 (discussion of updating)

And also Joanne Missingham's, which hasn't had a content update since 2012, apart from noting her 2015 rank up to 7p.

https://senseis.xmp.net/?JoanneMissingham
https://senseis.xmp.net/?topic=11206 (discussion of updating)
bugcat
Dies with sente
Posts: 115
Joined: Tue Nov 21, 2017 6:41 pm
Rank: OGS 8k
GD Posts: 0
DGS: bugcat
Has thanked: 1 time
Been thanked: 24 times

Re: Trying to Modernize Sensei's Library

Post by bugcat »

If anyone would like to start articles on the Yunguseng Dojang or AwesomeBaduk, that'd be appreciated.

We already have a short one on the Nordic Go Dojo and now a longer one on the Osaka Go School.

No-one has yet come forward to help update the Sumire or Joanne Missingham articles.
Genish
Beginner
Posts: 1
Joined: Wed Jul 26, 2023 10:01 pm
GD Posts: 0

Re: Trying to Modernize Sensei's Library

Post by Genish »

bugcat wrote:If anyone would like to start articles on the Yunguseng Dojang or AwesomeBaduk, that'd be appreciated.
We already have a short one on the Nordic Go Dojo and now a longer one on the Osaka Go School.

No-one has yet come forward to help update the Sumire or Joanne Missingham articles.
Is it applicable to all users, or only to users with a specific number of posts or account age?
Post Reply