Life In 19x19 http://www.lifein19x19.com/ |
Opening problems for AI: Problem 9 http://www.lifein19x19.com/viewtopic.php?f=15&t=17577 |
Page 1 of 1 |
Author: | Bill Spight [ Fri Jun 19, 2020 5:21 pm ] |
Post subject: | Opening problems for AI: Problem 9 |
This particular opening pattern occurred mostly in the 20th century, but it has also appeared in pro play in the 21st century. Waltheri has 51 examples of it. (Diagrams omitted.) This example game is GoGoD 1997-01-20a, Cheong Tae-sang, 7 dan, (W) vs. Yu Ch'ang-hyeok, 9 dan. |
Author: | Harleqin [ Fri Jun 19, 2020 5:32 pm ] |
Post subject: | Re: Opening problems for AI: Problem 9 |
Author: | hyperpape [ Fri Jun 19, 2020 5:41 pm ] |
Post subject: | Re: Opening problems for AI: Problem 9 |
Author: | Bill Spight [ Fri Jun 19, 2020 5:54 pm ] |
Post subject: | Re: Opening problems for AI: Problem 9 |
Author: | zermelo [ Fri Jun 19, 2020 6:33 pm ] |
Post subject: | Re: Opening problems for AI: Problem 9 |
I suppose in Bill's last diagram there's a white stone missing (move 18) from the top right. |
Author: | Bill Spight [ Fri Jun 19, 2020 7:36 pm ] |
Post subject: | Re: Opening problems for AI: Problem 9 |
zermelo wrote: I suppose in Bill's last diagram there's a white stone missing (move 18) from top right. Many thanks. |
Author: | John Fairbairn [ Sat Jun 20, 2020 4:57 am ] |
Post subject: | Re: Opening problems for AI: Problem 9 |
Bill

Do forgive me, please, but I am having trouble getting my head round this series of threads. I expect part of the problem is that you are just one of quite a few people who post about AI and openings, but for the reader it all merges into a single mass and it's hard to disentangle who is saying what. Inevitably there are probably also contradictions embedded there.

One problem I have is that I have formed the strong impression that the komi matters a lot. I am not entirely sure how, but I feel sure my impression is correct. So the fact that komi was just 4.5 in one previous game, and just 5.5 in this one, should be taken strongly into consideration. You personally may not have said anything about komi that should affect my thinking, but the Elf team certainly did (when they were talking to me about using the GoGoD database), so for good or ill, that's one of my starting points.

Another problem I have is with the impression (again formed from multiple sources) that Elf/Leela/KataGo etc. are somehow rejecting previous human play. I have a LOT of problems with that. Rejection may not be the word used, but that is what the various locutions amount to. I have not the slightest problem in accepting that bot play is superior, or acknowledging that it supersedes human plays. You have repeatedly (and valuably, and maybe alone) stressed that the bots set out simply to win games, not to say anything about human play. In one of the threads in this series you also specifically stress that you, too, are not seeking to say anything detrimental about human pros (indeed, you say lots of positive things about them). But... as I say, whatever you say becomes part of a mass. It gets picked up by other people, who don't apply your cautions. And even when people do try to resist trashing human pro play, quite frankly it still comes over to me in a sort of "some of my best friends are professional players" tone.
Some pros have been quite wrong, and it is perfectly fitting to show that. But it doesn't seem fitting to show them as wrong when they really weren't. In the case of the side extension White 6 here, pros stopped playing that well before AI was on the scene. They stopped playing it around 2000 (based on 95 GoGoD games; a similar remark could be made about White 4), which was more or less when the larger komi was adopted in Japan and Korea. My interpretation would therefore be that they weren't especially wrong about the extension. They based it on the rules of the game applicable at the time, and changed their views when the rules changed. If they were wrong about anything, it wasn't anything at all to do with the extension, and so to talk about any of the side moves now is (to me) potentially misleading.

What they were (apparently) wrong about was spotting the value of the press. Not the unpincered press itself - that appears in countless pro games - but its value. I haven't looked at all that many examples of the unpincered press (though I have looked at more than a few, and it has also been discussed by pros), but what seems strongly to emerge is the impression that humans have almost always played it as part of a plan to press the opponent down on both sides. That doesn't seem to be at all the motivation behind the bot play. So a quite different concept (control/presence in the centre?) is involved.

In that case, I don't quite see why numbers/win rates are relevant. The numbers can initially highlight a conceptual discrepancy, but once they've done that we can (almost) ignore them. In topical terms, it seems as if we are talking about the number of infections when what we really want to know is how to create a vaccine.

Please feel free to disembowel my thinking. |
Author: | John Fairbairn [ Sat Jun 20, 2020 6:45 am ] |
Post subject: | Re: Opening problems for AI: Problem 9 |
After posting the above, I read a review of a very old sci-fi story by E.M. Forster. I don't like sci-fi at all, but I was gobsmacked by his prescience. I think what he had to say offers food for thought even to go players here. It's fiction but uncomfortably close to the present truth. At least it explains some of my own unease.

The review is by Will Gompertz on the BBC website: https://www.bbc.co.uk/news/entertainment-arts-52821993

A taster extract:

Quote: The short story is set in what must have seemed a futuristic world to Forster but won't to you. People live alone in identikit homes (globalisation) where they choose to isolate (his word), send messages by pneumatic post (a proto email or WhatsApp), and chat online via a video interface uncannily similar to Zoom or Skype. "The clumsy system of public gatherings had long since been abandoned", along with touching strangers ("the custom had become obsolete"), now considered verboten in this new civilisation in which humans live in underground cells with an Alexa-like computer catering to their every whim. If it already sounds spookily close for comfort, you won't be reassured to know that members of this detached society know thousands of people via machine-controlled social networks that encourage users to receive and impart second-hand ideas. "In certain directions human intercourse had advanced enormously" writes the visionary author drily, before adding later: "But humanity, in its desire for comfort, had over-reached itself. It had exploited the riches of nature too far. Quietly and complacently, it was sinking into decadence, and progress had come to mean progress of the machine."

I see "desire for comfort" as equating to number-grabbing. |
Author: | Bill Spight [ Sat Jun 20, 2020 8:55 am ] |
Post subject: | Re: Opening problems for AI: Problem 9 |
Dear John,

Thank you for your as always thoughtful post.

Does komi matter to the opening? Yes and no. There is no question that playing with 4½ komi versus no komi affected the opening as early as move 4, perhaps as early as move 2. By the 1960s parallel fuseki had become popular. (Diagram omitted.)

Now, what about the plays in question? (Diagrams omitted.) So today, with 6½ and 7½ komis, which appear, if anything, to give an advantage to White, the pros have long ago given up the marked plays. But the bots don't play it. Is it a mistake? Nothing is certain, OC, but my guess is that it is probably a minor error. By comparison with the 16-4 in the adjacent corner, Elf reckons that the 5-3 loses 5½% and the 6-4 loses 7½%. Playing it once is not so bad, but surely doing it in all four corners to form a swastika 5-3 or swastika 6-4 is very likely a losing strategy.

What the bots allow us to do is to quantify, albeit imperfectly, these feelings and suspicions, and to surprise us when the numbers go against our feelings and suspicions. |
Author: | Bill Spight [ Sun Jun 21, 2020 9:19 am ] |
Post subject: | Re: Opening problems for AI: Problem 9 |
More blather.

What is komi? There is statistical komi and there is theoretical komi. Statistical komi is what we know for the 19x19 board. Given a group of players, each of whom plays Black or White half the time, statistical komi is the number of points that Black gives White which brings the odds of a Black win versus a White win closest to 50:50. Today we require a non-integer komi, to prevent ties.

Theoretical komi is the best score that each player can guarantee for herself, given perfect play. We know theoretical komi for the 5x5 board. It is 24 pts. by territory scoring, 25 pts. by area scoring. And 24½ pts. for Button Go, BTW. Now, as the skill level of the players increases, statistical komi should converge to theoretical komi. Based upon statistics in the 1970s, theoretical komi for area scoring is probably 7 pts., and for territory scoring it is probably 6 or 7 pts. For AI trained on area scoring the statistical non-integer komi is 7½ pts.

In 1977 Terry Benson sent me an article to review about komi submitted to the AGA Journal, based upon the statistics of 2800 Japanese professional games published by the Nihon Kiin, 1400 with a komi of 4½ and 1400 with a komi of 5½ (or 5 with White winning jigo). The author regarded it as "plain as a pikestaff" that theoretical komi under Japanese scoring was 7. When published, the article included a footnote by me pointing out that changing the komi might affect the play, something that John Fairbairn has also pointed out. That said, there was zero statistical evidence that changing the Japanese komi from 4½ to 5½ had affected the professionals' play. For the 1400 games played with 5½ komi, a komi of 6½ gave the most even results, but the same was true for the games played with 4½ komi.

So does komi affect opening play? Obviously the shift from 0 komi to 4½ komi did. But did the gradual change from 4½ komi to 6½ komi do so? I have not seen any evidence of that.
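Bill's definition of statistical komi lends itself to a short numeric sketch: given a set of final margins (Black's points minus White's, before komi), pick the non-integer komi whose Black/White win split is closest to 50:50. The function name and the margins below are hypothetical, purely for illustration:

```python
def statistical_komi(black_margins, candidates):
    """Return (komi, black_win_rate) for the candidate komi whose
    Black-win rate is closest to 50%.

    black_margins: final margins (Black's points minus White's, before komi).
    candidates: non-integer komi values, so ties are impossible.
    """
    best = None
    for komi in candidates:
        wins = sum(1 for m in black_margins if m - komi > 0)
        rate = wins / len(black_margins)
        if best is None or abs(rate - 0.5) < abs(best[1] - 0.5):
            best = (komi, rate)
    return best

# Hypothetical margins, not real game data.
margins = [5, 7, 9, 6, 8, 7, 4, 10, 7, 6, 8, 5]
komi, rate = statistical_komi(margins, [4.5, 5.5, 6.5, 7.5, 8.5])
# For these made-up margins the most even split occurs at komi 6.5.
```

This is the same exercise as the 1977 AGA Journal article, just mechanized: with real no-komi game records one would feed in the actual margins and read off which candidate komi evens the results.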
OC, over four or five decades opening play has changed, but I am unaware of any evidence that these changes affected the statistical distribution of final scores. Based upon statistics, Ing jumped to 7½ or 8 pts. komi for area scoring in the early 1980s. I am unaware of area scoring statistics.

What is an error? Errors are easy to define in terms of theoretical komi or, more generally, theoretical value. An error is a play that worsens, for the player who makes it, the result that she can guarantee given perfect play. So if theoretical komi is 7 and after Black's first play the theoretical value of the resulting position is 6, that play is an error. OC, in practice we do not really know either value for the 19x19 board.

But what if the actual komi is 4½? With perfect play Black will still win, so is it an error or not? There is a statistical answer. Even if Black should still win by perfect play, if the play reduces the chance that Black will win the game, it is an error. But we know that there are theoretical errors, in many positions, that actually increase the chance that Black will win the game. Such plays make a small theoretical sacrifice in order to nail down Black's win.

Now bots, it seems, take the statistical point of view. Not that they base their plays entirely upon their estimates of the probability of winning the game, if at all (depending upon the bot, I think), but those estimates do affect their analysis, and in general they play so as to maximize their chance of winning the game. This fact was hyped a few years ago as bots thinking differently from humans. Well, they do, but that's not why. Now, the bots' estimates are based upon erroneous play, not perfect play, but the erroneous play upon which they are based is their own, not the erroneous play of humans. Humans utilize the winrate estimates of the bots without really knowing what they mean in a practical sense.
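The theoretical definition of an error above can be made concrete on a toy game tree (all values hypothetical): a play is an error if it worsens the result the mover could guarantee with perfect play, i.e. the minimax value drops from the mover's point of view. A minimal sketch:

```python
def minimax(node, black_to_move):
    """Score (in Black's points) that both sides can guarantee with
    perfect play. Leaves are final scores; Black maximizes, White
    minimizes."""
    if not isinstance(node, list):  # leaf: a final score
        return node
    values = [minimax(child, not black_to_move) for child in node]
    return max(values) if black_to_move else min(values)

def is_error(position, choice, black_to_move):
    """A play is a theoretical error if it worsens the mover's
    guaranteed result."""
    before = minimax(position, black_to_move)
    after = minimax(choice, not black_to_move)
    return after < before if black_to_move else after > before

# Hypothetical position, Black to move: two options, each met by a
# White reply leading to a final score.
tree = [[7, 9], [6, 8]]
# Black can guarantee 7 (pick the first branch; White then picks 7).
# Choosing the second branch only guarantees 6, so it is an error.
```

Bill's statistical notion of error then replaces the guaranteed score with a win probability, which is what the bots actually estimate.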
A play that has a higher winrate than another when bot plays bot may not have a higher winrate when human plays human. We may assume that a play with a higher winrate estimate is statistically better in human play, but we don't really know. However, the greater the difference between the winrate estimates of two plays, the more likely it is that the play with the worse winrate estimate is an error, however defined. So when the Elf of the commentaries - which is only one bot, not currently among the top bots, and a bot which produces relatively large winrate differences - reckons that Sakata's boshi loses more than 20% in winrate by comparison with an enclosure, I am inclined to think that the boshi is a human vs. human statistical error, and probably a theoretical error as well. But we need to consult other, better bots as well, to assess different plays and ideas.

My purpose in this series. I think that books on the opening have to be rewritten for our AI era. The bots, OC, don't write books; they don't even offer explanations. But, just as the opening theory of the 20th century was largely based upon best play in the 19th century, I think that the opening theory of the 21st century will be largely based upon best play by bots. Science marches on. It is up to us humans to come up with new explanations and ideas. Old ideas need to be discarded or modified. I doubt if the 21st century equivalent of Takagawa's Fuseki Dictionary will be written by anybody now over 20 years old.

I have proposed a new last play principle for the opening, which is not to get the last oba, but to occupy the last open corner. (OC, there are exceptions, but that's the rule.) I think it will hold up. So I wanted to explore where the old textbooks may have it wrong, or where the ideas need to be modified. It was not my purpose to point out human mistakes.
In fact, both humans and bots have rejected the 5-3 approach for the marked plays. (Diagrams omitted.) I started emphasizing plays that Elf reckons as losing several percentage points by comparison with its top choice because of my discussion with Sorin. He, quite reasonably, does not consider winrate estimates of human professional plays to be definitive. Fair enough. That's why I went looking for high winrate differences, which, in my educated judgement, have a good chance of indicating theoretical errors, especially if other bots agree. |
Author: | John Fairbairn [ Sun Jun 21, 2020 10:52 am ] |
Post subject: | Re: Opening problems for AI: Problem 9 |
Thanks, Bill. You have clarified quite a few things for me, but your point about accommodating sorin's views shone the brightest light. To a large degree inspired by what you said about that, I'd like to ask your opinion about a possible refinement that may, pace sorin, work in some cases even better with lower win rate discrepancies.

You may recall I started covering a Japanese series in which Hirata Tomoya takes a position from a game of the pre-AI era and asks several pros (from 9-dan to 1-dan) for their post-AI take on what the highlighted move should be now. I stopped covering it because of a lack of engagement on L19, but I still follow it in a slightly detached way. The series is now in its 37th episode. Two things have become apparent to me.

One is that there is a very wide range of opinion among the pros. There is almost always an obvious AI-ish look to the moves (shoulder hits etc.) but, truthfully, much of it looks like the party game of trying to pin the tail on the donkey. They know how to imitate AI, but haven't really developed any methods for discriminating between AI-ish moves. Except...

One idea that seems to come up fairly often is that the original pro move (pre-AI) may not be the bot's choice at that stage of the game, but it is still a good move. The bots simply strive to make it even better by playing preparatory moves. These moves are very often probes and forcing moves. Sometimes they may be prophylactic honte type moves. And it now seems, in this series, that pros are trying to find the best preparatory moves themselves, with a view to playing the original pre-AI move in due course (or avoiding it, of course, if the probes etc. so dictate). There's still quite a variety of preparatory moves they come up with, but most choices now seem to have that underlying idea of probe, parry or prepare.

Now, if my impressions are correct, two things seem to flow from that.
One is that in a fundamental, DeepPro kind of way, traditional pro thinking about go theory may actually be sound. Pros are good builders. They know how to build a sturdy house. But they haven't yet mastered the architect's skill of making it blend in with its surroundings. If they want to master that skill, they mainly need to learn to look at the surroundings, and do not need to tamper with their fundamental house-building theories.

The other thing is that, to follow that idea through, we should not only be looking at moves where the bots say there has been a big win rate drop. Those may just be pure errors: wrong applications of basic theory. Instead, we sometimes need games where the win rate drop is on the small side so that, if we dare to make the assumption that the pro move was basically sound, we can try to assess simply how and why the bots are trying to make it better (either more efficient in local shape or better timed). Of course it is still valuable to look at genuine errors, but in that case I think we should be aware that we may be looking for something quite different from the "refinement" approach.

Quote: There is statistical komi and there is theoretical komi.

There's another kind which I was unaware of until this week. It's the "it could only happen in Japan" kind of komi. Komi is not a new thing. It goes back at least to 1750. It was still rare, of course, but during the 19th century it keeps cropping up. I'd noticed before that it crops up an awful lot in games with Honinbo family members. It may have been a Honinbo fad (maybe they tried variations of go during cholera lockdowns?), or it may just seem that way because we tend to have more Honinbo games than anything else. But what I didn't know before was that they combined komi with the traditional B-B-W kind of handicapping (e.g. with a 2-dan difference you give B-B-W and 2 points komi). This was not mainstream and seems reserved mostly for rengo and other party games.
Pinning the komi on the donkey. |
Author: | Bill Spight [ Sun Jun 21, 2020 12:47 pm ] |
Post subject: | Re: Opening problems for AI: Problem 9 |
John Fairbairn wrote:

Quote: There is statistical komi and there is theoretical komi.

There's another kind which I was unaware of until this week. It's the "it could only happen in Japan" kind of komi. Komi is not a new thing. It goes back at least to 1750. It was still rare, of course, but during the 19th century it keeps cropping up. I'd noticed before that it crops up an awful lot in games with Honinbo family members. It may have been a Honinbo fad (maybe they tried variations of go during cholera lockdowns?), or it may just seem that way because we tend to have more Honinbo games than anything else. But what I didn't know before was that they combined komi with the traditional B-B-W kind of handicapping (e.g. with a 2-dan difference you give B-B-W and 2 points komi). This was not mainstream and seems reserved mostly for rengo and other party games. Pinning the komi on the donkey.

Interesting.

John Fairbairn wrote: One idea that seems to come up fairly often is that the original pro move (pre-AI) may not be the bot's choice at that stage of the game, but it is still a good move. The bots simply strive to make it even better by playing preparatory moves.

As you may recall, when I looked at the AlphaGo Master games vs. human pros, I noticed that, given the chance to pincer, Master pincered about half as often as the humans. The main explanation I came up with was that Master typically prepared the pincer by bolstering its position on one side or the other. That is, if the opponent jumped up in response to the pincer, it did not threaten to play on one side or the other, because one of those sides was already covered. As a result, the pincered stone often did nothing. I think that there is in general something to that idea, especially since observing that Elf often did not criticize side extensions in 19th century games, or regarded them as minor errors. By contrast, 20th century side extensions from a 4-4 were often criticized.
OC, the 4-4 opening was not so popular in the 19th century.

John Fairbairn wrote: Now, if my impressions are correct, two things seem to flow from that. One is that in a fundamental, DeepPro kind of way, traditional pro thinking about go theory may actually be sound.

Savielly Tartakower wrote: Some part of a mistake is always correct.

John Fairbairn wrote: The other thing is that, to follow that idea through, we should not only be looking at moves where the bots say there had been a big win rate drop. They may just be pure errors: wrong application of basic theory. Instead, we sometimes need games where the win rate drop is on the small side so that, if we dare to make the assumption that the pro move was basically sound, we can try to assess simply how and why the bots are trying to make it better (either more efficient in local shape or better timed).

Yes. Winrates are not everything. In fact, I did not come up with my proposal to occupy the last open corner from winrate estimates. I used non-parametric statistics. Yes, we know that bots are better players than humans. But that does not mean that every play they choose is better than the humans' choice. In fact, we know that there are certain positions where human judgement is better. (This is also the case with chess engines, where superhuman play has a longer history than it does in go.)

This goes back to when I was trying to discover the margin of error for Leela 11 (the old days). Now let's compare LZ with humans whose game it is reviewing. Suppose that at some point LZ prefers to play in an area of the board where Black does not play, but LZ's preference is small. That could be the result of random variation. Next, White plays somewhere else, and LZ still prefers the same area as it did before, perhaps on the same point, perhaps on a neighboring point. Still maybe random variation. Now suppose that for 10 moves the humans play elsewhere but LZ still prefers that area.
Damn the small winrate differences: that area is probably a persistent feature of the board that is more urgent than the humans realize. That's how I came up with the proposal about the last open corner. When the opportunity to occupy the last open corner arose in a game review and the human did not do so, Elf almost always preferred it to the human play, even if only by 2% or so, well within Elf's margin of error. |
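Bill's persistence heuristic - small winrate preferences that keep pointing at the same area for many consecutive moves are unlikely to be noise - can be sketched as code. Everything here is hypothetical scaffolding: assume we have extracted, for each move of a reviewed game, the bot's preferred point and the human's actual move as board coordinates.

```python
def chebyshev(p, q):
    """Board distance: the larger of the coordinate differences."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def persistent_preference(bot_picks, human_moves, radius=3, min_run=10):
    """Find runs of consecutive moves where the bot's preferred point
    stays in one small area (within `radius` of the run's first pick)
    while the human plays outside it. Returns a list of
    (start_index, run_length, anchor_point) tuples.
    """
    runs = []
    i = 0
    while i < len(bot_picks):
        anchor = bot_picks[i]
        j = i
        while (j < len(bot_picks)
               and chebyshev(bot_picks[j], anchor) <= radius
               and chebyshev(human_moves[j], anchor) > radius):
            j += 1
        if j - i >= min_run:
            runs.append((i, j - i, anchor))
        i = max(j, i + 1)
    return runs

# Hypothetical data: the bot keeps picking near (16, 16) for 12 moves
# while the human plays in the opposite corner.
bot_picks = [(16, 16), (16, 15), (15, 16), (17, 16), (16, 17), (16, 16),
             (15, 15), (17, 17), (16, 14), (14, 16), (16, 16), (15, 16)]
human_moves = [(4, 4)] * 12
runs = persistent_preference(bot_picks, human_moves)
# One persistent run covering all 12 moves, anchored at (16, 16).
```

The per-move winrate gap never enters the test; only persistence does, which matches the argument that a 2% preference repeated over ten moves means more than a 2% preference seen once.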
Author: | dhu163 [ Fri Jun 04, 2021 9:58 am ] |
Post subject: | Re: Opening problems for AI: Problem 9 |
Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group http://www.phpbb.com/ |