Hare matches tortoise
Posted: Mon Feb 20, 2023 4:45 am
I was rather taken aback by the following opening. White was Fujisawa Rina & Iyama Yuta, and Black was Ueno Asami & Son Makoto.
It was a Shuffle Pair Go game - a "sub-event" at the 2023 Pro Pair Go Championships (cosplay terms seem to crop up quite a bit in modern commentaries!). For that reason, I'm sure the players were playing to the gallery. The starting formation of the sanrensei looks like a nod to amateur tastes, for example - AI disapproves of it.
But the interesting thing was that when the final position of the extract given here was reached, White had a winning ratio of 61%. That is despite the fact that White had made a single-stone ponnuki, traditionally worth 30 points but here hemmed in on the side, whereas Black had made a two-stone ponnuki, the traditional tortoise shell shape worth 60 points, here facing the open centre which we tend to assume is something AI bots love.
Furthermore, the move that saw Black's chances plummet was apparently move 25. That seems understandable - two horrible cutting points don't seem like a sensible recipe for success. But after Black 31, could we not assume those cutting points have been usefully dealt with?
But the plot thickens.
If we look at just the NW quadrant and play only those moves on an empty board (8 moves each), KaTrain gives the result as absolutely equal! In other words, a 30-point ponnuki on the blocked side is worth the same as a 60-point ponnuki in the open centre. Jugged hare is just as good as mock turtle soup - yet White later ran away with the game.
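For anyone who wants to repeat this sort of quadrant experiment without clicking stones into KaTrain by hand: KaTrain is a front end over KataGo, and KataGo's JSON analysis engine can be queried directly. The sketch below is a minimal, hedged example - the binary/config/model paths and the move list are placeholders, not the actual game record, and only the query-building part is guaranteed correct; the engine call assumes you have a working KataGo `analysis` setup.

```python
import json
import subprocess

def build_query(moves, komi=6.5, rules="japanese"):
    """Build a request for KataGo's JSON analysis engine.
    `moves` is a list of [color, coordinate] pairs, e.g. [["B", "Q16"]]."""
    return {
        "id": "q1",
        "moves": moves,
        "rules": rules,
        "komi": komi,
        "boardXSize": 19,
        "boardYSize": 19,
        "analyzeTurns": [len(moves)],  # evaluate only the final position
    }

def winrate(moves, katago_cmd):
    """Send one query to KataGo and return the root winrate (0.0-1.0).
    `katago_cmd` is a placeholder, e.g.
    ["katago", "analysis", "-config", "analysis.cfg",
     "-model", "model.bin.gz"] -- substitute your own paths."""
    proc = subprocess.Popen(katago_cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    out, _ = proc.communicate(json.dumps(build_query(moves)) + "\n")
    for line in out.splitlines():
        resp = json.loads(line)
        if resp.get("id") == "q1" and "rootInfo" in resp:
            return resp["rootInfo"]["winrate"]
    return None
```

Feeding in just the 16 NW-quadrant moves on an otherwise empty board should reproduce the "absolutely equal" reading, modulo the network and number of visits used.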
Can anyone explain these (to me) counter-intuitive evaluations? My best guess is that overconcentration has something to do with it. But while that's something that once cropped up a lot in pro talk on AI, it's barely mentioned nowadays.