Bill Spight wrote:
lightvector wrote:
Counting liberties on large groups turns out to be a thing that is pretty difficult for a neural net with current known best architectures.
I am reminded of the animals that can instantly distinguish between 5 and 6 objects, but cannot "count" any higher.
Quote:
Leela Zero sticks to a spirit of minimizing the use of high-level human features and heuristics (that's the "zero"), and so does not provide liberty counts to the neural net as input. If you do provide them as input, then although a bot may still have blind spots for tesuji or the status of groups, I can confirm that blind spots for liberty shortages go away entirely, at least as far as I can tell.
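(For reference, the kind of liberty-count input feature being discussed is trivial to compute classically; it is only the net that struggles with it. A minimal sketch in pure Python, using the capped one-hot plane style that pre-Zero nets used for inputs; the helper names are made up for illustration:)

```python
def neighbors(p, size):
    x, y = p
    return [(nx, ny) for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if 0 <= nx < size and 0 <= ny < size]

def group_and_liberties(board, size, pos):
    """Flood-fill the chain containing pos; board maps (x, y) -> 'B' or 'W'."""
    color = board[pos]
    group, libs, stack = {pos}, set(), [pos]
    while stack:
        for n in neighbors(stack.pop(), size):
            if n not in board:
                libs.add(n)
            elif board[n] == color and n not in group:
                group.add(n)
                stack.append(n)
    return group, libs

def liberty_planes(board, size, max_libs=4):
    """One binary plane per liberty count 1..max_libs; the last plane means
    'max_libs or more'. Every stone of a chain gets its chain's count."""
    planes = [[[0.0] * size for _ in range(size)] for _ in range(max_libs)]
    done = set()
    for pos in board:
        if pos in done:
            continue
        group, libs = group_and_liberties(board, size, pos)
        idx = min(len(libs), max_libs) - 1
        for x, y in group:
            planes[idx][y][x] = 1.0
        done |= group
    return planes
```

The cap matters: distinguishing "4+ liberties" from exact counts is cheap for small groups, which is exactly the regime where the net already does fine on its own.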
Since dame count is important for playing ladders, does providing it to the net improve the ability to play ladders?
Also, if you can provide dame count as input, what about komi?
For ladders, no, not significantly. Neural nets don't have a problem counting liberties on small groups, just on the ones that are large and often sea-urchin-like. And the ability to evaluate ladders is still critically poor precisely in the positions where the ladder hasn't been played out yet, or hasn't been played out much, so that all the groups involved are small.
The hard part of a ladder for the neural net is not anything liberty-related, but rather "understanding" that stones diagonally a long distance away across empty space could possibly be relevant. Also, unlike humans, the bot is perfectly happy to read the ladder in one variation, solve it (given enough playouts, the search does still solve it), and then fail to understand the same ladder in another variation, and another, and another. For every one of the dozens or hundreds of variations in a position, the search has to solve the ladder again from scratch. A human would simply read it once and understand when a move could change the ladder's result and require a reread, but there is no currently known good way to make a bot do that which fits into a neural-net-driven search. That's for future research.
I've also experimented with adding ladderability of stones as an input feature too.

The result is a bot that never messes up common ladders (as far as I can tell) and has good evaluations for tactics that depend on them. The drawback is that it makes the bot weaker at solving rare positions like the Lee Sedol ladder game, or that Fine Art game where it was actually correct to chase a broken ladder across the board, because the forcing moves gained would kill another group on the far side, and where you had to actually chase the ladder rather than simply play a ladder breaker on that side directly. The bot, knowing that the ladder doesn't work, is less willing to spend reading effort chasing it out than a bot that has no idea whether it works.
Since normal ladder situations are easily 100x or more as common as situations where chasing a broken ladder across the board is good (driving tesuji don't count; it's only the cases where you need to chase across a long distance that are tricky), for now I'm happy with this tradeoff, although I have some intuitions for future research on how one might get the best of both worlds.
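(A ladderability feature like this has to be computed by an ordinary recursive ladder reader outside the net. A heavily simplified sketch of such a reader follows; it ignores ko, captures of attacker stones, and attacker connections, and the function names are made up for illustration, not KataGo's actual code:)

```python
def neighbors(p, size):
    x, y = p
    return [(nx, ny) for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if 0 <= nx < size and 0 <= ny < size]

def group_and_liberties(board, size, pos):
    """Flood-fill the chain containing pos; board maps (x, y) -> 'B' or 'W'."""
    color = board[pos]
    group, libs, stack = {pos}, set(), [pos]
    while stack:
        for n in neighbors(stack.pop(), size):
            if n not in board:
                libs.add(n)
            elif board[n] == color and n not in group:
                group.add(n)
                stack.append(n)
    return group, libs

def ladder_captured(board, size, def_pos, defender='B', attacker='W'):
    """True if the defender chain (which must be in atari) dies in a ladder."""
    group, libs = group_and_liberties(board, size, def_pos)
    assert len(libs) == 1, "defender must start in atari"
    run = dict(board)
    run[next(iter(libs))] = defender                 # defender extends
    group, libs = group_and_liberties(run, size, def_pos)
    if len(libs) >= 3:
        return False                                 # too many liberties: escaped
    if len(libs) <= 1:
        return True                                  # attacker fills the last liberty
    for lib in libs:                                 # attacker tries each atari
        trial = dict(run)
        trial[lib] = attacker
        _, alibs = group_and_liberties(trial, size, lib)
        if not alibs:
            continue                                 # illegal self-capture, skip
        if ladder_captured(trial, size, def_pos, defender, attacker):
            return True
    return False
```

The capture/escape logic mirrors the usual human reading rule: three liberties after an extension means escape, one means capture, and anything the defender connects to along the way (a ladder breaker) is merged in by the flood fill.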
Neural nets can easily handle a wide range of komi if komi is provided as an input and the training data contains a few percent of games with a wide range of komi. I do that in KataGo, and it works very well.
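(The scheme described can be sketched as follows. The constants here, the normalization scale, the 5% variation fraction, and the komi range, are illustrative guesses for the sketch, not KataGo's actual values:)

```python
import random

def komi_plane(komi, size, scale=15.0):
    """Constant-valued input plane encoding komi from the side to move's
    point of view. The normalization scale is an arbitrary choice here."""
    return [[komi / scale] * size for _ in range(size)]

def sample_game_komi(base=7.5, vary_frac=0.05, lo=-30.0, hi=30.0):
    """Pick a komi for a self-play game: mostly the standard value, with a
    few percent of games drawn from a wide range, rounded to half integers."""
    if random.random() < vary_frac:
        return round(random.uniform(lo, hi) * 2) / 2
    return base
```

Feeding komi in as a continuous value, rather than training one net per komi, is what lets a single net interpolate its evaluations across the whole range.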