Well, the bots are at present good tools to assist a human analyst, but I would not rely on any computer program alone for analysis. They are not trained to be analysts. Now, lightvector has used human games in training KataGo to make it better for analysis, but the main impetus behind bot development is to increase overall playing strength. The same seems to be true for chess engines, some 25 years after the first superhuman engines appeared. Chess engines still have well-known blind spots.

Uberdude wrote:
I've seen Elf (v2 I think) have a blindspot and some % winrate change when shown the shockingly unusual move 3 of a 3-4 point instead of a 4-4. Other bots are less blinkered and more reliable for analysis IMO.
Ordinarily I would not be surprised that KataGo or any other bot would prefer a different play from Elf's. From what little information I have, a concordance rate between bots of around 80% is not unexpected. I do expect bots to agree about sufficiently bad human errors, which is the case here. KataGo figures the winrate loss for Sansa's play at 8%, about half the loss that Elf figures, but that is in line with how differently the two estimate winrates. I am, however, surprised that, having spent at least 143.8k rollouts on this position, Elf did not pick up KataGo's top choice and give it at least 1500 rollouts.
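To make the "in line with" point concrete, here is a minimal sketch of the comparison I have in mind. The before/after winrates are invented for illustration; only the roughly 16% (Elf) versus 8% (KataGo) losses come from the reviews discussed above, so treat the other numbers as placeholders.

```python
# Minimal sketch: comparing one move's winrate loss across two engines.
# The before/after winrates below are hypothetical; only the resulting
# ~16% (Elf) vs. ~8% (KataGo) losses reflect the figures discussed above.

def winrate_loss(before: float, after: float) -> float:
    """Winrate drop in percentage points, from the mover's point of view."""
    return before - after

# Hypothetical engine evaluations before and after Sansa's play.
elf_loss = winrate_loss(62.0, 46.0)     # ~16 points, per Elf
katago_loss = winrate_loss(55.0, 47.0)  # ~8 points, per KataGo

# The absolute losses differ by about 2x, which matters less than how
# each engine's loss compares to its own typical winrate swings.
print(f"Elf: -{elf_loss:.0f} pts, KataGo: -{katago_loss:.0f} pts, "
      f"ratio {elf_loss / katago_loss:.1f}x")
```

The point of the sketch is just that a raw winrate delta only means something relative to the engine that produced it: Elf's winrates swing more than KataGo's, so a 16-point loss from Elf and an 8-point loss from KataGo can amount to the same verdict on the move.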