Bill Spight wrote:
    Back when I was considering such things, it was plain that a good player needed to be able to refute bad moves, but there are some bad moves that a good opponent will never play, and so the ability to refute them may not be learned.

There was a nice example of this I posted about a while ago, I think from a neural network bot shortly before AlphaGo came along. The bot was doing well in a fight and had trapped some key cutting stones with a potential crane's nest tesuji (it had the three liberties and its opponent the two extensions on the side). The opposing bot then played the one-point jump to escape that everyone stronger than 20 kyu knows is doomed to fail. The neural bot wedged, the opponent gave atari, and the neural bot connected instead of playing the squeeze. Whoops, game over. A little MCTS reading would have saved the day, but presumably playing out the doomed crane's nest escape is so rare in the strong-player games used for training that the neural network had never learned how to refute it.
Also, with exploration vs. exploitation there is a conflict between what different situations need. We see blind spots where bots fail to even consider moves one ply deep, which comes from insufficient exploration. But to read a ladder you want high exploitation and low exploration, so the search quickly goes deep down the one relevant variation.
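To make that tradeoff concrete, here's a toy sketch (not any actual bot's code) using the standard UCB1 selection rule on a made-up three-move position where one move is clearly best. The reward values and constants are all invented for illustration: a small exploration constant piles nearly all the playouts onto the single best move (the ladder-reading regime), while a large one spreads visits across all moves (the blind-spot-avoiding regime).

```python
import math

def ucb1_pick(values, visits, total, c):
    """Pick the move maximizing value/visits + c * sqrt(ln(total)/visits)."""
    best, best_score = 0, -1.0
    for i, (v, n) in enumerate(zip(values, visits)):
        if n == 0:
            return i  # always try an unvisited move first
        score = v / n + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = i, score
    return best

def simulate(c, rewards=(0.9, 0.5, 0.5), steps=300):
    """Run UCB1 selection with fixed toy rewards; move 0 is clearly best."""
    values = [0.0] * len(rewards)
    visits = [0] * len(rewards)
    for t in range(1, steps + 1):
        i = ucb1_pick(values, visits, t, c)
        values[i] += rewards[i]
        visits[i] += 1
    return visits

low_c = simulate(c=0.1)   # exploitation-heavy: hammers the best move
high_c = simulate(c=2.0)  # exploration-heavy: spreads visits around
print("c=0.1 visits:", low_c)
print("c=2.0 visits:", high_c)
```

With the small constant, after one obligatory look at each move the search never leaves the best one again, which is exactly what you want for a long forced line; with the large constant, the weaker moves keep getting revisited, which is what stops a one-ply blind spot from going unnoticed.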