Thanks, guys. Good points.

I talked about beating a dead horse because I think people now pretty much agree on this: top computer programs, which judge plays by the probability of winning the game, can mistakenly believe that one play is superior to another, and humans, who judge plays in terms of points, can catch some of those mistakes.

But there still are those who claim that humans just don't understand how the top programs think, and that while humans may think they recognize some computer mistakes, the programs are stronger than they are, so they just don't really know.
I'd like to make two points. First, people do think in terms of the probability of winning. They just don't do it very well. Second, I'd like to make the case that assessing the chances of winning by point evaluation works better and better as the end of the game nears (at least at the strong amateur level and above), and that it is likely, at the moment, that human evaluation is better than computer evaluation at some point during the endgame.
uPWarrior wrote:
It is known that {% of winning} produces stronger AIs, as it is maximizing the correct objective after all.
When uPWarrior says that the correct objective is the percentage of winning, he is thinking probabilistically. We humans do that all the time. We consider an even game between two players of the same level to be a 50-50 proposition, while an even game between, say, a 1 kyu and a shodan means that the shodan will win about 2/3 of the time. If a move is a small gote, but there are much larger moves on the board and we cannot read the game out, we consider that Black will play the gote half the time and White will play it half the time. Or if there is a small sente for Black, and we cannot read the game out, we consider that Black will get to play it almost 100% of the time.

But 30 moves into an even game, if you ask us what the probability is that Black will win, we are hard pressed to make an estimate.
I suspect that strong human players can be trained to make good probability estimates of winning the game. The reason is that gamblers made fair bets even before the invention of probability theory. The training would consist of having players make modest bets on the outcomes of top level games while the games were in progress. Over time, I expect that the players would learn to make fair bets.
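To make "fair bet" concrete (my own toy numbers, reusing the 1 kyu vs. shodan example above): a fair bet is one whose expected profit is zero at the bettor's assessed probability, and repeated betting gives exactly the feedback that would train a player's probability estimates. A quick sketch:

```python
def fair_odds(win_prob):
    """Fair odds paid to a backer of the favorite: the winnings-to-stake
    ratio at which the bet's expected profit is exactly zero."""
    return (1 - win_prob) / win_prob

def expected_profit(win_prob, odds, stake=1.0):
    """Expected profit of backing a player at the given odds."""
    return win_prob * odds * stake - (1 - win_prob) * stake

# The shodan in an even game vs. a 1 kyu: roughly a 2/3 favorite.
p = 2 / 3
odds = fair_odds(p)  # about 0.5: risk 2 units to win 1
print(f"fair odds: {odds:.2f}")
print(f"expected profit at fair odds: {expected_profit(p, odds):.4f}")
```

A player whose estimates are well calibrated breaks even over many such bets at these odds; a player who consistently loses money learns that his estimates are off, which is the training effect I have in mind.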

BTW, it would be an interesting research project to see how well top programs assess the probability of winning the game, using pro game records. Have the programs assess the position after move 100, for instance, and compare the actual win percentages with the assessed probabilities. My impression is that the programs are more accurate after 100 moves than after 200 moves; that is, at the endgame stage they underestimate the chances of the eventual winners. I think that by betting on the projected winner at the odds assessed by the program, you would clean up.
More later. Gotta run.