Mike Novack wrote:
jts wrote: Glofish may be 1d, but the instructions he came up with are not 1d instructions.
Who suggested that they were? (1 dan instructions)
But are you willing to agree that they are perhaps 6-8 kyu instructions? Gnugo isn't supposed to be anything like as strong as 6 kyu.
The instructions are (in effect) to coordinate one's separate positions so that they have a combined effect, exploiting a weakness of gnugo: it will not attempt to contest this coordination. But isn't that sort of coordination precisely the sort of thing human players as weak as gnugo have trouble with?
To demonstrate that this is a method human players significantly weaker than gnugo could use to easily defeat it, we would need games between players of those strengths and gnugo. Since there are a number of bots using gnugo, all we should have to do is wait and see if there is a noticeable decline in the rankings of those bots.
1) I think a substantially weaker player could execute these instructions. They require judgment about when cuts can happen that might cause problems for a 15 kyu, but I think a 10 kyu could do it. It would be nice to have someone try.
2) Your point at the end depends on there being ratings arbitrage on KGS. It is easy to imagine that this would not happen (I wrote about this a tiny bit: viewtopic.php?f=18&t=2646&p=43981&hilit=arbitrage#p43981).
3) What is remarkable about GnuGo's errors, as opposed to those of a human, is that they lend themselves to an easily taught strategy to get a win. Every human makes strategic errors. But I don't think you can typically play a 10 kyu, diagnose their errors, then give another 10 kyu a recipe to beat them. Glofish has shown that you can do that with GnuGo.
I think this point goes a bit beyond deterministic play. What you have to teach the 10 kyu in order to beat GnuGo is not general principles of Go, nor do you need to show them a game tree. You can teach them a few tricks, and they can use those to beat the bot (assuming that my claim that you could do this is right).
4) A question out of curiosity: how consistent are current MCTS systems' evaluations of a position/move? That is, if you give them reasonable time controls & hardware, do they tend to give the same moves high ratings? I suppose the most interesting case is a game that is fairly even, and not in the endgame, but the other cases might be interesting.
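For what it's worth, the run-to-run consistency question can be made concrete with a toy experiment. The sketch below is not any real engine's search: it uses a one-ply UCB1 bandit as a stand-in for full MCTS, with made-up per-move win rates, and measures how often independent searches (different random seeds) end up picking the same move. The intuition it illustrates: when one move is clearly best, runs agree almost always; when moves are nearly equal, agreement drops.

```python
import math
import random

def ucb1_search(true_winrates, n_simulations, seed):
    """One-ply UCB1 'search': rollouts for move i are Bernoulli trials
    with probability true_winrates[i]. Returns the most-visited move,
    which is the usual final-move rule in MCTS."""
    rng = random.Random(seed)
    k = len(true_winrates)
    visits = [0] * k
    wins = [0] * k
    # Play each move once so every UCB value is defined.
    for i in range(k):
        visits[i] = 1
        wins[i] = 1 if rng.random() < true_winrates[i] else 0
    for t in range(k, n_simulations):
        # UCB1: observed win rate plus an exploration bonus that
        # shrinks as a move accumulates visits.
        ucb = [wins[i] / visits[i] + math.sqrt(2 * math.log(t) / visits[i])
               for i in range(k)]
        best = max(range(k), key=lambda j: ucb[j])
        visits[best] += 1
        wins[best] += 1 if rng.random() < true_winrates[best] else 0
    return max(range(k), key=lambda j: visits[j])

def agreement_rate(true_winrates, n_simulations, n_runs):
    """Fraction of independent searches that pick the modal move."""
    picks = [ucb1_search(true_winrates, n_simulations, seed)
             for seed in range(n_runs)]
    modal = max(set(picks), key=picks.count)
    return picks.count(modal) / n_runs

if __name__ == "__main__":
    # A 'clear' position: one move is distinctly best.
    print(agreement_rate([0.60, 0.40, 0.30], 2000, 20))
    # A 'close' position: agreement is typically lower.
    print(agreement_rate([0.51, 0.50, 0.49], 2000, 20))
```

Real engines add tree reuse, priors, and wall-clock time limits, all of which shift these numbers, but the even-game / near-equal-move case is exactly where run-to-run disagreement should be largest.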
