Crazy Stone Deep Learning first impressions
Posted: Mon May 16, 2016 6:49 pm
This is the first Go engine I've used, so I don't know if any of the features I am calling out are standard or are unique to Crazy Stone.
It ran perfectly fine on Windows 7 inside Parallels on my MacBook Pro (quad-core 2.8 GHz i7). Phew!
The UI is a weird combination of slick and clunky (e.g., ugly fonts), but it gets the job done.
I use chess engines all the time to analyze my play, as I don't have a teacher, and they are invaluable for catching my mistakes, both tactical and strategic. I was hoping that the new Crazy Stone would provide a similar function for go. (Of course there are dozens of people here happy to give feedback on games, but I don't want to overuse their good will!) Initial impressions are that it very much can. I fed it a recent game of mine, had it analyze for 10 minutes, and it gave me some important insights I hadn't seen in my own review (mostly having to do with sacrificing stones).
I played a couple of quick games at the 5 kyu level. In both cases I had a comfortable opening lead, got lazy, got tricked tactically in a big life and death situation, and lost. I learned plenty from going over the ensuing analyses, so that's great. I did feel that it didn't play a lot like a 5 kyu human - lots and lots of pushing over and over, very little tenuki. This was just two games though. If I have to play people (or Crazy Stone on a higher level) to get more interesting fuseki, that's okay. On the other hand, in one game it "misread" a relatively straightforward life and death issue in a human sort of way, and so did I; in the analysis it was happy to point out what it "missed". (Scare quotes are all because of course it would have gotten it right running at full strength.)
The process of analysis was slightly awkward. I might have missed something in the UI, but I couldn't seem to get it into a mode where I could just make moves on the board and have the computer's opinion constantly update. If I made a branch of the game tree, I would be put back in game-playing mode, where I could ask for a hint but not see the engine's current analysis. Once I made a move, I could see what it thought of it, though.
Anyway, I'm quite happy with it so far. I think that having a strong player always around to point out my worst mistakes within minutes is really going to help tighten the feedback loop of learning. Of course, at my level, I'm not in a position to assess its true strength.