Hi, when I run pachi19x19_X64 it frequently pops up "WARNING: tree memory limit reached, stopping search. Try increasing max_tree_size." My settings are -t _1200 threads=4,max_tree_size=3072. What's wrong?
Playing with Pachi 12, it frequently pops up a warning saying "Tree memory limit reached, stopping search. Try increasing max_tree_size." But my max_tree_size is set to 3G. Don't know what's wrong?
Could you try beta3? You shouldn't get the popup anymore (unless using really long thinking times). There was also a bug with big max_tree_size values, which should be fixed now.
Btw you can view your tree search memory usage with:
Code:
pachi.exe -d4 ...
[15000] best 52.5% xkomi 0.0 | seq D4 E17 K11 Q4 | can w D4(52.5) D17(52.1) D16(50.8) | 13.9Mb
I saw a non-beta version of 12.00 was released. Thanks for this, LemonSqueeze. I quite like having an opponent like Pachi, which isn't that much stronger than me.
Can we expect a Linux release for Pachi 12.00? I was not able to compile it, neither from the release sources nor from the latest git source.
There are binary packages for Ubuntu, see the readme about adding the Pachi PPA.
Thanks for the information.
I tried this on Ubuntu and it did not work. Apparently, Pachi requires an old version of libboost-system (libboost-system1.54.0), which is not available any more on the newest version of Ubuntu. Here I am using Ubuntu 17.10, where libboost-system1.62.0 and libboost-system1.63.0 are available. libgoogle-glog0 is also not available any more (maybe replaced by libgoogle-glog0v5?).
Anyway, this time I was able to compile after cloning the repository (trying to compile the released sources did not work)
I had a very quick try and I ran into a very strange behaviour: the first move proposed by pachi as black is... L17
This happens without dcnn at short thinking times, but can also happen when using dcnn with a long thinking time (as in the log above), though not necessarily all the time.
Another strange behaviour (and a more annoying one) is that, starting from black's second move in self-play, I got this message at each move on stderr:
Code:
*** LOST ON TIME internally! (-14.46, spent 24.46s on last move)
and this happens whatever time settings I use (either on the command line or through a GTP command). Then Pachi seems to reply immediately to the genmove commands, without thinking about the move. This does not happen when I play one of the colours manually. When using it in GoGui, if Pachi takes both colours, it is very obvious something is wrong.
Anyway, good work, Pachi is very promising!
I am the author of GoReviewPartner, a small piece of software aimed at helping you review your games of Go. Give it a try!
I tried this on Ubuntu and it did not work. Apparently, Pachi requires an old version of libboost-system (libboost-system1.54.0), which is not available any more on the newest version of Ubuntu. Here I am using Ubuntu 17.10, where libboost-system1.62.0 and libboost-system1.63.0 are available.
Looks like you tried to install the Ubuntu 16.04 package on 17.10, which can't work. I'm not going to support 17.10, but I could create a package for 18.04 if someone requests it (right now 14.04 and 16.04 are supported). Actually, this might even work for 17.10 =)
Quote:
Anyway, this time I was able to compile after cloning the repository (trying to compile the released sources did not work)
Yes, it's best to clone the repository to build. You could build from the release sources, but you'll have to tweak the Makefile: it looks for the current git version, which is missing there.
Code:
patterns_mm.spat not found, mm patterns disabled.
This is bad, you really need patterns or reading will be atrocious (hence L17). I just noticed I forgot to update the Makefile for mm patterns. You probably used 'make install-data' to install the data files, which is why it's missing. Fixed now; if you update your repo and install again it should work. Or just copy the files directly: 'cp patterns_mm.* /usr/local/share/pachi/' (or wherever you installed it).
Quote:
if Pachi takes both colours, it is very obvious something is wrong.
Yes, for self-play you need two Pachi instances and something like gogui-twogtp; you can't have one play both sides right now. Maybe I should document this better.
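For reference, a minimal self-play setup could look something like this (just a sketch: it assumes gogui-twogtp is installed, and the pachi commands, time settings and game count are placeholders):
Code:
import subprocess

# Sketch of a self-play setup: two Pachi instances driven by gogui-twogtp
# (assumes gogui-twogtp is on the PATH; commands, time settings and game
# count below are placeholders).
cmd = [
    "gogui-twogtp",
    "-black", "pachi -t =5000",
    "-white", "pachi -t =5000",
    "-games", "10",           # number of games to play back to back
    "-auto",                  # start the next game automatically
    "-sgffile", "selfplay",   # prefix for the saved sgf files
]
subprocess.run(cmd, check=True)
gogui-twogtp then writes each game to an sgf file, which makes it easy to look at the self-play games afterwards.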
Thanks for the feedback, that was very useful. Good job with GoReviewPartner btw!
This is bad, you really need patterns or reading will be atrocious (hence L17).
In fact, the readme says something like:
Code:
> Mostly useful when running without dcnn (dcnn can deal with fuseki).
And this is sort of true: when playing against Pachi with quick thinking times (5~10 seconds per move), Pachi-dcnn just plays a normal opening. The strange opening moves happened when I started using long thinking times.
lemonsqueez wrote:
You probably used 'make install-data' to install the data files which is why it's missing.
Haha, no, in fact I just compiled it and ran it directly from the terminal. I was actually interested in checking the stderr output while Pachi is thinking, to see what information can be found there (more on that below).
lemonsqueez wrote:
Yes, for self-play you need two Pachi instances and something like gogui-twogtp; you can't have one play both sides right now. Maybe I should document this better.
Well noted!
lemonsqueez wrote:
Thanks for the feedback, that was very useful. Good job with GoReviewPartner btw!
In fact, I am considering adding support for Pachi in GoReviewPartner, because:
I think it will adequately fill the gap between Gnugo (ok for reviewing DDK games, but too weak for SDK games) and Leela (somewhat too strong for SDK games)
Because it is based on Monte Carlo tree search, with the DCNN used only to suggest moves (not for game evaluation, apparently), Pachi can probably adjust its play according to komi or handicap; that's a big plus
For the same reason as above, Pachi should be able to handle Chinese and Japanese rule sets pretty well (and aga|new_zealand|simplified_ing)
Even if not used for analysis, it can still make a good sparring partner for low-dan or 1~3 kyu players, with the game then reviewed by a second, stronger AI
It can also be used on 9x9 (and I guess 13x13, but I haven't tried yet)
It seems to play very nicely on old computers like mine, without a GPU. And it runs nicely on Linux (that's a big deal for me; my laptop is so slow in Windows...)
There seem to be a lot of interesting extra features inside that can be used for game analysis
In fact, I am considering adding support for Pachi in GoReviewPartner
Cool, looking forward to it. Let me know if you need anything.
pnprog wrote:
In fact, the readme says something like:
Code:
> Mostly useful when running without dcnn (dcnn can deal with fuseki).
And this is sort of true: when playing against Pachi with quick thinking times (5~10 seconds per move), Pachi-dcnn just plays a normal opening. The strange opening moves happened when I started using long thinking times.
There is some confusion here: the readme note is about the opening book, and I was talking about patterns, which are used in tree search. Without them MCTS will be at least 1-2 stones weaker and it'll come up with strange moves. It doesn't completely collapse because dcnn helps, but basically this makes one of Pachi's components much weaker. Except at really low playouts, expect overall strength to go down.
I haven't done really long thinking-time tests in a while, so feel free to experiment. I'd keep the long thinking times for the middle game though; during fuseki Monte Carlo evaluation is too noisy, and it'll likely override dcnn moves with moves that are worse. With --fuseki-time you can have it play fast during fuseki so the dcnn stays on top, and slow afterwards. Something like 'pachi -t 120 --fuseki-time =5000'.
There is some confusion here: the readme note is about the opening book, and I was talking about patterns, which are used in tree search. Without them MCTS will be at least 1-2 stones weaker and it'll come up with strange moves. It doesn't completely collapse because dcnn helps, but basically this makes one of Pachi's components much weaker. Except at really low playouts, expect overall strength to go down.
You were right there; I was confusing the joseki dictionary and the pattern database. I have it all set up right now.
lemonsqueez wrote:
I haven't done really long thinking-time tests in a while, so feel free to experiment. I'd keep the long thinking times for the middle game though; during fuseki Monte Carlo evaluation is too noisy, and it'll likely override dcnn moves with moves that are worse. With --fuseki-time you can have it play fast during fuseki so the dcnn stays on top, and slow afterwards. Something like 'pachi -t 120 --fuseki-time =5000'.
I had a try and it seems to work just fine now. By the way, I am curious how Pachi decides when the fuseki is over.
I think I have enough information to start, but just to double check with you:
Code:
[280000] best 49.0% xkomi 7.5 | seq Q16 L17 D4 C11 | can b Q16(49.0) Q17(48.5) K10(49.0) P16(49.0)
280000 is the total number of playouts/simulations reached up to that point?
49.0% is the winrate associated with the best move found up to that point?
seq Q16 L17 D4 C11 indicates the best move so far is Q16 (the one associated with the 49.0% from above), and L17 D4 C11 the expected follow-up? The number of moves in the follow-up sequence seems to always be 3, regardless of how much thinking time I put in. Is this a fixed parameter?
Q16(49.0) Q17(48.5) K10(49.0) P16(49.0) are the other moves considered by Pachi at this point? (alternative moves)
They do not appear ordered from best to worst (Q17:48.5 < K10:49.0), so is it safe to order them from best to worst using the % value, or is that order somehow important and should be preserved? (lower % but more thoroughly tested, with more playouts)
The number of alternative moves seems to always be 3, regardless of how much thinking time I put in. Is this a fixed parameter?
Is there a way to get a follow-up sequence for those moves as well? I noticed that for some of them, a follow-up sequence can sometimes be found in the other log lines above, but it might not be the most probable follow-up, I guess.
Thanks!
I had a try and it seems to work just fine now. By the way, I am curious how Pachi decides when the fuseki is over.
Nothing fancy right now: fuseki is fixed at the first 20 moves for 19x19, less for smaller boards.
Quote:
Code:
[280000] best 49.0% xkomi 7.5 | seq Q16 L17 D4 C11 | can b Q16(49.0) Q17(48.5) K10(49.0) P16(49.0)
280000 is the total number of playouts/simulations reached up to that point?
49.0% is the winrate associated with the best move found up to that point?
seq Q16 L17 D4 C11 indicates the best move so far is Q16 (the one associated with the 49.0% from above), and L17 D4 C11 the expected follow-up?
Q16(49.0) Q17(48.5) K10(49.0) P16(49.0) are the other moves considered by Pachi at this point? (alternative moves)
They do not appear ordered from best to worst (Q17:48.5 < K10:49.0), so is it safe to order them from best to worst using the % value, or is that order somehow important and should be preserved? (lower % but more thoroughly tested, with more playouts)
Yes, that's correct. The candidates are ordered from best to worst, but for MCTS "best" means most visited; the winrates are more indicative and can be out of order.
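To make that concrete, here is a rough sketch of how such a line could be picked apart (the format is copied from the example above and may differ between versions):
Code:
import re

# Rough sketch: pull the playouts, best sequence and candidate moves out of
# one of Pachi's progress lines (format copied from the example above).
line = ("[280000] best 49.0% xkomi 7.5 | seq Q16 L17 D4 C11 "
        "| can b Q16(49.0) Q17(48.5) K10(49.0) P16(49.0)")

playouts = int(re.match(r"\[(\d+)\]", line).group(1))
sequence = line.split("| seq ")[1].split(" |")[0].split()
candidates = re.findall(r"([A-T]\d+)\(([\d.]+)\)", line.split("| can ")[1])

print(playouts)    # 280000
print(sequence)    # ['Q16', 'L17', 'D4', 'C11'] -- best move plus expected follow-up
print(candidates)  # Pachi's order: most visited first
# Re-ordering by winrate instead (can differ from the visit-based order):
print(sorted(candidates, key=lambda c: float(c[1]), reverse=True))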
Quote:
The number of moves in the follow-up sequence seems to always be 3, regardless of how much thinking time I put in. Is this a fixed parameter?
The number of alternative moves seems to always be 3, regardless of how much thinking time I put in. Is this a fixed parameter?
Is there a way to get a follow-up sequence for those moves as well? I noticed that for some of them, a follow-up sequence can sometimes be found in the other log lines above, but it might not be the most probable follow-up, I guess.
Yes, these lines are more of a summary; they only show the 4 best candidates and follow-up moves, regardless of how deep the tree is. You can get the full picture by looking at the whole tree:
That might be easier to use. You also get the best sequence for each candidate. The number of candidates is fixed at 4, but this could be increased; same for sequence length. There's also 'reporting=jsonbig' which has even more stuff.
The tree view offers a lot of information, but it's a little overwhelming. GRP would need a "tree" view of the game to take advantage of this, and I am trying to avoid that (I am a proponent of the idea that less information is easier to navigate and more useful).
The json report is very handy; I guess I will go for this.
4 alternatives + 4 moves per alternative is probably enough. Fan Hui was quoted as saying that if you can read 3 moves deep, you are already dan level.
How come they don't share the same winning percentage?
I was under the impression that Pachi was performing a sort of minimax algorithm, so the winrate of the complete "path" would be deduced from the deepest leaf (following a "minimax" propagation from the deepest leaf back to the first node), or something like that (although here, I understand that Pachi goes deeper than 4 moves, so we don't know the value of that last leaf).
Said differently, what is the meaning of those values?
How come they don't share the same winning percentage?
I was under the impression that Pachi was performing a sort of minimax algorithm, so the winrate of the complete "path" would be deduced from the deepest leaf (following a "minimax" propagation from the deepest leaf back to the first node), or something like that (although here, I understand that Pachi goes deeper than 4 moves, so we don't know the value of that last leaf).
Well, that makes sense: for each node the percentage is the average winrate over all follow-up moves explored. Within this subtree MCTS explored other moves besides this sequence, so the winrates are going to differ depending on what was explored: better moves pull it up, worse ones pull it down.
To get an idea of the situation at the end of the sequence you would look at the winrate of the last node (provided there are enough playouts for this node or it won't be very significant).
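A quick toy illustration of the averaging (the visit counts and winrates below are made up, just to show why a node's number differs from its children's):
Code:
# Toy example: a node's winrate is the visit-weighted average over the moves
# explored below it, not the value of one particular continuation.
children = {              # made-up numbers
    "D4":  (600, 0.55),   # (visits, winrate)
    "C3":  (300, 0.40),
    "Q17": (100, 0.45),
}

visits = sum(v for v, _ in children.values())
winrate = sum(v * w for v, w in children.values()) / visits

print(f"{winrate:.1%}")   # 49.5% -- differs from every child's own winrate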
If you want to use this in GoReviewPartner, try json reporting:
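Something along these lines could be a starting point (a sketch only: it assumes the option is spelled 'reporting=json', in line with the 'reporting=jsonbig' variant mentioned above, and that each report arrives as one json object per line on stderr):
Code:
import json
import subprocess

# Sketch: run Pachi as a GTP subprocess with json reporting and watch its
# stderr. The option name and the exact output format are assumptions here,
# so check what your build actually prints before relying on it.
proc = subprocess.Popen(
    ["pachi", "-t", "=5000", "reporting=json"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    text=True,
)
proc.stdin.write("genmove b\nquit\n")
proc.stdin.flush()

for raw in proc.stderr:
    raw = raw.strip()
    if raw.startswith("{"):       # keep only lines that look like json reports
        report = json.loads(raw)
        print(report)             # inspect the structure before building on it
proc.wait()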
So I implemented support for Pachi in GoReviewPartner using json reporting; so far so good. I am very satisfied with the result.
lemonsqueez wrote:
The number of candidates is fixed at 4, but this could be increased.
I searched in the code and modified the values for the number of variations and the depth of each variation. But the resulting json formatting was incorrect (some commas were missing between elements of arrays). I was able to modify the code to fix it, compile it and test it on my computer.
Next I sent a pull request on the project's GitHub and got a ton of issues with the Travis CI build. I don't know much about C++, but I am fairly confident this is unrelated to the code changes I made:
Code:
The command "if [ "$TRAVIS_OS_NAME" = linux ] && [ "$DCNN" = 1 ]; then sudo apt-get install libcaffe-cpu-dev -y ; fi" failed and exited with 100 during .
I don't know what to do next to get that pull request accepted. Could you help me check?