Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Fri May 12, 2017 8:50 am
by pookpooi
In this news
http://sports.sina.com.cn/go/2017-05-12 ... 4772.shtml
Meng Tailing 6p suggests that Ke Jie stop looking for AI weaknesses and play in his usual style.
This confirms that Ke Jie has been training under an AI-specific program for a while (his losing streak against FineArt may have been one of those experiments), and it might have affected his recent results against human players.
Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Fri May 12, 2017 2:11 pm
by djhbrown
Meng is wrong; Ke Jie is right. You can't beat her at her own game; you have to probe beneath the surface.
The dog does have weaknesses, although finding them is harder than for most opponents. Based on how she works, I'm guessing one of them is a lack of positional judgment, compensated for by extraordinary reading ability.
We will see

Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Sat May 13, 2017 6:38 am
by Waylon
djhbrown wrote:
The dog does have weaknesses, although finding them is harder than for most opponents. Based on how she works, I'm guessing one of them is a lack of positional judgment, compensated for by extraordinary reading ability.
We will see

From my experience with Zen, Crazy Stone and Leela, I'm guessing just the opposite: the positional judgment is superb, while the reading ability is not yet perfect.

Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Sat May 13, 2017 6:42 am
by djhbrown

That's the difference between us: an objective examination of the technology versus a subjective impression gained from personal experience.
Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Sat May 13, 2017 1:46 pm
by dfan
Waylon wrote:From my experience with Zen, Crazy Stone and Leela, I'm guessing just the opposite: the positional judgment is superb, while the reading ability is not yet perfect.

The consensus of the 9ps that lost to Master seemed to be that its positional judgment was qualitatively better than theirs.
Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Sat May 13, 2017 2:53 pm
by Bonobo
dfan wrote:The consensus of the 9ps that lost to Master seemed to be that its positional judgment was qualitatively better than theirs.
Methinks this would fall under …
djhbrown wrote:a subjective impression gained from personal experience
Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Tue May 16, 2017 10:03 am
by pookpooi
[attached image: AlphaGo rating graph]
Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Tue May 16, 2017 1:14 pm
by alphaville
I agree with her. Ke Jie has a 30% chance to win if he receives a 3-stone handicap.
Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Tue May 16, 2017 2:46 pm
by yoyoma
pookpooi what is the source of the AlphaGo rating graph you posted?
Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Wed May 17, 2017 4:23 am
by djhbrown
alphaville wrote:Ke Jie has 30% chance to win
He has a 100% chance to win the only thing that counts: the appearance money, because he's already got it. Which is rather unfair to Poor Porky Pie PRers who have to feed parents and brother on a few clickbait crumbs from the rich man's table, sourced-up by a dash of Auntie Dee's makeitupasyougoalong Homemade graphic Catsup.
Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Fri May 19, 2017 4:51 am
by pookpooi
Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Sat May 20, 2017 7:46 am
by pookpooi
Rumor around Weibo is that not only will the new AlphaGo learn without human game records, but it also doesn't use Monte Carlo Tree Search.
http://www.weibo.com/p/23041873040b820102wwci
My opinion: is this even possible? Please take this with a grain of salt. But there's information that DeepMind has dedicated the whole day (the 24th) to a conference. (Is someone roaming through DeepMind's presentation slides without permission? That's very bad.)
Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Sat May 20, 2017 10:05 am
by alphaville
pookpooi wrote:Rumor around Weibo is that not only will the new AlphaGo learn without human game records, but it also doesn't use Monte Carlo Tree Search.
http://www.weibo.com/p/23041873040b820102wwci
My opinion: is this even possible? Please take this with a grain of salt. But there's information that DeepMind has dedicated the whole day (the 24th) to a conference. (Is someone roaming through DeepMind's presentation slides without permission? That's very bad.)
Any chance you can post the weibo info without the need for us to login there?
The Nature paper stated that the neural-network-only (no MCTS) version of AlphaGo was already stronger than all other computer Go programs at the time of the Fan Hui match, but far weaker than the MCTS version.
At first sight it sounds impossible that a pure static-analysis version, without MCTS, could be stronger than a pro; on the other hand, with a very deep neural network architecture, I guess it may be possible, since some of the layers of the network would effectively do some "reading ahead". Just speculating here.
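To make the "no search at all" idea concrete, here is a minimal Python sketch of how a policy-network-only player would choose a move: it simply plays the legal move the network ranks highest, with no lookahead of any kind. The inputs here (a flat probability list and a legality mask) are made-up stand-ins for a real network's output, not AlphaGo's actual interface.

```python
def select_move_no_search(policy_probs, legal_mask):
    """Pick the move a policy network ranks highest -- no search at all.

    policy_probs: one probability per move (e.g. 361 board points + pass);
    legal_mask: 1.0 for legal moves, 0.0 for illegal ones.
    Both inputs are hypothetical stand-ins for a real network's output.
    """
    # zero out illegal moves, then take the index of the best remaining one
    scores = [p * m for p, m in zip(policy_probs, legal_mask)]
    return max(range(len(scores)), key=scores.__getitem__)
```

Any "reading" such a player does has to be baked into the network's layers, which is exactly the speculation above.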
Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Sat May 20, 2017 11:55 am
by pookpooi
For me it doesn't require signing up to access Weibo; it's just that the posts are in Chinese, but here they are in a spoiler.
Re: Predict AlphaGo on Future of Go Summit in Wuzhen, China
Posted: Sat May 20, 2017 1:25 pm
by Uberdude
It's important to note that saying this new version of AlphaGo doesn't do Monte Carlo Tree Search is not the same as saying it doesn't do tree search, though I don't know if that technical precision was lost in translation. You could have a tree-search version which constructs a game tree using a policy network to suggest moves and a value network to evaluate nodes in that tree, without doing Monte Carlo rollouts to the end of the game. In their published work so far, combining the rollouts with the value network gives the best results, but as the networks keep getting better it wouldn't surprise me if tree+policy+value+rollouts was now 15p strength and tree+policy+value "only" 13p, and therefore good enough for the upcoming match.
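As a rough illustration of the tree+policy+value combination described above (with no rollouts), here is a toy negamax-style search in Python: the policy network prunes the tree by suggesting moves, and leaves are scored by the value network alone instead of playing games out to the end. `policy_net`, `value_net`, `legal_moves` and `play` are all hypothetical stand-ins; the real AlphaGo uses a far more sophisticated search, so this is only a sketch of the principle.

```python
def search(position, depth, policy_net, value_net, legal_moves, play, width=3):
    """Toy tree search guided by policy and value networks, with NO rollouts.

    Leaves are scored by value_net alone. policy_net(position, move) gives a
    prior for move ordering/pruning; play(position, move) returns the child
    position. Returns (best_score, best_move) from the side to move's view.
    All four callables are hypothetical stand-ins for real components.
    """
    if depth == 0:
        return value_net(position), None
    moves = legal_moves(position)
    if not moves:
        return value_net(position), None
    # the policy network prunes: expand only the `width` most promising moves
    moves = sorted(moves, key=lambda m: policy_net(position, m), reverse=True)[:width]
    best_score, best_move = float("-inf"), None
    for m in moves:
        child = play(position, m)
        # negamax: the child's score is from the opponent's perspective
        score, _ = search(child, depth - 1, policy_net, value_net,
                          legal_moves, play, width)
        score = -score
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move
```

Swapping the rollouts for a value-network leaf evaluation is exactly the trade-off being discussed: you lose whatever the rollouts would have told you, but the search gets cheaper and leans entirely on the network's judgment.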