
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Sun Jan 31, 2016 5:31 am
by mumps
Marcel Grünauer wrote:There are videos of a Korean press conference where the AlphaGo team also answers questions.

Briefing
https://www.youtube.com/watch?v=yR017hmUSC4

Q & A
https://www.youtube.com/watch?v=_r3yF4lV0wk

There they mention that they are going to sponsor tournaments and try to make the game more popular, so that's very nice to hear.

By the way, one of the questions was whether they have any plans to register AlphaGo as a professional player in Korea; they don't have such a plan, but it's an interesting idea.
They'll be providing some sponsorship for the London Open 2016.

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Mon Feb 01, 2016 11:34 am
by Shawn Ligocki
mika wrote:
Mike Novack wrote:So no, CAN'T do a "dumbed down version" for small machines. This is a very big neural net, lot of nodes, lots of "signals" going between nodes, takes a very powerful machine to get all that done within the allowed real time.
Are you absolutely certain about that? Since here somebody is arguing that some ANNs can be compressed to save space without sacrificing accuracy. See for example http://arxiv.org/abs/1510.00149.
Well, the AlphaGo team did actually train a "compressed" neural net for the move prediction policy engine. It sacrificed a significant amount of accuracy (57% -> 24%), but sped the computation up by over 1000 times (3ms -> 2µs). Enough to be worth it for evaluating the Monte Carlo rollout.
AlphaGo Paper wrote: We trained a 13-layer policy network, which we call the SL policy network, from 30 million positions from the KGS Go Server. The network predicted expert moves with an accuracy of 57.0% on a held-out test set, using all input features, and 55.7% using only raw board position and move history as inputs, compared to the state-of-the-art from other research groups of 44.4% at date of submission [24] (full results in Extended Data Table 3). Small improvements in accuracy led to large improvements in playing strength (Figure 2a); larger networks achieve better accuracy but are slower to evaluate during search. We also trained a faster but less accurate rollout policy pπ(a|s), using a linear softmax of small pattern features (see Extended Data Table 4) with weights π; this achieved an accuracy of 24.2%, using just 2 µs to select an action, rather than 3 ms for the policy network.
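The "linear softmax of small pattern features" is cheap precisely because evaluating it is just a sparse dot product followed by a softmax, rather than a deep network forward pass. A toy sketch of that idea (the feature ids and weights below are made up for illustration, not from the paper; moves with no matching features simply get probability zero in this sketch):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def rollout_policy(active_features, weights, n_moves):
    """Linear softmax over sparse binary pattern features.

    active_features: dict move_index -> list of feature ids matching there
                     (e.g. small local patterns around the candidate move)
    weights: learned per-feature weights (the paper's pi)
    """
    logits = [float("-inf")] * n_moves   # math.exp(-inf) == 0.0
    for move, feats in active_features.items():
        logits[move] = sum(weights[f] for f in feats)  # sparse dot product
    return softmax(logits)

# Toy example: 5 candidate moves, 8 hypothetical pattern features.
weights = [0.4, -0.2, 0.1, 0.9, 0.0, -0.5, 0.3, 0.2]
feats = {0: [1, 3], 2: [0], 4: [2, 5, 7]}
probs = rollout_policy(feats, weights, n_moves=5)
```

Microseconds per move matters because the rollout policy is called hundreds of times per simulated game, across thousands of simulations.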

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Mon Feb 01, 2016 6:31 pm
by pasky
That's not a compressed neural network but just a frequency-based specialized pattern classifier.

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Mon Feb 01, 2016 7:05 pm
by Shawn Ligocki
pasky wrote:That's not a compressed neural network but just a frequency-based specialized pattern classifier.
Ah, my mistake. I see the paper you referenced is specifically referring to compressing an existing neural network rather than training a smaller network.

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Thu Feb 11, 2016 6:51 am
by phillip1882
Incredible, I didn't think it would happen for another 5 years: an 8-9 dan computer Go program.

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Sun Feb 14, 2016 9:27 pm
by yoyoma
Announcement of announcement! :cool: There will be a press conference in Seoul Feb 22 5pm about the AlphaGo vs Lee Sedol match. They will announce the match location, time controls, and other details.

Source (Korean): http://www.cyberoro.com/news/news_view. ... =1&cmt_n=0

BTW the dates of the matches were announced earlier: March 9, 10, 12, 13, and 15.

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Mon Feb 15, 2016 1:20 am
by pookpooi
More interview coverage from GeekWire.com:

“This really is our Deep Blue moment,” Demis Hassabis, Google DeepMind’s president of engineering, said this weekend at the American Association for the Advancement of Science’s annual meeting in Washington.

Hassabis said most Go players are giving Sedol the edge over AlphaGo. “They give us a less than 5 percent chance of winning … but what they don’t realize is how much our system has improved,” he said. “It’s improving while I’m talking with you.”

For Hassabis, the AlphaGo project is about much more than beating one of the world’s best Go players. The principles that are being put to work in the program can be applied to other AI challenges as well, ranging from programming self-driving cars to creating more humanlike virtual assistants and improving the diagnoses for human diseases.

“We think AI is solving a meta-problem for all these problems,” Hassabis said.

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Mon Feb 15, 2016 4:46 am
by Krama
pookpooi wrote:More interview coverage from GeekWire.com:

“This really is our Deep Blue moment,” Demis Hassabis, Google DeepMind’s president of engineering, said this weekend at the American Association for the Advancement of Science’s annual meeting in Washington.

Hassabis said most Go players are giving Sedol the edge over AlphaGo. “They give us a less than 5 percent chance of winning … but what they don’t realize is how much our system has improved,” he said. “It’s improving while I’m talking with you.”

For Hassabis, the AlphaGo project is about much more than beating one of the world’s best Go players. The principles that are being put to work in the program can be applied to other AI challenges as well, ranging from programming self-driving cars to creating more humanlike virtual assistants and improving the diagnoses for human diseases.

“We think AI is solving a meta-problem for all these problems,” Hassabis said.
Skynet.

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Mon Feb 15, 2016 4:47 am
by Uberdude
In case you missed it, Fan Hui won all his games at the 1st European Professional Go Championship last weekend:
http://senseis.xmp.net/?EuropeanProfess ... ampionship

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Mon Feb 15, 2016 7:54 am
by pookpooi
Even more interviews, which I found via a Twitter account that attended the AAAS meeting:

Pro Go players give champ Lee Sedol 97% odds against #AlphaGo (AI). @demishassabis: “Our internal tests are telling us something different”
He also made headlines in the Observer Tech Monthly.

According to his schedule, he'll give a talk at AAAI this Tuesday and a lecture at Oxford University next Friday (no, I'm not his secretary nor his stalker).

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Mon Feb 15, 2016 9:16 am
by mumps
The paper in Nature only identifies one specific Go thing they code into it, apart from the obvious rules to prevent illegal moves: ladder. This may be to overcome the depth of 20 moves they use in their other analysis.
pookpooi wrote:More interview coverage from GeekWire.com:

“This really is our Deep Blue moment,” Demis Hassabis, Google DeepMind’s president of engineering, said this weekend at the American Association for the Advancement of Science’s annual meeting in Washington.

Hassabis said most Go players are giving Sedol the edge over AlphaGo. “They give us a less than 5 percent chance of winning … but what they don’t realize is how much our system has improved,” he said. “It’s improving while I’m talking with you.”
The one thing they always say is that they can work out that it's improving against itself, and perhaps against other bots they're using for calibration purposes.

However, they can't be sure whether it's improving against humans, which is one reason they had the match with Fan Hui.

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Mon Feb 15, 2016 9:35 am
by uPWarrior
mumps wrote:The paper in Nature only identifies one specific Go thing they code into it, apart from the obvious rules to prevent illegal moves: ladder. This may be to overcome the depth of 20 moves they use in their other analysis.
I don't remember anything about a maximum depth limit when I read the paper, and I couldn't find it when I skimmed it just now. Could you point me to that passage? I always assumed that the depth was dynamic, but that could have been my subconscious adding a bias to my initial read.

EDIT: I just skimmed it again and I think this passage in the "Searching with policy and value networks" section makes it clear that there is no depth limit:
The tree is traversed by simulation (that is, descending the tree in complete games without backup), starting from the root state.
This is MCTS after all.
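For reference, the basic MCTS loop the paper builds on looks like this. The sketch below is a generic UCT-style search on a trivial Nim-like game (take 1-3 stones, last stone wins), not AlphaGo's actual search, which adds policy-network priors and value-network evaluations; note the rollout phase plays to the end of the game with no fixed depth limit, which is the point being made above.

```python
import math
import random

# Toy game: players alternately take 1-3 stones; whoever takes the last wins.
# Optimal play always leaves the opponent a multiple of 4.

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

class Node:
    def __init__(self, pile, to_move):
        self.pile = pile
        self.to_move = to_move      # 0 or 1, the player about to move
        self.children = {}          # move -> child Node
        self.visits = 0
        self.wins = 0               # wins for the player who moved INTO this node

def uct_child(node, c=1.4):
    """Select the child maximizing the UCT exploration/exploitation score."""
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def simulate(root):
    """One MCTS iteration: select, expand, roll out a complete game, back up."""
    node, path = root, [root]
    # Selection: descend while the node is non-terminal and fully expanded.
    while node.pile > 0 and len(node.children) == len(legal_moves(node.pile)):
        node = uct_child(node)
        path.append(node)
    # Expansion: add one untried child.
    if node.pile > 0:
        m = random.choice([m for m in legal_moves(node.pile)
                           if m not in node.children])
        child = Node(node.pile - m, 1 - node.to_move)
        node.children[m] = child
        node = child
        path.append(node)
    # Rollout: play uniformly random moves to a terminal state (a full game).
    pile, to_move = node.pile, node.to_move
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        to_move = 1 - to_move
    winner = 1 - to_move            # the player who just took the last stone
    # Backup: update statistics along the traversed path.
    for n in path:
        n.visits += 1
        if winner != n.to_move:     # a win for the player who moved into n
            n.wins += 1

def best_move(pile, iterations=5000):
    random.seed(0)                  # deterministic for the example
    root = Node(pile, to_move=0)
    for _ in range(iterations):
        simulate(root)
    return max(root.children, key=lambda m: root.children[m].visits)
```

With enough iterations the most-visited root child converges to the optimal move (e.g. from a pile of 5, take 1 to leave 4).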

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Mon Feb 15, 2016 10:10 am
by Uberdude
mumps wrote:The one thing they always say is that they can work out that it's improving against itself, and perhaps other bots they're using for calibration purposes.

However, they can't be sure whether it's improving against humans, which is one reason they had the match with Fan Hui.
I would be surprised if DeepMind didn't test AlphaGo anonymously against Tygem 9ds before the match with Lee Sedol. Of course some pro playing on Tygem is not as serious as a real match, but if they are beating top pros on Tygem (see https://docs.google.com/spreadsheets/d/ ... I4YWEHdYBs for accounts) then they could reasonably feel more confident against Lee. It's worth noting that current #1 Ke Jie played thousands of games on Tygem to get stronger.

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Mon Feb 15, 2016 10:42 am
by pookpooi
Uberdude wrote: I would be surprised if DeepMind didn't test AlphaGo anonymously against Tygem 9ds before the match with Lee Sedol. Of course some pro playing on Tygem is not as serious as a real match, but if they are beating top pros on Tygem (see https://docs.google.com/spreadsheets/d/ ... I4YWEHdYBs for accounts) then they could reasonably feel more confident against Lee. It's worth noting that current #1 Ke Jie played thousands of games on Tygem to get stronger.
I don't think they would do that, because users there are actively hunting for any account that could be AlphaGo on Tygem right now; they even mistook Aja Huang's account (a 6d programmer on the DeepMind team) for AlphaGo because it won against some pros.
I think they have contracts with some pros to work with the DeepMind team internally, so they can do crazy things like adding handicap stones on the pro's side.

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Posted: Tue Feb 16, 2016 3:36 am
by mumps
uPWarrior wrote:
mumps wrote:The paper in Nature only identifies one specific Go thing they code into it, apart from the obvious rules to prevent illegal moves: ladder. This may be to overcome the depth of 20 moves they use in their other analysis.
I don't remember anything about a maximum depth limit when I read the paper, and I couldn't find it when I skimmed it just now. Could you point me to that passage? I always assumed that the depth was dynamic, but that could have been my subconscious adding a bias to my initial read.

EDIT: I just skimmed it again and I think this passage in the "Searching with policy and value networks" section makes it clear that there is no depth limit:
The tree is traversed by simulation (that is, descending the tree in complete games without backup), starting from the root state.
This is MCTS after all.
Hmm, I can't find in the paper where I got that impression from. They do talk about searches sometimes being full and sometimes truncated though.
By contrast, AlphaGo’s use of value functions is based on truncated Monte Carlo search algorithms, which terminate rollouts before the end of the game and use a value function in place of the terminal reward.
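Concretely, the paper evaluates a search leaf by mixing the value network's estimate with the terminal reward of one fast rollout from that leaf, V(s) = (1 − λ)·v(s) + λ·z, with λ = 0.5 in AlphaGo. A minimal sketch of that combination (the evaluator functions here are placeholders, not the real networks):

```python
def leaf_value(state, value_net, rollout, lam=0.5):
    """Mix a learned value estimate with a fast-rollout outcome.

    value_net(state) -> estimated game result in [-1, 1]
    rollout(state)   -> terminal reward of one simulated game: +1 or -1
    lam is the mixing weight (the paper's lambda; 0.5 in AlphaGo).
    """
    return (1 - lam) * value_net(state) + lam * rollout(state)

# Toy example with placeholder evaluators:
v = leaf_value(None, value_net=lambda s: 0.6, rollout=lambda s: -1)
# 0.5 * 0.6 + 0.5 * (-1) = -0.2
```

So a rollout can be "truncated" in the sense that the tree descent stops at the leaf, but the leaf still gets graded partly by a full random playout and partly by the value network.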