They'll be providing some sponsorship for the London Open 2016.
Marcel Grünauer wrote:There are videos of a Korean press conference where the AlphaGo team also answers questions.
Briefing
https://www.youtube.com/watch?v=yR017hmUSC4
Q & A
https://www.youtube.com/watch?v=_r3yF4lV0wk
There they mention that they are going to sponsor tournaments and try to make the game more popular, so that's very nice to hear.
By the way, one of the questions was whether they have any plans to register AlphaGo as a professional player in Korea; they don't have such a plan, but it's an interesting idea.
Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-0
-
mumps
- Dies with sente
- Posts: 112
- Joined: Thu Aug 12, 2010 1:11 am
- GD Posts: 0
- Has thanked: 9 times
- Been thanked: 23 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
- Shawn Ligocki
- Dies with sente
- Posts: 109
- Joined: Sat Dec 28, 2013 12:10 am
- Rank: AGA 1k
- GD Posts: 0
- KGS: sligocki
- Online playing schedule: Ad hoc
- Location: Boston
- Has thanked: 159 times
- Been thanked: 19 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Well, the AlphaGo team did actually train a "compressed" neural net for the move prediction policy engine. It sacrificed a significant amount of accuracy (57% -> 24%), but sped the computation up by over 1000 times (3 ms -> 2 µs). Enough to be worth it for evaluating the Monte Carlo rollouts.
mika wrote:Are you absolutely certain about that? Since here somebody is arguing that some ANNs can be compressed to save space without sacrificing accuracy. See for example http://arxiv.org/abs/1510.00149.
Mike Novack wrote:So no, CAN'T do a "dumbed down version" for small machines. This is a very big neural net, lots of nodes, lots of "signals" going between nodes, takes a very powerful machine to get all that done within the allowed real time.
AlphaGo Paper wrote: We trained a 13 layer policy network, which we call the SL policy network, from 30 million positions from the KGS Go Server. The network predicted expert moves with an accuracy of 57.0% on a held out test set, using all input features, and 55.7% using only raw board position and move history as inputs, compared to the state-of-the-art from other research groups of 44.4% at date of submission 24 (full results in Extended Data Table 3). Small improvements in accuracy led to large improvements in playing strength (Figure 2,a); larger networks achieve better accuracy but are slower to evaluate during search. We also trained a faster but less accurate rollout policy pπ(a|s), using a linear softmax of small pattern features (see Extended Data Table 4) with weights π; this achieved an accuracy of 24.2%, using just 2 µs to select an action, rather than 3 ms for the policy network.
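To make the distinction concrete: the fast rollout policy the paper describes is just a linear softmax over small local pattern features, not a deep network. Here is a minimal sketch of that idea; the feature names, weights, and toy moves below are entirely invented for illustration, and only the softmax-over-legal-moves structure follows the paper.

```python
import math
import random

def softmax_policy(legal_moves, feature_fn, weights):
    """Pick a move with probability proportional to exp(w . features(move))."""
    scores = []
    for move in legal_moves:
        s = sum(weights.get(f, 0.0) for f in feature_fn(move))
        scores.append(math.exp(s))
    # Sample a move from the softmax distribution.
    total = sum(scores)
    r = random.random() * total
    for move, s in zip(legal_moves, scores):
        r -= s
        if r <= 0:
            return move
    return legal_moves[-1]

# Toy usage with made-up "pattern features" (not from the paper):
def toy_features(move):
    feats = []
    if move == "atari_capture":
        feats.append("capture")
    if move in ("atari_capture", "extend"):
        feats.append("contact")
    return feats

weights = {"capture": 2.0, "contact": 0.5}
move = softmax_policy(["atari_capture", "extend", "tenuki"], toy_features, weights)
```

Evaluating a handful of table lookups and one softmax is why this kind of classifier can run in microseconds, where a deep convolutional network needs milliseconds.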
-
pasky
- Dies in gote
- Posts: 43
- Joined: Wed Apr 21, 2010 6:49 am
- Has thanked: 4 times
- Been thanked: 22 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
That's not a compressed neural network but just a frequency-based specialized pattern classifier.
Go programmer and researcher: http://pasky.or.cz/~pasky/go/
EGF 1921, KGS ~1d and getting weaker
- Shawn Ligocki
- Dies with sente
- Posts: 109
- Joined: Sat Dec 28, 2013 12:10 am
- Rank: AGA 1k
- GD Posts: 0
- KGS: sligocki
- Online playing schedule: Ad hoc
- Location: Boston
- Has thanked: 159 times
- Been thanked: 19 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Ah, my mistake. I see the paper you referenced is specifically referring to compressing an existing neural network rather than training a smaller network.
pasky wrote:That's not a compressed neural network but just a frequency-based specialized pattern classifier.
-
phillip1882
- Lives in gote
- Posts: 323
- Joined: Sat Jan 08, 2011 7:31 am
- Rank: 6k
- GD Posts: 25
- OGS: phillip1882
- Has thanked: 4 times
- Been thanked: 39 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Incredible, I didn't think it would happen for another 5 years: an 8-9 dan computer Go program.
-
yoyoma
- Lives in gote
- Posts: 653
- Joined: Mon Apr 19, 2010 8:45 pm
- GD Posts: 0
- Location: Austin, Texas, USA
- Has thanked: 54 times
- Been thanked: 213 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Announcement of announcement!
There will be a press conference in Seoul Feb 22 5pm about the AlphaGo vs Lee Sedol match. They will announce the match location, time controls, and other details.
Source (Korean): http://www.cyberoro.com/news/news_view. ... =1&cmt_n=0
BTW the dates of the matches were announced earlier: March 9, 10, 12, 13, and 15.
-
pookpooi
- Lives in sente
- Posts: 727
- Joined: Sat Aug 21, 2010 12:26 pm
- GD Posts: 10
- Has thanked: 44 times
- Been thanked: 218 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
More interview from GeekWire.com
“This really is our Deep Blue moment,” Demis Hassabis, Google DeepMind’s president of engineering, said this weekend at the American Association for the Advancement of Science’s annual meeting in Washington.
Hassabis said most Go players are giving Sedol the edge over AlphaGo. “They give us a less than 5 percent chance of winning … but what they don’t realize is how much our system has improved,” he said. “It’s improving while I’m talking with you.”
For Hassabis, the AlphaGo project is about much more than beating one of the world’s best Go players. The principles that are being put to work in the program can be applied to other AI challenges as well, ranging from programming self-driving cars to creating more humanlike virtual assistants and improving the diagnoses for human diseases.
“We think AI is solving a meta-problem for all these problems,” Hassabis said.
-
Krama
- Lives in gote
- Posts: 436
- Joined: Mon Jan 06, 2014 3:46 am
- Rank: KGS 5 kyu
- GD Posts: 0
- Has thanked: 1 time
- Been thanked: 38 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Skynet.
pookpooi wrote:More interview from GeekWire.com
“This really is our Deep Blue moment,” Demis Hassabis, Google DeepMind’s president of engineering, said this weekend at the American Association for the Advancement of Science’s annual meeting in Washington.
Hassabis said most Go players are giving Sedol the edge over AlphaGo. “They give us a less than 5 percent chance of winning … but what they don’t realize is how much our system has improved,” he said. “It’s improving while I’m talking with you.”
For Hassabis, the AlphaGo project is about much more than beating one of the world’s best Go players. The principles that are being put to work in the program can be applied to other AI challenges as well, ranging from programming self-driving cars to creating more humanlike virtual assistants and improving the diagnoses for human diseases.
“We think AI is solving a meta-problem for all these problems,” Hassabis said.
-
Uberdude
- Judan
- Posts: 6727
- Joined: Thu Nov 24, 2011 11:35 am
- Rank: UK 4 dan
- GD Posts: 0
- KGS: Uberdude 4d
- OGS: Uberdude 7d
- Location: Cambridge, UK
- Has thanked: 436 times
- Been thanked: 3718 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
In case you missed it, Fan Hui won all his games at the 1st European Professional Go Championship last weekend:
http://senseis.xmp.net/?EuropeanProfess ... ampionship
-
pookpooi
- Lives in sente
- Posts: 727
- Joined: Sat Aug 21, 2010 12:26 pm
- GD Posts: 10
- Has thanked: 44 times
- Been thanked: 218 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Even more interviews, which I found on a Twitter account that attended the AAAS meeting:
Pro Go players give champ Lee Sedol 97% odds against #AlphaGo (AI). @demishassabis: “Our internal tests are telling us something different”

He also made a headline in the Observer Tech Monthly.

According to his schedule, he'll give a talk at AAAI this Tuesday and a lecture at Oxford University next Friday (no, I'm not his secretary or stalker).
-
mumps
- Dies with sente
- Posts: 112
- Joined: Thu Aug 12, 2010 1:11 am
- GD Posts: 0
- Has thanked: 9 times
- Been thanked: 23 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
The paper in Nature only identifies one specific Go thing they code into it, apart from the obvious rules to prevent illegal moves: ladders. This may be to overcome the depth of 20 moves they use in their other analysis.
pookpooi wrote:More interview from GeekWire.com
“This really is our Deep Blue moment,” Demis Hassabis, Google DeepMind’s president of engineering, said this weekend at the American Association for the Advancement of Science’s annual meeting in Washington.
Hassabis said most Go players are giving Sedol the edge over AlphaGo. “They give us a less than 5 percent chance of winning … but what they don’t realize is how much our system has improved,” he said. “It’s improving while I’m talking with you.”
The one thing they always say is that they can work out that it's improving against itself, and perhaps other bots they're using for calibration purposes.
However, they can't be sure whether it's improving against humans, which is one reason they had the match with Fan Hui.
-
uPWarrior
- Lives with ko
- Posts: 199
- Joined: Mon Jan 17, 2011 1:59 pm
- Rank: KGS 3 kyu
- GD Posts: 0
- Has thanked: 6 times
- Been thanked: 55 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
I don't remember anything about a maximum depth limit when I read the paper, and I couldn't find it when I skimmed it just now. Could you point me to that passage? I always assumed that the depth was dynamic, but that could have been my subconscious adding a bias to my initial read.
mumps wrote:The paper in Nature only identifies one specific Go thing they code into it, apart from the obvious rules to prevent illegal moves: ladder. This may be to overcome the depth of 20 moves they use in their other analysis.
EDIT: I just skimmed it again and I think this passage in the "Searching with policy and value networks" section makes it clear that there is no depth limit:
AlphaGo Paper wrote:The tree is traversed by simulation (that is, descending the tree in complete games without backup), starting from the root state.
This is MCTS after all.
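For readers less familiar with MCTS, here is why there is no fixed depth limit: the tree grows by one layer per simulation, selection descends as deep as the tree has grown so far, and rollouts run to the end of the game. This is a minimal textbook UCT skeleton, not AlphaGo's actual search (which also uses policy-network priors and a value network); all function names here are generic placeholders.

```python
import math
import random

class Node:
    def __init__(self, state):
        self.state = state
        self.children = {}    # action -> Node
        self.visits = 0
        self.value_sum = 0.0

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCB1 score (exploitation + exploration).
    return max(
        node.children.items(),
        key=lambda kv: kv[1].value_sum / (kv[1].visits + 1e-9)
        + c * math.sqrt(math.log(node.visits + 1) / (kv[1].visits + 1e-9)),
    )[1]

def simulate(root, legal_actions, step, is_terminal, reward, n_sims=100):
    for _ in range(n_sims):
        node, path = root, [root]
        # Selection: descend while expanded -- depth is dynamic, no cutoff.
        while node.children and not is_terminal(node.state):
            node = uct_select(node)
            path.append(node)
        # Expansion: add the children of the current leaf.
        if not is_terminal(node.state):
            for a in legal_actions(node.state):
                node.children[a] = Node(step(node.state, a))
            node = random.choice(list(node.children.values()))
            path.append(node)
        # Rollout: play random moves to the end of the game.
        s = node.state
        while not is_terminal(s):
            s = step(s, random.choice(legal_actions(s)))
        r = reward(s)
        # Backup the result along the visited path.
        for n in path:
            n.visits += 1
            n.value_sum += r
```

The "descending the tree in complete games" phrasing in the quoted passage corresponds to the selection-plus-rollout loop above: every simulation is a complete game, however long that turns out to be.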
-
Uberdude
- Judan
- Posts: 6727
- Joined: Thu Nov 24, 2011 11:35 am
- Rank: UK 4 dan
- GD Posts: 0
- KGS: Uberdude 4d
- OGS: Uberdude 7d
- Location: Cambridge, UK
- Has thanked: 436 times
- Been thanked: 3718 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
I would be surprised if DeepMind didn't test AlphaGo anonymously against Tygem 9ds before the match with Lee Sedol. Of course some pro playing on Tygem is not as serious as a real match, but if they are beating top pros on Tygem (see https://docs.google.com/spreadsheets/d/ ... I4YWEHdYBs for accounts) then they could reasonably feel more confident against Lee. It's worth noting that current #1 Ke Jie played thousands of games on Tygem to get stronger.
mumps wrote:The one thing they always say is that they can work out that it's improving against itself, and perhaps other bots they're using for calibration purposes.
However, they can't be sure whether it's improving against humans, which is one reason they had the match with Fan Hui.
-
pookpooi
- Lives in sente
- Posts: 727
- Joined: Sat Aug 21, 2010 12:26 pm
- GD Posts: 10
- Has thanked: 44 times
- Been thanked: 218 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
I don't think they would do that, because users there are looking for any account that could be AlphaGo on Tygem right now; they even mistook the account of Aja Huang (a 6d programmer from the DeepMind team) for AlphaGo because it won against some pros.
Uberdude wrote: I would be surprised if DeepMind didn't test AlphaGo anonymously against Tygem 9ds before the match with Lee Sedol. Of course some pro playing on Tygem is not as serious as a real match, but if they are beating top pros on Tygem (see https://docs.google.com/spreadsheets/d/ ... I4YWEHdYBs for accounts) then they could reasonably feel more confident against Lee. It's worth noting that current #1 Ke Jie played thousands of games on Tygem to get stronger.
I think they have a contract with some pros to work with the DeepMind team internally, so they can do something crazy like adding handicap stones on the pro's side.
-
mumps
- Dies with sente
- Posts: 112
- Joined: Thu Aug 12, 2010 1:11 am
- GD Posts: 0
- Has thanked: 9 times
- Been thanked: 23 times
Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Hmm, I can't find in the paper where I got that impression from. They do talk about searches sometimes being full and sometimes truncated, though.
uPWarrior wrote:I don't remember anything about a maximum depth limit when I read the paper, and I couldn't find it when I skimmed it just now. Could you point me to that passage? I always assumed that the depth was dynamic, but that could have been my subconscious adding a bias to my initial read.
mumps wrote:The paper in Nature only identifies one specific Go thing they code into it, apart from the obvious rules to prevent illegal moves: ladder. This may be to overcome the depth of 20 moves they use in their other analysis.
EDIT: I just skimmed it again and I think this passage in the "Searching with policy and value networks" section makes it clear that there is no depth limit:
AlphaGo Paper wrote:The tree is traversed by simulation (that is, descending the tree in complete games without backup), starting from the root state.
This is MCTS after all.
AlphaGo Paper wrote:By contrast, AlphaGo’s use of value functions is based on truncated Monte Carlo search algorithms, which terminate rollouts before the end of the game and use a value function in place of the terminal reward.
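The leaf-evaluation idea the quoted passage alludes to can be sketched in a few lines. Per the Nature paper, AlphaGo combines the value network's estimate with a fast rollout outcome via a mixing parameter (lambda = 0.5); the two stand-in functions below are made-up placeholders, and only the mixing formula follows the paper.

```python
def leaf_value(value_net, rollout_outcome, state, mix=0.5):
    """Combine a learned value estimate with a Monte Carlo rollout result:
    V = (1 - mix) * v(state) + mix * z
    """
    v = value_net(state)         # fast approximate evaluation, no rollout
    z = rollout_outcome(state)   # result (e.g. +1/-1) of a rollout to game end
    return (1 - mix) * v + mix * z

# Toy usage with stand-in lambdas: value-net says 0.2, rollout says 1.0.
est = leaf_value(lambda s: 0.2, lambda s: 1.0, state=None)
# est lands halfway between the two signals when mix = 0.5
```

So rather than truncating the rollout and substituting the value function outright, the published design averages both sources of information at the leaf.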