swim into the dark forest
- djhbrown
- Lives in gote
- Posts: 392
- Joined: Tue Sep 15, 2015 5:00 pm
- Rank: NR
- GD Posts: 0
- Has thanked: 23 times
- Been thanked: 43 times
swim into the dark forest
curiosity. that's what drives science and technology innovation.
mcts is better at Go than alpha-beta. and dcnn is better at Go than stone pattern reflex reactions.
so dcnn+mcts is the best so far.
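For a rough sense of why full-width search struggles in Go, compare full-width tree sizes under the commonly quoted approximate branching factors (about 35 for chess, about 250 for Go). The numbers below are back-of-envelope illustrations, not measurements:

```python
# Rough full-width game-tree sizes. The branching factors (~35 for chess,
# ~250 for Go) are commonly quoted approximations, not exact figures.
def tree_size(branching_factor, depth):
    """Number of leaves in a full-width game tree of the given depth."""
    return branching_factor ** depth

chess = tree_size(35, 6)   # roughly 1.8e9 leaves at depth 6
go = tree_size(250, 6)     # roughly 2.4e14 leaves at depth 6
print(f"chess: {chess:.2e}  go: {go:.2e}  ratio: {go / chess:.0f}x")
```

The gap grows with depth, which is part of why sampling approaches like MCTS displaced exhaustive alpha-beta in Go.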
Last edited by djhbrown on Tue May 02, 2017 12:30 am, edited 1 time in total.
- oren
- Oza
- Posts: 2777
- Joined: Sun Apr 18, 2010 5:54 pm
- GD Posts: 0
- KGS: oren
- Tygem: oren740, orenl
- IGS: oren
- Wbaduk: oren
- Location: Seattle, WA
- Has thanked: 251 times
- Been thanked: 549 times
Re: swim into the dark forest
djhbrown wrote: it's commonsense that swim is better than dcnn, because swim is thoughtful, whereas dcnn is just a bunch of reflexes.
This has to be proven. If there's one thing to say about common sense, it's that it's often not common and often wrong.
- emeraldemon
- Gosei
- Posts: 1744
- Joined: Sun May 02, 2010 1:33 pm
- GD Posts: 0
- KGS: greendemon
- Tygem: greendemon
- DGS: smaragdaemon
- OGS: emeraldemon
- Has thanked: 697 times
- Been thanked: 287 times
Re: swim into the dark forest
This is the 4th thread by djhbrown on "Commonsense Go". See one, two, three.
djhbrown, if you are serious: what you have is a hypothesis. Test it. Making claims without evidence only hurts your credibility.
Even a slow bad python script that plays at 20k level would at least show that your "commonsense go" can exist outside your own head. If you're thinking "I think I have a great idea, but I need help figuring out how to implement it," just say that.
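For what it's worth, even the core of such a throwaway bot is only a few lines. This sketch (hypothetical names, no suicide or ko checks, so well below 20k) just picks a random empty point:

```python
import random

def random_move(board, size=19, rng=random):
    """Pick a uniformly random empty point on the board.

    `board` is a set of occupied (col, row) pairs. This sketch ignores
    suicide and ko legality entirely, so it is weaker than any human.
    Returns None when the board is full, meaning "pass".
    """
    empties = [(c, r) for c in range(size) for r in range(size)
               if (c, r) not in board]
    return rng.choice(empties) if empties else None
```

Wrapping something like this in a GTP loop would already be a program that "exists outside your own head" and can be measured against other bots.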
- Jhyn
- Lives with ko
- Posts: 202
- Joined: Thu Sep 26, 2013 3:03 am
- Rank: EGF 1d
- GD Posts: 0
- Universal go server handle: Jhyn
- Location: Santiago, Chile
- Has thanked: 39 times
- Been thanked: 44 times
Re: swim into the dark forest
There is a saying among computer scientists (and mathematicians) that the fastest way to find the deepest problems or mistakes in a scientific paper is to check all the sentences starting with "it is obvious", "it is clear", or "we can easily see that".
Anyway, in this case, the commonsense statement looks wrong to me by common sense: if a bot is able to explain its moves in plain English using existing concepts, then it has been programmed using existing man-made theory, and is thus unable to play the brilliant moves that expand the field of what is considered playable in the way AlphaGo did - moves that "just work" but make little sense to the top pros, or are outright blind spots for them (see the pro commentary at https://deepmind.com/research/alphago/a ... s-english/).
Making the computer "more human" in its thought process has obvious applications in teaching and commentary, but the belief that human-like thinking will make it stronger is very much unproven (and, I would say, on its way to being proven wrong).
La victoire est un hasard, la défaite une nécessité.
- daal
- Oza
- Posts: 2508
- Joined: Wed Apr 21, 2010 1:30 am
- GD Posts: 0
- Has thanked: 1304 times
- Been thanked: 1128 times
Re: swim into the dark forest
djhbrown wrote: it's commonsense that swim is better than dcnn, because swim is thoughtful, whereas dcnn is just a bunch of reflexes.
I play rather thoughtfully, but not very well.
Patience, grasshopper.
- lightvector
- Lives in sente
- Posts: 759
- Joined: Sat Jun 19, 2010 10:11 pm
- Rank: maybe 2d
- GD Posts: 0
- Has thanked: 114 times
- Been thanked: 916 times
Re: swim into the dark forest
Simply having the idea like "swim" is great, but by itself it only gets you a tiny, tiny fraction of the way there, and everyone else has their own ideas already too. If you're serious about developing the idea and making a genuine contribution, get your hands dirty and go for it!
The MCTS part is even written for you already, so you don't even have to do all that work yourself! Go clone Darkforest or Pachi or any other open source Go engine, and build your own logic right on top of their existing MCTS algorithms. If you can make an improvement, that would be really cool and speak volumes. If you don't really know how to code or haven't done a serious project before, there's no better way or time to learn than to jump in to something that you're invested in and excited about.
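One concrete way to do that: Pachi and most open-source engines speak the Go Text Protocol (GTP), so custom logic can live in a controller process that talks to an engine over its stdin/stdout. The helpers below are a minimal, illustrative sketch of the GTP wire format only; actually spawning an engine with `subprocess` is left out:

```python
def gtp_command(cmd_id, command, *args):
    """Format one GTP command line, e.g. gtp_command(1, "genmove", "b")."""
    return " ".join([str(cmd_id), command, *map(str, args)]) + "\n"

def parse_gtp_response(block):
    """Parse a GTP response block: '=ID payload\\n\\n' or '?ID error\\n\\n'.

    Returns (ok, cmd_id, payload). Assumes an ID was sent with the command.
    """
    line = block.strip().splitlines()[0]
    ok = line.startswith("=")          # '=' means success, '?' means failure
    cmd_id, _, payload = line[1:].partition(" ")
    return ok, cmd_id, payload.strip()
```

From there, "building on top" means deciding when to forward the engine's `genmove` answer and when to override it with your own logic.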
- djhbrown
- Lives in gote
- Posts: 392
- Joined: Tue Sep 15, 2015 5:00 pm
- Rank: NR
- GD Posts: 0
- Has thanked: 23 times
- Been thanked: 43 times
Re: swim into the dark forest
lightvector wrote: If you don't really know how to code or haven't done a serious project before,
as it happens, i have done a couple of serious projects before, starting with a PhD in machine learning (my cv is at the bottom of link).
Last edited by djhbrown on Tue May 02, 2017 12:31 am, edited 1 time in total.
- Mike Novack
- Lives in sente
- Posts: 1045
- Joined: Mon Aug 09, 2010 9:36 am
- GD Posts: 0
- Been thanked: 182 times
Re: swim into the dark forest
But I think the concern some of us have about this commonsense idea, that "knowing" is somehow superior to being able to "simply do", is that it is NOT obvious.
WHY? (except for being able to explain how)
I think you should take a closer look at those two approaches "by reflex" to understand what their limitations actually are.
MCTS ---- what would be the result of this approach WITHOUT the limitation of time controls? In other words, what if one did NOT have to (in "practical implementations") use pruning, etc.? Yes, not practical in the real world (because time IS bounded), but hasn't this been shown to solve the problem "what is the best next move" at lower complexity (less time) than exhaustive search? In other words, isn't this algorithm a "cracking" of the problem, even if not yet a cracking that is practical?
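The MCTS loop being discussed (select, expand, roll out, backpropagate) can be made concrete on a toy game. The sketch below is plain UCT on the take-1-2-3 counting game (whoever takes the last stone wins), not Go; all names are illustrative, but the loop is the same one Go programs run with a vastly larger state space:

```python
import math, random

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile, self.parent, self.move = pile, parent, move
        self.children, self.untried = [], legal_moves(pile)
        self.wins, self.visits = 0.0, 0

    def ucb1(self, c=1.4):
        # exploitation term + exploration bonus (standard UCB1)
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(pile, rng):
    """Random playout; True if the player to move at `pile` wins."""
    first_to_move = True
    while True:
        pile -= rng.choice(legal_moves(pile))
        if pile == 0:
            return first_to_move
        first_to_move = not first_to_move

def mcts_best_move(pile, iterations=4000, seed=0):
    rng = random.Random(seed)
    root = Node(pile)
    for _ in range(iterations):
        node = root
        while not node.untried and node.children:          # 1. select
            node = max(node.children, key=Node.ucb1)
        if node.untried:                                   # 2. expand
            move = node.untried.pop(rng.randrange(len(node.untried)))
            child = Node(node.pile - move, parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. simulate: the mover who created `node` wins iff the player
        # to move at `node` loses (or iff the pile is already empty).
        mover_won = True if node.pile == 0 else not rollout(node.pile, rng)
        while node.parent is not None:                     # 4. backpropagate
            node.visits += 1
            node.wins += mover_won
            mover_won = not mover_won                      # flip per ply
            node = node.parent
        node.visits += 1                                   # count root visit
    return max(root.children, key=lambda ch: ch.visits).move
```

Given enough iterations on this tiny game, UCT finds the known winning strategy (leave a multiple of 4), which illustrates Mike's point: unbounded time turns the sampled tree into something close to exhaustive analysis.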
DCNN ---- neural nets are trained to evaluate a function. A program implements the neural net, but then the neural net still has to be trained. The function in this case is "given a board position, return the best next move". When dealing with neural nets there are TWO very separate questions. First, is the net of sufficient size and complexity of potential connections that it COULD be trained to evaluate the function? Second, even if possible, has its training achieved that (yet)?
That you seem to think of "training to correctly recognize one more input-output pair" as the same as making a hard-coded change to an AI program means that you are misunderstanding the (extremely interesting) property of neural nets: they DO seem to generalize from the learning process. In other words, training on "one more pair" results in MORE than one pair being learned.
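That generalization property shows up even in the smallest possible "net". The sketch below is not a DCNN, just a classic one-neuron perceptron (all names and the training rule are made up for illustration); trained on six labelled points for the rule "x + y > 1", it also labels points it was never shown:

```python
def train_perceptron(samples, epochs=100, lr=0.2):
    """Classic perceptron on 2-D points; returns weights (w1, w2, bias)."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x, y), label in samples:
            pred = 1 if w1 * x + w2 * y + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w1 += lr * err * x
            w2 += lr * err * y
            b += lr * err
    return w1, w2, b

def classify(params, x, y):
    w1, w2, b = params
    return 1 if w1 * x + w2 * y + b > 0 else 0

# Train on six labelled pairs for the (made-up) rule "x + y > 1" ...
train = [((0, 0), 0), ((0.9, 0), 0), ((0, 0.9), 0),
         ((1.1, 0), 1), ((0, 1.1), 1), ((1, 1), 1)]
params = train_perceptron(train)
# ... and it also labels clear-cut points it never saw during training.
print(classify(params, 3, 0), classify(params, -1, -1))  # -> 1 0
```

Training on a handful of pairs fixed a decision boundary, not a lookup table, so unseen inputs far from the boundary come out right too; that is the miniature version of what "one more pair teaches more than one pair" means.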
I would also like you to look at your "common sense" idea that being able to EXPLAIN the solution implies a superior solution to the "play go" problem. WHY? Would your cat's ability to catch mice be improved by your cat being able to explain its thinking process to you? What thought processes it is going through from the first "sense something there" to the satisfying crunch of the first bite after the captured toy has broken and no more fun to simply play with?
- djhbrown
- Lives in gote
- Posts: 392
- Joined: Tue Sep 15, 2015 5:00 pm
- Rank: NR
- GD Posts: 0
- Has thanked: 23 times
- Been thanked: 43 times
Re: swim into the dark forest
Polya’s First Principle: Understand the problem
Polya’s Second Principle: Devise a plan
Last edited by djhbrown on Tue May 02, 2017 12:33 am, edited 1 time in total.
- Mike Novack
- Lives in sente
- Posts: 1045
- Joined: Mon Aug 09, 2010 9:36 am
- GD Posts: 0
- Been thanked: 182 times
Re: swim into the dark forest
I think going into the philosophy of knowledge/problem solving is not going to help, except to see some of the underlying assumptions we each make.
For example, I do NOT believe in "intelligent design" (that the natural order did not just "evolve"). Thus I do not accept what seems to be your underlying assumption that a PLANNED route to the solution of a problem is necessarily superior to one that is EVOLVED. I would doubt that trying to plan/design the "program" that makes the cat very good at catching mice would result in a superior mouser than the one that has evolved.
The current problem, in this case, is "play a very strong game of go". It's OK to suggest we attack a different problem instead: "play a very good game of go AND be able to help humans understand why it played as it did". It's OK to suggest that this would be a more useful problem for us to tackle. My objection is to the claim that this would ALSO lead to a better "very good game" than the current problem. That is very far from obvious.
The training of a DCNN is not completely unlike an evolutionary process.
The training of a DCNN is not completely unlike an evolutionary process.
- Jhyn
- Lives with ko
- Posts: 202
- Joined: Thu Sep 26, 2013 3:03 am
- Rank: EGF 1d
- GD Posts: 0
- Universal go server handle: Jhyn
- Location: Santiago, Chile
- Has thanked: 39 times
- Been thanked: 44 times
Re: swim into the dark forest
May I add that the Polya advice is intended for students, who I assume are supposed to have a teacher. It's all very well to be given a map to your destination instead of going in blindly, but it means that somebody else had to go there first to draw the map! From there on you must explore, and how can you have a map when you explore? In other words, how can you get stronger than human players if you use the exact same thought processes as they do?
La victoire est un hasard, la défaite une nécessité.