it's not just tenuki

For discussing go computing, software announcements, etc.
Kirby
Honinbo
Posts: 9553
Joined: Wed Feb 24, 2010 6:04 pm
GD Posts: 0
KGS: Kirby
Tygem: 커비라고해
Has thanked: 1583 times
Been thanked: 1707 times

Re: it's not just tenuki

Post by Kirby »

Sounds to me like you are just speculating, djhbrown. Of course I am, too, but at least mine is an opinion based on what Aja said. To me, the power behind their approach is the limited domain knowledge. I don't see a reason to stray from that philosophy.

Besides, if the current version of AlphaGo is really as strong as they say, self-play provides better-quality games than playing against Lee Sedol.
be immersed
Mike Novack
Lives in sente
Posts: 1045
Joined: Mon Aug 09, 2010 9:36 am
GD Posts: 0
Been thanked: 182 times

Re: it's not just tenuki

Post by Mike Novack »

djhbrown wrote:

but that would be the DCNN equivalent of patching the code to fix a single case; it would not remedy the systemic underlying design flaw, which i perceive to be a lack of focus due to a lack of a conceptual overview - a lack of positional judgement!
I think you might be helped by understanding the difference between a neural net not (yet) getting something right and a bug in its implementation. That a neural net cannot yet do something (cannot "correctly" evaluate the function) for an input it has not yet been trained on is not a "bug". Nor does correcting this one case fix ONLY that one case. Were that the situation, neural nets wouldn't be good for very much.

In the beginning (before training) a neural net can't do anything. It is then trained (cell values adjusted; for the moment ignore how) so that for each input from its training set it produces the correct output. Again ignoring the process, except to point out that each adjustment must not just get the new input/result pair correct, but must also not mess up all the previous input/result pairs. What happens (and what a neural net is good for) is that not only will the neural net give the correct results for the input/result pairs it has been trained on, it also becomes likely that, given an input it has never seen before (one it has NOT been trained on), it will give the correct result.

So fixing this one KNOWN "error" is actually likely to fix other errors not yet encountered.
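The training-and-generalization idea above can be sketched in miniature. This is not AlphaGo's DCNN, just a hypothetical single linear "neuron" fitted by gradient descent; the toy target function y = 2x + 1, the learning rate, and all the names are invented for illustration. Note how each adjustment is driven by *all* the training pairs at once, so improving one case doesn't wreck the others, and how the trained net then handles an input it never saw.

```python
# A minimal sketch (assumed toy setup, not AlphaGo): one linear
# "neuron" y = w*x + b trained by batch gradient descent.

def train(pairs, lr=0.05, epochs=2000):
    w, b = 0.0, 0.0  # before training, the net "can't do anything"
    for _ in range(epochs):
        # each adjustment averages the error over ALL training pairs,
        # so getting one pair right must not mess up the previous ones
        dw = sum((w * x + b - y) * x for x, y in pairs) / len(pairs)
        db = sum((w * x + b - y) for x, y in pairs) / len(pairs)
        w -= lr * dw
        b -= lr * db
    return w, b

# training set: samples of the (hidden) function y = 2x + 1
pairs = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(pairs)

# an input the net was NEVER trained on -- it still gets ~21, i.e. 2*10 + 1
print(round(w * 10 + b, 1))  # → 21.0
```

The same mechanism, scaled up to millions of weights and positions, is why fixing one known error in a trained net tends to improve its answers on related, never-seen cases too.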