John Fairbairn wrote: My own view is that nearly all Japanese rules problems derive from amateurs trying to use professional rules. They are trying to insert a nail with a Swiss Army knife when all you need is a hammer - and common sense.
Let me try to discuss in what sense the J89 rules are "professional".
In my understanding, the J89 rules were primarily designed to be applied to games between two (Japanese) professional Go players.
In my own words, the guiding principles for creating this revision of the J49 rules were:
(1) Comply with Japanese tradition.
(2) Find consistent solutions for all the earlier rules disputes in Japan, and prevent future cases.
(3) Find a compelling solution for use outside of Japan (by professionals and amateurs alike).
Being professionals in playing the game of Go, the authors succeeded with (1) and (2).
However, they failed with (3), lacking the last five percent of a professional approach (and attitude) towards the creation of a set of rules.
Being no amateurs themselves, they simply overlooked that things can happen in amateur games that professionals would not even dream of. And being no Westerners, they overlooked that the Western attitude (unaware of (1) and (2)) demands that the legal text of the rules be taken literally.
On the other hand, some responses in the Western world have been typically Western in that common sense was left out completely (along with any knowledge of probability theory).
But was it all a drama? Of course not.
After all, there are several approaches to creating rule sets based on Western thought patterns.
Are there any secret recipes for how to be sure of winning under ruleset A against a player who until now has known only ruleset B? I don't think so. Therefore, the differences between the various rulesets can only be marginal, not affecting the character of the game.
--------------------------------------
In another thread I already mentioned that no AI on this planet would be able to solve Igo Hatsuyôron 120 - solely because none of them was designed for this purpose.
No AI would dream of this board position, simply because it was never encountered while training the neural network.
But is this a drama? Of course not.
People still use the programmes for the purposes they have been developed for.
To overcome the above-mentioned "weakness", lightvector included positions from Igo Hatsuyôron 120 as material for training a specialised neural network.
This not only resulted in KataGo's ability to solve the problem, but also in an improvement on the (at that time) best-known solution sequence.
Friday9i later continued the training of the specialised network, also with training material that emerged from our further analyses.
However, even with KataGo's specialised network, there was no guarantee that it would find the correct continuation of a subvariation that included several mistaken moves!
During training, the neural network had simply learned to avoid these multiple-mistake sequences, and therefore lacked experience with what might happen after them.
But was this a drama? Of course not.
We gained new insights into the problem, which would otherwise have remained hidden.
Friday9i executed some kind of "enforced" training with these positions, resulting in KataGo's correct handling afterwards.
However, even with KataGo's specialised network, there was no guarantee that it would find the correct continuation of a subvariation that started with a "simple" (by human understanding) valid change in the order of correct moves - a change which, to the AI's understanding, added a lot of uncertainty to the position!
During training, the neural network had simply learned to avoid that "too noisy" path, and therefore lacked experience with what might happen along it.
But was this a drama? Of course not.
We gained new insights into the problem, which would otherwise have remained hidden.
Friday9i executed some kind of "enforced" training with this position, resulting in KataGo's correct handling afterwards.
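The "enforced training" described above can be pictured as deliberately oversampling the positions the network has learned to avoid when choosing starting points for self-play games. Here is a minimal sketch of that idea in Python - the position labels, pool sizes, and boost factor are all hypothetical illustrations, not KataGo's or Friday9i's actual pipeline:

```python
import random
from collections import Counter

def build_start_pool(normal_positions, target_positions, boost):
    # Duplicate the rare target positions `boost` times, so that
    # self-play games start from them far more often than they
    # would ever occur in ordinary self-play.
    return list(normal_positions) + list(target_positions) * boost

def sample_starts(pool, n, seed=0):
    # Draw n starting positions for self-play games from the pool.
    rng = random.Random(seed)
    return [rng.choice(pool) for _ in range(n)]

# Hypothetical labels standing in for encoded board positions.
normal = [f"normal-{i}" for i in range(1000)]
rare = ["hatsuyoron-subvariation"]   # the "too noisy" line the net avoided

pool = build_start_pool(normal, rare, boost=250)
counts = Counter(sample_starts(pool, 5000))

# With a boost of 250, the rare line makes up 250/1250 = 20% of the
# pool, so on average about 1000 of the 5000 games start from it.
print(counts["hatsuyoron-subvariation"])
```

The design point is simply distribution shaping: the network cannot learn to handle a position it almost never sees, so the training setup has to force those positions into its experience.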
--------------------------------------
There seems to be a general understanding that an AI delivers "good" results only for the application cases for which it has been trained, and that the quality of the results reflects the quality of the training material.
It would be advisable to apply this knowledge to the assessment of rulesets as well.