Kellin Pelrine Beats AI

For discussing go computing, software announcements, etc.
Post Reply
RobertJasiek
Judan
Posts: 6273
Joined: Tue Apr 27, 2010 8:54 pm
GD Posts: 0
Been thanked: 797 times
Contact:

Kellin Pelrine Beats AI

Post by RobertJasiek »

From the BGA mailing list, I have learned that Kellin Pelrine, a strong US amateur, has beaten a strong AI in 14 of 15 games. Congratulations!

https://www.ft.com/content/175e5314-a7f ... 3219f433a1
https://goattack.far.ai/human-evaluation
https://goattack.far.ai/pdfs/go_attack_paper.pdf

The news article leaves it ambiguous which AI has been beaten. Is it KataGo?
gennan
Lives in gote
Posts: 497
Joined: Fri Sep 22, 2017 2:08 am
Rank: EGF 3d
GD Posts: 0
Universal go server handle: gennan
Location: Netherlands
Has thanked: 273 times
Been thanked: 147 times

Re: Kellin Pelrine Beats AI

Post by gennan »

Yes, I think it was. But the creator of KataGo has been training KataGo recently to patch this exploit that was uncovered by that AI research group. I suppose the KataGo instances running on KGS have not yet been updated to fix this vulnerability.
Two months ago, a 5k player on OGS also reported a successful attempt to exploit this vulnerability: https://forums.online-go.com/t/potentia ... /45380/106.
lightvector
Lives in sente
Posts: 759
Joined: Sat Jun 19, 2010 10:11 pm
Rank: maybe 2d
GD Posts: 0
Has thanked: 114 times
Been thanked: 916 times

Re: Kellin Pelrine Beats AI

Post by lightvector »

The latest 60-block net has improved a lot in understanding these positions, but most of the improvement is in positions that are not too many moves away from the final capture of the group. So if you take care to play in a way such that the point where things become inevitable is much farther from the capture, I think the current version is probably still quite exploitable. It will take more cycles through successive nets for the learning to "propagate backward" from the final end positions of the group being captured or not, to earlier positions. We'll see how it goes over the next months. :)
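A toy illustration of that backward propagation (my sketch, not KataGo's actual pipeline): model the positions leading up to the capture as a chain of states where only the terminal position's outcome is known, and let each training cycle bootstrap each state's value from its successor. Accurate evaluations then spread backward one position per cycle, which is the effect lightvector describes.

```python
# Toy model of value learning propagating backward from a decided
# terminal position: states 0..N form a line of positions, state N is
# the final position (group captured, value 1.0). Each "training
# cycle" updates every state's value toward its successor's value
# (one-step bootstrap), so correct evaluations spread backward one
# position per cycle.
N = 10
values = [0.0] * N + [1.0]  # only the terminal position is understood

def train_cycle(values, lr=1.0):
    new = values[:]
    for s in range(N):  # update each non-terminal position
        new[s] += lr * (values[s + 1] - values[s])
    return new

for cycle in range(1, 4):
    values = train_cycle(values)
    reach = sum(v > 0.5 for v in values)
    print(f"cycle {cycle}: positions evaluated correctly = {reach}")
```

With a full learning rate, each cycle copies the successor's value, so correctness reaches exactly one more position per cycle; real training is noisier and slower, but the backward direction of the spread is the same.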
hyperpape
Tengen
Posts: 4382
Joined: Thu May 06, 2010 3:24 pm
Rank: AGA 3k
GD Posts: 65
OGS: Hyperpape 4k
Location: Caldas da Rainha, Portugal
Has thanked: 499 times
Been thanked: 727 times

Re: Kellin Pelrine Beats AI

Post by hyperpape »

Have there been any attempts to exploit other strong AI using these techniques? Curious whether this is particular to KataGo or if it's a common problem of self-play based training.

Beyond that, is there any background on how these vulnerabilities were discovered? I remember the earlier demonstrations where weak bots beat KataGo, but hadn't heard about how it was found.
RobertJasiek
Judan
Posts: 6273
Joined: Tue Apr 27, 2010 8:54 pm
GD Posts: 0
Been thanked: 797 times
Contact:

Re: Kellin Pelrine Beats AI

Post by RobertJasiek »

Everything outside the ordinary can be tried to find gaps in an AI's "understanding": unusual rules applications, topologies, ko fights, semeais, life and death, tactics, strategies, hardware failures, software bugs, cheating by exploiting weaknesses in human protocols managing AI im- or export... We wish AI to have human-like understanding of the game (or any other domain, such as self-driving cars) but AI does not have it. Human understanding has gaps and AI understanding has - possibly other - gaps. Making AI robust against severe consequences of gaps is a general problem for future research.
lightvector
Lives in sente
Posts: 759
Joined: Sat Jun 19, 2010 10:11 pm
Rank: maybe 2d
GD Posts: 0
Has thanked: 114 times
Been thanked: 916 times

Re: Kellin Pelrine Beats AI

Post by lightvector »

@hyperpape - I tested LZ, ELF, and I think MiniGo some time back when the cyclic-group topology weakness was first found, around 2020 or 2021, and all of them shared the same misevaluations, so it's fundamental to AlphaZero with the standard convnet neural net architectures. And if you understand the inductive bias of the architecture, I think it's also easy to understand why. The move ordering and kinds of sequences you need to evoke the issue against each bot could of course differ by random happenstance, and how hard or easy it is to elicit could "accidentally" vary due to different preferences between the bots in how they play, but all of them share the same misgeneralization about how to determine the life and death of groups.
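The "waves of activations ... one step at a time across the board" idea mentioned later in this thread can be made concrete with a small sketch (my illustration, not engine code): a 3x3 convolution can only move information one board point per layer, so a net with D conv layers can only propagate a signal about D points. Here the spread is simplified to a 4-neighbour dilation of a boolean "information has reached this point" mask.

```python
import numpy as np

def dilate(mask):
    """One conv layer's worth of information spread, simplified to
    4-neighbour dilation of a boolean reachability mask."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]   # spread down
    out[:-1, :] |= mask[1:, :]   # spread up
    out[:, 1:] |= mask[:, :-1]   # spread right
    out[:, :-1] |= mask[:, 1:]   # spread left
    return out

board = np.zeros((19, 19), dtype=bool)
board[0, 0] = True               # information source, e.g. an eye

for layer in range(10):          # a hypothetical 10-conv-layer "net"
    board = dilate(board)

# Only points within Manhattan distance 10 of the source can "know"
# about it; the far corner (distance 36) cannot.
print(board[10, 0], board[18, 18])   # True False
```

Residual nets with many blocks have a receptive field covering the whole board, but the inductive bias to rely on shorter-range propagation remains, which is consistent with the local misevaluations described here.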
RobertJasiek
Judan
Posts: 6273
Joined: Tue Apr 27, 2010 8:54 pm
GD Posts: 0
Been thanked: 797 times
Contact:

Re: Kellin Pelrine Beats AI

Post by RobertJasiek »

lightvector wrote:the same misgeneralization about how to determine the life and death of groups.
How can you even know a) that the AI is aware of life and death of groups at all, b) that there is some generalisation at all, and c) what the (mis)generalisation is? You haven't reverse-engineered the neural nets etc. to identify such concepts in them, have you? It is all black box and guessed interpretation, isn't it?
lightvector
Lives in sente
Posts: 759
Joined: Sat Jun 19, 2010 10:11 pm
Rank: maybe 2d
GD Posts: 0
Has thanked: 114 times
Been thanked: 916 times

Re: Kellin Pelrine Beats AI

Post by lightvector »

Definitely it's not proven!

Given what we know about how image-processing neural nets work (for which detailed analysis *has* been done of their internal activations), and the fact that convnets in Go use exactly the same layers and operations, we can have a good high-level idea of what the neural nets are doing (e.g. the inductive bias to attend to local features and the fact that information about liberties and eyes is necessarily going to be transmitted by waves of activations of subsets of the internal channels one step at a time across the board). I also have a colleague who did a little bit of exploratory unpublished work looking at the activations within a KataGo net as well.

You can also get much richer information about how the net is processing the board by visualizing the ownership prediction, which predicts the final ownership of every individual point on the board, rather than just giving an overall winrate or score. So you can tell exactly what the net "thinks" about the final life and death of every group, and you can see how that varies as you make controlled edits to a position: making or breaking a connection here or there, filling in an eye or not, adding or removing a liberty, etc. Across many such positions, the big mistaken evaluation happens when you add a connection giving the group a cyclic topology, and not from other edits: not from merely surrounding one or more groups, and not from varying the life and death of the surrounded group or bordering groups in ways that don't affect the correct outcome of the semeai.

Back many years ago, I also trained many smaller nets of various shallower depths (where the number of conv layers didn't allow information to propagate all the way across the board) and looked at how ownership predictions varied through a large group. You could see how far the information about the presence of one or two eyes could propagate outward along the group to change the ownership prediction, and at what point that propagation died out, so that more distant stones in the exact same chain were no longer predicted as definitely alive.

So, yes, some actual direct inspection of activations, but also plenty of black box "guessed" interpretations. I think all this is likely right at a high level, but of course we don't have all the mechanistic details.
Pippen
Lives in gote
Posts: 677
Joined: Thu Sep 16, 2010 3:34 pm
GD Posts: 0
KGS: 2d
Has thanked: 6 times
Been thanked: 31 times

Re: Kellin Pelrine Beats AI

Post by Pippen »

How about taking advantage of the horizon problem by making the game as complex as possible, so that even a top AI cannot read far enough ahead? Beyond its horizon, it would be roughly a 50/50 chance whether the complexity favors the human.
Post Reply