RobertJasiek wrote:
However, the process should be improved as follows:
- Human arbiters should have at least the same power as the software in step (1), that is, they should also be able to flag initially suspicious games.
This is exactly how the process works. We generate graphs only for games that are brought to our attention one way or another. I then analyse the graphs to see whether the game looks even the tiniest bit suspicious, and if it does, I analyse the game itself. Su Yang does not operate in the same fashion.
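I will not describe our actual graphs here, but purely as an illustration: one simple per-game graph that can be triaged at a glance is the engine's winrate curve over the game. Here is a minimal sketch, assuming per-move winrate estimates (for example from a KataGo analysis) are already available; the function name and data format are invented for the example:

Code:
# Hypothetical triage plot; assumes one winrate value per move.
from matplotlib import pyplot as plt

def triage_graph(black_winrates, title="Game under review"):
    # black_winrates: engine estimate of Black's winrate (0..1)
    # after each move, in game order.
    moves = range(1, len(black_winrates) + 1)
    plt.plot(moves, black_winrates, marker=".")
    plt.axhline(0.5, linestyle="--", linewidth=0.8)  # even game
    plt.xlabel("Move number")
    plt.ylabel("Black winrate (engine estimate)")
    plt.title(title)
    plt.show()

# Example with made-up values:
triage_graph([0.50, 0.53, 0.49, 0.55, 0.62, 0.60, 0.71])

A reviewer would then look for patterns such as one player losing almost no winrate over long stretches, which is roughly the kind of quick first-pass judgement described above.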
RobertJasiek wrote:
Step (2) is described as depending on a player's given level (such as rank or rating). However, this involves prejudice because it overlooks the possibility that a player may have learnt very much from AI before the game and therefore play similarly to AI on many moves. Furthermore, a player can have a particular strength, such as the endgame, where good play can often result in many of the same moves being chosen by both the AI and the player.
This applies very little to our model. We compare the convergence of a player's moves with the AI's only at a very late stage of the analysis, by which point we already have an idea of whether cheating has happened. More important are metrics describing a player's general performance, regardless of whether the player's chosen move is the AI's first, third, or tenth choice.
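To illustrate the distinction (this is not our actual model, just a minimal sketch with invented names): assume that for each of the player's moves you already know the rank the engine assigned to that move, with 1 meaning the engine's first choice and None meaning the move was outside the engine's candidates. A rank-agnostic summary could then look like this:

Code:
from statistics import mean

def performance_metrics(move_ranks, top_n=3):
    # move_ranks: for each of the player's moves, the rank the
    # engine gave that move (1 = engine's first choice), or None
    # if the move was outside the engine's candidate list.
    ranked = [r for r in move_ranks if r is not None]
    total = len(move_ranks)
    return {
        # Share of moves the engine considered at all.
        "coverage": len(ranked) / total,
        # Average rank of the chosen moves: a low value means play
        # consistently close to the engine, whatever the exact rank
        # of any individual move.
        "mean_rank": mean(ranked),
        # Fraction of moves within the engine's top N choices;
        # a first choice and a third choice count alike here.
        "top_n_rate": sum(r <= top_n for r in ranked) / total,
        # Exact first-choice match rate, the metric that naive
        # detectors tend to over-rely on.
        "match_rate": sum(r == 1 for r in ranked) / total,
    }

# Example with made-up ranks for ten moves:
print(performance_metrics([1, 3, 2, 1, 7, None, 4, 1, 2, 10]))

The contrast between the last two fields is the point: match_rate keys on exact agreement with the AI's first choice, while mean_rank and top_n_rate capture consistently strong play even when the chosen move is only the engine's third or tenth candidate.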
RobertJasiek wrote:
In step (3), there is too little description of what I have called the "something else" evidence. A player can, for example, provide counter-evidence by explaining his thinking and decision-making in as much detail as the time allotted in a dispute schedule allows. Such evidence can be very strong but is not properly mentioned in the description of the process.
We are in general very happy to hear of any counter-evidence the suspect can provide, but, in my experience, a player's verbal 'evidence' is worth much less than the information our graphs give. In the first Corona Cup, we let several players off the hook after listening to their explanations, and by now two of them have been confirmed to have cheated. This is why we mention video footage as the players' recommended 'protection'.
RobertJasiek wrote:
Step (4) pretends that the anti-cheating team is the only arbitration body. It is not. There are also the arbitration bodies "referee" and "appeals committee" and, if the EGF General Tournament Rules apply (I cannot know, because this has not been clarified yet), the "EGF [Tournaments and] Rules Commission". The relations between the anti-cheating team and the other arbitration bodies are unclear: with respect to each other, with respect to the EGF General Tournament Rules (if they apply), and with respect to the player's right to a fair trial (the player has a right to know in advance which arbitration body decides at which procedural stage, and why that, if applicable, accords with the EGF General Tournament Rules, which do not refer to an anti-cheating team at all).
I cannot account for all the details, as I am not the main tournament organiser. However, I can mention that we are currently handling the first cheating case of the tournament, for which the procedure has been as follows:
- We (the anti-cheating team) receive a cheating accusation.
- We analyse the game in question and (this time) unanimously find it to be suspicious.
- We contact the player, tell them of our suspicion, and ask for any possible counter-evidence they may have.
- Upon receiving no counter-evidence that would necessitate a reassessment, we check with the tournament referee that he accepts our judgement.
- Upon receiving the referee's approval, we tell the player of our decision, briefly explain how we came to find their play suspicious, and instruct them to contact the appeals committee if they disagree with the decision.
The above procedure may not unfold in exactly the same way each time, because it has not been set in stone. Still, in general, players can expect roughly this level of communication from the organisers and this degree of opportunity to influence the outcome.
RobertJasiek wrote:
See also my earlier remarks on open decision-making and impact on player reputation.
I am not the main tournament organiser, so I cannot comment on this. However, in general I will try to influence the procedure so that any reputational damage to participants is minimised.
RobertJasiek wrote:
The tournament announcement speaks of "state-of-the-art anti-cheating tools". I do not understand why the mentioned software tools should be considered state of the art. It would be easier to understand if they were simply described as "whatever tools the anti-cheating team wants to use". Furthermore, an earlier claim was made that such tools have identified many cheaters, but I see no evidence for that claim, and in particular none for the tools as applied to go.
I do not intend to debate the semantics of 'state-of-the-art'. To the best of my knowledge, no current go anti-cheating tool can compete with the level of analysis and precision of my graphing tools combined with my experience. For example, Yike's Hawkeye is, from what I know, a much simpler and more error-prone solution, whose main benefit is that it is automatic. I have worked on the model for almost a year, constantly improving it, and I will continue to improve it.
How should I show that my tools have identified many cheaters while still protecting those players' reputations?
In addition to the first Corona Cup and a large number of non-tournament online games, my model has also been used in the 15th Korea Prime Minister Cup and, for example, the Canada Open Online tournament. There were no cheating cases in the KPMC, but you can ask the organisers of the Corona Cup and the Canada Open Online tournament (the webpage has an email address) if you want evidence of the model working.