I have no idea - is there documentation on how it will work? Or is this the sort of thing where they won't say how it works, to make it harder for people to circumvent, accepting that it may produce some false positives?
I would be surprised if it affected the CUDA version of KataGo, since most of the computation in that version goes through NVIDIA's own official deep learning library, cuDNN. The OpenCL version, however, uses a lot of custom kernels, so it's hard to say without more info.