For those who aren't familiar with probabilistic rating systems:
In chess, a win gives you 1 board point, a loss gives you 0 board points, and a draw gives you 0.5 board points.
In Go it's the same: a win is 1 board point, a loss is 0 board points, and a draw is 0.5 board points.
In fact it's pretty much the same for any two-player contest.
However, one difference between Go and chess is that in Go we traditionally associate better endgame play with a larger winning margin, even though we don't necessarily think this way about the middle game.
In any case, this means that the proportion of board points won should depend on the score of each game. As an example, say a neural net trains with no komi, which is how I think ALL neural nets should train. In one game, black wins by 7 points. If the neural net is training with area scoring, we assume this means black has 184 points and white has 177 points. Of course this may not actually be the case, but we always use this simplifying assumption, because what we're really doing is assigning a hypothetical meaning to the margin. We then divide 184 by 361 to get black's board points, and 177 by 361 for white's. If territory scoring is used, we divide by 360 instead of 361. The same idea applies if there's komi, since komi just changes the absolute difference in score, and we're simply associating proportions with score differences.
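The margin-to-board-points conversion above can be sketched in a few lines. This is a hypothetical helper of my own naming, not from any real Go library; it assumes a 19x19 board and the simplifying split described in the text.

```python
def board_points(black_margin, board_size=19, scoring="area"):
    """Split a no-komi score margin into fractional board points.

    Under the simplifying assumption in the text, a margin of m means
    black holds (total + m) / 2 points and white (total - m) / 2,
    where total is 361 for area scoring and 360 for territory scoring.
    """
    total = board_size * board_size        # 361 on a 19x19 board
    if scoring == "territory":
        total -= 1                         # divide by 360 instead of 361
    black = (total + black_margin) / 2
    white = (total - black_margin) / 2
    return black / total, white / total

# The example from the text: black wins by 7 under area scoring,
# so black gets 184/361 board points and white gets 177/361.
b, w = board_points(7)
```

With komi the same function applies, since komi only shifts the absolute margin before the split.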
Of course, some may say there are already Go AIs that are trained to score more points these days. But the main reason I want to use this method specifically is that we can use it to derive a precise komi.
First of all, I'll say again that I really think it's best for neural nets to train with no komi. Komi is more like an addition to the rules, and since humans haven't proven what its value should be, it seems like an unnatural addition to a neural net. I want to compare games from the olden days with a neural net that trained on no komi, then use that same neural net to look at new games with whatever komi they use, not the other way around. Since the AI was trained with no komi, its attitude to any komi will be neutral, unlike a net trained on 7.5 komi analysing a game with 5.5 komi. Then again, I rarely actually use AI myself; I mainly freeload off of other people's observations, haha.

Secondly, a neural net could do half of its training on an area scoring ruleset like Chinese and the other half on a territory scoring ruleset like Japanese, running two independent games, one under each ruleset, in each training run. After both games are finished, the bot averages the node values produced by the two games, then starts a new training run of two simultaneous games under the different rulesets, both starting from the combined node values from the previous run. Yes, I much prefer the continuous improvement of AlphaZero to the one-update-every-100-Elo of the old AlphaGo Lee.
Thirdly, using no-komi training we can train a neural net to get strong and find komi at the same time. Just take the average board-point difference across its no-komi training games, with a bias towards more recent games played by stronger nets, i.e. a weighted average. That way the neural net can give a running estimate of what it thinks komi should be while it's training!
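The running komi estimate above could be computed like this. A minimal sketch: the exponential decay factor is my own assumption for "bias towards more recent games", not something specified in the text.

```python
def running_komi(margins, decay=0.99):
    """Recency-weighted average of black's no-komi score margins.

    margins: black's final margin in each game, oldest first.
    Game i (0 = oldest) gets weight decay**(n - 1 - i), so the newest
    game has weight 1 and older games count progressively less.
    """
    n = len(margins)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * m for w, m in zip(weights, margins)) / sum(weights)

# As training proceeds, recompute on the growing list of margins to get
# the net's current opinion of what komi should be.
estimate = running_komi([5, 9, 7, 8, 6])
```

Since later games come from stronger nets, the decay makes the estimate track the latest, presumably most reliable, margins.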
As I've implied in a recent post, however, it feels like programming at that level is for wizards, not mere mortals like me. I barely know how to even use these NN programs on my own, with all the techno-mumbo-jumbo a more savvy person would find easy, haha. That's what my brain tells me: advanced calculus? No problem. Hello World? No chance. It will probably take me a dozen years to make a neural net myself. Still, I plan to start working on it in 2030 and finish it by 2036, along with an editor for Go, Karuta, Shogi, esChess and Xiangqi, and call it Globalent Go AI and Editor.