(edit: 2022/09/11 I am trying to write some of this up more formally. Disclaimer: much of the below was speculative musings without enough thought to back it up. Don't take any of it too seriously)
(edit: 2022/09/25 On the other hand, while some of it was nonsense, much really was about seeing how NNs form quite a general discrete potential model, especially for energy dissipating through a crystal, whose fractal nature can even be used to model Go. I am more confident that much of this can be written up now. Even some of what I said was difficult to quantify, I now think can be quantified and measured. On the other hand, I still expect that almost all of this is old news scientifically, though it is new to me. This thread now seems to be a demonstration that curious whimsical musings can turn into papers)
As far as I recall:
There are around 3^361 possible Go board arrangements, but only about 10^170 of them are legal positions, according to Tromp.
The number of weights in a 40-block neural net based on AlphaGo (like KataGo) is around 10^8.
The number of neurons in a human brain is around 10^11.
Do the dimensions match? Are dimensions even the right way to think about the problem?
For example, given n input bits (e.g. to represent board positions), you want to output 1 number representing the value of that position. The space of such functions has one dimension per possible input, so its dimension is 2^n * 1 = 2^n.
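The rough arithmetic behind these counts can be sketched as follows (order-of-magnitude only; the 10^8 weight count is the estimate above, not an exact figure):

```python
import math

# A 19x19 board has 361 points, each empty, black, or white.
board_points = 19 * 19                 # 361
raw_positions = 3 ** board_points      # all colourings, legal or not

# log10(3^361) = 361 * log10(3), i.e. roughly 10^172 raw arrangements.
print(math.log10(raw_positions))       # ~172.24

# The space of real-valued value functions on all raw positions has one
# dimension per position, so a net with ~10^8 weights is vastly
# under-parameterised relative to that function space.
weights = 10 ** 8
print(raw_positions > weights)         # True, by over 160 orders of magnitude
```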
However, does the number of weights in a neural network roughly correspond to dimensions? The key point is that neural network outputs are not linear in their weights or inputs, so thinking of the functions as a vector space probably doesn't lead very far. Non-linearity is critical: otherwise linearity would imply that if a stone A helps player X, and a stone B helps player X, then having both stones A and B on the board helps player X. This is false, for example when A and B together is self-atari or suicide.
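The self-atari point can be made concrete with a toy one-hidden-layer ReLU network (my own construction for illustration, not any real Go net): stone A alone helps, stone B alone helps, but both together hurt, which no function linear in the inputs can do.

```python
import numpy as np

# Inputs: x = [has stone A, has stone B].
W1 = np.array([[1.0, 0.0],    # h1 fires on stone A
               [0.0, 1.0],    # h2 fires on stone B
               [1.0, 1.0]])   # h3 fires only when both stones are present
b1 = np.array([0.0, 0.0, -1.5])
w2 = np.array([1.0, 1.0, -10.0])  # the "both stones" feature is heavily penalised

def value(x):
    """One hidden ReLU layer, then a linear readout."""
    return w2 @ np.maximum(W1 @ x + b1, 0.0)

print(value(np.array([1.0, 0.0])))  # 1.0  (A alone helps)
print(value(np.array([0.0, 1.0])))  # 1.0  (B alone helps)
print(value(np.array([1.0, 1.0])))  # -3.0 (together: the "self-atari" case)
```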
On the other hand, matrices still play a key role in neural networks. This linearity point is why I was surprised when a DeepMind employee first told me that the AI was based on matrices. However, the study of matrices is very deep, with highly optimised (e.g. parallelised) algorithms for working with them: adding, multiplying, and so on.
It is also easier to think in terms of linear things (since we are familiar with adding up ordinary numbers, and matrices are just collections of numbers), and the calculus formulae are much simpler to write down with matrices (merely the chain rule plus the identity d(Ax)/dx = A).
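That identity is easy to check numerically: for f(x) = Ax, the Jacobian df/dx, built column by column from finite differences, should just reproduce the constant matrix A.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)

# Finite-difference Jacobian of f(x) = A @ x: perturb one coordinate
# at a time and record how the output moves.
eps = 1e-6
jac = np.empty((3, 4))
for j in range(4):
    dx = np.zeros(4)
    dx[j] = eps
    jac[:, j] = (A @ (x + dx) - A @ x) / eps

print(np.allclose(jac, A, atol=1e-4))  # True: d(Ax)/dx = A
```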
Neural networks in 5 paragraphs:
I suppose that strength in a 2-player game is about the confidence and skill to choose a line where you have to play m moves perfectly in a row in order to profit (compared to a simple variation), as well as having lots of ways to filter out bad moves, raising the lower bound on how bad a move you might play. Much of "aggression" is probably assuming that you will play such tight (紧凑) sequences more perfectly than your opponent. In particular, being able to live in your opponent's moyos and to save weak cutting stones.
What I really want to see is some relation between the network structure (e.g. neurons in a bottleneck) and a small number of key concepts (human or otherwise) that we can measure and interpret with some kind of symbolic logic/arithmetic. But I don't know whether that is reasonable, what it would look like, or how to make progress.