Why humans fail against AI

Higher level discussions, analysis of professional games, etc., go here.

Post by John Fairbairn »

In the latest Go World (Sept 2018), O Meien has revealed more thoughts on why humans fail against AI.

He says pros are trying new AI-influenced moves and commentators on TV games are very fond of alluding to AI moves, but this doesn't mean they understand anything yet. Rather they are doing this in the hope of discovering another way up the mountain.

But one thing seems certain already and that is that humans think locally whereas AI takes account of the whole board.

O's interesting take on this is that humans have become fixated on the size of a move. We need some sort of measure to help us navigate through a game, and size of a move has proven to be an incredibly useful measure. Apart from different ways of counting value directly, pros have even established an array of tricks that help them in their quest for a move's value, such as miai and tewari.

But evaluating the size of a move has major drawbacks. For one thing it is based mostly on assessment of the difference of "if I play there or he plays there." It is therefore a very local thing and it is easy to feel that a move is bigger simply because it is in a busier area (e.g. "the opponent's vital point is my vital point"). In the fuseki, a wariuchi splitting move is often seen as common sense for a similar reason, but AI has shown a quiet move elsewhere can be better.

But focusing on the size of a move can have more pernicious effects. O says the reason the early shoulder hit so beloved of AI programs has been a blind spot for pros is that the pro simply assumes that, since defending against it is a territorial gain, it must be bad. Pros have therefore never even considered such moves as candidate moves.

There is, of course, another measure which pros have labelled 'intuition' or 'feeling' and this does seem to have more of a full-board feel to it. But how do we learn to trust it?

He gives a simple example to ponder. Take the very well known joseki where Black begins with komoku (c4), but vary the order of moves (i.e. a version of tewari). Against that, White plays 1. f3, Black 2. d3, White 3. e4, Black 4. e3, White 5. f5, Black 6. c6.

In this order of moves, we can see that the exchange of Black 4 for White 5 is bad, but so is the exchange of White 3 for Black 6. The assumption so far has been that the stupidities cancel out. But in terms of the whole board, which move was more stupid: White 3 or Black 4? It seems as if the task for modern pros is to find a way to make that sort of assessment rather than focusing on the size of a move. That may then lead to playing shoulder hits ourselves.

O's whole text is quite long and has several specific examples. I have latched on to what interested me most, so if you want the more rounded version you will need to look up his text. But I think what I have posted here is enough to indicate how and why pro thinking may evolve away from the size of moves.

Post by RobertJasiek »

Move size (value) need not be abandoned. Quite the contrary: its current concept can be extended to include all aspects, including the value of global interaction, which we must study how to measure numerically.

Post by Uberdude »

John Fairbairn wrote:In this order of moves, we can see that the exchange of Black 4 for White 5 is bad, but so is the exchange of White 3 for Black 6. The assumption so far has been that the stupidities cancel out.
and as RJ is here too, I'll repeat what I said in a recent thread
Uberdude wrote:I think far too much judgement by pros of josekis (or at least what I've gleaned of it) is based on reference/tewari/comparison (where such comparisons can accumulate asymmetric errors) to other results they think (possibly mistakenly) are ok, instead of trying to make more absolute judgements based on where the stones end up. "Well we played these moves which are probably okay and it started even so it ended even" isn't great. Robert Jasiek's territory/influence stone counting approach is a nice idea in that direction, but just not good enough to be useful. Basically you need a massive function with gazillions of parameters finely tuned to judge a multitude of facets of a position. Hang on a minute, I just described a neural network!
Is no mention made of white 1, the distant approach? (and I presume f5 meant f4). It's very slack territorially (yes, the same could be said of early shoulder hits ;-) ), and although the human knowledge is that it can be good in situations like the Kobayashi opening, where you don't want to be pincered but to emphasise the side (with the latent threat of getting a more efficient corner with the attachment if they insist on pincering), I expect AIs would have a dismal view of it in most situations where strong humans have played it. Without even checking Elf, I predict it will say it's pretty bad even with the other corners occupied by positions that aren't begging for an approach/invasion.

Post by Bill Spight »

John Fairbairn wrote:In the latest Go World (Sept 2018), O Meien has revealed more thoughts on why humans fail against AI.
I take it you mean the Nihon Kiin magazine. :)
He says pros are trying new AI-influenced moves and commentators on TV games are very fond of alluding to AI moves, but this doesn't mean they understand anything yet. Rather they are doing this in the hope of discovering another way up the mountain.
As I have said, I think that will happen fairly soon, as youngsters who learn from AI come to the fore. :)
But one thing seems certain already and that is that humans think locally whereas AI takes account of the whole board.
That seems a little broad.
O's interesting take on this is that humans have become fixated on the size of a move. We need some sort of measure to help us navigate through a game, and size of a move has proven to be an incredibly useful measure. Apart from different ways of counting value directly, pros have even established an array of tricks that help them in their quest for a move's value, such as miai and tewari.
Well certainly the size of a move, even as estimated by Ishida Yoshio, nicknamed "The Computer", has been typically undervalued in the opening. The two moves of an enclosure together have gained 25 points plus, yet books on evaluating positions count territory of less than half that value. It seems to me that pros have persistently undervalued influence. (They may have overvalued it in the New Fuseki, but they have retreated from that.)
But evaluating the size of a move has major drawbacks. For one thing it is based mostly on assessment of the difference of "if I play there or he plays there." It is therefore a very local thing
It need not be. In fact, my heuristic of comparing two plays by treating them as miai is good as long as neither is sente and they do not interact too much. The miai can be quite distant from each other (in fact, that is preferable), so that the comparison is not local. :)

and it is easy to feel that a move is bigger simply because it is in a busier area (e.g. "the opponent's vital point is my vital point").
Speaking for myself, I have been surprised at how often the AI bots have played in the busy areas. I guess I tenuki too much. ;)
In the fuseki, a wariuchi splitting move is often seen as common sense for a similar reason, but AI has shown a quiet move elsewhere can be better.
Interestingly, in his thinking about 21st century go, Go Seigen downgraded the wariuchi, although not always consistently.
But focusing on the size of a move can have more pernicious effects. O says the reason the early shoulder hit so beloved of AI programs has been a blind spot for pros is that the pro simply assumes that, since defending against it is a territorial gain, it must be bad. Pros have therefore never even considered such moves as candidate moves.
Again, this seems to me like undervaluing influence in estimating the size of a play.
There is, of course, another measure which pros have labelled 'intuition' or 'feeling' and this does seem to have more of a full-board feel to it. But how do we learn to trust it?
Consulting the AI bots will be a big help, although it may be hard for an old dog to learn new tricks. Just this morning I was watching a go video that said that you can't learn direction of play from Leela Zero. Au contraire, IMHO. Right now, the best teachers of direction of play are Leela Zero, Elf, AlphaGo, and other top AI bots. It may not be direction of play as I learned it years ago, but you have to discard old ways of thinking. :)
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

Post by John Fairbairn »

Bill Spight wrote:Just this morning I was watching a go video that said that you can't learn direction of play from Leela Zero. Au contraire, IMHO. Right now, the best teachers of direction of play are Leela Zero, Elf, AlphaGo, and other top AI bots. It may not be direction of play as I learned it years ago, but you have to discard old ways of thinking.
As usual, I'm just trying to provoke discussion, but I do have a gut feeling about the above that suggests it could reward more thought.

I am probably the one to blame for the phrase "direction of play." I did not invent it, but propagated it by translating Kajiwara's book. It actually came from Stuart Dowsey at the London Go Centre. When I was working on the book, we were discussing possible titles and for reasons I can't fully remember, I went with Stuart's suggestion. I have regretted it mildly ever since.

Now, in the light of AI's success, I regret it even more.

The reason for my regret is that the Japanese literally means "direction of a stone", but hidden behind that for western players who don't know Japanese is a range of meanings to do with the fact that Japanese does not normally mark nouns as singular or plural. The possible meanings are therefore direction of a stone, direction of stones, directions of a stone, directions of stones. But we have to add on to that the fact that the meaning "stones" is also the normal way to refer to "group" (or groups).

"Direction of play" cuts through that. However it also loses something. I suspect what has happened over the years is that direction of play in the west (and possibly in Japan) has been seen as a vector, i.e. it has magnitude as well as direction. Since we are predisposed to think of move size, as O Meien has pointed out, we run the risk of using direction of play as yet another support in our quest for move size. But AI has shown we should be emphasising the "direction" of the vector much more than the "magnitude". And on top of all that, we should be remembering more that it is directionS we need to think about so that we focus more on the whole board instead of magnitude, which limits our focus too much to what is local.

Studying direction of play is going to be important from now on, but the first step would be a change of name to one that reminds us of everything that is going on behind the Japanese original 石の方向. Any good suggestions?

As a thought experiment on actual study of the concept, could we try overemphasising the direction part of the vector? For example, if we draw lines of the AI's preferred move from each stone/group to other salient points, do interesting patterns emerge? Or interesting differences from lines drawn from our own choice of move? Wilcox used this idea (though Miyamoto preceded him) in a simplistic way with his sector lines. Maybe this idea needs re-visiting?

Post by John Fairbairn »

Bill Spight wrote:Well certainly the size of a move, even as estimated by Ishida Yoshio, nicknamed "The Computer", has been typically undervalued in the opening. The two moves of an enclosure together have gained 25 points plus, yet books on evaluating positions count territory of less than half that value.
It doesn't undermine the point you are making, but my memory of what Ishida mainly said is not quite that.

First he stressed he was talking about big points only. Then I think he said that any big point in the opening (the first move in a corner or on a side, or a shimari or kakari) is worth not quite 20 points, but if the move has a follow-up (or denies the opponent one) it is worth a little more, and he accordingly distinguishes Super Big Points (23-24 points), Class I (21-22 points) and Class II (19-20 points). If a move is worth 25 points or more it is classed as an urgent point rather than a big point.

Personally I never really understood what these numbers meant or how they were reached, but I'm pretty sure they are specifically to do with big points. What I also remember clearly some (?)40+ years later is that when I first opened his book and saw moves labelled as 20+ points I thought I had found gold in them thar hills. For me, at least, it turned out to be fool's gold.

Ishida also had a view on the value of bad moves. A minor bad move loses 2-3 points. A bad move loses 5 points. A very bad move loses 10 points. Moves that lose 10 points are few and far between. He obviously hadn't seen my games.

Post by Elom »

John Fairbairn wrote: ...

The reason for my regret is that the Japanese literally means "direction of a stone", but hidden behind that for western players who don't know Japanese is a range of meanings to do with the fact that Japanese does not normally mark nouns as singular or plural. The possible meanings are therefore direction of a stone, direction of stones, directions of a stone, directions of stones. But we have to add on to that the fact that the meaning "stones" is also the normal way to refer to "group" (or groups).

"Direction of play" cuts through that. However it also loses something. I suspect what has happened over the years is that direction of play in the west (and possibly in Japan) has been seen as a vector, i.e. it has magnitude as well as direction. Since we are predisposed to think of move size, as O Meien has pointed out, we run the risk of using direction of play as yet another support in our quest for move size. But AI has shown we should be emphasising the "direction" of the vector much more than the "magnitude". And on top of all that, we should be remembering more that it is directionS we need to think about so that we focus more on the whole board instead of magnitude, which limits our focus too much to what is local.

Studying direction of play is going to be important from now on, but the first step would be a change of name to one that reminds us of everything that is going on behind the Japanese original 石の方向. Any good suggestions?

...

Maybe "stone's direction" or "stone direction". Putting the word stone first seems to go some way toward producing in English the linguistic effect described above in Japanese.

Or do away with direction altogether and choose between terms such as 'compass of groups' (group implies single stones in its definition), 'tide of stones' (think of stones as a wave), 'electro-magnetic movement' (in practical use, the direction of the electromagnet, being prerequisite for function, is more important than the charge it receives), and so on...
On Go proverbs:
"A fine Gotation is a diamond in the hand of a dan of wit and a pebble in the hand of a kyu" —Joseph Raux misquoted.

Post by John Fairbairn »

"Stone's direction" doesn't attend to the singular/plural problem. We must be careful not to look at one stone/group at a time.

With tongue in cheek, I have considered "lithic radiation" :) Values can be measured in lithocuries.

Post by EdLee »

Or do away with direction altogether
I feel this is moving in the right direction.

Winrates. Maybe we can borrow from quantum terms.
Area/region of interest.
Key area/region.

( Is the temperature thing still good in the age of bots ? )

Post by Uberdude »

John Fairbairn wrote: As a thought experiment on actual study of the concept, could we try overemphasising the direction part of the vector? For example, if we draw lines of the AI's preferred move from each stone/group to other salient points, do interesting patterns emerge? Or interesting differences from lines drawn from our own choice of move? Wilcox used this idea (though Miyamoto preceded him) in a simplistic way with his sector lines. May this idea needs re-visiting?
Reminds me of a lecture Ko Joyeon 2p gave at the London Open a few years ago. It was all about "lines", and she didn't really manage to explain it very well so I don't remember much, but it did seem an interesting and novel approach.

(and also djhbrown! though his approach seemed to be if you draw enough lines on the board then of course a move decided to be good by another approach to go playing will be on some of them, but so will lots of bad moves)

Post by Bill Spight »

John Fairbairn wrote:The reason for my regret is that the Japanese literally means "direction of a stone", but hidden behind that for western players who don't know Japanese is a range of meanings to do with the fact that Japanese does not normally mark nouns as singular or plural. The possible meanings are therefore direction of a stone, direction of stones, directions of a stone, directions of stones. But we have to add on to that the fact that the meaning "stones" is also the normal way to refer to "group" (or groups).

"Direction of play" cuts through that. However it also loses something. I suspect what has happened over the years is that direction of play in the west (and possibly in Japan) has been seen as a vector, i.e. it has magnitude as well as direction. Since we are predisposed to think of move size, as O Meien has pointed out, we run the risk of using direction of play as yet another support in our quest for move size. But AI has shown we should be emphasising the "direction" of the vector much more than the "magnitude". And on top of all that, we should be remembering more that it is directionS we need to think about so that we focus more on the whole board instead of magnitude, which limits our focus too much to what is local.

Studying direction of play is going to be important from now on, but the first step would be a change of name to one that reminds us of everything that is going on behind the Japanese original 石の方向. Any good suggestions?
IMO, direction of play is fine. Since the concept really can only be explained by putting stones on the board, I don't think we can ask too much of language alone. I often talk of development, which implies "of one's stones", and at least overlaps with ishi no houkou.

Later I'll post some diagrams from the AlphaGo Teaching tool. :)

Post by emeraldemon »

I loved this article, and I feel that it's highly relevant.

I read this one in the terrific short story collection Stories of Your Life and Others. One of the stories in that collection was the basis for the film Arrival. I really enjoyed that movie, but it's not even my favorite story in this collection.

Post by daal »

I agree with Bill that we can't expect too much from translating go into English - especially if we go through the intermediate step of Japanese. It seems that in order to find a better term than the already good "direction of play," one must have a pretty good understanding of what is meant as far as stones are concerned. Since I have lent my copy of the book to a friend, I can't refer to that, so I'll just say that I like both development and movement. Perhaps one could avoid the singular/plural problem by avoiding stones altogether with something like "positional development" or "directional movement in go."
Patience, grasshopper.

Post by Kirby »

My view is kind of...

Computers have been naturally better than humans at certain activities for a long time. Iteration is a simple example. A computer can multiply two numbers a million times in a row in under a second. For a human to do the same math, it'd take a really long time.
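Kirby's arithmetic example is easy to check directly. The snippet below is purely illustrative (Python chosen arbitrarily; the numbers are made up): it performs a million multiplications and times them, which on any modern machine finishes in a small fraction of a second.

```python
import time

# Multiply the same two numbers a million times in a row and time it.
start = time.perf_counter()
product = 0
for _ in range(1_000_000):
    product = 12345 * 6789
elapsed = time.perf_counter() - start

print(product)                                       # the product itself
print(f"{elapsed:.3f}s for 1,000,000 multiplications")
```

A human doing the same million multiplications by hand, at even ten seconds each, would need months.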

More relevant to games, computers are also really good at search. Like in chess, a computer program can be pretty strong by minimax with some simple heuristic - by the computer's nature, it can hold more in memory than humans at any given time.
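The minimax-with-a-heuristic idea Kirby mentions can be sketched in a few lines. This is a toy, not any real engine's code: the "game" here is just a number that moves add to or subtract from, and `legal_moves`, `apply_move`, and `evaluate` are invented placeholders. The point is only the shape of the algorithm, a tree the computer can expand to depths and widths no human can hold in memory.

```python
def minimax(state, depth, maximizing):
    """Return the best achievable heuristic value from `state`."""
    if depth == 0:
        return evaluate(state)          # simple heuristic at the leaves
    moves = legal_moves(state)
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False) for m in moves)
    else:
        return min(minimax(apply_move(state, m), depth - 1, True) for m in moves)

# Toy game definitions (placeholders for a real game's rules):
def legal_moves(state):
    return [+1, +2, -1]                 # each side may add 1, add 2, or subtract 1

def apply_move(state, move):
    return state + move

def evaluate(state):
    return state                        # heuristic: bigger is better for Max

# Pick Max's best first move by searching three plies ahead.
best = max(legal_moves(0), key=lambda m: minimax(apply_move(0, m), 3, False))
print(best)
```

Chess programs refine this with alpha-beta pruning and far better heuristics, but the core loop is the same.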

With go, for a long time, things were different. Even though computers have various strengths compared to people, human intuition, go theory, etc., could overcome the natural advantages that computers have with search, etc.

But with deep learning, computers caught up on the intuition part. Before, even with natural advantages computers have with search, memory, and stuff like that, human theory was superior so we still had a chance.

Once computers surpassed even a fraction of the intuition ability that humans have, we no longer had a chance.

It's not because computer theory or intuition is necessarily better than that of humans - it just has to be good enough so computers can surpass us at what computers are good at.

For example, if a computer had exactly my evaluation ability and exactly my go strategy, and exactly my go intuition, it would win. Why? Because it can store more board states in memory than I can.

It's like competing with a calculator at multiplication. We could both have perfect theory and understanding, but the calculator will win simply because it has more efficient resources for computing something it understands.

I don't necessarily think that I can learn much math theory from a calculator. But it's for sure better at solving arithmetic. And good for checking my work I guess.
be immersed

Post by lightvector »

Yeah, some of what Kirby said matches my thoughts.

Having played with Go AI and neural net training over the last 8 months (as a software developer, I've been using Go in my spare time as an excuse to gain experience with the modern wave of machine learning developments of the last 5 years or so), I've spent a lot of time with the raw position-by-position behavior of neural nets (not augmented with any search), and more recently with what happens when you put an MCTS layer in front. I find it interesting to see what the neural nets are good and bad at.

Maybe this would be different for the very top available nets, which I haven't tested (e.g. ELF and the very latest LZ nets), but as far as I can tell, even moderately large neural nets I've trained on pro games or on Leela Zero data are noticeably worse than me (AGA 3d-ish) on average at snap-recognition of local tactics, tesuji, and life-and-death-related moves. A wide variety of local fights that I solve "on sight" the policy net is noticeably uncertain about. The value net's raw output is also not always good at recognizing the status of corner groups or capturing races, and sometimes requires the position to be played out a bit more before it becomes confident of the situation.

The raw policy net (still with no search) is better than me at having the instinct for the best-shape move in open-space situations, and in things like "do I keep extending or hane now?" or "do I jump or turn here?", where it comes down to your feeling for the possible results, and reading might only be of limited help if your intuition isn't already nudging you towards the answer. And the raw value net is massively better than me at judging large-scale influence-territory tradeoffs, strength of groups in running fights, and all the other major parts of evaluating the board in the early midgame.

I find it actually sort of interesting that my personal "policy net" (and probably that of any other mid-dan player) with no conscious reading can be more accurate in sharp tactics and life and death, or at least be competitive. Unsurprisingly, when you add in search though, the whole thing becomes incredibly strong, as the search solves precisely those tactical things that the neural net was worse at, and then the power of the neural net for open-space and overall global judgment absolutely shines.
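The division of labor described above — policy priors supplying shape instinct, search correcting the tactics — is, in broad strokes, what the PUCT selection rule in AlphaZero-style engines implements. The sketch below is a hypothetical toy (the move names, priors, and visit statistics are invented, and real engines differ in many details): each candidate's score combines its averaged value estimate with an exploration bonus proportional to the policy prior, so the net's intuition steers the search until accumulated visit statistics override it.

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U, where U is the prior-weighted
    exploration bonus: U = c * prior * sqrt(total_visits) / (1 + visits)."""
    total = sum(ch["visits"] for ch in children.values()) or 1

    def score(ch):
        q = ch["value_sum"] / ch["visits"] if ch["visits"] else 0.0
        u = c_puct * ch["prior"] * math.sqrt(total) / (1 + ch["visits"])
        return q + u

    return max(children, key=lambda move: score(children[move]))

# Invented example: the policy net likes the shoulder hit, but a barely
# explored move with a promising value estimate can still win the visit.
children = {
    "shoulder_hit": {"prior": 0.55, "visits": 10, "value_sum": 5.2},
    "wariuchi":     {"prior": 0.30, "visits": 4,  "value_sum": 2.1},
    "tenuki":       {"prior": 0.15, "visits": 1,  "value_sum": 0.7},
}
print(puct_select(children))
```

Repeating this selection from the root thousands of times is what lets the search "solve" exactly the tactical blind spots of the raw net.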

Maybe it would be different for a pro, but I've definitely learned some things about whole-board judgment from all this as well (and maybe have improved by about a stone as a result). The way I look at and value different kinds of shapes in the early midgame has changed quite a bit in a way that's hard to put into words, and I attribute it mostly to osmosis from seeing example after example over time of what these bots say about different positions.

For me at least, what terminology to use, or how to conceptualize and verbalize something, hasn't been relevant at all. Probably learning could be faster if we could wave a magic wand and have Go bots understand human language and how to "teach" in English or Japanese or whatever, but that's certainly not a requirement. Simply interacting with bots a lot has already let me improve my own intuition a little in the facet of the game where the bots seem to be the best, and interestingly that facet is *not* one of reading or calculation.