Bill Spight
 Post subject: Re: Why humans fail against AI
Post #21 Posted: Thu Aug 23, 2018 8:30 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
hyperpape wrote:
John Fairbairn wrote:
"Direction of play" cuts through that. However it also loses something. I suspect what has happened over the years is that direction of play in the west (and possibly in Japan) has been seen as a vector, i.e. it has magnitude as well as direction.
Why do you think it has come to mean that? That a direction isn't a vector is part of the meaning of the words, since a vector is direction plus magnitude. So I don't know that I'd think your choice of term was at fault.

I took a brief look, btw: and I found that Charles Matthews glossed Kajiwara as thinking of direction as a vector: https://senseis.xmp.net/?DirectionOfPlay%2FDiscussion. So perhaps this is a common misconception. But it's a weird one.


Well, "direction" of play should (IMO, in agreement with Charles) be a vector. And that's why Kajiwara was wrong (according to today's bots). Look at it this way. Influence falls off geometrically with distance, and as a result, adjacent corners are simply too far apart for Kajiwara's conclusions. More later on this point.

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

Bill Spight
 Post subject: Re: Why humans fail against AI
Post #22 Posted: Thu Aug 23, 2018 8:35 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
John Fairbairn wrote:
I repeat what O has said. This method of using size of moves as a measure has been incredibly useful for us. Even where AI prefers a different move it does not automatically mean that the human move is bad. The AI move might only be microscopically better. But it is precisely that smidgeon of improvement that we are trying to understand. It seems to me (and I think this is the thrust of what O is saying) that we may need to take several large steps back in order to move that inch forward.


Well, it seems (to me, anyway) that that smidgeon is well within the bots' margin of error. Which means that trying to understand that smidgeon is straining after gnats. Meanwhile, current bots are indicating fairly large errors, even by pro 9 dans. That's where we should focus now.
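To put a rough number on "margin of error": if a winrate estimate behaved like a binomial proportion over N playouts, its standard error would be sqrt(p(1-p)/N). That is only a lower bound on the true error (the network's evaluation bias comes on top, which is presumably why AlphaGo Teach still shows errors of 1% or more at huge playout counts), but it already shows how little a 0.3% gap means. The playout count below is an assumed figure for illustration.

Code:
from math import sqrt

def winrate_se(p, n):
    """Binomial standard error of a winrate p estimated from n playouts.
    A lower bound only: it ignores the network's evaluation bias."""
    return sqrt(p * (1 - p) / n)

# Two moves rated 52.4% and 52.1%, each from an assumed ~3,000 playouts:
se = winrate_se(0.52, 3000)   # ~0.9% for each estimate
gap_se = sqrt(2) * se         # standard error of the *difference*
print(f"SE of one estimate: {se:.2%}, SE of the gap: {gap_se:.2%}")
# The 0.3% gap is well under one standard error of the difference,
# i.e. statistically indistinguishable from zero.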

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: Why humans fail against AI
Post #23 Posted: Thu Aug 23, 2018 8:39 am 
Gosei

Posts: 1628
Liked others: 546
Was liked: 450
Rank: senior player
GD Posts: 1000
Elom wrote:
Now I cannot help but think of each stone as having reverse gravity, or negative gravity, like a white hole. Putting stones together increases their negative gravity (decreases their gravity). If you have groups on the board, they repel each other, so that new stones would rather be in open areas.

Here's a demonstration of a white hole at 7:00


(It is likely erroneous, but logical, to think of big points as deep, and of thick stones or groups as mountainous, in this fashion.)


Does your idea of gravity agree with present ideas of light and heavy? Putting stones together might make their combined gravity positive, e.g. a poorly shaped heavy group, or more negative, e.g. stones in a sabaki group.

 Post subject: Re: Why humans fail against AI
Post #24 Posted: Thu Aug 23, 2018 9:02 am 
Gosei

Posts: 1628
Liked others: 546
Was liked: 450
Rank: senior player
GD Posts: 1000
I recall that Takemiya said people should make the moves they feel like playing. He directed this comment at amateur, and thus implicitly weak, players. It seems that the zero-type AI players learn by trying things and seeing what happens. This works for them because they play many millions of games in a short time. Humans also learn by playing, without studying. Most of us know of players who learned the rules at some demonstration, began attending local club meetings, and reached SDK or even low dan levels without any study except playing their games. Unlike the AI that plays millions of games, these people play perhaps hundreds of games, or maybe a few thousand, but the learning process is the same, AI or human.

We more serious or studious humans stopped just playing and seeing what happens because we've incorporated what we've studied, and that narrows what we consider playing. Having a larger-scale field of view would benefit us. We also have "styles of play" that limit us. Flexibility is also needed, another characteristic the AI players have. I'd like to know what sort of principles there might be that the AIs follow. Perhaps something like efficiency of moves might be one.

explo
 Post subject: Re: Why humans fail against AI
Post #25 Posted: Thu Aug 23, 2018 9:56 am 
Dies with sente

Posts: 108
Location: France
Liked others: 14
Was liked: 18
Rank: FFG 1d
John Fairbairn wrote:
Let us now return to the go board and consider this position from O Meien.



I was very much taken aback to find that this very ordinary looking position has never appeared in pro play. I triple checked in amazement. Although O said nothing about why he chose it, I assume it was because he wanted a tabula rasa to make his point. Which was that a move around A is the duh move here. He was talking only about size of move, on the basis of the sort of thinking that goes "if I don't play there he will play there and that's HUGE for him." I'm sure we all recognise that approach. If we add direction of play to the mix (which he did not) and apply it in the usual way, the case for A becomes overwhelming. After all the direction of play for each Black corner group is down the right side and to let White scupper two directions at once must be suicidal, no?

O, trying to approach this from an AI perspective, suggests a different way of playing, as follows. He plays keima in the top left, allows the double extension for Black on the right, then does an AI shoulder hit at A.



He says the shoulder hit this early would not have previously occurred to pros because it allows Black B or C, both of which confirm Black's territory (and are, of course, ajikeshi). (But, trying to think like an AI, he goes on to postulate Black E now instead of B or C.)

Now what I get when I run LeelaZero is that the AI essentially agrees with his new way of thinking. The difference is that it chooses the keima at D (and not c14) as its best move, and even at this stage it rates the shoulder hit as second best. The wariuchi on the right is certainly considered but is microscopically lower (52.1% against 52.4% but these tiny edges are what we presumably need to focus on).

It might shock O but LZ barely considered his White keima, so I would infer he needs to think even more about some aspect of play in that corner. My guess is that the LZ keima protects White better from the angry airt of the directional power of Black's upper right shimari. O may also be shocked to learn that after this keima, LZ does not opt for the double extension on the right side. It prefers E, though in this case the difference between E and a move in the right centre is nanoscopic rather than microscopic.

LZ, after its White keima at D, looks at a clutch of moves in the right centre and also a clutch of moves (though rated lower) in the upper left (including another early shoulder hit), but E is a singleton in that area. Possibly it sees singleton moves as more urgent, or prefers the certainty of a single choice that maybe allows a deeper search (???).

Anyway, I think direction of play (among other things) merits some re-appraisal. But, if you allow me to share an impression I often get, I can't shake the idea that, as here, suggestions for new thinking are resisted on L19 in a knee-jerk way, and I find that strange. Even if you think about something and come to the same conclusion you had before, the very act of thinking has made you stronger. And that leads to another new idea: direction of thinking...


I ran ELF to see if it had a different opinion and it did. To attach the enclosure in the upper right corner is its first idea. The enclosure LZ suggested is barely considered (74 playouts when I stopped the analysis).


Attachments: elf3.png, elf4.png (ELF analysis screenshots)

This post by explo was liked by: Bill Spight
Bill Spight
 Post subject: Re: Why humans fail against AI
Post #26 Posted: Thu Aug 23, 2018 10:16 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
John Fairbairn wrote:
Let us now return to the go board and consider this position from O Meien.



I was very much taken aback to find that this very ordinary looking position has never appeared in pro play. I triple checked in amazement. Although O said nothing about why he chose it, I assume it was because he wanted a tabula rasa to make his point. Which was that a move around A is the duh move here. He was talking only about size of move, on the basis of the sort of thinking that goes "if I don't play there he will play there and that's HUGE for him." I'm sure we all recognise that approach. If we add direction of play to the mix (which he did not) and apply it in the usual way, the case for A becomes overwhelming. After all the direction of play for each Black corner group is down the right side and to let White scupper two directions at once must be suicidal, no?


Actually, direction of play (at least for me) is what makes the right side look big. Change the directions of the two enclosures and the right side becomes so-so. (IMO.)

Quote:
O, trying to approach this from an AI perspective, suggests a different way of playing, as follows. He plays keima in the top left, allows the double extension for Black on the right, then does an AI shoulder hit at A.



Pity he did not seem to try the position out on one or more of the top bots. As you point out below, F-17 looks better than C-14.

Quote:
He says the shoulder hit this early would not have previously occurred to pros


Go Seigen excepted, OC. :)

Quote:
because it allows Black B or C, both of which confirm Black's territory (and are, of course, ajikeshi). (But, trying to think like an AI, he goes on to postulate Black E now instead of B or C.)


Right. :)

Quote:
Now what I get when I run LeelaZero is that the AI essentially agrees with his new way of thinking. The difference is that it chooses the keima at D (and not c14) as its best move, and even at this stage it rates the shoulder hit as second best. The wariuchi on the right is certainly considered but is microscopically lower (52.1% against 52.4% but these tiny edges are what we presumably need to focus on).


No, they are not. 0.3% is well within Leela Zero's margin of error. Playing around with AlphaGo Teach makes me think that its margin of error is at least 1%, even with 10,000,000 simulations. :shock: With that small a difference the wariuchi looks fine. :D

Quote:
It might shock O but LZ barely considered his White keima, so I would infer he needs to think even more about some aspect of play in that corner. My guess is that the LZ keima protects White better from the angry airt of the directional power of Black's upper right shimari. O may also be shocked to learn that after this keima, LZ does not opt for the double extension on the right side. It prefers E, though in this case the difference between E and a move in the right centre is nanoscopic rather than microscopic.


Of course, yo! Margin of error. :)

Quote:
Anyway, I think direction of play (among other things) merits some re-appraisal.


Absolutely. :)

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.


Last edited by Bill Spight on Thu Aug 23, 2018 11:16 am, edited 1 time in total.
Bill Spight
 Post subject: Re: Why humans fail against AI
Post #27 Posted: Thu Aug 23, 2018 10:22 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
explo wrote:
I ran ELF to see if it had a different opinion and it did. To attach the enclosure in the upper right corner is its first idea. The enclosure LZ suggested is barely considered (74 playouts when I stopped the analysis).


Great idea! :)

Just a note about the long sequence of play. That is worth something qualitatively, but each ply increases the margin of error, so it needs to be taken with a large grain of salt. :)

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

explo
 Post subject: Re: Why humans fail against AI
Post #28 Posted: Thu Aug 23, 2018 11:48 am 
Dies with sente

Posts: 108
Location: France
Liked others: 14
Was liked: 18
Rank: FFG 1d
Bill Spight wrote:
explo wrote:
I ran ELF to see if it had a different opinion and it did. To attach the enclosure in the upper right corner is its first idea. The enclosure LZ suggested is barely considered (74 playouts when I stopped the analysis).


Great idea! :)

Just a note about the long sequence of play. That is worth something qualitatively, but each ply increases the margin of error, so it needs to be taken with a large grain of salt. :)


Agreed. I added it in case someone wondered what it had in mind. Actually, for a while, the main variation involved pushing a ladder...

John Fairbairn
 Post subject: Re: Why humans fail against AI
Post #29 Posted: Fri Aug 24, 2018 1:44 am 
Oza

Posts: 3698
Liked others: 20
Was liked: 4660
Hi Bill

Quote:
Well, "direction" of play should (IMO, in agreement with Charles) be a vector.


I'm not disputing that this can be a useful way of looking at things. The questions I am raising are whether this is what the Japanese ishi no hoko means (and I think not quite, though there is some overlap) and whether either variant of DOP is in some way a useful prism for looking at AI programs.

Quote:
Well, it seems (to me, anyway) that that smidgeon is well within the bots' margin of error.


I don't yet feel this point is well founded. I admit I sloppily put simple winrates, but I don't have the mathematical background to express these things properly, and it seems from other threads here that there is debate even among experts about what winrates even mean. But what also seems clear to me is that they alone are not a measure of a move's value, and for that reason their margin of error (on its own) is largely irrelevant.

LZ, via Lizzie, actually expresses a move's evaluation in a two-dimensional way. There is winrate and a figure that seems to mean something like number of rollouts. I have no idea how important each of these factors is relative to each other but LZ seems to think rollouts is very important because it will sometimes choose a move with a low winrate but high rollouts over one with a higher winrate and low rollouts.

But on top of that there seem to me to be other important factors, such as time, i.e. the stage in the game at which the evaluation is made. So it is really a multi-dimensional evaluation. Even a multi-dimensional value can have a margin of error of course, but from the practical point of view of a bot choosing a type of move consistently, I'm not clear whether margin of error then matters so much.

There is a related point you make: that margins of error multiply as you go deeper into a search. I can "see" that but I can't relate that to the other thing I "see", which is that chess programs generally perform better the deeper they search. I have suspected from the very beginning, and still believe, that the bots out-perform humans in go mainly because they search better and so make fewer crunching mistakes (and for that reason - i.e. it's the last mistake that decides the game - all the "research" into josekis and fusekis is somewhat misguided). AlphaGo Zero seems to have upset the apple cart, in chess as well as go, but until shown otherwise I will still believe that it is ultimately just searching better (not just deeper but perhaps because it makes a better selection of candidate moves and prunes better).

So if a new version of AlphaGo came along with the same policy network but had the hardware to do an even deeper search, I'd initially expect that to be even stronger - notwithstanding the multiplication of the margin of errors that it would surely still be making. Is there some way that the margin of error in estimating the margin of error cancels out the original margin of error? In a vague way that seems to be why Monte Carlo search works.
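As a footnote to the rollouts-versus-winrate question: the zero-style engines combine the two numbers during the search with a selection rule usually called PUCT. The sketch below is the textbook AlphaGo Zero form with made-up numbers and an arbitrary c_puct constant, not Leela Zero's exact code.

Code:
from dataclasses import dataclass
from math import sqrt

@dataclass
class Child:
    prior: float      # policy network's probability for this move
    visits: int       # rollouts/visits so far
    value_sum: float  # sum of evaluations backed up through this move

def puct_score(child, parent_visits, c_puct=1.5):
    """Exploitation (mean value q) plus an exploration bonus u that shrinks
    as the child is visited. c_puct = 1.5 is an arbitrary constant."""
    q = child.value_sum / child.visits if child.visits else 0.0
    u = c_puct * child.prior * sqrt(parent_visits) / (1 + child.visits)
    return q + u

a = Child(prior=0.4, visits=120, value_sum=63.0)  # q = 0.525, well explored
b = Child(prior=0.1, visits=10, value_sum=5.6)    # q = 0.560, barely explored
for name, ch in (("a", a), ("b", b)):
    print(name, round(puct_score(ch, parent_visits=130), 3))

# b's exploration bonus wins it the next visit; it keeps collecting visits
# only while its mean value holds up. That is why the final visit count
# summarizes how good a move looked throughout the search.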

 Post subject: Re: Why humans fail against AI
Post #30 Posted: Fri Aug 24, 2018 2:14 am 
Gosei

Posts: 1733
Location: Earth
Liked others: 621
Was liked: 310
If the best ending sequence is short enough (practically deterministic, aka known), Monte Carlo will find it.
If the best ending sequence is too long (practically stochastic, aka unknown) (think Igo Hatsuyoron #120 or mildly shorter ;-)) Monte Carlo will find a suboptimal play that is sometimes difficult to refute.

I am not sure computing power will develop in a way that makes all go problems "short enough".

dfan
 Post subject: Re: Why humans fail against AI
Post #31 Posted: Fri Aug 24, 2018 4:20 am 
Gosei

Posts: 1592
Liked others: 888
Was liked: 531
Rank: AGA 2k Fox 3d
GD Posts: 61
KGS: dfan
John Fairbairn wrote:
LZ, via Lizzie, actually expresses a move's evaluation in a two-dimensional way. There is winrate and a figure that seems to mean something like number of rollouts. I have no idea how important each of these factors is relative to each other but LZ seems to think rollouts is very important because it will sometimes choose a move with a low winrate but high rollouts over one with a higher winrate and low rollouts.

In MCTS (and in the variants used by AlphaGo Zero etc.) the move to play is generally chosen by highest number of visits (technically, in AlphaGo Zero etc. these are not rollouts, since positions aren't played out to the end of the game). Since the search is always visiting the most promising move, this usually matches the winrate ranking pretty well, and it avoids situations where some other move suddenly looks good at the last second but there isn't time to verify it to a necessary degree of confidence. On the other hand, you can run into the converse issue where the move you've been spending all your time on suddenly gets refuted by something you see further down, and there isn't time for an alternative move to bubble to the top. People on the Leela Zero team have been thinking about this.
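A minimal sketch of that final selection rule, with hypothetical move names and numbers:

Code:
# Final move choice in MCTS-style engines: take the most-visited child,
# not the one with the best raw winrate. All numbers are hypothetical.

candidates = {
    "move A": {"visits": 5000, "winrate": 0.524},
    "move B": {"visits": 3200, "winrate": 0.521},
    "move C": {"visits": 14, "winrate": 0.550},  # looked good, barely checked
}

chosen = max(candidates, key=lambda m: candidates[m]["visits"])
print("chosen:", chosen)  # move A -- the 55% outlier was never verified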

Quote:
There is a related point you make: that margins of error multiply as you go deeper into a search. I can "see" that but I can't relate that to the other thing I "see", which is that chess programs generally perform better the deeper they search.

For both go engines and chess engines, 1) they perform better if you let them search deeper, and 2) the "principal variation" (PV) returned by the engine makes less and less sense as it goes on (because the engine is spending less and less time there). Chess players know not to pay much attention to the later moves of the PV. I think one issue in go is that because the variation is displayed graphically instead of in notation, it's harder to avoid paying attention to its long tail. I think Lizzie would benefit from showing the variation only as far as it is being visited some reasonable number of times.
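That truncation is easy to sketch: walk down the most-visited children and cut the displayed variation off once the visit count falls below a threshold. The tree structure and threshold here are illustrative assumptions, not Lizzie's actual data format.

Code:
def truncated_pv(node, min_visits=100):
    """Follow most-visited children, stopping where support gets thin."""
    pv = []
    while node.get("children"):
        best = max(node["children"], key=lambda c: c["visits"])
        if best["visits"] < min_visits:
            break  # tail of the PV is barely explored; don't display it
        pv.append(best["move"])
        node = best
    return pv

tree = {"children": [
    {"move": "Q16", "visits": 8000, "children": [
        {"move": "D4", "visits": 6500, "children": [
            {"move": "C6", "visits": 40, "children": []},  # thin tail
        ]},
    ]},
]}
print(truncated_pv(tree))  # ['Q16', 'D4'] -- the 40-visit tail is dropped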

Quote:
I have suspected from the very beginning, and still believe, that the bots out-perform humans in go mainly because they search better and so make fewer crunching mistakes (and for that reason - i.e. it's the last mistake that decides the game - all the "research" into josekis and fusekis is somewhat misguided). AlphaGo Zero seems to have upset the apple cart, in chess as well as go, but until shown otherwise I will still believe that it is ultimately just searching better (not just deeper but perhaps because it makes a better selection of candidate moves and prunes better).

What is new about AlphaGo Zero is that its instinct (one visit and one pass through the neural net) is constantly being trained to match its calculation (the results from doing a more extended tree search). So it is learning to absorb its search into its intuition, which in turn enhances its search.
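That training target is compact enough to write down. This is the AlphaGo Zero loss in miniature (cross-entropy of the network's policy against the search's visit distribution, plus squared error of the value against the game outcome), omitting the paper's L2 regularization term; all numbers are made up.

Code:
from math import log

def az_loss(policy, search_visits, value, outcome):
    """AlphaGo Zero-style training loss (minus the L2 term): push the
    policy toward the search's visit distribution, and the value toward
    the actual game result. Plain lists/floats, for illustration only."""
    total = sum(search_visits)
    pi = [v / total for v in search_visits]  # search's move distribution
    ce = -sum(p * log(q) for p, q in zip(pi, policy) if p > 0)
    return ce + (value - outcome) ** 2

policy = [0.6, 0.3, 0.1]        # network's prior over three moves
search_visits = [700, 250, 50]  # where the search actually went
print(az_loss(policy, search_visits, value=0.2, outcome=1.0))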

Quote:
So if a new version of AlphaGo came along with the same policy network but had the hardware to do an even deeper search, I'd initially expect that to be even stronger - notwithstanding the multiplication of the margin of errors that it would surely still be making.

It certainly would be, but see below about "multiplication of the margin of errors".

Quote:
Is there some way the margin of error in talking about the margin of error cancels out the original margin of error? In a vague way that seems to be why Monte Carlo search works.

I think it is dangerous to mix 1) the error in the winrate at the root node of the search, which certainly goes down as more search is done and 2) the error in the PV, which will go up the farther into the sequence we go, because less and less energy is being expended there. It is true that very accurate moves can be made at the root of the tree despite not spending lots of time at the leaves, but this has been true since the very first chess computer programs. Human players do the same.

It's not that the leaf errors get bigger as you search more (one visit/network evaluation is one visit, whether or not it's one ply into the future or twenty), it's that the root error gets smaller.


This post by dfan was liked by 6 people: bernds, Bill Spight, Elom, lightvector, Waylon, wolfking
Uberdude
 Post subject: Re: Why humans fail against AI
Post #32 Posted: Fri Aug 24, 2018 4:36 am 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
Also just a Lizzie usage point for John: if some move like a wedge shows as 51% and 14 visits whilst a move LZ likes shows as 55% and 5k visits, then that's enough to say "LZ doesn't much like the wedge and wouldn't play it (at this amount of search)". But saying it thinks the wedge is 4% worse isn't so reliable, as having only 14 visits means that estimate carries a bigger error than the 5k-visit move it liked.

So play the wedge move and watch what happens to the win% bar as you let LZ analyse for 5k visits on this move. If it stays around 51%, then fine, the earlier estimate based on 14 visits turned out to be pretty good. But it could change a fair bit, e.g. become 49% (it's even worse than it thought), become 53% (not as bad as it thought, but still a fair bit worse than the 55% move it liked), or in rare cases 57% (better than the move it liked, so a blind spot).

Note this talk of errors is in comparison to what LZ would think given infinite time, which is of course a different question to what the truth is for perfect play, or for a stronger program, or for top pros playing from this position.
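Using the binomial approximation as a rough lower bound on the error (an assumption, as elsewhere in this thread), the asymmetry between those two visit counts is stark:

Code:
from math import sqrt

def ci95(p, n):
    """Half-width of an approximate 95% interval for a winrate on n visits."""
    return 1.96 * sqrt(p * (1 - p) / n)

for p, n in ((0.51, 14), (0.55, 5000)):
    print(f"{p:.0%} on {n:5d} visits: +/- {ci95(p, n):.1%}")
# 51% on    14 visits: +/- 26.2%
# 55% on  5000 visits: +/- 1.4%
# The 14-visit number barely constrains the wedge's value at all, which is
# why re-searching the wedge directly is the fair test.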


This post by Uberdude was liked by 4 people: Bill Spight, dfan, lightvector, wolfking
Bill Spight
 Post subject: Re: Why humans fail against AI
Post #33 Posted: Fri Aug 24, 2018 5:07 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
Hi John. :)

Moi wrote:
Well, it seems (to me, anyway) that that smidgeon is well within the bots' margin of error.


John Fairbairn wrote:
I don't yet feel this point is well founded.


I have been sitting on some data regarding Leela Zero's margin of error for quite a while. I don't have enough data to do any more than give some first impressions, but it is still worth a note or two.

Quote:
I admit I sloppily put simple winrates, but I don't have the mathematical background to express these things properly, and it seems from other threads here that there is debate even among experts about what winrates even mean.


I have been skeptical about the meaning of win rates for quite a while. However, I was wrong as far as the zero bots are concerned. Their win rate estimates are what they sound like they are. From a given position with a certain player to play they are estimates of the winning percentages of the bot versus itself. :)

Edit: There are a number of people here who are experts. I see that dfan has responded while I was composing this. I defer to him, if there is any disagreement in what we say. :)

Quote:
But what also seems clear to me is that they alone are not a measure of a move's value, and for that reason their margin of error (on its own) is largely irrelevant.


That sounds like some things moha has written, but I don't think that is exactly what he means. IIUC, he means that the bots do not need to use margins of error to improve their estimates. Instead, they use the number of playouts.

Quote:
LZ, via Lizzie, actually expresses a move's evaluation in a two-dimensional way. There is winrate and a figure that seems to mean something like number of rollouts.


The number of rollouts indicates how reliable the program considers the winrate estimate: the more rollouts, the better the estimate. The margin of error is also an indication of how good an estimate is; the smaller the margin of error, the better.

Quote:
I have no idea how important each of these factors is relative to each other but LZ seems to think rollouts is very important because it will sometimes choose a move with a low winrate but high rollouts over one with a higher winrate and low rollouts.


The program actually chooses its plays based upon the number of rollouts alone, despite the fact that the number of rollouts is not itself an estimate of how often the bot will win. This works because, when the program searches the game tree, it adds rollouts to the most promising plays. So the number of rollouts is an indirect indication of how good a play is.

Suppose that, after the search of the game tree, one play has a winrate estimate of 63% with 10,000 rollouts, while another play has a winrate estimate of 64% with only 100 rollouts. Now, either one of these plays could be best. However, the margin of error of the play with only 100 rollouts is greater than the margin of error of the play with 10,000 rollouts. We can say that with confidence without knowing the actual margins of error, because the number of rollouts is also an indication of how good the estimates are.

Why not pick the play with the highest winrate estimate, even though it has the greater margin of error? Because of how the program works. The play with only 100 rollouts has that few because, during the search, the program did not often think it might be the best play. It has been shown that, with this way of searching, picking the play with the most rollouts gives the best chance of winning games.
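That last claim can be seen in a toy two-armed bandit. This is a generic UCB1 sketch with made-up win probabilities, not Leela Zero's actual algorithm, but it shows why "most rollouts" tracks "best play".

Code:
import random
from math import log, sqrt

random.seed(0)
TRUE_P = [0.55, 0.45]        # hidden "true winrates"; illustrative only
wins = [0, 0]
pulls = [0, 0]

def ucb1(i, t):
    """UCB1 score: empirical mean plus an exploration bonus."""
    if pulls[i] == 0:
        return float("inf")  # make sure every arm is tried once
    return wins[i] / pulls[i] + sqrt(2 * log(t) / pulls[i])

for t in range(1, 10001):    # 10,000 simulated playouts
    i = max(range(2), key=lambda a: ucb1(a, t))
    pulls[i] += 1
    wins[i] += random.random() < TRUE_P[i]

for i in range(2):
    print(f"arm {i}: mean {wins[i] / pulls[i]:.3f} over {pulls[i]} pulls")
# The better arm ends up with the large majority of the pulls, so picking
# the most-pulled arm identifies it even if a lucky streak briefly gives
# the other arm a higher empirical mean.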

Quote:
But on top of that there seem to me to be other important factors, such as time, i.e. the stage in the game at which the evaluation is made. So it is really a multi-dimensional evaluation. Even a multi-dimensional value can have a margin of error of course, but from the practical point of view of a bot choosing a type of move consistently, I'm not clear whether margin of error then matters so much.


Indeed, a proper evaluation is multidimensional. :) But to pick a play we have to reduce the choice to one dimension. The programs do that by reducing the evaluation to the number of rollouts. However, that number carries little meaning. We humans also want to know who is ahead, and by how much. The winrate estimate gives that kind of information. It would also be useful to us humans to know the margin of error of that estimate, but the program does not calculate that. If the play with a winrate estimate of 56% has a margin of error of 2%, then we cannot say that it is better than another play with a winrate estimate of 55%. We may still pick the higher rated play, but we cannot say that the other play is a mistake.

Quote:
There is a related point you make: that margins of error multiply as you go deeper into a search.


Not the deeper you search, but the longer the sequence. (OC, to get the longer sequence you have to do a deeper search. ;))

Quote:
I can "see" that but I can't relate that to the other thing I "see", which is that chess programs generally perform better the deeper they search.


They perform better the more they search (given a good search strategy). The depth of search is an indicator of the size of the search: unlike humans, chess programs do not do a depth-first search, so reaching greater depth entails a much broader search as well. In general, a search that explored only a few plays to great depth would do worse than one that explored more plays to a lesser depth.

Now, to derive a long sequence of "best" play, a great number of other sequences have been searched, which gives us confidence in the first play chosen. With each subsequent play in the sequence, fewer rival sequences have been searched, so we have less confidence in the second play, even less in the third, and so on. Our confidence in the last play of a long sequence is pretty low, as is our confidence in the sequence as a whole.

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.


This post by Bill Spight was liked by 3 people: Elom, Gomoto, wolfking
Knotwilg
 Post subject: Re: Why humans fail against AI
Post #34 Posted: Tue Aug 28, 2018 2:00 am 
Oza

Posts: 2420
Location: Ghent, Belgium
Liked others: 359
Was liked: 1020
Rank: KGS 2d OGS 1d Fox 4d
KGS: Artevelde
OGS: Knotwilg
Online playing schedule: UTC 18:00 - 22:00
14 years ago, Charles Matthews wrote one of the Sensei's Library articles that still stands as it was: "The Articulation Problem" https://senseis.xmp.net/?ArticulationProblem

(giving you some time to read it)

Thinking about it, I am also reminded of my university days, when I had a "Higher Algebra" professor teaching us about Lie groups. He was not a very good teacher and we had to read a lot of books from the library to get the theory. Incidentally, that poor teaching method, forcing self-learning, proved more effective than some of the well-conceived syllabi and teaching by professors better at articulating their craft. At the time I held the conviction that someone who could not articulate their knowledge well probably didn't understand it very well either.

Today I no longer think that way. I now believe that beyond a certain level of expertise, it becomes very difficult to convey your understanding in plain language. Math itself needed a whole new set of languages to become intelligible and teachable (and this symbolic language still represents a major hurdle for pupils). If you read the original text by Galilei, you can only feel for the poor sod who didn't have algebra at his disposal. Newton went as far as inventing calculus to express the laws of physics. Still, in many areas where deep understanding can be reached, there is no such vehicle, and plain human language is used to transfer the knowledge.

In Go, as Charles has written, we have for a long time borrowed from Japanese. Some of us, like Charles and me, defended the attempt to translate, or rather rephrase, Go concepts in English for the Western world, while others like John F. (if that's correct, John) felt that the losses in translation and the poor attempts at reinvention were not as effective, and that we'd better stick with what a whole culture had already built up in its own language. Then Korean domination came about and we started mixing Japanese terms and English terms with Korean terms like haengma.

Even so, Korean and Japanese professionals have had a hard time transferring their expertise to mundane players through language alone. They often resorted to what we dismissively called "teaching by examples": this is thick (atsui), this is not thick. Right now I'm thinking they were not quite "teaching by examples" but rather using a language that was more effective at transferring their understanding, namely the stones themselves.

Today the strongest players are neither Japanese nor Korean. They don't speak any human language. I don't know enough about the internals of AI to guess how their understanding is harnessed or conceptualized. As Robert Jasiek has pointed out, a striking amount of their decision making confirms human understanding. But still, they don't express their heuristics as "thickness" or "territory". As John F. has pointed out, they seem to evaluate positions in a similar way as professionals do, by counting "inefficiencies".

The question I'm really asking here, continuing on from Charles' great article, is: should we humans keep trying to convey the knowledge we draw from professionals (now AIs) in the carrier that has been so successful for mankind, language, or should we express thoughts in a more effective way, one that is closer to professional/AI thinking? And what would such a "language" look like?


This post by Knotwilg was liked by: Elom
Uberdude
 Post subject: Re: Why humans fail against AI
Post #35 Posted: Tue Aug 28, 2018 2:41 am 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
Some interesting points, Knotwilg, and the shortcomings of language at conveying the concepts in our human brains apply to many things. When we've had discussions here such as "is this group thick/atsumi/whatever", my view is that it's not so important to try to condense all the facets of a position into one word, be it Japanese or English. Saying "this group is locally thick (a strong connected wall) but isn't actually alive yet, so you need to be careful in the future" is fine. Sure, if there were a single word that accurately conveyed all that information, that would be grand and could make for denser information transfer, but I don't think the lack of it is what holds back increasing strength.

But as Marcel responded to this bit, and it also stuck out to me, I think it needs correction/clarification:
Knotwilg wrote:
As John F. has pointed out, they seem to evaluate positions in a similar way as professionals do, by counting "inefficiencies".


I don't think John was claiming that that is how AIs work based on any technical insight (none of us really knows; it's still mostly a magical black box, though maybe some AI researchers at DeepMind or elsewhere have better ideas, since interpretability of neural networks is a hot topic). It was more that he was impressed with the speed and ease with which an AI said "no, this is not even, one player is better" and gave a clear numerical score in a position that even strong humans would easily gloss over. (It impresses me too, so maybe I was projecting.)

John Fairbairn
 Post subject: Re: Why humans fail against AI
Post #36 Posted: Tue Aug 28, 2018 3:18 am 
Oza

Posts: 3698
Liked others: 20
Was liked: 4660
Quote:
The question I'm really asking here, continuing on from Charles' great article, is: should we humans keep trying to convey the knowledge we draw from professionals (now AIs) in the carrier that has been so successful for mankind, language, or should we express thoughts in a more effective way, one that is closer to professional/AI thinking? And what would such a "language" look like?


You mention a gradual but major change in your thinking. It's funny how these creep up on you. One thing that changed my view of the go "articulation" problem was watching baseball. Not coming from a baseballing nation, I was an outsider - rather like a western go player dealing with Japanese pro games. That helped to focus my thinking.

What I realised was that we talk in go about pro players and amateur players as if they form a seamless continuum, and as if amateurs could become pros if only they didn't have to worry about the mortgage or mowing the lawn, but actually the real distinction should be between pros and fans.

Baseball has a massive fan base. It has fans who can quote facts and statistics at you all night in a bar. Through Bill James and others it has a massive statistics industry which even backroom coaches and administrators now rely on. This statistical world has spawned a large new vocabulary, mainly based on acronyms, which fans and their "water carriers" (sports commentators and baseball journalists) can (and do) use among themselves.

But the actual pro players just go out and hurl, smack or catch the ball. They don't work out percentages or use fancy vocabulary. They often don't even understand the acronyms. They rely on intuition. They don't really care about anything the fans care about (though they do cynically understand the benefits of tossing the odd crumb of insider "knowledge" out). I can remember how shocked I was when I discovered how few players actually knew anything about Jackie Robinson, despite the annual big-time ceremonies. And the bewildered look of "Who?" when (say) Ty Cobb was mentioned. No fan would be found lacking there.

But this dichotomy is characteristic of most (?all) professions and I think we can make a step forward by acknowledging it exists in go, too.

It has certain consequences. For example, complaints about certain pros being unable to teach become wide of the mark, and - in the main - so do efforts to get them to teach. It also means that for amateurs (i.e. fans) the vocabulary and concepts are more to do with entertainment than enlightenment, and so heuristics, classification and simplifications are a good thing for them. Rules disputes are good entertainment, cheating is fodder for entertainment. Statistics always go down well. Books, videos, lectures are grist to the mill. All of this will be best provided by non-pros.

Pros need none of this. They just need mental discipline, an awl to prick the thigh when tiredness looms, and above all time to soak up thousands of positions.

We can therefore expect the pro response and the fan response to whatever AI has to teach us to be quite different. For fans, the new language will be about more entertainment, with even more simplifications and heuristics, and more statistics. Pros won't even notice what we are up to.

Separately: yes uberdude is right to stress that I don't know how AIs work. I am just expressing a view at the output end of the black box. I see certain similarities there between bots and pros, but I don't think that what goes on inside the black box in each case is likely to be the same. I talk above about pros and fans. Probably better to talk about bots and pros and fans. In a vague way we can see the trichotomy even in baseball. There used to be pros and fans. Then Bill James and computer power came along to add statistics, a sort of metalanguage for the game.


This post by John Fairbairn was liked by 4 people: Elom, ez4u, Knotwilg, yakcyll
Bill Spight
 Post subject: Re: Why humans fail against AI
Post #37 Posted: Tue Aug 28, 2018 7:13 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
To me, concepts matter in go, at least to humans. Language matters because we use language to represent concepts and to communicate about them. (I think that go programs have concepts, too, but they aren't talking. ;)) Among the strategy games I know, go seems to me to be the most literate. I think this is a big help for those of us who learn the game as adults.

When I was shodan I thought that anyone could become an amateur dan player by learning around 50 concepts. However, some of those concepts were quite difficult to learn. What is the difference between thick and heavy? Between light and thin? One's understanding of such terms is a matter of degree. As a shodan I felt that, despite my imperfect understanding of those terms, I could answer those questions pretty well. That understanding helped me to play better, by avoiding heavy plays and thin plays.

I had an even better understanding of sente and gote, while still imperfect. Those concepts had helped my game, even when my understanding of them was lousy. As a DDK I strove to take and keep sente (in the sense of initiative) and, if invaded, to make sure that the invading group did not live in sente. And, OC, I tried not to die in gote myself. ;)

As for teaching by example, I have no sense that that is inappropriate. To wax philosophical, one way of defining a concept is as a list of all of its examples. :) Without examples, the words have no meaning.

Pros find the language useful, as well. For, ahem, example, Sakata mentions discussing in post mortems whether certain plays are kikashi or not.

Early in my go career I ran across a passage in English that talked about light and heavy play. The author said something like, you can't understand light play from the English meaning of light, that the Japanese term conveyed something akin to a rapier strike. Even then, I recognized that as BS. Not that some people would not find the metaphor helpful, but without examples on the go board, it was pretty useless.

Over time I developed a thick style of play, without realizing it. I was trying to emulate Go Seigen. ;) I feel like my understanding of light, heavy, thick, and thin, has grown over time, and is pretty good. I even have come to regard lightness as having to do with attitude, something that would not help someone who had not already amassed a number of examples of light play. But I have to say, looking at AlphaGo self play games, I sensed a level of lightness the likes of which I had never seen before, something that I would find difficult to emulate. I have to admit that I have a heavy style of play. :sad:

Dwyrin says that if you consult the bots you will not learn about direction of play. He may well be right. 25 years from now we may find that concept of minor significance. If you float on the clouds above the board, what difference does direction make? You can go this way, you can go that way, as the wind blows.

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

Uberdude
 Post subject: Re: Why humans fail against AI
Post #38 Posted: Tue Aug 28, 2018 11:12 pm 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
Bill Spight wrote:
But I have to say, looking at AlphaGo self play games, I sensed a level of lightness the likes of which I had never seen before, something that I would find difficult to emulate. I have to admit that I have a heavy style of play. :sad:

Don't be sad Bill. Let us say your play is consistent, while AlphaGo's is schizophrenic ;-)

P.S.
Bill Spight wrote:
Dwyrin says that if you consult the bots you will not learn about direction of play. He may well be right.

Remember that most of dwyrin's audience is kyu players, and lots of DDKs. I think some of his criticisms of AI are valid for weaker players, but less so for stronger ones. One easy thing I have learnt and can apply fairly successfully to my games is that as White with a double 4-4 opening, when Black approaches on move 5 aiming for a Kobayashi or mini-Chinese opening, I will now low approach his 3-4 instead of answering with the patient knight's move. (Not a new move from the bots, but their generally dismal view of some other choices humans also played is notable, and the ability to easily try out different responses from Black and see good ways for White to continue is new.)

sorin
 Post subject: Re: Why humans fail against AI
Post #39 Posted: Tue Aug 28, 2018 11:29 pm 
Lives in gote

Posts: 388
Liked others: 417
Was liked: 198
Knotwilg wrote:
Today I no longer think that way. I now believe that beyond a certain level of expertise, it becomes very difficult to convey your understanding in plain language.


I think this is very interesting, and it is not purely a limitation of the language. It reminds me of the "Dreyfus model" according to which, as one becomes a master in any domain, it is harder and harder to explain how they are doing what they are very good at: http://361points.com/articles/novice-to-expert/

Knotwilg wrote:
The question I'm really asking here, continuing on from Charles' great article, is: should we humans keep trying to convey the knowledge we draw from professionals (now AIs) in the carrier that has been so successful for mankind, language, or should we express thoughts in a more effective way, one that is closer to professional/AI thinking? And what would such a "language" look like?


There is already such a very expressive language, both for professionals and AIs: it is a deep and thick tree of variations. When professionals analyze a game, they don't just talk about it in abstract high-level ideas, but they lay down tons of very long variations. The public commentaries that one usually sees, which are meant for amateurs, are really just the tip of the iceberg compared to a pro-to-pro analysis of a game.

_________________
Sorin - 361points.com


This post by sorin was liked by 2 people: Bill Spight, Elom
Knotwilg
 Post subject: Re: Why humans fail against AI
Post #40 Posted: Wed Aug 29, 2018 2:01 am 
Oza

Posts: 2420
Location: Ghent, Belgium
Liked others: 359
Was liked: 1020
Rank: KGS 2d OGS 1d Fox 4d
KGS: Artevelde
OGS: Knotwilg
Online playing schedule: UTC 18:00 - 22:00
sorin wrote:
Knotwilg wrote:
Today I no longer think that way. I now believe that beyond a certain level of expertise, it becomes very difficult to convey your understanding in plain language.


I think this is very interesting, and it is not purely a limitation of the language. It reminds me of the "Dreyfus model" according to which, as one becomes a master in any domain, it is harder and harder to explain how they are doing what they are very good at: http://361points.com/articles/novice-to-expert/

Knotwilg wrote:
The question I'm really asking here, continuing on from Charles' great article, is: should we humans keep trying to convey the knowledge we draw from professionals (now AIs) in the carrier that has been so successful for mankind, language, or should we express thoughts in a more effective way, one that is closer to professional/AI thinking? And what would such a "language" look like?


There is already such a very expressive language, both for professionals and AIs: it is a deep and thick tree of variations. When professionals analyze a game, they don't just talk about it in abstract high-level ideas, but they lay down tons of very long variations. The public commentaries that one usually sees, which are meant for amateurs, are really just the tip of the iceberg compared to a pro-to-pro analysis of a game.


Nice article, Sorin. And on the 2nd paragraph, you are right: this is how pros and AIs "talk".

At the end of your article you invite us to think about our own domain. Mine has become "business organization", and indeed, a couple of weeks ago I had a very profound thought, one that went through my mind extremely fast and in which I "saw" the truth about a particular change in a peer organization, and I wondered how I would be able to convey that thought to the decision maker in that organization. Already while I was articulating the thought, it was degrading. The length of my argument, the words I was using, the unconvincing scaffolding of my experience ... all of it was deteriorating the thought itself. So much so that I started doubting the thought, whether it was real or "well thought through".
