A little experiment with 400 games

For discussing go computing, software announcements, etc.
Vargo
Lives in gote
Posts: 337
Joined: Sat Aug 17, 2013 5:28 am
GD Posts: 0
Has thanked: 22 times
Been thanked: 97 times

A little experiment with 400 games

Post by Vargo »

I was very surprised to see that two 400-game matches with identical parameters could differ by 6% in win rate (the win rates were 33% and 39%).

To check that the results of these two 400-game matches were within reasonable probabilistic limits, I set up a 100,000-throw match of "heads or tails". The result comes in seconds, so this "heads or tails" experiment can be run several times.

Bottom line:

For a 100,000-throw game,
After 400 throws, the win rate is anywhere between ~44% and ~56%.
After 1,000 throws, the win rate is usually between ~46% and ~54%.
After 10,000 throws, it's within 1-2% of 50%.
And at 100,000 throws, it's usually very close to 50%.
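A run like the one described above can be reproduced with a short simulation. This is just a sketch (the checkpoints and the fixed seed are my choices, not the original setup):

```python
import random

# Simulate a fair "heads or tails" match of 100,000 throws and report
# the running heads rate at the checkpoints discussed above.
random.seed(1)  # fixed seed so the run is reproducible

heads = 0
checkpoints = {400, 1000, 10000, 100000}
for n in range(1, 100001):
    heads += random.random() < 0.5
    if n in checkpoints:
        print(f"after {n:6d} throws: {heads / n:.1%} heads")
```

Re-running with different seeds shows the same pattern: the rate at 400 throws swings widely, while at 100,000 throws it hugs 50%.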


It's really a surprise to me that 400 games give so little information!

Below are two graphs; in each, the four pictures describe the same game after 400, 1,000, 10,000, and 100,000 throws.
[Attachment: 4gr2.gif]
[Attachment: 4gr3.gif]
PS. I'm running a third (and last) 400 game match between #196 and 92297ff ;-)
dfan
Gosei
Posts: 1599
Joined: Wed Apr 21, 2010 8:49 am
Rank: AGA 2k Fox 3d
GD Posts: 61
KGS: dfan
Has thanked: 891 times
Been thanked: 534 times
Contact:

Re: A little experiment with 400 games

Post by dfan »

The variance of a binomial distribution (summing up the results of n trials where each one has a probability of success of p) is np(1-p), and the standard deviation is the square root of that. 68% of results lie within one standard deviation of the mean and 95% of results lie within two.

For this case, that means that if we assume that the engines are exactly equal in strength, the standard deviation of a 400-game experiment is sqrt(400 * .5 * .5) = 10. So we expect 68% of our experiments to end with a result in the range of 200 ± 10 wins for the first engine (47.5% to 52.5% win rate) and 95% to end with 200 ± 20 wins (45% to 55%).
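The calculation above can be checked in a few lines (a sketch; the variable names are mine):

```python
import math

# Standard deviation of wins in an n-game match between engines of
# exactly equal strength (p = 0.5), per the binomial formula np(1-p).
n, p = 400, 0.5
sd = math.sqrt(n * p * (1 - p))  # 10.0 wins

mean = n * p  # 200 wins
lo68, hi68 = (mean - sd) / n, (mean + sd) / n          # 47.5% .. 52.5%
lo95, hi95 = (mean - 2 * sd) / n, (mean + 2 * sd) / n  # 45.0% .. 55.0%
print(f"sd = {sd} wins; 68% range {lo68:.1%}-{hi68:.1%}; "
      f"95% range {lo95:.1%}-{hi95:.1%}")
```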
lightvector
Lives in sente
Posts: 759
Joined: Sat Jun 19, 2010 10:11 pm
Rank: maybe 2d
GD Posts: 0
Has thanked: 114 times
Been thanked: 916 times

Re: A little experiment with 400 games

Post by lightvector »

Yep, the fact that you need so many games to reliably test strength differences (unless they're extreme) becomes painfully apparent if you're a bot developer.
Vargo
Lives in gote
Posts: 337
Joined: Sat Aug 17, 2013 5:28 am
GD Posts: 0
Has thanked: 22 times
Been thanked: 97 times

Re: A little experiment with 400 games

Post by Vargo »

dfan wrote:So we expect 68% of our experiments to end with a result in the range of 200 ± 10 wins for the first engine (47.5% to 52.5% win rate)
You're right, and it also means that when you run 400-game matches between evenly matched contestants, you'll get win rates that are off by 3% or more (...46%, 47% or 53%, 54%...) in roughly 30% of the matches!
That's a lot...
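That "~30%" figure can be sanity-checked with the normal approximation: landing outside 47.5%-52.5% is the chance of being more than one standard deviation from the mean, which is about 32%. A minimal check using only the standard library:

```python
import math

def normal_cdf(z):
    # CDF of the standard normal distribution via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Probability of a result more than 1 sd away from the mean
# (i.e. outside 47.5%..52.5% for a fair 400-game match).
p_outside = 2 * (1 - normal_cdf(1.0))
print(f"{p_outside:.1%}")  # -> 31.7%
```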
Mike Novack
Lives in sente
Posts: 1045
Joined: Mon Aug 09, 2010 9:36 am
GD Posts: 0
Been thanked: 182 times

Re: A little experiment with 400 games

Post by Mike Novack »

But back to Vargo's original question. What has been shown for the binomial is the special case where p is 0.5 (the engines are actually of equal strength).

BUT -- the whole purpose of the experiment was to determine whether one engine was in fact stronger than the other (p NOT 0.5).

Here the results were 0.39 and 0.33, and the question being asked was whether the difference between these is meaningful.

Try doing the expansion again with a value of p around 0.36, and then see whether experimental outcomes of 0.39 and 0.33 are unlikely or not.
dfan
Gosei
Posts: 1599
Joined: Wed Apr 21, 2010 8:49 am
Rank: AGA 2k Fox 3d
GD Posts: 61
KGS: dfan
Has thanked: 891 times
Been thanked: 534 times
Contact:

Re: A little experiment with 400 games

Post by dfan »

Yes, I was just providing the analysis of the experiment described in this thread, partly because the full parameters of that experiment were known (the coin was fair), while the parameters of the original experiment are not (we don't know what the "real" strength difference between the two engines is).

That said, if we assume a coin that comes up heads 36% of the time, the standard deviation of a 400-game result is now 9.6 instead of 10, which means that ~68% of the time the result will lie between .36 * 400 - 9.6 and .36 * 400 + 9.6 wins, which comes out to a winning percentage range of 33.6% to 38.4%. 95% of the time the observed result of a 400-game match will lie between 31.2% and 40.8%.
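These numbers for the p = 0.36 case can be reproduced directly (a sketch of the same calculation):

```python
import math

# A "coin" that comes up heads 36% of the time, over 400 games.
n, p = 400, 0.36
sd = math.sqrt(n * p * (1 - p))  # sqrt(92.16) = 9.6 wins
mean = n * p                     # 144 wins

print(f"68% range: {(mean - sd) / n:.1%} .. {(mean + sd) / n:.1%}")
print(f"95% range: {(mean - 2 * sd) / n:.1%} .. {(mean + 2 * sd) / n:.1%}")
# 68% range: 33.6% .. 38.4%
# 95% range: 31.2% .. 40.8%
```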
mhlepore
Lives in gote
Posts: 390
Joined: Sun Apr 22, 2012 9:52 am
GD Posts: 0
KGS: lepore
Has thanked: 81 times
Been thanked: 128 times

Re: A little experiment with 400 games

Post by mhlepore »

I think that in the case of two runs of 400 flips, with one giving 33% heads and one giving 39% heads, it makes sense to do a straightforward two-sample test of proportions, pooling the variance.
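Such a pooled two-proportion z-test is easy to sketch. Assuming the two matches were 400 games each with win rates of 33% and 39% (i.e. 132 and 156 wins, my reading of the numbers in this thread):

```python
import math

# Pooled two-proportion z-test: are 132/400 and 156/400 plausibly
# draws from the same underlying win rate?
x1, n1 = 132, 400
x2, n2 = 156, 400

p_pool = (x1 + x2) / (n1 + n2)  # 0.36, the pooled proportion
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (x2 / n2 - x1 / n1) / se    # ~1.77

# Two-sided p-value via the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```

With z around 1.77 the difference falls short of significance at the 5% level, consistent with the variance arguments earlier in the thread.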
moha
Lives in gote
Posts: 311
Joined: Wed May 31, 2017 6:49 am
Rank: 2d
GD Posts: 0
Been thanked: 45 times

Re: A little experiment with 400 games

Post by moha »

One should be very careful when calculating expected deviation, particularly because these simple figures assume the samples are independent, which is not necessarily true. For example, the first few moves and the choice of openings / first josekis can have a large effect on the outcome, and these are often identical across games in the sample set. So you may not even be getting 400 effective games' worth of samples, which further increases the variance.