Model        Tensor Cores  Tensor TFlops  Storage                             ALU Cores     TFlops 32b  TFlops 64b   SLI possible  USD (net)
RTX 2080 Ti  544           113,8          GDDR6  11GB,        616GB/s         4352          13,45       0,42         yes           ~1000
RTX 3080     272 (0,5x)    238 (2,1x)     GDDR6X 10GB (0,9x), 760GB/s (1,2x)  8704 (2,0x)   29,77       0,93 (2,2x)  no            700
RTX 3090     328 (0,6x)    285 (2,5x)     GDDR6X 24GB (2,2x), 936GB/s (1,5x)  10496 (2,4x)  35,58       1,11 (2,6x)  yes           1500

(Factors in parentheses are relative to the RTX 2080 Ti.)

Do these values mean that one RTX 3080 is roughly as fast for Go-AI as two RTX 2080 Ti (with SLI)?
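As a sanity check, the speed-up factors quoted above can be recomputed from the raw spec numbers (a quick Python sketch, rounding each ratio to one decimal):

```python
# Specs as listed in the table:
# (tensor cores, tensor TFlops, VRAM GB, bandwidth GB/s,
#  ALU cores, TFlops 32b, TFlops 64b)
specs = {
    "RTX 2080 Ti": (544, 113.8, 11, 616, 4352, 13.45, 0.42),
    "RTX 3080":    (272, 238.0, 10, 760, 8704, 29.77, 0.93),
    "RTX 3090":    (328, 285.0, 24, 936, 10496, 35.58, 1.11),
}

base = specs["RTX 2080 Ti"]
for model in ("RTX 3080", "RTX 3090"):
    # Ratio of each spec value to the RTX 2080 Ti baseline
    ratios = [round(v / b, 1) for v, b in zip(specs[model], base)]
    print(model, ratios)
```

This reproduces the factors in the table (e.g. 238 / 113,8 ≈ 2,1x Tensor TFlops for the 3080).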
Is the advantage of using two GPUs (with SLI) just 2x the speed of one GPU, or is there an additional advantage?
Is having 0,5x the number of Tensor Cores a disadvantage in itself, or is the 2,1x factor in Tensor TFlops the only relevant value?
Is 24GB of memory (instead of 10GB) just 1,2x faster (936GB/s instead of 760GB/s), or can it also store 2,4x as many Go positions? How do these GPU memory sizes and, say, 64GB of system RAM cooperate in storing more Go positions?
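One thing worth noting: Go engines such as KataGo or Leela Zero do not keep a database of positions in VRAM. GPU memory mainly holds the network weights and batched inputs/activations, while the search tree and the neural-net result cache normally live in system RAM, so the 24GB mostly allows larger networks and batch sizes. Purely as a capacity illustration, here is a sketch under an assumed encoding (18 float32 planes of 19x19 per position, a hypothetical Leela-Zero-input-sized format, not any engine's actual storage layout):

```python
# Capacity illustration only: assumes each stored Go position takes
# 18 feature planes of 19x19 float32 values. This encoding is an
# assumption for illustration; real engines do not store a position
# database in VRAM.
bytes_per_position = 18 * 19 * 19 * 4  # = 25992 bytes, ~25 KB

for name, gib in [("RTX 3080 VRAM", 10), ("RTX 3090 VRAM", 24), ("System RAM", 64)]:
    capacity = gib * 1024**3 // bytes_per_position
    print(f"{name} ({gib}GB): roughly {capacity:,} such positions")
```

Under this (assumed) encoding the 24GB card would indeed hold about 2,4x as many positions as the 10GB card, simply because capacity scales linearly with memory size; bandwidth (the 1,2x figure) affects how fast that memory can be read, not how much fits.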