Have computers changed chess openings?
Posted: Tue Apr 05, 2016 3:12 pm
I don't know very much about openings or high-level play in chess. Did computer analysis ever rule out formerly popular moves as bad, or create new openings?
Life in 19x19. Go, Weiqi, Baduk... That's the life.
https://www.lifein19x19.com/
EdLee wrote: I heard yes.

As I know that there are professional opening books available, such as the Fritz Powerbooks, which extend up to move 30, I would say yes. And having recently replayed an annotated game in which the comments began at move 27 (!) with "Magnus Carlsen tries a novel approach in this opening; this move has not been played before", I would again say yes.
Babelardus wrote: As I know that there are professional ...

This is really interesting! :-)
Anzu wrote:
Babelardus wrote: As I know that there are professional ...
This is really interesting!

Chessbase Fritz Powerbook 2016
Quoting an earlier post: "The computer has not only changed chess openings, it has changed everything. To a computer, chess holds no more mysteries. If you want to know if your move was right, you ask the computer for a 5 minute analysis, and the answer will be definitive."

I think this idea is subtle. Just because computers can beat us at chess (and probably, in the near future, at go) doesn't mean they play really well in an absolute sense. It might be that we are just really bad ("puny humans!"). There might still be a long way to go in playing strength. For example, in computer chess the top program beats the third-best with some regularity, and the ratings of chess engines are still rising rapidly.
ericf wrote: I think this idea is subtle. Just because computers can beat us at chess (and probably in the near future go) doesn't mean they play really well in an absolute sense. ...

That engines are stronger than humans is clear. What I have always wanted to know is how much stronger the engines actually are. Because humans and engines don't play in the same tournaments, their Elo scales are not comparable. And you can't just have Magnus Carlsen play Stockfish or Komodo in a six-game match; he would probably lose 6-0. All you could conclude from that is that the engine is stronger than Carlsen, not by how much: 300 Elo? 500? Even more?
see: https://praxtime.com/2014/03/24/chess-t ... -progress/
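The reason a short-match sweep pins down only a lower bound on the rating gap follows from the Elo model itself: the expected per-game score is a logistic function of the rating difference, and it saturates near 1.0 as the gap grows, so 300 and 600 points of difference both predict a lopsided result. A minimal Python sketch of the standard Elo formula (the match-sweep helper treats games as independent, which is a simplification):

```python
def expected_score(rating_diff):
    """Expected score per game for the stronger player,
    given the Elo rating difference (standard Elo model)."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

def sweep_probability(rating_diff, games=6):
    """Chance the stronger side wins every game of a short match,
    assuming independent games (a simplification)."""
    return expected_score(rating_diff) ** games

for diff in (300, 400, 500, 600):
    print(diff, round(expected_score(diff), 3), round(sweep_probability(diff), 3))
```

Since the curve is nearly flat in this range, a 6-0 result is quite likely whether the true gap is 300 Elo or 600, which is why the match score alone cannot separate them.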
Quoting an earlier post: "For example, suppose that deep learning a la AlphaGo gets to the point that it can beat pros every time. It might happen that some completely different approach might crush AlphaGo, and that approach could play go very differently than humans do, while AlphaGo plays a lot like humans since its neural net was trained on human play."

It's certainly possible that new methods of playing chess and go will be developed that are even better than what we have now. Take Stockfish, for example: Stockfish 7 64-bit is rated 37 Elo points stronger than Stockfish DD 64-bit 4CPU. That is, version 7 is stronger than version DD while using only 25% of the computing power.
emeraldemon wrote: Could Magnus Carlsen defeat a computer with a handicap? Say the computer was down a pawn. A knight?

I heard chess programs are already at least a pawn's handicap past the best humans.
emeraldemon wrote: Could Magnus Carlsen defeat a computer with a handicap? Say the computer was down a pawn. A knight?

Probably.

emeraldemon wrote: Could Magnus Carlsen defeat a computer with a handicap? Say the computer was down a pawn. A knight?

The most recent handicap games against a top-level player were against Hikaru Nakamura (currently world #6) earlier this year.
dfan wrote: It is pretty rare that a move formerly considered reasonable has been completely refuted by a computer and abandoned (I can't think of an example off the top of my head). Consider the Go analogy; AlphaGo probably isn't going to discover that some established joseki actually ends in disaster for one side due to some incredible tesuji.

My bold face. Even among human pro players, a joseki might be abandoned because one side has what seems a very small advantage, e.g. one point or less. AlphaGo might abandon a joseki because of an even smaller advantage, not because the joseki was "crushed". But can AlphaGo make such quantified judgments?
What happens more often is that computers can pretty exhaustively search the possibilities in a position or line and confirm whether it is promising. A typical opening line might obtain some static advantage for one side while conceding some dynamic possibilities to the other, and the computer can check whether those dynamic possibilities really come to anything in the end with best play. If not, the player can go ahead and play the risky move (if he remembers or can reproduce the computer analysis over the board!). As a result, chess style has become more "concrete", focusing on whether moves work out in actual variations rather than evaluating them with rules of thumb and pattern-matching. (Of course, people have always done both; it's just that the balance has shifted toward the former.)
One interesting result is that many top players now play openings that are less sharp and concrete, avoiding computer-assisted preparation on the part of the opponent and trying to "just play chess", relying on their more human strategic skills. Magnus Carlsen (the current world champion) in particular is the poster child for this approach.
gowan wrote: My bold face. Even among human pro players a joseki might be abandoned because one side has what seems a very small advantage, e.g. one point or less. AlphaGo might abandon a joseki because of even smaller advantage, not due to the joseki being "crushed". However, can AlphaGo make such quantified judgments?

I can't speak about AlphaGo, but I would say yes; if not now, then in time. A chess engine is able to quantify positional differences down to 1/100th of a pawn.
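Concretely, an engine's "quantified judgment" is just picking the maximum over numeric evaluations, however small the differences between candidates. A toy sketch, where the moves and the centipawn-scale numbers are invented purely for illustration (not output from any real engine):

```python
# Hypothetical evaluations in pawns (1.00 = one pawn) for three
# candidate moves; the values are made up for illustration.
candidates = {"Nf3": 0.15, "e4": 0.14, "d4": 0.12}

# The engine prefers the highest evaluation even when candidates
# differ by only a hundredth of a pawn.
best_move = max(candidates, key=candidates.get)
print(best_move)  # picks "Nf3"
```

This mirrors the joseki point above: a line can be set aside over an arbitrarily small measured difference, without ever being "crushed".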