2/18/2026 at 1:23:38 AM
I don't really understand the criticism. The authors aren't claiming to have the strongest chess engine without search. They are just showing that they got a chess engine to a respectable level with their process, which is somewhat different from LC0's. They do in fact explain that explicitly:

> Leela Chess Zero’s networks, which are trained with self-play and RL, achieve higher Elo ratings without using explicit search at test time than our transformers, which we trained via supervised learning. However, in contrast to our work, very strong chess performance (at low computational cost) is the explicit goal of this open source project (which they have clearly achieved via domain-specific adaptations). We refer interested readers to [https://arxiv.org/abs/2409.12272] (which was published concurrently to our work) for details on the current state-of-the-art and a comparison against our network.
And I don't think the criticism of their writing is on point either. I don't see them secretly implying that their engine is better than Stockfish. And it's entirely plausible for human masters to rigorously analyze many positions with engine assistance and correctly establish whether Stockfish's evaluation is right or not.
by mquander
2/18/2026 at 3:13:44 AM
First of all, the title is misleading: "GM level" to most of us means moves of the quality that a GM makes when playing at classical time control. As of several years ago, LC0 needed around 35 search nodes per move to do that. With LC0's new transformer architecture, that number has probably gotten a lot lower, but not all the way down to 0. Second of all, the article complains about the Google paper not citing some other publication. That's a concrete criticism, though I haven't checked its validity.

by throwaway81523