Windows x64 for Haswell CPUs Windows x64 for modern computers + AVX2 Windows x64 for modern computers Windows x64 + SSSE3 Windows x64 Windows 32 Linux x64 for Haswell CPUs Linux x64 for modern computers + AVX2 Linux x64 for modern computers Linux x64 + SSSE3 Linux x64 | Author: Tomasz Sobczyk
Date: Sun Aug 15 12:05:43 2021 +0200 Timestamp: 1629021943

New NNUE architecture and net

Introduces a new NNUE network architecture and associated network parameters.

Summary of the changes:

* The position for each perspective is mirrored such that the king is on the e..h files. This cuts the feature transformer size in half while preserving enough knowledge to be good. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.b40q4rb1w7on.
* The number of neurons after the feature transformer is increased two-fold, to 1024x2. This is possible mostly due to the now very optimized feature transformer update code.
* The number of neurons after the second layer is reduced from 16 to 8, to reduce the speed impact. This, perhaps surprisingly, doesn't harm the strength much. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.6qkocr97fezq

The AffineTransform code did not work out of the box with the smaller number of neurons after the second layer, so some temporary changes have been made to add a special case for InputDimensions == 8. Additional zero padding is also added to the output for some archs that cannot process inputs in groups of <= 8 (SSE2, NEON). VNNI uses an implementation that can keep all outputs in registers while reducing the number of loads by 3 for each 16 inputs, thanks to the reduced number of output neurons. However, GCC is particularly bad at optimization here (which is perhaps why the current way the affine transform is done even passed SPRT); see https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit# for details. More work will be done on this in the following days, and I expect the current VNNI implementation to be improved and extended to other architectures.
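The mirroring described in the first bullet can be sketched as follows. This is a minimal illustration, not the actual Stockfish code: the identifiers are invented, and squares are assumed to be indexed 0..63 with file = sq & 7.

```cpp
// Horizontal mirroring for a HalfKA-style feature set: if the king sits on
// files a..d, mirror every square (file f -> 7 - f) so the king always ends
// up on files e..h. This halves the number of distinct king placements the
// feature transformer must cover.
constexpr int mirror_file(int sq) { return sq ^ 7; }  // a1<->h1, b1<->g1, ...

constexpr int orient(int kingSq, int sq) {
    // Bit 2 of the file is set exactly for files e..h.
    return (kingSq & 4) ? sq : mirror_file(sq);
}

// Compile-time checks: king on e1 (sq 4) keeps the board as-is; king on d1
// (sq 3) mirrors the whole board, so d1 -> e1 and a2 -> h2.
static_assert(orient(4, 12) == 12, "king already on e..h: unchanged");
static_assert(orient(3, 3) == 4, "d1 mirrors to e1");
static_assert(orient(3, 8) == 15, "a2 mirrors to h2");
```

Because both perspectives are oriented this way, positions that differ only by a horizontal flip of the king's wing index into the same feature transformer weights.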
The network was trained with a slightly modified version of the pytorch trainer (https://github.com/glinscott/nnue-pytorch); the changes are in https://github.com/glinscott/nnue-pytorch/pull/143

The training utilized 2 datasets:

dataset A - https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
dataset B - as described in https://github.com/official-stockfish/Stockfish/commit/ba01f4b95448bcb324755f4dd2a632a57c6e67bc

The training process was as follows:

1. Train on dataset A for 350 epochs; take the best net in terms of Elo at 20k nodes per move (it's fine to take anything from later stages of training).
2. Convert the .ckpt to .pt.
3. Using --resume-from-model with the .pt file, train on dataset B for <600 epochs; take the best net. Lambda=0.8, applied before the loss function.

The first training command:

python3 train.py \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --max_epochs=600 \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

The second training command:

python3 serialize.py \
    --features=HalfKAv2_hm^ \
    ../nnue-pytorch-training/experiment_131/run_6/default/version_0/checkpoints/epoch-499.ckpt \
    ../nnue-pytorch-training/experiment_$1/base/base.pt
python3 train.py \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=0.8 \
    --max_epochs=600 \
    --resume-from-model ../nnue-pytorch-training/experiment_$1/base/base.pt \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

STC:
https://tests.stockfishchess.org/tests/view/611120b32a8a49ac5be798c4
LLR: 2.97 (-2.94,2.94) <-0.50,2.50>
Total: 22480 W: 2434 L: 2251 D: 17795 Elo +2.83
Ptnml(0-2): 101, 1736, 7410, 1865, 128

LTC:
https://tests.stockfishchess.org/tests/view/611152b32a8a49ac5be798ea
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9776 W: 442 L: 333 D: 9001 Elo +3.87
Ptnml(0-2): 5, 295, 4180, 402, 6

closes https://github.com/official-stockfish/Stockfish/pull/3646

bench: 5189338
Author: Joost VandeVondele
Date: Thu Aug 5 16:41:07 2021 +0200 Timestamp: 1628174467

Revert futility pruning patches

Reverts 09b6d28391cf582d99897360b225bcbbe38dd1c6 and dbd7f602d3c7622df294f87d7239b5aaf31f695f, which significantly impact mate finding capabilities. For example on ChestUCI_23102018.epd, at 1M nodes, the number of mates found is nearly halved without these depth conditions:

sf6     2091
sf7     2093
sf8     2107
sf9     2062
sf10    2208
sf11    2552
sf12    2563
sf13    2509
sf14    2427
master  1246
patched 2467

(script for testing at https://github.com/official-stockfish/Stockfish/files/6936412/matecheck.zip)

closes https://github.com/official-stockfish/Stockfish/pull/3641
fixes https://github.com/official-stockfish/Stockfish/issues/3627

Bench: 5467570
Author: VoyagerOne
Date: Thu Aug 5 16:32:07 2021 +0200 Timestamp: 1628173927

SEE simplification

Simplified the SEE formula by removing std::min. Should also be easier to tune.

STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 22656 W: 1836 L: 1729 D: 19091 Elo +1.64
Ptnml(0-2): 54, 1426, 8267, 1521, 60
https://tests.stockfishchess.org/tests/view/610ae62f2a8a49ac5be79449

LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 26248 W: 806 L: 744 D: 24698 Elo +0.82
Ptnml(0-2): 6, 668, 11715, 728, 7
https://tests.stockfishchess.org/tests/view/610b17ad2a8a49ac5be79466

closes https://github.com/official-stockfish/Stockfish/pull/3643

bench: 4915145
Author: SFisGOD
Date: Thu Aug 5 08:52:07 2021 +0200 Timestamp: 1628146327

Update default net to nn-46832cfbead3.nnue

SPSA 1: https://tests.stockfishchess.org/tests/view/6100e7f096b86d98abf6a832
Parameters: A total of 256 net weights and 8 net biases were tuned (output layer)
Base net: nn-56a5f1c4173a.nnue
New net: nn-ec3c8e029926.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/610733caafad2da4f4ae3da7
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-ec3c8e029926.nnue
New net: nn-46832cfbead3.nnue

STC:
LLR: 2.98 (-2.94,2.94) <-0.50,2.50>
Total: 50520 W: 3953 L: 3765 D: 42802 Elo +1.29
Ptnml(0-2): 138, 3063, 18678, 3235, 146
https://tests.stockfishchess.org/tests/view/610a79692a8a49ac5be793f4

LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 57256 W: 1723 L: 1566 D: 53967 Elo +0.95
Ptnml(0-2): 12, 1442, 25568, 1589, 17
https://tests.stockfishchess.org/tests/view/610ac5bb2a8a49ac5be79434

Closes https://github.com/official-stockfish/Stockfish/pull/3642

Bench: 5359314
Author: Stefan Geschwentner
Date: Thu Aug 5 08:47:33 2021 +0200 Timestamp: 1628146053

Simplify new cmh pruning thresholds by using a quadratic formula directly. This also decouples the stat bonus updates from the threshold, which creates fewer dependencies for tuning of the stat bonus parameters. Perhaps further fine tuning of the now separated coefficients for constHist[0] and constHist[1] could give further gains.

STC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 78384 W: 6134 L: 6090 D: 66160 Elo +0.20
Ptnml(0-2): 207, 5013, 28705, 5063, 204
https://tests.stockfishchess.org/tests/view/6106d235afad2da4f4ae3d4b

LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 38176 W: 1149 L: 1095 D: 35932 Elo +0.49
Ptnml(0-2): 6, 1000, 17030, 1038, 14
https://tests.stockfishchess.org/tests/view/6107a080afad2da4f4ae3def

closes https://github.com/official-stockfish/Stockfish/pull/3639

Bench: 5098146
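A quadratic, depth-dependent pruning threshold of the kind described might look like the sketch below. The function names and coefficients here are invented for illustration only; they are not the tuned Stockfish values.

```cpp
// Hypothetical continuation-history pruning bound: a quiet move is pruned
// when both of its continuation-history scores fall below a threshold whose
// magnitude grows quadratically with remaining depth, so deeper searches
// prune only moves with very poor history. Coefficients are illustrative.
int cmhPruneThreshold(int depth) {
    return -3000 * depth * depth - 1000 * depth;
}

bool pruneQuietMove(int contHist0, int contHist1, int depth) {
    int t = cmhPruneThreshold(depth);
    return contHist0 < t && contHist1 < t;  // both histories must be bad
}
```

With a single quadratic formula the threshold no longer depends on the stat bonus values, which is the decoupling the commit message refers to.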
Author: VoyagerOne
Date: Thu Aug 5 08:44:38 2021 +0200 Timestamp: 1628145878

Futility pruning simplification

Remove CMH conditions in futility pruning.

STC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 93520 W: 7165 L: 7138 D: 79217 Elo +0.10
Ptnml(0-2): 222, 5923, 34427, 5982, 206
https://tests.stockfishchess.org/tests/view/61083104e50a153c346ef8df

LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 59072 W: 1746 L: 1706 D: 55620 Elo +0.24
Ptnml(0-2): 13, 1562, 26353, 1588, 20
https://tests.stockfishchess.org/tests/view/610894f2e50a153c346ef913

closes https://github.com/official-stockfish/Stockfish/pull/3638

Bench: 5229673
Author: VoyagerOne
Date: Sat Jul 31 15:29:19 2021 +0200 Timestamp: 1627738159

CMH pruning tweak

Replace CounterMovePruneThreshold with a depth-dependent threshold.

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 35512 W: 2718 L: 2552 D: 30242 Elo +1.62
Ptnml(0-2): 66, 2138, 13194, 2280, 78
https://tests.stockfishchess.org/tests/view/6104442fafad2da4f4ae3b94

LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 36536 W: 1150 L: 1019 D: 34367 Elo +1.25
Ptnml(0-2): 10, 920, 16278, 1049, 11
https://tests.stockfishchess.org/tests/view/6104b033afad2da4f4ae3bbc

closes https://github.com/official-stockfish/Stockfish/pull/3636

Bench: 5848718
Author: Tomasz Sobczyk
Date: Fri Jul 30 17:15:52 2021 +0200 Timestamp: 1627658152

Avoid unnecessary stores in the affine transform

This patch improves the codegen in the AffineTransform::forward function for architectures >= SSSE3. The current code works directly on memory and the compiler cannot see that the stores through outptr do not alias the loads through weights and input32. The solution implemented is to perform the affine transform with local variables as accumulators and only store the result to memory at the end. The number of accumulators required is OutputDimensions / OutputSimdWidth, which means that for the 1024->16 affine transform it requires 4 registers with SSSE3, 2 with AVX2, and 1 with AVX512. It also cuts the number of stores required by NumRegs * 256 for each node evaluated. The local accumulators are expected to be assigned to registers, but even if this cannot be done in some cases due to register pressure, it will help the compiler see that there is no aliasing between the loads and stores and may still result in better codegen. See https://godbolt.org/z/59aTKbbYc for a codegen comparison.

passed STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 140328 W: 10635 L: 10358 D: 119335 Elo +0.69
Ptnml(0-2): 302, 8339, 52636, 8554, 333

closes https://github.com/official-stockfish/Stockfish/pull/3634

No functional change
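The accumulator pattern described above can be illustrated with a plain scalar version of an affine layer (the real code is SIMD and uses different names; this is only a sketch of the idea):

```cpp
#include <cstdint>

// Scalar sketch of the "accumulate in locals, store once" pattern: each
// output is built up in a local variable instead of read-modify-writing
// through the output pointer, so the compiler never has to assume that
// writes to `output` alias the reads from `weights` or `input`.
void affine_forward(const std::int8_t* input, const std::int8_t* weights,
                    const std::int32_t* biases, std::int32_t* output,
                    int inDims, int outDims) {
    for (int i = 0; i < outDims; ++i) {
        std::int32_t acc = biases[i];           // local accumulator
        for (int j = 0; j < inDims; ++j)
            acc += std::int32_t(weights[i * inDims + j]) * input[j];
        output[i] = acc;                        // single store per output
    }
}
```

In the SIMD version the same idea holds per register: one accumulator register per OutputSimdWidth-wide chunk of outputs, written back to memory only after the input loop finishes.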
Author: SFisGOD
Date: Thu Jul 29 07:35:13 2021 +0200 Timestamp: 1627536913

Update default net to nn-56a5f1c4173a.nnue

SPSA 1: https://tests.stockfishchess.org/tests/view/60fd24efd8a6b65b2f3a796e
Parameters: A total of 256 net biases were tuned (hidden layer 2)
New best values: Half of the changes from the tuning run
New net: nn-5992d3ba79f3.nnue

SPSA 2: https://tests.stockfishchess.org/tests/view/60fec7d6d8a6b65b2f3a7aa2
Parameters: A total of 128 net biases were tuned (hidden layer 1)
New best values: Half of the changes from the tuning run
New net: nn-56a5f1c4173a.nnue

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 140392 W: 10863 L: 10578 D: 118951 Elo +0.71
Ptnml(0-2): 347, 8754, 51718, 9021, 356
https://tests.stockfishchess.org/tests/view/610037e396b86d98abf6a79e

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 14216 W: 454 L: 355 D: 13407 Elo +2.42
Ptnml(0-2): 4, 323, 6356, 420, 5
https://tests.stockfishchess.org/tests/view/61019995afad2da4f4ae3a3c

Closes #3633

Bench: 4801359
Author: SFisGOD
Date: Mon Jul 26 07:52:59 2021 +0200 Timestamp: 1627278779

Update default net to nn-26abeed38351.nnue

SPSA: https://tests.stockfishchess.org/tests/view/60fba335d8a6b65b2f3a7891
New best values: Half of the changes from the tuning run.
Setting: nodestime=300 with 10+0.1 (approximate real TC is 2.5 seconds)
The rest is the same as described in #3593.

The change from nodestime=600 to 300 was suggested by gekkehenker to prevent time losses for some slow workers (SFisGOD@94cd757#commitcomment-53324840).

STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 67448 W: 5241 L: 5036 D: 57171 Elo +1.06
Ptnml(0-2): 151, 4198, 24827, 4391, 157
https://tests.stockfishchess.org/tests/view/60fd50f2d8a6b65b2f3a798e

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 48752 W: 1504 L: 1358 D: 45890 Elo +1.04
Ptnml(0-2): 13, 1226, 21754, 1368, 15
https://tests.stockfishchess.org/tests/view/60fd7bb2d8a6b65b2f3a79a9

Closes https://github.com/official-stockfish/Stockfish/pull/3630

Bench: 5124774
Author: Giacomo Lorenzetti
Date: Mon Jul 26 07:48:58 2021 +0200 Timestamp: 1627278538

Simplification in LMR

This commit removes the `!captureOrPromotion` condition from the ttCapture reduction and from the good/bad history reduction (similar to #3619).

passed STC:
https://tests.stockfishchess.org/tests/view/60fc734ad8a6b65b2f3a7922
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 48680 W: 3855 L: 3776 D: 41049 Elo +0.56
Ptnml(0-2): 118, 3145, 17744, 3206, 127

passed LTC:
https://tests.stockfishchess.org/tests/view/60fce7d5d8a6b65b2f3a794c
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 86528 W: 2471 L: 2450 D: 81607 Elo +0.08
Ptnml(0-2): 28, 2203, 38777, 2232, 24

closes https://github.com/official-stockfish/Stockfish/pull/3629

Bench: 4951406
Author: MichaelB7
Date: Sat Jul 24 18:04:59 2021 +0200 Timestamp: 1627142699

Update the default net to nn-76a8a7ffb820.nnue.

Combined work by Sergio Vieri, Michael Byrne, and Jonathan D (aka SFisGOD), based on top of previous developments, by restarts from good nets.

Sergio generated the net https://tests.stockfishchess.org/api/nn/nn-d8609abe8caf.nnue:

The initial net nn-d8609abe8caf.nnue was trained by generating around 16B of training data from the last master net nn-9e3c6298299a.nnue, then trained, continuing from the master net, with lambda=0.2 and a sampling ratio of 1. Starting with LR=2e-3, dropping LR by a factor of 0.5 until it reaches LR=5e-4. in_scaling is set to 361. No other significant changes were made to the pytorch trainer.

Training data generation command (generates in chunks of 200k positions):

generate_training_data min_depth 9 max_depth 11 count 200000 random_move_count 10 random_move_max_ply 80 random_multi_pv 12 random_multi_pv_diff 100 random_multi_pv_depth 8 write_min_ply 10 eval_limit 1500 book noob_3moves.epd output_file_name gendata/$(date +"%Y%m%d-%H%M")_${HOSTNAME}.binpack

PyTorch trainer command (note that this only trains for 20 epochs; repeatedly train until convergence):

python train.py --features "HalfKAv2^" --max_epochs 20 --smart-fen-skipping --random-fen-skipping 500 --batch-size 8192 --default_root_dir $dir --seed $RANDOM --threads 4 --num-workers 32 --gpus $gpuids --track_grad_norm 2 --gradient_clip_val 0.05 --lambda 0.2 --log_every_n_steps 50 $resumeopt $data $val

See https://github.com/sergiovieri/Stockfish/tree/tools_mod/rl for the scripts used to generate data.
Based on that, Michael generated nn-76a8a7ffb820.nnue in the following way:

The net being submitted was trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 30 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --auto_lr_find True --lambda=1.0 --max_epochs=240 --seed %random%%random% --default_root_dir exp/run_109 --resume-from-model ./pt/nn-d8609abe8caf.pt

This run was thus started from Sergio Vieri's net nn-d8609abe8caf.nnue.

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing. Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack, so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

model.py modifications:

loss = torch.pow(torch.abs(p - q), 2.6).mean()

LR = 8.0e-5, calculated as 1.5e-3*(.992^360) - the idea here was to take a highly trained net and just use all.binpack as a finishing micro refinement touch for the last 2 Elo or so. This net was discovered on the 59th epoch.

optimizer = ranger.Ranger(train_params, betas=(.90, 0.999), eps=1.0e-7, gc_loc=False, use_gc=False)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.992)

For this micro optimization, I had set the period to "5" in train.py. This changes the checkpoint output so that every 5th checkpoint file is created.

The final touches were to adjust the NNUE scale, as was done by Jonathan in tests running at the same time.
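The learning-rate arithmetic quoted above can be verified directly: 360 decays with gamma = 0.992, starting from 1.5e-3, land at roughly 8.0e-5.

```cpp
#include <cmath>

// Sanity check of the quoted LR derivation: 1.5e-3 * 0.992^360 ~ 8.3e-5,
// i.e. close to the stated LR = 8.0e-5 "finishing touch" learning rate.
double final_lr() {
    return 1.5e-3 * std::pow(0.992, 360);
}
```

This matches the StepLR schedule shown (step_size=1, gamma=0.992) applied over 360 epochs of decay.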
passed LTC:
https://tests.stockfishchess.org/tests/view/60fa45aed8a6b65b2f3a77a4
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53040 W: 1732 L: 1575 D: 49733 Elo +1.03
Ptnml(0-2): 14, 1432, 23474, 1583, 17

passed STC:
https://tests.stockfishchess.org/tests/view/60f9fee2d8a6b65b2f3a7775
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 37928 W: 3178 L: 3001 D: 31749 Elo +1.62
Ptnml(0-2): 100, 2446, 13695, 2623, 100

closes https://github.com/official-stockfish/Stockfish/pull/3626

Bench: 5169957
Author: Giacomo Lorenzetti
Date: Fri Jul 23 19:02:58 2021 +0200 Timestamp: 1627059778

Apply good/bad history reduction also when inCheck

The main idea is that, in some cases, 'in check' situations are not so different from 'not in check' ones. Trying to use piece count in order to select only a few 'in check' situations has failed LTC testing. It could be interesting to apply one of those ideas in other parts of the search function.

passed STC:
https://tests.stockfishchess.org/tests/view/60f1b68dd1189bed71812d40
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 53472 W: 4078 L: 4008 D: 45386 Elo +0.45
Ptnml(0-2): 127, 3297, 19795, 3413, 104

passed LTC:
https://tests.stockfishchess.org/tests/view/60f291e6d1189bed71812de3
LLR: 2.92 (-2.94,2.94) <-2.50,0.50>
Total: 89712 W: 2651 L: 2632 D: 84429 Elo +0.07
Ptnml(0-2): 60, 2261, 40188, 2294, 53

closes https://github.com/official-stockfish/Stockfish/pull/3619

Bench: 5185789
Author: pb00067
Date: Fri Jul 23 18:53:03 2021 +0200 Timestamp: 1627059183

Simplify lowply-history scoring logic

STC:
https://tests.stockfishchess.org/tests/view/60eee559d1189bed71812b16
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 33976 W: 2523 L: 2431 D: 29022 Elo +0.94
Ptnml(0-2): 66, 2030, 12730, 2070, 92

LTC:
https://tests.stockfishchess.org/tests/view/60eefa12d1189bed71812b24
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 107240 W: 3053 L: 3046 D: 101141 Elo +0.02
Ptnml(0-2): 56, 2668, 48154, 2697, 45

closes https://github.com/official-stockfish/Stockfish/pull/3616

bench: 5199177
Author: Vizvezdenec
Date: Fri Jul 23 18:47:30 2021 +0200 Timestamp: 1627058850

Prune illegal moves in qsearch earlier

The main idea is that illegal moves influencing search or qsearch obviously can't be any sort of good. The only reason the legality checks in search and qsearch were initially done after the move could already influence some heuristics is that the legality check is computationally expensive. In search, it was eventually moved to a place that ensures illegal moves can't influence the search. This patch shows that the same can be done for qsearch. It passed STC with elo-gaining bounds, and it removes 3 lines of code because one no longer needs to increment/decrement the move count on illegal moves.

passed STC with elo-gaining bounds:
https://tests.stockfishchess.org/tests/view/60f20aefd1189bed71812da0
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 61512 W: 4688 L: 4492 D: 52332 Elo +1.11
Ptnml(0-2): 139, 3730, 22848, 3874, 165

The functionally identical version, but with the condition moved even earlier, passed LTC with simplification bounds:
https://tests.stockfishchess.org/tests/view/60f292cad1189bed71812de9
LLR: 2.98 (-2.94,2.94) <-2.50,0.50>
Total: 60944 W: 1724 L: 1685 D: 57535 Elo +0.22
Ptnml(0-2): 11, 1556, 27298, 1597, 10

closes https://github.com/official-stockfish/Stockfish/pull/3618

bench 4709569
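The shape of the reordering can be sketched abstractly as follows. All names here are invented for illustration; this is not the actual Stockfish move loop.

```cpp
#include <vector>

// Illustrative move loop: the (expensive) legality test now runs before the
// move can touch any search statistics, so illegal moves leave no trace and
// no compensating decrement of the move count is needed afterwards.
struct Stats { int moveCount = 0; };

int searched_moves(const std::vector<bool>& legalFlags, Stats& st) {
    int searched = 0;
    for (bool isLegal : legalFlags) {
        if (!isLegal)
            continue;        // filtered out before any bookkeeping
        ++st.moveCount;      // heuristics only ever see legal moves
        ++searched;
    }
    return searched;
}
```

In the pre-patch ordering the counter was incremented first and then decremented again when the move turned out to be illegal, which is the increment/decrement pair the patch removes.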
Author: Liam Keegan
Date: Fri Jul 23 18:16:05 2021 +0200 Timestamp: 1627056965

Add macOS and windows to CI

- macOS
  - system clang
  - gcc
- windows / msys2
  - mingw 64-bit gcc
  - mingw 32-bit gcc
- minor code fixes to get new CI jobs to pass
  - code: suppress unused-parameter warning on 32-bit windows
  - Makefile: if arch=any on macos, don't specify arch at all

fixes https://github.com/official-stockfish/Stockfish/issues/2958

closes https://github.com/official-stockfish/Stockfish/pull/3623

No functional change
Author: VoyagerOne
Date: Tue Jul 13 17:35:20 2021 +0200 Timestamp: 1626190520

Don't save excluded move eval in TT

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17544 W: 1384 L: 1236 D: 14924 Elo +2.93
Ptnml(0-2): 37, 1031, 6499, 1157, 48
https://tests.stockfishchess.org/tests/view/60ec8d9bd1189bed71812999

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 26136 W: 823 L: 707 D: 24606 Elo +1.54
Ptnml(0-2): 6, 643, 11656, 755, 8
https://tests.stockfishchess.org/tests/view/60ecb11ed1189bed718129ba

closes https://github.com/official-stockfish/Stockfish/pull/3614

Bench: 5505251
Author: Vizvezdenec
Date: Tue Jul 13 17:33:20 2021 +0200 Timestamp: 1626190400

Remove second futility pruning depth limit

This patch removes the lmrDepth limit for futility pruning at parent nodes. Since the pruning is already capped by a margin that is a function of lmrDepth, there is no need for an extra cap on lmrDepth.

passed STC:
https://tests.stockfishchess.org/tests/view/60e9b5dfd1189bed71812777
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 14872 W: 1264 L: 1145 D: 12463 Elo +2.78
Ptnml(0-2): 37, 942, 5369, 1041, 47

passed LTC:
https://tests.stockfishchess.org/tests/view/60e9c635d1189bed71812790
LLR: 2.96 (-2.94,2.94) <-2.50,0.50>
Total: 40336 W: 1280 L: 1225 D: 37831 Elo +0.47
Ptnml(0-2): 24, 1057, 17960, 1094, 33

closes https://github.com/official-stockfish/Stockfish/pull/3612

bench: 5064969
Author: pb00067
Date: Tue Jul 13 17:31:15 2021 +0200 Timestamp: 1626190275

SEE: simplify stm variable initialization

Pull request #3458 removed the only usage of pos.see_ge() that moved pieces not belonging to the side to move, so we can simplify this and add an assert.

closes https://github.com/official-stockfish/Stockfish/pull/3607

No functional change
Author: Vizvezdenec
Date: Tue Jul 13 17:23:30 2021 +0200 Timestamp: 1626189810

Remove futility pruning depth limit

This patch removes the futility pruning depth limit for child-node futility pruning. In current master it was double-capped, by depth and by the futility margin, which is itself a function of depth; that didn't make much sense.

passed STC:
https://tests.stockfishchess.org/tests/view/60e2418f9ea99d7c2d693e64
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 116168 W: 9100 L: 9097 D: 97971 Elo +0.01
Ptnml(0-2): 319, 7496, 42476, 7449, 344

passed LTC:
https://tests.stockfishchess.org/tests/view/60e3374f9ea99d7c2d693f20
LLR: 2.96 (-2.94,2.94) <-2.50,0.50>
Total: 43304 W: 1282 L: 1231 D: 40791 Elo +0.41
Ptnml(0-2): 8, 1126, 19335, 1173, 10

closes https://github.com/official-stockfish/Stockfish/pull/3606

bench 4965493
Author: SFisGOD
Date: Sat Jul 3 10:03:32 2021 +0200 Timestamp: 1625299412

Update default net to nn-9e3c6298299a.nnue

Optimization of nn-956480d8378f.nnue using SPSA: https://tests.stockfishchess.org/tests/view/60da2bf63beab81350ac9fe7
Same method as described in PR #3593.

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17792 W: 1525 L: 1372 D: 14895 Elo +2.99
Ptnml(0-2): 28, 1156, 6401, 1257, 54
https://tests.stockfishchess.org/tests/view/60deffc59ea99d7c2d693c19

LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 36544 W: 1245 L: 1109 D: 34190 Elo +1.29
Ptnml(0-2): 12, 988, 16139, 1118, 15
https://tests.stockfishchess.org/tests/view/60df11339ea99d7c2d693c22

closes https://github.com/official-stockfish/Stockfish/pull/3601

Bench: 4687476
Author: Paul Mulders
Date: Sat Jul 3 09:51:03 2021 +0200 Timestamp: 1625298663

Allow passing RTLIB=compiler-rt to make

Not all Linux users will have libatomic installed. When using clang as the system compiler, with compiler-rt as the default runtime library instead of libgcc, atomic builtins may be provided by compiler-rt. This change allows such users to pass RTLIB=compiler-rt to make, so the build doesn't error out on the missing (and unnecessary) libatomic.

closes https://github.com/official-stockfish/Stockfish/pull/3597

No functional change
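A hypothetical invocation might look like the following. RTLIB=compiler-rt is the new knob from this patch; the ARCH value and build target are illustrative, adjust them for your system.

```shell
# Build with clang and compiler-rt as the runtime library, so the build
# does not try to link the (possibly absent) libatomic.
# ARCH is an example value; pick the one matching your CPU.
make build ARCH=x86-64-modern COMP=clang RTLIB=compiler-rt
```

This is a build configuration fragment and only makes sense from inside the Stockfish src directory.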
Author: candirufish
Date: Sat Jul 3 09:44:05 2021 +0200 Timestamp: 1625298245

No cut node reduction for killer moves

STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 44344 W: 3474 L: 3294 D: 37576 Elo +1.41
Ptnml(0-2): 117, 2710, 16338, 2890, 117
https://tests.stockfishchess.org/tests/view/60d8ea673beab81350ac9eb8

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 82600 W: 2638 L: 2441 D: 77521 Elo +0.83
Ptnml(0-2): 38, 2147, 36749, 2312, 54
https://tests.stockfishchess.org/tests/view/60d9048f3beab81350ac9eed

closes https://github.com/official-stockfish/Stockfish/pull/3600

Bench: 5160239
Author: xoto10
Date: Sat Jul 3 09:26:58 2021 +0200 Timestamp: 1625297218

Simplify lazy_skip

Small speedup by removing operations in lazy_skip.

STC 10+0.1:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 55088 W: 4553 L: 4482 D: 46053 Elo +0.45
Ptnml(0-2): 163, 3546, 20045, 3637, 153
https://tests.stockfishchess.org/tests/view/60daa2cb3beab81350aca04d

LTC 60+0.6:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 46136 W: 1457 L: 1407 D: 43272 Elo +0.38
Ptnml(0-2): 10, 1282, 20442, 1316, 18
https://tests.stockfishchess.org/tests/view/60db0e753beab81350aca08e

closes https://github.com/official-stockfish/Stockfish/pull/3599

Bench 5122403
Author: Stéphane Nicolet
Date: Sat Jul 3 09:25:16 2021 +0200 Timestamp: 1625297116

Simplify format_cp_aligned_dot()

closes https://github.com/official-stockfish/Stockfish/pull/3583

No functional change