Author: Disservin
Date: Sun Jan 14 10:46:13 2024 +0100 Timestamp: 1705225573

Add ignoreRevsFile to CONTRIBUTING.md

closes https://github.com/official-stockfish/Stockfish/pull/4980

No functional change
Author: Disservin
Date: Sun Jan 14 10:46:13 2024 +0100 Timestamp: 1705225573

Remove the dependency on a Worker from evaluate

Also remove dead code: `rootSimpleEval` is no longer used since the introduction of the dual net, and `iterBestValue` is no longer used in evaluate and can be reduced to a local variable.

closes https://github.com/official-stockfish/Stockfish/pull/4979

No functional change
Author: Disservin
Date: Sun Jan 14 10:46:13 2024 +0100 Timestamp: 1705225573

Fix UCI options

Fixes the type for 'Clear Hash' and uses MAX_MOVES for 'MultiPV' as we had before.

No functional change
Author: Disservin
Date: Sun Jan 14 00:30:06 2024 +0100 Timestamp: 1705188606

Remove unused method

init() is no longer used, and was previously replaced by the clear function.

fixes https://github.com/official-stockfish/Stockfish/issues/4981

No functional change
Author: mstembera
Date: Sat Jan 13 19:40:53 2024 +0100 Timestamp: 1705171253

Simplify bad quiets

The main difference is that instead of returning the first bad quiet as a good one we fall through. This is actually more correct and simpler to implement.

Non regression STC:
https://tests.stockfishchess.org/tests/view/659bbb3479aa8af82b964ec7
LLR: 2.93 (-2.94,2.94) <-1.75,0.25>
Total: 150944 W: 38399 L: 38305 D: 74240 Elo +0.22
Ptnml(0-2): 485, 18042, 38298, 18188, 459

Non regression LTC:
https://tests.stockfishchess.org/tests/view/659c6e6279aa8af82b9660eb
LLR: 2.96 (-2.94,2.94) <-1.75,0.25>
Total: 192060 W: 47871 L: 47823 D: 96366 Elo +0.09
Ptnml(0-2): 144, 21912, 51845, 22010, 119

The cutoff is now -8K instead of -7.5K. -7.5K failed:
https://tests.stockfishchess.org/tests/view/659a1f4b79aa8af82b962a0e
This was likely a false negative.

closes https://github.com/official-stockfish/Stockfish/pull/4975

Bench: 1308279
Author: FauziAkram
Date: Sat Jan 13 19:40:53 2024 +0100 Timestamp: 1705171253

Remove threatenedByPawn term for queen threats

Passed STC:
https://tests.stockfishchess.org/tests/view/659d614c79aa8af82b9677d0
LLR: 2.93 (-2.94,2.94) <-1.75,0.25>
Total: 151776 W: 38690 L: 38597 D: 74489 Elo +0.21
Ptnml(0-2): 522, 17841, 39015, 18042, 468

Passed LTC:
https://tests.stockfishchess.org/tests/view/659d94d379aa8af82b967cb2
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 91908 W: 23075 L: 22924 D: 45909 Elo +0.57
Ptnml(0-2): 70, 10311, 25037, 10470, 66

closes https://github.com/official-stockfish/Stockfish/pull/4977

Bench: 1266493
Author: Disservin
Date: Sat Jan 13 19:40:53 2024 +0100 Timestamp: 1705171253

Refactor global variables

This aims to remove some of the annoying global structure which Stockfish has. Overall there is no major elo regression to be expected.

Non regression SMP STC (paused, early version):
https://tests.stockfishchess.org/tests/view/65983d7979aa8af82b9608f1
LLR: 0.23 (-2.94,2.94) <-1.75,0.25>
Total: 76232 W: 19035 L: 19096 D: 38101 Elo -0.28
Ptnml(0-2): 92, 8735, 20515, 8690, 84

Non regression STC (early version):
https://tests.stockfishchess.org/tests/view/6595b3a479aa8af82b95da7f
LLR: 2.93 (-2.94,2.94) <-1.75,0.25>
Total: 185344 W: 47027 L: 46972 D: 91345 Elo +0.10
Ptnml(0-2): 571, 21285, 48943, 21264, 609

Non regression SMP STC:
https://tests.stockfishchess.org/tests/view/65a0715c79aa8af82b96b7e4
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 142936 W: 35761 L: 35662 D: 71513 Elo +0.24
Ptnml(0-2): 209, 16400, 38135, 16531, 193

These global structures/variables add hidden dependencies and allow data to be mutated from where it shouldn't be (i.e. options). They also prevent Stockfish from doing internal selfplay, which would be a nice thing to be able to do, i.e. instantiate two Stockfish instances and let them play against each other. It will also allow us to make Stockfish a library, which can be used more easily on other platforms.

For consistency with the old search code, `thisThread` has been kept, even though it is not strictly necessary anymore. This is the first major refactor of this kind (in recent times), and future changes are required to achieve the previously described goals. This includes cleaning up the dependencies, transforming the network to be self contained and coming up with a plan to deal with proper tablebase memory management (see comments for more information on this).

The removal of these global structures has been discussed in parts with Vondele and Sopel.

closes https://github.com/official-stockfish/Stockfish/pull/4968

No functional change
Author: Linmiao Xu
Date: Mon Jan 8 18:34:36 2024 +0100 Timestamp: 1704735276

Update default main net to nn-baff1edbea57.nnue

Created by retraining the previous main net nn-b1e55edbea57.nnue with:
- some of the same options as before: ranger21 optimizer, more WDL skipping
- adding T80 aug filter-v6, sep, and oct 2023 data to the previous best dataset
- increasing training loss for positions where predicted win rates were higher than estimated match results from training data position scores (see the sketch after this entry)

```yaml
experiment-name: 2560--S8-r21-more-wdl-skip-10p-more-loss-high-q-sk28
training-dataset:
  # https://github.com/official-stockfish/Stockfish/pull/4782
  - /data/S6-1ee1aba5ed.binpack
  - /data/test80-aug2023-2tb7p.v6.min.binpack
  - /data/test80-sep2023-2tb7p.binpack
  - /data/test80-oct2023-2tb7p.binpack
early-fen-skipping: 28
start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-10p-more-loss-high-q
num-epochs: 1000
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Training loss was increased by 10% for positions where predicted win rates were higher than suggested by the win rate model based on the training data, by multiplying with: ((qf > pt) * 0.1 + 1). This was a variant of experiments from Sopel's NNUE training & experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
- Experiment 302 - increase loss when prediction too high, vondele's idea
- Experiment 309 - increase loss when prediction too high, normalize in a batch

Passed STC:
https://tests.stockfishchess.org/tests/view/6597a21c79aa8af82b95fd5c
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 148320 W: 37960 L: 37475 D: 72885 Elo +1.14
Ptnml(0-2): 542, 17565, 37383, 18206, 464

Passed LTC:
https://tests.stockfishchess.org/tests/view/659834a679aa8af82b960845
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 55188 W: 13955 L: 13592 D: 27641 Elo +2.29
Ptnml(0-2): 34, 6162, 14834, 6535, 29

closes https://github.com/official-stockfish/Stockfish/pull/4972

Bench: 1219824
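The 10% loss increase described above comes down to a simple per-position weight on the training loss. Below is a minimal sketch of that ((qf > pt) * 0.1 + 1) factor, written in C++ purely for illustration; the actual change lives in the linked linrock/nnue-pytorch branch, and the names here are placeholders.

```cpp
#include <iostream>

// qf: win probability predicted by the net for this position (placeholder name)
// pt: win probability implied by the WDL model from the training-data score (placeholder name)
// Mirrors the ((qf > pt) * 0.1 + 1) factor quoted in the commit message.
double loss_weight(double qf, double pt) {
    return (qf > pt ? 0.1 : 0.0) + 1.0;  // 1.1 when the net is too optimistic, 1.0 otherwise
}

int main() {
    double base_loss = 0.042;  // illustrative per-position loss value
    std::cout << base_loss * loss_weight(0.70, 0.55) << '\n';  // weighted up by 10%
    std::cout << base_loss * loss_weight(0.40, 0.55) << '\n';  // left unchanged
}
```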
Author: Disservin
Date: Mon Jan 8 18:33:38 2024 +0100 Timestamp: 1704735218

Cleanup Evalfile handling

This cleans up the EvalFile handling after the merge of #4915, which had become a bit confusing as to what it was actually doing.

closes https://github.com/official-stockfish/Stockfish/pull/4971

No functional change
Author: Disservin
Date: Sun Jan 7 21:41:52 2024 +0100 Timestamp: 1704660112

Prefix abs with std::
Author: Linmiao Xu
Date: Sun Jan 7 21:20:15 2024 +0100 Timestamp: 1704658815

Update smallnet to nn-baff1ede1f90.nnue with wider eval range

Created by training an L1-128 net from scratch with a wider range of evals in the training data and wld-fen-skipping disabled during training. The differences in this training data compared to the first dual nnue PR are (see the sketch after this entry):

- removal of all positions with 3 pieces
- when piece count >= 16, keep positions with simple eval above 750
- when piece count < 16, remove positions with simple eval above 3000

The asymmetric data filtering was meant to flatten the training data piece count distribution, which was previously heavily skewed towards positions with low piece counts. Additionally, the simple eval range where the smallnet is used was widened to cover more positions previously evaluated by the big net and simple eval.

```yaml
experiment-name: 128--S1-hse-S7-v4-S3-v1-no-wld-skip
training-dataset:
  - /data/hse/S3/leela96-filt-v2.min.high-simple-eval-1k.binpack
  - /data/hse/S3/dfrc99-16tb7p-eval-filt-v2.min.high-simple-eval-1k.binpack
  - /data/hse/S3/test80-apr2022-16tb7p.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test60-2020-2tb7p.v6-3072.high-simple-eval-v4.binpack
  - /data/hse/S7/test60-novdec2021-12tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test77-nov2021-2tb7p.v6-3072.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test77-dec2021-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test77-jan2022-2tb7p.high-simple-eval-v4.binpack
  - /data/hse/S7/test78-jantomay2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test79-apr2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test79-may2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-may2022-16tb7p.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-jun2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-jul2022-16tb7p.v6-dd.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-aug2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-sep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-oct2022-16tb7p.v6-dd.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-nov2022-16tb7p-v6-dd.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-feb2023-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-mar2023-2tb7p.v6-sk16.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-apr2023-2tb7p-filter-v6-sk16.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-may2023-2tb7p.v6.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-jun2023-2tb7p.v6-3072.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-jul2023-2tb7p.v6-3072.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-aug2023-2tb7p.v6.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-sep2023-2tb7p.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-oct2023-2tb7p.high-simple-eval-v4.binpack
wld-fen-skipping: False
start-from-engine-test-net: False
nnue-pytorch-branch: linrock/nnue-pytorch/L1-128
engine-test-branch: linrock/Stockfish/L1-128-nolazy
engine-base-branch: linrock/Stockfish/L1-128
num-epochs: 500
start-lambda: 1.0
end-lambda: 1.0
```

Experiment yaml configs converted to easy_train.sh commands with:
https://github.com/linrock/nnue-tools/blob/4339954/yaml_easy_train.py

Binpacks interleaved at training time with:
https://github.com/official-stockfish/nnue-pytorch/pull/259

FT weights permuted with 10k positions from fishpack32.binpack with:
https://github.com/official-stockfish/nnue-pytorch/pull/254

Data filtered for high simple eval positions (v4) with:
https://github.com/linrock/Stockfish/blob/b9c8440/src/tools/transform.cpp#L640-L675

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move of L1-128 smallnet (nnue-only eval) vs. L1-128 trained on standard S1 data:
nn-epoch319.nnue : -241.7 +/- 3.2

Passed STC vs. 36db936:
https://tests.stockfishchess.org/tests/view/6576b3484d789acf40aabbfe
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 21920 W: 5680 L: 5381 D: 10859 Elo +4.74
Ptnml(0-2): 82, 2488, 5520, 2789, 81

Passed LTC vs. DualNNUE #4915:
https://tests.stockfishchess.org/tests/view/65775c034d789acf40aac7e3
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 147606 W: 36619 L: 36063 D: 74924 Elo +1.31
Ptnml(0-2): 98, 16591, 39891, 17103, 120

closes https://github.com/official-stockfish/Stockfish/pull/4919

Bench: 1438336
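The asymmetric filtering rules listed above amount to a keep/drop predicate per training position. This is a hedged sketch only: it assumes the thresholds apply to the absolute simple eval and uses made-up struct and helper names; the real filter is the linked transform.cpp tool.

```cpp
#include <cstdlib>

struct TrainingPosition {  // hypothetical minimal view of one training entry
    int piece_count;       // total number of pieces on the board
    int simple_eval;       // material-only eval in centipawns
};

// Keep/drop decision following the bullet points above: drop 3-piece positions,
// for piece count >= 16 keep only simple evals above 750,
// for piece count < 16 drop simple evals above 3000.
bool keep_for_smallnet_training(const TrainingPosition& p) {
    const int abs_eval = std::abs(p.simple_eval);
    if (p.piece_count == 3)
        return false;
    if (p.piece_count >= 16)
        return abs_eval > 750;
    return abs_eval <= 3000;
}

int main() {
    TrainingPosition p{20, 900};  // many pieces, lopsided material: kept
    return keep_for_smallnet_training(p) ? 0 : 1;
}
```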
Author: Linmiao Xu
Date: Sun Jan 7 21:15:52 2024 +0100 Timestamp: 1704658552

Dual NNUE with L1-128 smallnet

Credit goes to @mstembera for:
- writing the code enabling dual NNUE: https://github.com/official-stockfish/Stockfish/pull/4898
- the idea of trying L1-128 trained exclusively on high simple eval positions

The L1-128 smallnet is:
- epoch 399 of a single-stage training from scratch
- trained only on positions from filtered data with high material difference
- defined by abs(simple_eval) > 1000

```yaml
experiment-name: 128--S1-only-hse-v2
training-dataset:
  - /data/hse/S3/dfrc99-16tb7p-eval-filt-v2.min.high-simple-eval-1k.binpack
  - /data/hse/S3/leela96-filt-v2.min.high-simple-eval-1k.binpack
  - /data/hse/S3/test80-apr2022-16tb7p.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test60-2020-2tb7p.v6-3072.high-simple-eval-1k.binpack
  - /data/hse/S7/test60-novdec2021-12tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test77-nov2021-2tb7p.v6-3072.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test77-dec2021-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test77-jan2022-2tb7p.high-simple-eval-1k.binpack
  - /data/hse/S7/test78-jantomay2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test79-apr2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test79-may2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  # T80 2022
  - /data/hse/S7/test80-may2022-16tb7p.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-jun2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-jul2022-16tb7p.v6-dd.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-aug2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-sep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-oct2022-16tb7p.v6-dd.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-nov2022-16tb7p-v6-dd.min.high-simple-eval-1k.binpack
  # T80 2023
  - /data/hse/S7/test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-feb2023-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-mar2023-2tb7p.v6-sk16.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-apr2023-2tb7p-filter-v6-sk16.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-may2023-2tb7p.v6.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-jun2023-2tb7p.v6-3072.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-jul2023-2tb7p.v6-3072.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-aug2023-2tb7p.v6.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-sep2023-2tb7p.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-oct2023-2tb7p.high-simple-eval-1k.binpack
start-from-engine-test-net: False
nnue-pytorch-branch: linrock/nnue-pytorch/L1-128
engine-test-branch: linrock/Stockfish/L1-128-nolazy
engine-base-branch: linrock/Stockfish/L1-128
num-epochs: 500
lambda: 1.0
```

Experiment yaml configs converted to easy_train.sh commands with:
https://github.com/linrock/nnue-tools/blob/4339954/yaml_easy_train.py

Binpacks interleaved at training time with:
https://github.com/official-stockfish/nnue-pytorch/pull/259

Data filtered for high simple eval positions with:
https://github.com/linrock/nnue-data/blob/32d6a68/filter_high_simple_eval_plain.py
https://github.com/linrock/Stockfish/blob/61dbfe/src/tools/transform.cpp#L626-L655

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move of L1-128 smallnet (nnue-only eval) vs. L1-128 trained on standard S1 data:
nn-epoch399.nnue : -318.1 +/- 2.1

Passed STC:
https://tests.stockfishchess.org/tests/view/6574cb9d95ea6ba1fcd49e3b
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 62432 W: 15875 L: 15521 D: 31036 Elo +1.97
Ptnml(0-2): 177, 7331, 15872, 7633, 203

Passed LTC:
https://tests.stockfishchess.org/tests/view/6575da2d4d789acf40aaac6e
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 64830 W: 16118 L: 15738 D: 32974 Elo +2.04
Ptnml(0-2): 43, 7129, 17697, 7497, 49

closes https://github.com/official-stockfish/Stockfish/pulls

Bench: 1330050

Co-Authored-By: mstembera <5421953+>
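The core idea of the dual-net setup is that a cheap material-based estimate decides which network evaluates a position. Below is a self-contained sketch of that dispatch under the abs(simple_eval) > 1000 rule quoted above; the Position struct and evaluation functions are stand-ins, not Stockfish's actual API.

```cpp
#include <cstdlib>

struct Position {};  // stand-in; the engine's real Position class is much richer

// Stand-ins for the material-only eval and the two network evaluations.
int simple_eval(const Position&)       { return 1200; }  // illustrative value
int smallnet_evaluate(const Position&) { return 1150; }
int bignet_evaluate(const Position&)   { return 1180; }

// Dispatch mirroring the abs(simple_eval) > 1000 rule: lopsided material goes to
// the cheap L1-128 smallnet, everything else to the big net.
int evaluate_with_dual_nnue(const Position& pos) {
    const int material      = simple_eval(pos);
    const bool use_smallnet = std::abs(material) > 1000;
    return use_smallnet ? smallnet_evaluate(pos) : bignet_evaluate(pos);
}

int main() {
    Position pos;
    return evaluate_with_dual_nnue(pos) > 0 ? 0 : 1;
}
```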
Author: mstembera
Date: Sun Jan 7 13:41:50 2024 +0100 Timestamp: 1704631310

Introduce BAD_QUIET movepicker stage

Split quiets into good and bad as we do with captures. When we find the first sorted quiet move below a certain threshold, we consider all subsequent quiets bad. Inspired by @locutus2's idea to skip bad captures.

Passed STC:
https://tests.stockfishchess.org/tests/view/6597759f79aa8af82b95fa17
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 138688 W: 35566 L: 35096 D: 68026 Elo +1.18
Ptnml(0-2): 476, 16367, 35183, 16847, 471

Passed LTC:
https://tests.stockfishchess.org/tests/view/6598583c79aa8af82b960ad0
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 84108 W: 21468 L: 21048 D: 41592 Elo +1.73
Ptnml(0-2): 38, 9355, 22858, 9755, 48

closes https://github.com/official-stockfish/Stockfish/pull/4970

Bench: 1336907
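The stage split can be pictured as partitioning the sorted quiets at the first score below a threshold; everything after that point is deferred to the later BAD_QUIET stage. This is a rough standalone sketch only: the ScoredMove type, the -7500 placeholder threshold, and the function name are illustrative, not the engine's movepicker code.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct ScoredMove {
    int move;   // stand-in for the engine's Move type
    int score;  // history-based ordering score
};

// Split the sorted quiets into "good" (searched now) and "bad" (deferred to a later
// BAD_QUIET stage), mirroring the good/bad capture split. The -7500 threshold is a
// placeholder for the cutoff discussed in the commit messages.
std::pair<std::vector<ScoredMove>, std::vector<ScoredMove>>
split_quiets(std::vector<ScoredMove> quiets, int threshold = -7500) {
    std::stable_sort(quiets.begin(), quiets.end(),
                     [](const ScoredMove& a, const ScoredMove& b) { return a.score > b.score; });

    // The first quiet below the threshold marks the start of the bad quiets.
    auto firstBad = std::find_if(quiets.begin(), quiets.end(),
                                 [threshold](const ScoredMove& m) { return m.score < threshold; });

    return {std::vector<ScoredMove>(quiets.begin(), firstBad),
            std::vector<ScoredMove>(firstBad, quiets.end())};
}
```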
Author: Disservin
Date: Sun Jan 7 13:38:55 2024 +0100 Timestamp: 1704631135

Add .git-blame-ignore-revs

Add a `.git-blame-ignore-revs` file which can be used to skip specified commits when blaming. This is useful to ignore formatting commits, like clang-format #4790. GitHub blame automatically supports this file format, as do other third-party tools.

Git itself needs to be told about the file name for this to work; the following command will add it to the current git repo:
`git config blame.ignoreRevsFile .git-blame-ignore-revs`
Alternatively, one has to specify it with every blame:
`git blame --ignore-revs-file .git-blame-ignore-revs search.cpp`
Supported since git 2.23.

closes https://github.com/official-stockfish/Stockfish/pull/4969

No functional change
Author: Michael Chaly
Date: Sun Jan 7 13:37:28 2024 +0100 Timestamp: 1704631048

Tweak usage of correction history

Instead of using a linear formula, use a quadratic one. The maximum impact of correction history is doubled this way, and it breaks even with the previous formula at half of the maximum value.

Passed STC:
https://tests.stockfishchess.org/tests/view/659591e579aa8af82b95d7e8
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 225216 W: 57616 L: 57019 D: 110581 Elo +0.92
Ptnml(0-2): 747, 26677, 57201, 27198, 785

Passed LTC:
https://tests.stockfishchess.org/tests/view/6596ee0b79aa8af82b95f08a
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 73314 W: 18524 L: 18125 D: 36665 Elo +1.89
Ptnml(0-2): 41, 8159, 19875, 8524, 58

closes https://github.com/official-stockfish/Stockfish/pull/4967

Bench: 1464785
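The relationship between the old and new mapping can be illustrated numerically: with a signed quadratic, the correction equals the old linear one at half of the history's maximum and is twice as large at the maximum. A sketch with placeholder constants (not the engine's actual values):

```cpp
#include <cstdlib>

constexpr int CORRECTION_HISTORY_LIMIT = 16384;  // placeholder for the history's maximum magnitude
constexpr int MAX_ADJUSTMENT           = 64;     // placeholder for the old maximum eval correction

// Old shape: the eval correction grows linearly with the stored history value cv.
int linear_correction(int cv) {
    return cv * MAX_ADJUSTMENT / CORRECTION_HISTORY_LIMIT;
}

// New shape: signed quadratic. Equals the linear formula at cv = LIMIT / 2 and is
// twice as large at cv = LIMIT, matching the description in the commit message.
int quadratic_correction(int cv) {
    const long long signedSquare = 1LL * cv * std::abs(cv);  // widened to avoid overflow
    return int(signedSquare * 2 * MAX_ADJUSTMENT
               / (1LL * CORRECTION_HISTORY_LIMIT * CORRECTION_HISTORY_LIMIT));
}

int main() {
    // linear(8192) == quadratic(8192) == 32, while quadratic(16384) == 2 * linear(16384) == 128
    return linear_correction(8192) == quadratic_correction(8192) ? 0 : 1;
}
```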
Author: Miguel Lahoz
Date: Sun Jan 7 13:37:12 2024 +0100 Timestamp: 1704631032

Remove unneeded operator overload macros

Only the Direction type uses two of the enable-overload macros, and aside from this only two of the overloads are even being used. Therefore, we can just define the needed overloads and remove the macros.

closes https://github.com/official-stockfish/Stockfish/pull/4966

No functional change.
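Writing out only the needed overloads by hand looks roughly like the following. The enumerator values match chess geometry, but this is an illustrative sketch; the exact set of operators the engine keeps may differ.

```cpp
enum Direction : int { NORTH = 8, EAST = 1, SOUTH = -8, WEST = -1 };
enum Square : int { SQ_A1, SQ_B1, /* ... */ SQUARE_NB = 64 };

// Hand-written overloads for just the operations that are actually needed,
// replacing the generic enable-overload macros.
constexpr Direction operator+(Direction d1, Direction d2) { return Direction(int(d1) + int(d2)); }
constexpr Direction operator*(int i, Direction d) { return Direction(i * int(d)); }
constexpr Square    operator+(Square s, Direction d) { return Square(int(s) + int(d)); }
constexpr Square    operator-(Square s, Direction d) { return Square(int(s) - int(d)); }

static_assert(SQ_A1 + NORTH + EAST == Square(9), "a1 shifted one rank up and one file right is b2");
```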
Author: FauziAkram
Date: Thu Jan 4 15:56:53 2024 +0100 Timestamp: 1704380213

Remove redundant int cast

Remove a redundant int cast in the calculation of fwdOut. OutputType is already defined as std::int32_t, which is an integer type, making the cast unnecessary.

closes https://github.com/official-stockfish/Stockfish/pull/4961

No functional change
Author: Disservin
Date: Thu Jan 4 15:54:23 2024 +0100 Timestamp: 1704380063

Use type aliases instead of enums for Value types

The primary rationale behind this lies in the fact that enums were not originally designed to be employed in the manner we currently utilize them. The Value enum was used like a type alias throughout the code and was often misused. Furthermore, changing the underlying size of the enum to int16_t broke everything, mostly because the operator overloads for the Value enum were causing data to be truncated. Since Value is now a type alias, the operator overloads are no longer required.

Passed Non-Regression STC:
https://tests.stockfishchess.org/tests/view/6593b8bb79aa8af82b95b401
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 235296 W: 59919 L: 59917 D: 115460 Elo +0.00
Ptnml(0-2): 743, 27085, 62054, 26959, 807

closes https://github.com/official-stockfish/Stockfish/pull/4960

No functional change
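In practice the change boils down to replacing the enum with a plain integer alias, after which ordinary integer arithmetic applies and no operator overloads are needed. A schematic before/after; the constant values shown follow Stockfish convention but are included here only for illustration.

```cpp
// Before (schematic): an enum used as if it were an integer type, which needed a pile
// of operator overloads and silently truncated data when its underlying type shrank.
//
//     enum Value : int { VALUE_ZERO = 0, VALUE_INFINITE = 32001, /* ... */ };
//
// After: a plain alias. Ordinary integer arithmetic applies, nothing to overload.
using Value = int;

constexpr Value VALUE_ZERO     = 0;
constexpr Value VALUE_DRAW     = 0;
constexpr Value VALUE_INFINITE = 32001;  // conventional bound, shown for illustration

static_assert(VALUE_INFINITE - 1 < VALUE_INFINITE, "plain int arithmetic, no overloads required");
```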
Author: RainRat
Date: Thu Jan 4 15:51:56 2024 +0100 Timestamp: 1704379916

Fix typo in tbprobe.cpp

closes https://github.com/official-stockfish/Stockfish/pull/4959

No functional change
Author: Disservin
Date: Thu Jan 4 15:51:04 2024 +0100 Timestamp: 1704379864

Change the Move enum to a class

This changes the Move enum to a class; this way all move-related functions can be moved into the class and be more self-contained.

closes https://github.com/official-stockfish/Stockfish/pull/4958

No functional change
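A minimal sketch of what such a self-contained Move class can look like, wrapping the traditional 16-bit encoding (6 bits destination, 6 bits origin, upper bits for move type). Method names and the exact layout here are illustrative rather than the engine's final API.

```cpp
#include <cstdint>

enum Square : int { SQ_A1, SQ_B1, /* ... */ SQUARE_NB = 64 };

// Sketch of a Move class wrapping the 16-bit encoding
// (bits 0-5: destination square, bits 6-11: origin square, upper bits: move type).
class Move {
   public:
    Move() = default;
    constexpr explicit Move(std::uint16_t d) : data(d) {}
    constexpr Move(Square from, Square to) : data(std::uint16_t((from << 6) | to)) {}

    constexpr Square        from_sq() const { return Square((data >> 6) & 0x3F); }
    constexpr Square        to_sq() const { return Square(data & 0x3F); }
    constexpr std::uint16_t raw() const { return data; }

    static constexpr Move none() { return Move(0); }
    constexpr bool operator==(const Move& m) const { return data == m.data; }

   private:
    std::uint16_t data = 0;
};

static_assert(Move(SQ_B1, SQ_A1).from_sq() == SQ_B1, "origin bits round-trip through the encoding");
```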
Author: Viren6
Date: Thu Jan 4 15:49:33 2024 +0100 Timestamp: 1704379773

Modify ttPV reduction

This patch modifies ttPV reduction by reducing 1 more unless ttValue is above alpha. Inspired by @pb00068:
https://tests.stockfishchess.org/tests/view/658060796a3b4f6202215f1f

Passed STC:
https://tests.stockfishchess.org/tests/view/6591867679aa8af82b958328
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 37856 W: 9727 L: 9407 D: 18722 Elo +2.94
Ptnml(0-2): 99, 4444, 9568, 4672, 145

Passed LTC:
https://tests.stockfishchess.org/tests/view/6591d9b679aa8af82b958a6c
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 128256 W: 32152 L: 31639 D: 64465 Elo +1.39
Ptnml(0-2): 64, 14364, 34772, 14851, 77

closes https://github.com/official-stockfish/Stockfish/pull/4957

Bench: 1176235
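Read literally, the tweak adds one extra ply of reduction at ttPv nodes whenever the stored ttValue does not exceed alpha. A schematic helper expressing just that condition; the names mirror search.cpp conventions, but this is not the literal engine diff.

```cpp
// Reduce one extra ply at ttPv nodes unless the transposition-table value is above alpha.
// Illustrative only: in the engine this condition lives inline in the search reduction logic.
int adjust_reduction(int r, bool ttPv, int ttValue, int alpha) {
    if (ttPv && !(ttValue > alpha))
        r += 1;
    return r;
}
```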
Author: FauziAkram
Date: Thu Jan 4 15:47:37 2024 +0100 Timestamp: 1704379657

Simplification of partial_insertion_sort formula

Passed STC:
https://tests.stockfishchess.org/tests/view/6590110879aa8af82b9562e9
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 134880 W: 34468 L: 34355 D: 66057 Elo +0.29
Ptnml(0-2): 476, 16060, 34220, 16243, 441

Passed LTC:
https://tests.stockfishchess.org/tests/view/659156ca79aa8af82b957f07
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 60780 W: 15179 L: 14996 D: 30605 Elo +1.05
Ptnml(0-2): 27, 6847, 16464, 7020, 32

closes https://github.com/official-stockfish/Stockfish/pull/4955

Bench: 1338331
Author: Disservin
Date: Thu Jan 4 15:47:10 2024 +0100 Timestamp: 1704379630

Update copyright year

closes https://github.com/official-stockfish/Stockfish/pull/4954

No functional change
Author: Disservin
Date: Thu Jan 4 15:45:33 2024 +0100 Timestamp: 1704379533

Silence security alert warning about possible infinite loop

As some have noticed, a security alert has been complaining about a for loop in our TB code for quite some time now. It was never a real issue, so it is not of high importance. A few lines earlier the symlen vector is resized:
`d->symlen.resize(number<uint16_t, LittleEndian>(data));`
While this code seems odd at first, it resizes the array to at most 2^16 - 1 elements, basically making the infinite loop issue impossible to occur.

closes https://github.com/official-stockfish/Stockfish/pull/4953

No functional change
Author: Joseph Huang
Date: Thu Jan 4 15:45:03 2024 +0100 Timestamp: 1704379503

Lower MultiPV max to MAX_MOVES

Link the max value of MultiPV to that of MAX_MOVES, which is 256.

closes https://github.com/official-stockfish/Stockfish/pull/4951

No functional change