Stockfish Development Versions are built automatically whenever there are changes on the master branch of the git repository (https://github.com/official-stockfish/Stockfish). Use them at your own risk.
They are compiled with gcc/mingw 7.3 on Ubuntu 18.04.
Read the FAQ or leave a comment.

!! latest version !!


Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: SFisGOD
Date: Mon Jul 26 07:52:59 2021 +0200
Timestamp: 1627278779

Update default net to nn-26abeed38351.nnue

SPSA: https://tests.stockfishchess.org/tests/view/60fba335d8a6b65b2f3a7891

New best values: Half of the changes from the tuning run.
Setting: nodestime=300 with 10+0.1 (approximate real TC is 2.5 seconds)
The rest is the same as described in #3593

The change from nodestime=600 to 300 was suggested by gekkehenker to prevent time losses for some slow workers
SFisGOD@94cd757#commitcomment-53324840
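
For context, nodestime converts the clock into a node budget, so the nominal 10+0.1 control is independent of worker speed. A rough sketch of the conversion (illustrative, in the spirit of the time manager, not the verbatim code):

    // With nodestime = 300, time is counted in nodes instead of milliseconds:
    TimePoint npmsec = TimePoint(Options["nodestime"]);   // here 300 "nodes per millisecond"
    if (npmsec && !availableNodes)
        availableNodes = npmsec * limits.time[us];        // 300 * 10000 ms = 3,000,000 nodes of base time

With that budget, a worker finishes a nominal 10+0.1 game in roughly 2.5 real seconds, as noted above.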

STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 67448 W: 5241 L: 5036 D: 57171 Elo +1.06
Ptnml(0-2): 151, 4198, 24827, 4391, 157
https://tests.stockfishchess.org/tests/view/60fd50f2d8a6b65b2f3a798e

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 48752 W: 1504 L: 1358 D: 45890 Elo +1.04
Ptnml(0-2): 13, 1226, 21754, 1368, 15
https://tests.stockfishchess.org/tests/view/60fd7bb2d8a6b65b2f3a79a9

Closes https://github.com/official-stockfish/Stockfish/pull/3630

Bench: 5124774
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Giacomo Lorenzetti
Date: Mon Jul 26 07:48:58 2021 +0200
Timestamp: 1627278538

Simplification in LMR

This commit removes the `!captureOrPromotion` condition from ttCapture reduction and from good/bad history reduction (similar to #3619).
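
Schematically, the change amounts to the following (an illustrative sketch in the style of search.cpp, not the verbatim diff; N stands for master's history-scaling constant):

    // before: both adjustments only applied to quiet moves
    if (ttCapture && !captureOrPromotion)
        r++;
    if (!captureOrPromotion)
        r -= ss->statScore / N;

    // after: the !captureOrPromotion guard is dropped from both
    if (ttCapture)
        r++;
    r -= ss->statScore / N;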

passed STC:
https://tests.stockfishchess.org/tests/view/60fc734ad8a6b65b2f3a7922
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 48680 W: 3855 L: 3776 D: 41049 Elo +0.56
Ptnml(0-2): 118, 3145, 17744, 3206, 127

passed LTC:
https://tests.stockfishchess.org/tests/view/60fce7d5d8a6b65b2f3a794c
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 86528 W: 2471 L: 2450 D: 81607 Elo +0.08
Ptnml(0-2): 28, 2203, 38777, 2232, 24

closes https://github.com/official-stockfish/Stockfish/pull/3629

Bench: 4951406
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: MichaelB7
Date: Sat Jul 24 18:04:59 2021 +0200
Timestamp: 1627142699

Update the default net to nn-76a8a7ffb820.nnue.

Combined work by Sergio Vieri, Michael Byrne, and Jonathan D (aka SFisGOD), building on top of previous developments and restarting from good nets.

Sergio generated the net https://tests.stockfishchess.org/api/nn/nn-d8609abe8caf.nnue:

The initial net nn-d8609abe8caf.nnue was trained by generating around 16B positions of training data from the last master net nn-9e3c6298299a.nnue, then training, continuing from the master net, with lambda=0.2 and a sampling ratio of 1. Training started with LR=2e-3, dropping the LR by a factor of 0.5 until it reached LR=5e-4. in_scaling was set to 361. No other significant changes were made to the pytorch trainer.

Training data gen command (generates in chunks of 200k positions):

generate_training_data min_depth 9 max_depth 11 count 200000 random_move_count 10 random_move_max_ply 80 random_multi_pv 12 random_multi_pv_diff 100 random_multi_pv_depth 8 write_min_ply 10 eval_limit 1500 book noob_3moves.epd output_file_name gendata/$(date +"%Y%m%d-%H%M")_${HOSTNAME}.binpack

PyTorch trainer command (note that this only trains for 20 epochs; repeat the training until convergence):

python train.py --features "HalfKAv2^" --max_epochs 20 --smart-fen-skipping --random-fen-skipping 500 --batch-size 8192 --default_root_dir $dir --seed $RANDOM --threads 4 --num-workers 32 --gpus $gpuids --track_grad_norm 2 --gradient_clip_val 0.05 --lambda 0.2 --log_every_n_steps 50 $resumeopt $data $val

See https://github.com/sergiovieri/Stockfish/tree/tools_mod/rl for the scripts used to generate data.

Based on that Michael generated nn-76a8a7ffb820.nnue in the following way:

The net being submitted was trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 30 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --auto_lr_find True --lambda=1.0 --max_epochs=240 --seed %random%%random% --default_root_dir exp/run_109 --resume-from-model ./pt/nn-d8609abe8caf.pt

This run was thus started from Sergio Vieri's net nn-d8609abe8caf.nnue.

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so that they were approximately equal in size. They were then interleaved. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

model.py modifications:
loss = torch.pow(torch.abs(p - q), 2.6).mean()
LR = 8.0e-5, calculated as 1.5e-3*(0.992^360). The idea here was to take a highly trained net and just use all.binpack as a finishing micro-refinement touch for the last 2 Elo or so. This net came from the 59th epoch.
optimizer = ranger.Ranger(train_params, betas=(.90, 0.999), eps=1.0e-7, gc_loc=False, use_gc=False)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.992)
For this micro-optimization, I had set the period to "5" in train.py, which changes the checkpoint output so that only every 5th checkpoint file is created.

The final touches were to adjust the NNUE scale, as was done by Jonathan in tests running at the same time.

passed LTC
https://tests.stockfishchess.org/tests/view/60fa45aed8a6b65b2f3a77a4
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53040 W: 1732 L: 1575 D: 49733 Elo +1.03
Ptnml(0-2): 14, 1432, 23474, 1583, 17

passed STC
https://tests.stockfishchess.org/tests/view/60f9fee2d8a6b65b2f3a7775
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 37928 W: 3178 L: 3001 D: 31749 Elo +1.62
Ptnml(0-2): 100, 2446, 13695, 2623, 100.

closes https://github.com/official-stockfish/Stockfish/pull/3626

Bench: 5169957
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Giacomo Lorenzetti
Date: Fri Jul 23 19:02:58 2021 +0200
Timestamp: 1627059778

Apply good/bad history reduction also when inCheck

The main idea is that, in some cases, 'in check' situations are not so different from 'not in check' ones.
Trying to use the piece count in order to select only a few 'in check' situations has failed LTC testing.
It could be interesting to apply one of these ideas in other parts of the search function.
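
Schematically (illustrative sketch, not the verbatim diff; N stands for master's history-scaling constant):

    // before: the good/bad history adjustment to the reduction was skipped in check
    if (!ss->inCheck)
        r -= ss->statScore / N;

    // after: it is applied also when in check
    r -= ss->statScore / N;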

passed STC:
https://tests.stockfishchess.org/tests/view/60f1b68dd1189bed71812d40
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 53472 W: 4078 L: 4008 D: 45386 Elo +0.45
Ptnml(0-2): 127, 3297, 19795, 3413, 104

passed LTC:
https://tests.stockfishchess.org/tests/view/60f291e6d1189bed71812de3
LLR: 2.92 (-2.94,2.94) <-2.50,0.50>
Total: 89712 W: 2651 L: 2632 D: 84429 Elo +0.07
Ptnml(0-2): 60, 2261, 40188, 2294, 53

closes https://github.com/official-stockfish/Stockfish/pull/3619

Bench: 5185789
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: pb00067
Date: Fri Jul 23 18:53:03 2021 +0200
Timestamp: 1627059183

Simplify lowply-history scoring logic

STC:
https://tests.stockfishchess.org/tests/view/60eee559d1189bed71812b16
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 33976 W: 2523 L: 2431 D: 29022 Elo +0.94
Ptnml(0-2): 66, 2030, 12730, 2070, 92

LTC:
https://tests.stockfishchess.org/tests/view/60eefa12d1189bed71812b24
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 107240 W: 3053 L: 3046 D: 101141 Elo +0.02
Ptnml(0-2): 56, 2668, 48154, 2697, 45

closes https://github.com/official-stockfish/Stockfish/pull/3616

bench: 5199177
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Vizvezdenec
Date: Fri Jul 23 18:47:30 2021 +0200
Timestamp: 1627058850

Prune illegal moves in qsearch earlier

The main idea is that illegal moves influencing search or
qsearch obviously can't be any sort of good. The only reason
the legality checks for search and qsearch were originally done
after they could already influence some heuristics is that the
legality check is computationally expensive. In search, the check
was eventually moved to a place where illegal moves can no
longer influence the search.

This patch shows that the same can be done for qsearch. It also
passed STC with elo-gaining bounds, and it removes 3 lines of
code, because one no longer needs to increment/decrement the
move count on illegal moves.
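
The resulting shape of the qsearch move loop is roughly (illustrative sketch, not the verbatim diff):

    while ((move = mp.next_move()) != MOVE_NONE)
    {
        // Legality is now checked first, so an illegal move can no longer bump
        // moveCount or feed any pruning heuristic below.
        if (!pos.legal(move))
            continue;

        moveCount++;
        // ... pruning, SEE checks, make/unmake follow here
    }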

passed STC with elo-gaining bounds
https://tests.stockfishchess.org/tests/view/60f20aefd1189bed71812da0
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 61512 W: 4688 L: 4492 D: 52332 Elo +1.11
Ptnml(0-2): 139, 3730, 22848, 3874, 165

A functionally identical version, but with the condition moved even earlier,
passed LTC with simplification bounds.
https://tests.stockfishchess.org/tests/view/60f292cad1189bed71812de9
LLR: 2.98 (-2.94,2.94) <-2.50,0.50>
Total: 60944 W: 1724 L: 1685 D: 57535 Elo +0.22
Ptnml(0-2): 11, 1556, 27298, 1597, 10

closes https://github.com/official-stockfish/Stockfish/pull/3618

bench 4709569
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Liam Keegan
Date: Fri Jul 23 18:16:05 2021 +0200
Timestamp: 1627056965

Add macOS and windows to CI

- macOS
- system clang
- gcc
- windows / msys2
- mingw 64-bit gcc
- mingw 32-bit gcc
- minor code fixes to get new CI jobs to pass
- code: suppress unused-parameter warning on 32-bit windows
- Makefile: if arch=any on macos, don't specify arch at all

fixes https://github.com/official-stockfish/Stockfish/issues/2958

closes https://github.com/official-stockfish/Stockfish/pull/3623

No functional change
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: VoyagerOne
Date: Tue Jul 13 17:35:20 2021 +0200
Timestamp: 1626190520

Don't save excluded move eval in TT
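
Conceptually (illustrative, not the verbatim diff): when a singular-extension verification search is running, i.e. ss->excludedMove is set, the freshly computed static eval is no longer written back to the transposition table:

    if (!excludedMove)
        tte->save(posKey, VALUE_NONE, ss->ttPv, BOUND_NONE, DEPTH_NONE, MOVE_NONE, eval);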

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17544 W: 1384 L: 1236 D: 14924 Elo +2.93
Ptnml(0-2): 37, 1031, 6499, 1157, 48
https://tests.stockfishchess.org/tests/view/60ec8d9bd1189bed71812999

LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 26136 W: 823 L: 707 D: 24606 Elo +1.54
Ptnml(0-2): 6, 643, 11656, 755, 8
https://tests.stockfishchess.org/tests/view/60ecb11ed1189bed718129ba

closes https://github.com/official-stockfish/Stockfish/pull/3614

Bench: 5505251
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Vizvezdenec
Date: Tue Jul 13 17:33:20 2021 +0200
Timestamp: 1626190400

Remove second futility pruning depth limit

This patch removes the lmrDepth limit for futility pruning at parent nodes.
Since the pruning is already capped by a margin that is a function of lmrDepth, there is no need to additionally cap it with lmrDepth.
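
Schematically (constants A, B, D are placeholders, not master's tuned values):

    // before
    if (   lmrDepth < D
        && !ss->inCheck
        && ss->staticEval + A + B * lmrDepth <= alpha)
        continue;

    // after: the lmrDepth < D cap is removed; the margin A + B * lmrDepth
    // already grows with lmrDepth, so the extra cap was redundant.
    if (   !ss->inCheck
        && ss->staticEval + A + B * lmrDepth <= alpha)
        continue;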

passed STC
https://tests.stockfishchess.org/tests/view/60e9b5dfd1189bed71812777
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 14872 W: 1264 L: 1145 D: 12463 Elo +2.78
Ptnml(0-2): 37, 942, 5369, 1041, 47

passed LTC
https://tests.stockfishchess.org/tests/view/60e9c635d1189bed71812790
LLR: 2.96 (-2.94,2.94) <-2.50,0.50>
Total: 40336 W: 1280 L: 1225 D: 37831 Elo +0.47
Ptnml(0-2): 24, 1057, 17960, 1094, 33

closes https://github.com/official-stockfish/Stockfish/pull/3612

bench: 5064969
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: pb00067
Date: Tue Jul 13 17:31:15 2021 +0200
Timestamp: 1626190275

SEE: simplify stm variable initialization

Pull #3458 removed the only usage of pos.see_ge() moving pieces that
don't belong to the side to move, so we can simplify this, adding an assert.

closes https://github.com/official-stockfish/Stockfish/pull/3607

No functional change
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Vizvezdenec
Date: Tue Jul 13 17:23:30 2021 +0200
Timestamp: 1626189810

Remove futility pruning depth limit

This patch removes the depth limit for child node futility pruning.
In current master it was doubly capped, by depth and by the futility margin, which is itself a function of depth, which didn't make much sense.
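
Schematically (illustrative; futility_margin() stands for master's depth-dependent margin and D for the old depth cap):

    // before: capped both by depth and by the depth-dependent margin
    if (depth < D && eval - futility_margin(depth, improving) >= beta)
        return eval;

    // after: only the margin remains, which already grows with depth
    if (eval - futility_margin(depth, improving) >= beta)
        return eval;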

passed STC
https://tests.stockfishchess.org/tests/view/60e2418f9ea99d7c2d693e64
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 116168 W: 9100 L: 9097 D: 97971 Elo +0.01
Ptnml(0-2): 319, 7496, 42476, 7449, 344

passed LTC
https://tests.stockfishchess.org/tests/view/60e3374f9ea99d7c2d693f20
LLR: 2.96 (-2.94,2.94) <-2.50,0.50>
Total: 43304 W: 1282 L: 1231 D: 40791 Elo +0.41
Ptnml(0-2): 8, 1126, 19335, 1173, 10

closes https://github.com/official-stockfish/Stockfish/pull/3606

bench 4965493
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: SFisGOD
Date: Sat Jul 3 10:03:32 2021 +0200
Timestamp: 1625299412

Update default net to nn-9e3c6298299a.nnue

Optimization of nn-956480d8378f.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60da2bf63beab81350ac9fe7

Same method as described in PR #3593

STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17792 W: 1525 L: 1372 D: 14895 Elo +2.99
Ptnml(0-2): 28, 1156, 6401, 1257, 54
https://tests.stockfishchess.org/tests/view/60deffc59ea99d7c2d693c19

LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 36544 W: 1245 L: 1109 D: 34190 Elo +1.29
Ptnml(0-2): 12, 988, 16139, 1118, 15
https://tests.stockfishchess.org/tests/view/60df11339ea99d7c2d693c22

closes https://github.com/official-stockfish/Stockfish/pull/3601

Bench: 4687476
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Paul Mulders
Date: Sat Jul 3 09:51:03 2021 +0200
Timestamp: 1625298663

Allow passing RTLIB=compiler-rt to make

Not all Linux users will have libatomic installed.
When using clang as the system compiler with compiler-rt as the default
runtime library instead of libgcc, atomic builtins may be provided by compiler-rt.
This change allows such users to pass RTLIB=compiler-rt to make sure
the build doesn't error out on the missing (unnecessary) libatomic.
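
For example, a clang build could then be invoked along these lines (the ARCH/COMP values are just typical choices, not part of the patch):

    make build ARCH=x86-64-modern COMP=clang RTLIB=compiler-rt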

closes https://github.com/official-stockfish/Stockfish/pull/3597

No functional change
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: candirufish
Date: Sat Jul 3 09:44:05 2021 +0200
Timestamp: 1625298245

No cut node reduction for killer moves.

stc:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 44344 W: 3474 L: 3294 D: 37576 Elo +1.41
Ptnml(0-2): 117, 2710, 16338, 2890, 117
https://tests.stockfishchess.org/tests/view/60d8ea673beab81350ac9eb8

ltc:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 82600 W: 2638 L: 2441 D: 77521 Elo +0.83
Ptnml(0-2): 38, 2147, 36749, 2312, 54
https://tests.stockfishchess.org/tests/view/60d9048f3beab81350ac9eed

closes https://github.com/official-stockfish/Stockfish/pull/3600

Bench: 5160239
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: xoto10
Date: Sat Jul 3 09:26:58 2021 +0200
Timestamp: 1625297218

Simplify lazy_skip.

Small speedup by removing operations in lazy_skip.

STC 10+0.1 :
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 55088 W: 4553 L: 4482 D: 46053 Elo +0.45
Ptnml(0-2): 163, 3546, 20045, 3637, 153
https://tests.stockfishchess.org/tests/view/60daa2cb3beab81350aca04d

LTC 60+0.6 :
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 46136 W: 1457 L: 1407 D: 43272 Elo +0.38
Ptnml(0-2): 10, 1282, 20442, 1316, 18
https://tests.stockfishchess.org/tests/view/60db0e753beab81350aca08e

closes https://github.com/official-stockfish/Stockfish/pull/3599

Bench 5122403
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Stéphane Nicolet
Date: Sat Jul 3 09:25:16 2021 +0200
Timestamp: 1625297116

Simplify format_cp_aligned_dot()

closes https://github.com/official-stockfish/Stockfish/pull/3583

No functional change
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Joost VandeVondele
Date: Sat Jul 3 09:20:06 2021 +0200
Timestamp: 1625296806

Restore development version

No functional change
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Joost VandeVondele
Date: Fri Jul 2 14:53:30 2021 +0200
Timestamp: 1625230410

Stockfish 14

Official release version of Stockfish 14

Bench: 4770936

---

Today, we have the pleasure to announce Stockfish 14.

As usual, downloads will be freely available at https://stockfishchess.org

The engine is now significantly stronger than just a few months ago,
and wins four times more game pairs than it loses against the previous
release version [0]. Stockfish 14 is now at least 400 Elo ahead of
Stockfish 7, a top engine in 2016 [1]. During the last five years,
Stockfish has thus gained about 80 Elo per year.

Stockfish 14 evaluates positions more accurately than Stockfish 13 as
a result of two major steps forward in defining and training the
efficiently updatable neural network (NNUE) that provides the evaluation
for positions.

First, the collaboration with the Leela Chess Zero team - announced
previously [2] - has come to fruition. The LCZero team has provided a
collection of billions of positions evaluated by Leela that we have
combined with billions of positions evaluated by Stockfish to train the
NNUE net that powers Stockfish 14. The fact that we could use and combine
these datasets freely was essential for the progress made and demonstrates
the power of open source and open data [3].

Second, the architecture of the NNUE network was significantly updated:
the new network is not only larger, but more importantly, it deals better
with large material imbalances and can specialize for multiple phases of
the game [4]. A new project, kick-started by Gary Linscott and
Tomasz Sobczyk, led to a GPU-accelerated net trainer written in
pytorch [5]. This tool allows for training high-quality nets in a couple
of hours.

Finally, this release features some search refinements, minor bug
fixes and additional improvements. For example, Stockfish is now about
90 Elo stronger for chess960 (Fischer random chess) at short time control.

The Stockfish project builds on a thriving community of enthusiasts
(thanks everybody!) that contribute their expertise, time, and resources
to build a free and open-source chess engine that is robust, widely
available, and very strong. We invite our chess fans to join the fishtest
testing framework and programmers to contribute to the project on
github [6].

Stay safe and enjoy chess!

The Stockfish team

[0] https://tests.stockfishchess.org/tests/view/60dae5363beab81350aca077
[1] https://nextchessmove.com/dev-builds
[2] https://stockfishchess.org/blog/2021/stockfish-13/
[3] https://lczero.org/blog/2021/06/the-importance-of-open-data/
[4] https://github.com/official-stockfish/Stockfish/commit/e8d64af1
[5] https://github.com/glinscott/nnue-pytorch/
[6] https://stockfishchess.org/get-involved/
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Brad Knox
Date: Tue Jun 29 10:24:54 2021 +0200
Timestamp: 1624955094

Update Top CPU Contributors

closes https://github.com/official-stockfish/Stockfish/pull/3595

No functional change
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: SFisGOD
Date: Mon Jun 28 21:31:58 2021 +0200
Timestamp: 1624908718

Update default net to nn-3475407dc199.nnue

Optimization of eight subnetwork output layers of Michael's nn-190f102a22c3.nnue using SPSA
https://tests.stockfishchess.org/tests/view/60d5510642a522cc50282ef3

Parameters: A total of 256 net weights and 8 net biases were tuned
New best values: The raw values at the end of the tuning run were used (800k games, 5 seconds TC)
Settings: default ck value and SPSA A is 30,000 (3.75% of the total number of games)

STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 29064 W: 2435 L: 2269 D: 24360 Elo +1.98
Ptnml(0-2): 72, 1857, 10505, 2029, 69
https://tests.stockfishchess.org/tests/view/60d8ea123beab81350ac9eb6

LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 61848 W: 2055 L: 1884 D: 57909 Elo +0.96
Ptnml(0-2): 18, 1708, 27310, 1861, 27
https://tests.stockfishchess.org/tests/view/60d8f0393beab81350ac9ec6

closes https://github.com/official-stockfish/Stockfish/pull/3593

Bench: 4770936
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: MichaelB7
Date: Mon Jun 28 21:20:05 2021 +0200
Timestamp: 1624908005

Make net nn-956480d8378f.nnue the default

Trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_18 --resume-from-model ./pt/nn-75980ca503c6.pt

This run is thus started from a previous master net.

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so that they were approximately equal in size. They were then interleaved. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

passed STC:
https://tests.stockfishchess.org/tests/view/60d0c0a7a8ec07dc34c072b2
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 18440 W: 1693 L: 1531 D: 15216 Elo +3.05
Ptnml(0-2): 67, 1225, 6464, 1407, 57

passed LTC:
https://tests.stockfishchess.org/tests/view/60d762793beab81350ac9d72
LLR: 2.98 (-2.94,2.94) <0.50,3.50>
Total: 93120 W: 3152 L: 2933 D: 87035 Elo +0.82
Ptnml(0-2): 48, 2581, 41076, 2814, 41

passed LTC (rebased branch to current master):
https://tests.stockfishchess.org/tests/view/60d85eeb3beab81350ac9e2b
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 42688 W: 1347 L: 1206 D: 40135 Elo +1.15
Ptnml(0-2): 14, 1097, 18981, 1238, 14.

closes https://github.com/official-stockfish/Stockfish/pull/3592

Bench: 4906727
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Joost VandeVondele
Date: Mon Jun 28 21:13:30 2021 +0200
Timestamp: 1624907610

Update WDL model for NNUE

This updates the WDL model based on the LTC statistics in June this year (10M games),
so moving from pre-NNUE to NNUE-based results.

(for old results see, https://github.com/official-stockfish/Stockfish/pull/2778)

As before, the fit of the model to the data is quite good.
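
For reference, the model in uci.cpp has roughly this shape (as[] and bs[] hold the fitted coefficients, which are exactly what this commit updates):

    // Win probability (per mille) is a logistic in the score x, whose parameters
    // a and b are cubic polynomials in a game-phase variable m derived from the ply.
    double m = std::min(240, ply) / 64.0;
    double a = ((as[0] * m + as[1]) * m + as[2]) * m + as[3];
    double b = ((bs[0] * m + bs[1]) * m + bs[2]) * m + bs[3];
    double x = std::clamp(double(100 * v) / PawnValueEg, -1000.0, 1000.0);
    int win_per_mille = int(0.5 + 1000 / (1 + std::exp((a - x) / b)));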

closes https://github.com/official-stockfish/Stockfish/pull/3582

No functional change
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: bmc4
Date: Mon Jun 28 21:12:04 2021 +0200
Timestamp: 1624907524

Simplify Reductions Initialization

passed

STC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 45032 W: 3600 L: 3518 D: 37914 Elo +0.63
Ptnml(0-2): 111, 2893, 16435, 2957, 120
https://tests.stockfishchess.org/tests/view/60d2655d40925195e7a6c527

LTC:
LLR: 3.00 (-2.94,2.94) <-2.50,0.50>
Total: 25728 W: 786 L: 722 D: 24220 Elo +0.86
Ptnml(0-2): 5, 650, 11494, 706, 9
https://tests.stockfishchess.org/tests/view/60d2b14240925195e7a6c577

closes https://github.com/official-stockfish/Stockfish/pull/3584

bench: 4602977
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Stéphane Nicolet
Date: Tue Jun 22 11:51:03 2021 +0200
Timestamp: 1624355463

Detect fortresses a little bit quicker

In the so-called "hybrid" method of evaluation of current master, we use the
classical eval (because of its speed) instead of the NNUE eval when the classical
material balance approximation hints that the position is "winning enough" to
rely on the classical eval.

This trade-off idea between speed and accuracy works well in general, but in
some fortress positions the classical eval is just bad. So in shuffling branches
of the search tree, we (slowly) increase the threshold so that eventually we
don't trust the classical eval anymore and switch to NNUE evaluation.

This patch increases that threshold faster, so that we switch to NNUE quicker
in shuffling branches. The idea is to encourage Stockfish to spend less time on
fortress lines in the search tree, and more time searching the critical lines.
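
The rule being adjusted is roughly of this shape (constants C1..C4 are placeholders, not the actual values changed by the patch):

    // Trust the classical eval only while the PSQ imbalance is large relative to
    // a threshold that grows with the rule-50 counter, so that long shuffling
    // sequences eventually fall back to NNUE.
    Value psq = Value(abs(eg_value(pos.psq_score())));
    bool useClassical = psq * C1 > (C2 + pos.non_pawn_material() / C3) * (C4 + pos.rule50_count());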

passed STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 47872 W: 3908 L: 3720 D: 40244 Elo +1.36
Ptnml(0-2): 122, 3053, 17419, 3199, 143
https://tests.stockfishchess.org/tests/view/60cef34b457376eb8bcab79d

passed LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 73616 W: 2326 L: 2143 D: 69147 Elo +0.86
Ptnml(0-2): 21, 1940, 32705, 2119, 23
https://tests.stockfishchess.org/tests/view/60cf6d842114332881e73528

Retested at LTC against latest master:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 18264 W: 642 L: 532 D: 17090 Elo +2.09
Ptnml(0-2): 6, 479, 8055, 583, 9
https://tests.stockfishchess.org/tests/view/60d18cd540925195e7a6c351

closes https://github.com/official-stockfish/Stockfish/pull/3578

Bench: 5139233
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: MichaelB7
Date: Mon Jun 21 23:16:55 2021 +0200
Timestamp: 1624310215

Make net nn-190f102a22c3.nnue the default net.

Trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch

python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_17 --resume-from-model ./pt/nn-75980ca503c6.pt

This run is thus started from the previous master net.

all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so that they were approximately equal in size. They were then interleaved. The idea was to give Wrong_NNUE_2.binpack closer to equal weighting with the Training_Data binpack.

passed LTC
https://tests.stockfishchess.org/tests/view/60d09f52b4c17000d679517f
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 32184 W: 1100 L: 970 D: 30114 Elo +1.40
Ptnml(0-2): 10, 878, 14193, 994, 17

passed STC
https://tests.stockfishchess.org/tests/view/60d086c02114332881e7368e
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 11360 W: 1056 L: 906 D: 9398 Elo +4.59
Ptnml(0-2): 25, 735, 4026, 853, 41

closes https://github.com/official-stockfish/Stockfish/pull/3576

Bench: 4631244
see source
