Search Progression

When creating a new engine from scratch, it is often perplexing to choose which feature to implement first. Written below is advice from the authors of top chess engines.

Connorpasta

A message sent by Seer author Connor McMonigle in the OpenBench Discord. It got the name "Connorpasta" because of how often it is pasted in computer chess Discord servers.

A reasonable search feature progression (starting with vanilla TT (sorting TT move first), PVS, QS and aspiration windows which are all pretty fundamental) imo is: NMP, LMR (log formula is most principled ~ there are a number of adjustments you can experiment with), static NMP (aka RFP), 
butterfly history heuristic, LMP, futility pruning, CMH+FMH, QS SEE pruning, PVS SEE pruning (captures and quiets), QS delta pruning, history pruning, capture history heuristic, singular extensions, multicut (using singular search result).
(with a healthy amount of parameter tweaking after each addition)
Idk if I'm missing anything major. Those search heuristics constitute the vast majority of the Elo you'll find in any top engine, though the details of the implementation are very important.
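
As a concrete illustration of the log-formula LMR mentioned above, the sketch below derives a reduction from depth and move number. The base and divisor constants are illustrative placeholders rather than values from Seer or any other engine, and real engines usually precompute the results into a table at startup and then adjust the looked-up value based on further conditions.

  #include <algorithm>
  #include <cmath>

  // Minimal sketch of a log-based LMR reduction; call with depth >= 1 and
  // move_number >= 1. base and divisor are placeholder constants to be tuned.
  int lmr_reduction(int depth, int move_number) {
      const double base = 0.75;     // flat offset added to every reduction
      const double divisor = 2.25;  // controls how quickly reductions grow
      double r = base + std::log(depth) * std::log(move_number) / divisor;
      return std::max(0, static_cast<int>(r));
  }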

Improved Connorpasta

Written by Integral author Aron Petkovski. It is an improvement on Connorpasta that aims to include more features and cover a wider range of topics.

A reasonable search feature progression assumes you have the fundamentals in place, i.e. negamax and alpha/beta pruning, ideally in a fail-soft framework.
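
To make those fundamentals concrete, below is a minimal fail-soft negamax with alpha/beta pruning. The Move and Position types and their methods are hypothetical stand-ins for whatever your engine provides, with trivial placeholder bodies so the sketch compiles; the key property is that the returned score is not clamped to the window, which is what "fail-soft" means and what several later heuristics benefit from.

  #include <vector>

  // Hypothetical stand-ins for your engine's move generation, make/unmake and
  // static evaluation; the bodies are trivial placeholders so this compiles.
  struct Move {};
  struct Position {
      std::vector<Move> generate_moves() const { return {}; }
      void make(const Move&) {}
      void unmake(const Move&) {}
      int evaluate() const { return 0; }  // static eval from the side to move's view
  };

  // Fail-soft negamax with alpha/beta pruning: the returned score may lie
  // outside [alpha, beta], which gives later features (TT bounds, RFP, ...)
  // more information than a fail-hard search would.
  int negamax(Position& pos, int depth, int alpha, int beta) {
      if (depth == 0)
          return pos.evaluate();  // a real engine drops into quiescence search here

      int best = -32000;  // tracked even if it never rises above alpha
      for (const Move& m : pos.generate_moves()) {
          pos.make(m);
          int score = -negamax(pos, depth - 1, -beta, -alpha);
          pos.unmake(m);

          if (score > best) best = score;
          if (score > alpha) alpha = score;
          if (alpha >= beta) break;  // beta cutoff
      }
      // A real engine would detect checkmate/stalemate here when no legal moves exist.
      return best;
  }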

There are also time management adjustments that can be made at any point after adding iterative deepening. Ideally you have both of the following (see the sketch after this list):

  • Hard bound (applies to the entire search)
  • Soft bound (checked on each new depth in the ID loop)
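
A minimal sketch of how these two bounds are usually wired together is shown below; the names and limit values are illustrative, not taken from Integral or any other engine. The hard bound is polled inside the search itself (typically every few thousand nodes), while the soft bound is only consulted before starting the next depth, and it is the quantity that the scaling adjustments described next are applied to.

  #include <chrono>

  using Clock = std::chrono::steady_clock;
  using Ms = std::chrono::milliseconds;

  // Illustrative time manager: the hard limit aborts a search in progress,
  // the soft limit only decides whether another iteration is started.
  struct TimeManager {
      Clock::time_point start = Clock::now();
      Ms hard_limit{100};
      Ms soft_limit{40};

      Ms elapsed() const { return std::chrono::duration_cast<Ms>(Clock::now() - start); }
      bool hard_exceeded() const { return elapsed() >= hard_limit; }  // polled during search
      bool soft_exceeded() const { return elapsed() >= soft_limit; }  // checked per depth
  };

  // Placeholder search: a real implementation polls tm.hard_exceeded() every few
  // thousand nodes and unwinds the search when it fires.
  int search_to_depth(int depth, const TimeManager& tm) { (void)depth; (void)tm; return 0; }

  void iterative_deepening(const TimeManager& tm, int max_depth) {
      for (int depth = 1; depth <= max_depth; ++depth) {
          if (tm.soft_exceeded()) break;  // soft bound: do not start a new iteration
          search_to_depth(depth, tm);     // hard bound is enforced inside the search
      }
  }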

For the soft bound, the progression can go something like this (see the sketch after this list):

  • Node-based scaling
  • Best move stability
  • Eval stability
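
The three adjustments above are commonly implemented as multipliers applied to the base soft limit after each completed depth; the sketch below shows one possible shape. Every constant is an illustrative placeholder that would need tuning, not a value from any particular engine.

  #include <algorithm>

  // Illustrative soft-bound scaling, recomputed after each completed depth.
  // All constants are placeholders; a real engine tunes them.
  double soft_bound_scale(double best_move_node_fraction,  // nodes spent on the best move / total nodes
                          int best_move_stability,         // consecutive depths with the same best move
                          int eval_stability) {            // consecutive depths with a roughly stable score
      // Node-based scaling: if most of the tree went into the best move, we are
      // fairly confident in it and can stop earlier.
      double node_scale = 1.5 - best_move_node_fraction;  // roughly in [0.5, 1.5]

      // Best-move stability: the longer the best move has stayed the same,
      // the less extra time is allocated.
      double bm_scale = std::max(0.6, 1.3 - 0.05 * best_move_stability);

      // Eval stability: a score that keeps swinging suggests searching longer
      // before committing to a move.
      double eval_scale = std::max(0.7, 1.2 - 0.04 * eval_stability);

      return node_scale * bm_scale * eval_scale;
  }

The next depth is then started only if the elapsed time is still below the base soft limit multiplied by this factor.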

Additionally, there should be a healthy amount of parameter tweaking after each addition. There are other minor features that top engines have, but these will constitute the majority of the Elo you will find in them.